Confusions about audio/MIDI processing

So, I’ve spent the last month getting into the basics of Cubase Pro 8. I’ve read a lot of the manual, watched tutorials, read guides and forum posts for advice and techniques, and of course done a lot of trying and failing. So far I have scratched the surface of most of the main subjects like sound design, recording, adding effects, mixing, mastering and so on.
Now I’ve come to the point where I could use some explanations of things that really confuse me before I move on.

First and foremost, I’ve noticed that in almost every tutorial I watch, people have only audio tracks in their projects, so obviously they have converted their MIDI tracks to audio. It also seems like they add EQ, effects and automation after converting to audio. So I’m wondering: why do people do this? What are the advantages, and what is the correct procedure? Do I record as MIDI, convert it and then add EQ, effects and automation afterwards? Can I add effects and stuff before converting, and why should/shouldn’t I do so? What if I just leave the tracks as MIDI and move on to mixing? This really confuses me…

Second, I’ve noticed that I can add a bunch of effects in the “strip” section of the mixer, but most of them seem to be the same effects that are available through the inserts. What, if any, is the difference between adding an effect in the strip versus the inserts?

And third, most mastering tutorials seem to export the project to a single stereo track after mixing, but not all of them do. Why, or why not, do that?

I’m sorry if the answers to my questions are very obvious, but it’s a lot to take in as I’m completely new to this, and I couldn’t find any satisfying answers myself. I don’t expect anyone to give me a full explanation of it all, and I understand there are many ways to do things, but some pointers and tips for tutorials or articles on these subjects would be much appreciated and would make me less confused.

I’ll try and answer some of your questions (as best I can), but no doubt there are some audiophiles lurking who know infinitely more than me!

There are many different reasons. Firstly, if someone doesn’t have a very powerful computer, bouncing/rendering (what you call converting) a virtual instrument and its MIDI channels to audio will use less CPU and RAM. Other people like to render their MIDI parts down so they “commit” to the part they’ve written/played.

Much of the time I don’t mix down MIDI parts until I’m absolutely sure I have written what I want, and even when I do mix the parts down to audio (made even easier now with C Pro 8’s Render in Place function) I keep all the MIDI files in case I need to go back. That’s for when I’m writing and recording songs. There are no rules: you can automate, add effects and do whatever you want, before the audio is rendered or afterwards. It depends on what you are doing and how powerful your computer setup is. I would say the main advantages of mixing your virtual instrument parts down to audio are saving computer resources and knowing that the parts are committed, with no risk (as long as you back up and save often) of them changing. Weird things happen, especially when you’ve been drinking cider all night long and accidentally mess up everything you have been working on! haha! Did I mention to back your stuff up? :wink:

Try it! You can do anything you want. Sometimes it’s good not to mix down a virtual synth part, for example, as you may have the synth mapped to a controller and want to record automation parameters as you mix. It’s entirely up to you!

The channel strip was introduced in Cubase 7, and it’s there to simulate the functionality of an analogue desk. Of course you don’t need the strip, as you can load any plugin you have to do the same job. The strip is there for speed: it’s immediate, provided you have the MixConsole set up, have enough real estate on your computer screens to make use of it, and, of course, actually want to use it.

I’m not a mastering engineer, so I can’t answer much about this. What I will say is that when you get your audio professionally mastered, the engineer (who has probably been doing it his/her whole life) will be using equipment that costs more than most people’s mortgages. People who claim to do mastering on their own system (probably in an untreated bedroom), with a couple of plugins (Ozone or something to that effect) and a pair of KRK Rokits, aren’t really mastering! haha! Whenever I have sent music off to be mastered, I have given them stereo files which have not been dithered down (as dithering should only happen once). I’m sure someone will come along and answer your questions more thoroughly than me! Good luck on your quest!
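To illustrate the “dither only once” point: dithering deliberately adds a tiny amount of random noise when you reduce bit depth, so every extra dithering pass stacks another layer of noise on top. Here’s a rough sketch of TPDF (triangular) dither in plain Python; the function name is made up for illustration and this isn’t how any particular DAW implements it:

```python
import random

def dither_to_16bit(sample):
    """Quantise a float sample in [-1.0, 1.0] to a 16-bit integer,
    adding roughly one LSB of TPDF (triangular) dither noise first."""
    scaled = sample * 32767.0
    # The sum of two uniform randoms has a triangular distribution in (-1, 1)
    noise = random.random() - random.random()
    # Round to the nearest integer and clamp to the 16-bit range
    return max(-32768, min(32767, int(round(scaled + noise))))
```

Since every call adds its own noise, dithering a file that was already dithered just raises the noise floor again. That’s why you deliver undithered stereo files and let the mastering engineer dither once, at the very end.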

Jono

This isn’t necessarily so. Maybe they only worked with audio to begin with. Also, it’s simpler to do tutorials with straight-up audio tracks.

It also seems like they add EQ, effects and automation after converting to audio.

Again, not necessarily so. You said these tutorials you’re watching were all audio to begin with, so of course the processing is being done on audio tracks.

So I’m wondering: why do people do this? What are the advantages, and what is the correct procedure?

There is no “correct” way to do anything. There are ways to do things that are more efficient (saving CPU, increasing stability), or that make more sense for your workflow.

Do I record as MIDI, convert it and then add EQ, effects and automation afterwards? Can I add effects and stuff before converting, and why should/shouldn’t I do so? What if I just leave the tracks as MIDI and move on to mixing? This really confuses me…

Don’t worry… There’s nothing to be confused about.

Instrument Tracks and the VST Output channels for Rack instruments are audio tracks! They just don’t have recorded WAV files in them. So, by all means, you can mix with all your favorite plugins on the VST Output tracks all day long!

I keep any MIDI instruments running live as Instrument Tracks until I’m 100% sure I like what they’re playing. Then I bounce to audio to free up CPU.

Very often, I keep the drums running live MIDI right to the very final mix—'cos I’m a drummer and there’s always the chance of me “hearing” one tiny little change that could totally elevate the mix!

Here’s a screenshot of a big pop-country mix I just did.

All the tracks are audio—except the drums. They’re playing live MIDI the whole time. And just check all the juicy plugins I’m running on them, all while they’re playing as live Instrument Tracks:




You can leave everything running live as long as your computer’s happy running all the synths/plugins at once.

Second, I’ve noticed that I can add a bunch of effects in the “strip” section of the mixer, but most of them seem to be the same effects that are available through the inserts. What, if any, is the difference between adding an effect in the strip versus the inserts?

The Strip plugins are stripped-down versions of several of Steinberg’s plugins, conveniently laid out like a channel strip on a mixing console. They are very light on CPU. They also allow you to move the position of the Cubase EQ in the signal path, which can be very handy.

Technically, they are just another bank of plugin inserts. No different from the regular inserts, except that you can only load Steinberg’s plugins in the Strip.

And third, most mastering tutorials seem to export the project to a single stereo track after mixing, but not all of them do. Why, or why not, do that?

A final stereo track is what will go to market: iTunes, radio, Spotify and CDs all require a stereo file. Not sure exactly what they were doing in the videos, but there are lots of reasons to bounce out multiple files: it could be a surround mix, or “stems”.

Just stick with stereo mixes unless you are given a reason not to.

I’m sorry if the answers to my questions are very obvious, but it’s a lot to take in as I’m completely new to this, and I couldn’t find any satisfying answers myself. I don’t expect anyone to give me a full explanation of it all, and I understand there are many ways to do things, but some pointers and tips for tutorials or articles on these subjects would be much appreciated and would make me less confused.

Hey, no sweat—we all gotta start somewhere! Just start tracking and mixing and see where it takes you. Don’t worry a bit about what’s “right” and just see if you can get some sounds going—work however you feel! :smiley:

Good luck.

Jono’s & enjneer’s answers pretty much cover it. I especially liked Jono’s comments about mastering in your bedroom. Mastering is more about the ears of the person doing it and the fidelity of the room they are in than anything else.

Unlike them, I almost never render my MIDI to audio. This works fine as long as your computer can handle the load (FYI, folks around here generally put their DAW specs in their signatures, as that makes it easier for others to answer their questions). I leave it all as MIDI for two main reasons. First is pure laziness: rendering it out is just an extra task I don’t really need to do, so why do it? Second, even when I get to the point of mixing I might want to change the MIDI a bit. For example, I might want to bring out a piano part a bit more in one section. I could do this by raising the fader, or by editing the MIDI and increasing the velocity of the notes in that section. The second approach simulates the player hitting the keys harder, which sounds different from simply raising the volume. Unrendered MIDI gives me the flexibility to make that decision.

On the other hand, there are lots of folks who need to commit to a part at a certain point or they will just dither around forever playing with the various options. So a big part of this is figuring out what kind of workflow works for you, and that’s gonna involve some trial and error.
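To make that velocity-versus-fader difference concrete, here’s a toy Python sketch (the note tuples and function names are made up for illustration; nothing here is Cubase-specific). A fader move scales the finished audio uniformly, while a velocity edit rewrites the note data the instrument is asked to play:

```python
def boost_velocities(notes, factor):
    """notes: list of (start_beat, pitch, velocity) tuples.
    Scale velocities, clamping to the valid MIDI range 1..127."""
    return [(start, pitch, max(1, min(127, round(vel * factor))))
            for start, pitch, vel in notes]

def fader_gain(samples, db):
    """Scale audio samples uniformly by a fader move in decibels."""
    gain = 10 ** (db / 20.0)
    return [s * gain for s in samples]
```

The velocity edit typically triggers louder, brighter sample layers in the instrument, whereas the fader only makes the identical performance louder. Keeping the part as unrendered MIDI is what preserves that first option.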

You might also want to check out this free online course on music production from Berklee College of Music, starting in a couple of weeks:
https://www.coursera.org/course/musicproduction

Hi, maybe I can contribute a little here - I don’t know.

For me, Cubase offers every option I can think of regarding the given topic. For me personally, the question of whether to “convert MIDI to audio” (I don’t regard this as a conversion - I’ll explain later) is not only about computer power, but primarily about the “mental model” of the working stages. What do I mean by this?
Well: for me, a MIDI track is somewhat like a player (a physical person playing an instrument). This means that recording MIDI is “tracking” for me. As soon as tracking is done, I usually only want to have audio tracks for the “mixing” stage. In my “picture”, this is because the players/musicians have left the studio for now. If tracks turn out to be too bad for mixing, another tracking session is needed.
One might say that I am limiting myself by doing things this way, and yes, that’s true. Limiting myself, however, is important to me, because if I don’t, I tend to find myself in endless loops, never finishing the project. In addition, I tend to start working from an arrangement that I at least have in my head. This also influences my way of working.
This is completely different from, for example, “composing” with Cubase (which is also possible).

So - it all depends, and I am glad Cubase covers it all.

Cheers, Ernst

I always convert everything to audio in the end because it is a better way to archive projects. If that MIDI instrument or plugin no longer works in the future, your tracks will be lost forever.

That’s an excellent point & one I should take into account.

Yes. Definitely. Great advice.