fastest way to convert an item to audio

I think he's maybe confusing (timebase) resolution with (bit depth) resolution.

Yeah, I think he's talking about timebase resolution. In this case, yes. But it doesn't really matter in the real world, because the average ear only begins to notice differences at around a 20 ms delay…

Concerning the precedence effect, maybe, but 20 ms of timing error, especially at a fast tempo, is a lot, and does matter in the real world IMHO.

Here’s the test. Pick a VSTi with a sharp attack. Drums or piano work well.

Load it into the VSTi rack.

Create a MIDI track and assign it to the VI.

Create a stereo group channel. Assign the VI output there.

Create a new audio track. Set its INPUT source to the bus you created.

Record-enable BOTH the MIDI track and the audio track. Play stuff. Make it at least 8 bars. Doesn't matter if it's good; that's beside the point.

Once done, you will have an audio track and a MIDI track that recorded the same performance. Right?

Mute the audio for now.

Create another audio track, the same as above, only this time the MIDI you recorded will "play" for you while you record another audio track. Record that in real time.

Now, mute that one and go into Export Audio Mixdown. Select the same bus we've been using as the source. Render it OFFLINE, selecting the options to send it to the pool and to a new track.

Now, you have three audio tracks of the performance: the original "live" audio you performed, the MIDI of the same performance rendered in real time, and the offline render. And the three will be different. The first will be the most different, as it is your performance, not dumbed down by MIDI resolution. You should be able to clearly SEE how different it is. It's that far. The two MIDI-rendered tracks will be closer to each other; you will need to flip the phase on one to hear the differences.
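The phase-flip comparison in the last step can be sketched numerically. This is a hypothetical illustration, not the actual DAW signal path: the "takes" here are synthetic NumPy arrays, and the 10-sample shift is an assumed offset just to show how a null test reveals timing differences.

```python
import numpy as np

# Classic null test: flip the polarity of one take and sum.
# Identical audio cancels to silence; any timing offset leaves residue.
sr = 44100
t = np.arange(sr) / sr
take_a = np.sin(2 * np.pi * 440 * t)   # stand-in for the first render
take_b = np.roll(take_a, 10)           # same audio, shifted by 10 samples (assumed)

null = take_a + (-take_b)              # polarity flip + sum
rms = np.sqrt(np.mean(null ** 2))
print(f"residual RMS: {rms:.4f}")      # nonzero -> the takes differ
```

If the two renders were sample-identical, the residual RMS would be exactly zero; even a few samples of offset produces a clearly nonzero residue.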

Again, note: this is NOT a Steinberg "issue" (save maybe the variance of the offline render). The great difference is because MIDI recording does not have the resolution of audio. The audio recording IS the performance, as the performer heard it. The MIDI is always "something kind of close to the performance". Always has been. It's what, 40 years old now? 35? It's just a flaw in the protocol.

Like I said earlier, this isn't an "issue". It's not debatable, EXCEPT in how much you care about the recorder reproducing what the performer played. But it's why, IMO/IME, audio should be the way everything is recorded that is actually played by a player of an instrument. MIDI is for building and shaping bits of performances into something that wasn't playable. Example: if a drummer is playing an eKit, you record audio, to capture the drummer's actual timing; if someone is tapping kick patterns and overdubbing a snare and toms, etc., to build a drum part, you use MIDI.

Also, keep in mind, you can do the above to record BOTH simultaneously. So, if you have a perfect performance save one passage, you can use the audio, fix the MIDI on the botched passage, and re-render JUST that passage as audio and crossfade it into the original audio. Best of both worlds. Or, decide the timing changes of recording MIDI don't matter to your music. That's completely up to you.

I don't understand the above. If it's MIDI that's causing the problem, then the real-time playing (also via MIDI!) should be just as much out of time, right?

No, it's not real-time MIDI that's the issue. It's the way it's recorded/sequenced.

Are you familiar with PPQN (pulses per quarter note)? Is the default in Cubase 960 or 480? However high you set it, it will not map to the sample rate directly. Audio always takes more snapshots, at a regular, non-variable timebase. And not by a trivial amount.
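Rough back-of-the-envelope arithmetic makes the gap concrete. The values below (480 PPQN, 120 BPM, 44.1 kHz) are assumptions for illustration, not anything confirmed about a particular Cubase setup:

```python
# How coarse is a MIDI tick compared to an audio sample?
# Assumed values: 480 PPQN, 120 BPM, 44.1 kHz -- adjust for your project.
PPQN = 480           # MIDI ticks per quarter note
BPM = 120            # tempo
SAMPLE_RATE = 44100  # audio sample rate in Hz

quarter_note_s = 60.0 / BPM          # 0.5 s per quarter note at 120 BPM
tick_s = quarter_note_s / PPQN       # duration of one MIDI tick
sample_s = 1.0 / SAMPLE_RATE         # duration of one audio sample

print(f"one tick   = {tick_s * 1e6:.1f} us")               # ~1041.7 us
print(f"one sample = {sample_s * 1e6:.1f} us")             # ~22.7 us
print(f"samples per tick = {tick_s / sample_s:.1f}")       # ~45.9
# Worst-case error from snapping an event to the nearest tick is half a tick:
print(f"max snap error = {tick_s / 2 * 1000:.2f} ms")      # ~0.52 ms
```

So at these assumed settings, one MIDI tick spans roughly 46 audio samples, which is the "not a trivial amount" above; a higher PPQN or slower tempo shrinks the tick, but it never lands one-to-one on the sample grid.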

But, I get what you're saying, in theory. Steinberg could break all conventions and "fix" the timing. To me, it seems like you should have the option to use a linear timebase and have the MIDI "grid" follow the digital clock one to one. But, again, I've been doing this a quarter century, and no one has done it (that I'm aware of). And so, every chance I get, I use MIDI for less, and the music gets better for it.

I find it ironic that in a day of unlimited digital audio tracks with crazy editing capability, so many new engineers are relying on MIDI…

just wanted to bump :wink:

IMO a quick way to do this is the ONLY thing missing in Cubase. In Reaper you can "apply FX to take", and it practically instantly makes an audio file out of the MIDI data; any VST in the chain gets rendered into it as well. IMO it should be modeled like the record button: all DAWs have a record button. This IMHO would make Cubase the most powerful DAW out there, bar none.

Now, the audio rendered in Reaper does drift from where the MIDI notes were, by maybe 5-10 samples or more, forward or back, but it's still audio, still glued together, and still sounds nice.

Reaper is horribly limited in how you can work with VIs, IMO, which is how they allow it.

How would you implement it for a VI I'm sending 5 MIDI tracks to? It's only effective for instrument tracks, which are horribly limiting even for simple keyboard parts.

Multichannel drums will have multiple MIDI tracks pointed at the same VI, with multiple audio outs into the mixer. How is that "one-button converted"?

Strings and horns are almost always going to be a many-MIDI-tracks-to-one-VI relationship.


So, they already have freezing, which converts it to audio. But I understand you want the visual waveform to edit, which you can get by selectively exporting to a track. I think this kind of feature is something where you have to institute limitations on how you work with VIs in order to do it. The lack of those limitations, and the most mature external I/O compensation, are the two reasons I use Cubase. Maybe you would just be better suited with another DAW.

Pfft. I'm insulted by your suggestion, popmann. That is not a solution to this problem. This is a feature request that we are completely entitled to make without being told to go to some other DAW. Your suggestion is just immature.

It’s not immature at all. Be offended if you want. Wasn’t the intention.

I’ve explained why others can implement it and Cubase can’t without fundamentally changing the thing that makes it better for VI hosting/usage, IMO.

What you can do, and this is what I'll usually do, is split the arrange horizontally and place your 'in-place bounce' track on top. This way, your record track is always at the top, always visible, no matter where you're zoomed to in the lower part of the arrange. Recording is then a matter of arming and pressing record.

What is a hassle, though, is that ideally you'll want to bypass any 2-buss processing you have set up, so that after placing the newly recorded audio in the project, it doesn't get treated by the 2-buss processes a second time. I suppose the master bus bypass could be addressed by some clever PLE macro (search for the track name 'master' [or however you call it], have the PLE select that track, then follow that with a 'bypass inserts on selected channel'). Alternatively, route all your tracks to a 'pre-2-buss' group channel with no inserts on it whatsoever, and then select that group as the input for your 'in-place bounce' track.