This is my second try at posting here. I am a recent user of Cubase, and so far I like a lot of things about its workflow. Question: how do users typically work with VST guitar amp modeling plugins, track with them live, and then render down (without using FREEZE or other faster-than-realtime options)?
I can’t easily route a track’s output to the input of a new blank track for real-time rendering.
The best practice is to record the dry signal; then you can change all the settings while mixing. So while recording, you apply the plug-ins to the audio track as Insert effects. You have to take the latency into account then.
Then you can edit it, mix it… In Export > Audio Mixdown, you can choose to export in realtime or in offline mode (which is enabled by default).
You don’t need to have the plug-in on the input channel.
If you instead put it on the track you’re recording to, you’ll still be able to hear the effect of the plugin whilst recording, but you can change its settings afterwards, as you’ve only recorded the dry guitar.
Which version of Cubase do you have (Elements/Artist/Pro)? Not being able to route tracks back into others is a limitation of the lower versions.
How do you easily record a track to another one? In Studio One, let’s say I have one track with Amplitube 4 as an insert effect and I have recorded guitar to that track. I can then add a second track, set its input to the first track, and record on it: click transport record, sit back, and make sure I don’t hear any pops/dropouts/anomalies during the bounce.
I can’t figure out how to easily do this in Cubase. Thanks.
I think there is some confusion here. I create a new audio track, add the Amplitube plugin as an insert effect on the track, and set record and monitor to ON. Then I click transport record. I can then hear the guitar going through the Amplitube plugin while I play/record.
It’s exactly the same in Cubase, but you don’t need to indicate a specific input: if you solo your tracks (the guitar track and the audio track that will record), you’ll record only your guitar.
But seriously, I don’t see the advantage of recording in real time; it’s so simple to render or mix down and listen to the result afterwards…
I hate to bring this up, but when you play the track back, have you remembered to turn Input Monitoring OFF? If you don’t, you won’t hear a thing. I mention this because it seems you are doing everything right otherwise.
I don’t understand the first sentence. I have outlined two scenarios in this thread: first, recording a guitar track using a VST insert amp modeler plugin like Amplitube; and second, using a second track to bounce the first track to. In the first scenario, the input is my audio interface channel, with my guitar coming in via a Radial direct box. In the second scenario, the input to the original track would not matter, but the input to the new bounce track would be the original guitar track that I just recorded. Make sense now?
And as for your second comment, it’s a matter of saving time. When I bounce a lot of tracks over the course of a project, that extra time adds up.
If you route your track to a group channel, you can use that group channel as an input for a second audio track.
Alternatively, you can create an Output Bus that is not connected to any physical output, and use it as the output of the first guitar track and the input of the second track.
I would not bother, just do a “Render In Place” when you want to commit the changes to a new track.
I read about the group channel option in another post while researching; the Output Bus option is new to me but sounds like the same amount of work. I hope Cubase will get updated to allow better routing — easier/quicker routing of tracks as inputs to other tracks (a common feature in DAWs).
Btw - Render In Place is faster than real time and can cause dropouts/pops/clicks/anomalies. I don’t trust faster-than-realtime rendering anymore. Been burned too many times in the past. Thanks.
It may sound like a lot of work, but it really doesn’t take long to set up (a minute?). If it’s something you’re going to use regularly, you can have it set up in your templates, so it’s there waiting for you.
It’s unlikely this is very high on their list, but it’s more likely to happen if you either add your +1 to an existing suggestion (it does come up occasionally from new users, so one may already be there) or create one in the Suggestions section of the forum.
We could debate this back and forth. All I can do is go by my experience thus far: when I bounce in real time with decent buffer settings on my high-powered system, I never have any issues in any DAW I use. Thanks.
So last night I did a test with Render In Place (R.I.P.) using the EastWest Play VSTi. The cellos ended up having crackles after the render. So… I can spend my short life chasing down the perfect settings within the entire signal chain for this one VSTi track to fix the issue, or I can just bounce in real time and be done with it.
Note the sequence: I spent time writing the cello piece, listening to it in context with the entire mix, then doing the R.I.P., THEN listening back to verify it was a clean render… noticing the crackles that didn’t previously exist, undoing the render, re-bouncing in real time, and finally listening again — and all was good. My point being that R.I.P. adds more time, as I have to listen back and verify that it was indeed a good bounce before moving on in the mix.