I’m considering upgrading from Cubase to Nuendo for post-production and game audio purposes, but there’s a major issue/doubt I have with these Steinberg DAWs.
At least in Cubase, if I try to import an audio file whose sample rate differs from the current project’s, it always shows a dialog asking whether to convert it.
Until now I’d always think “alright” and hit OK. But this might be where the issue starts, because…
What if I want to polish samples recorded at 192kHz with plugins and preserve all the high-frequency information? I HAVE TO set my project sample rate to 192kHz!
Reaper casually handles audio of any sample rate in a project of any sample rate, and exports it as-is without losing information.
Cubase, on the other hand, cuts everything above the Nyquist frequency of the current project sample rate (half that rate) unless I set the project to the same rate as the imported samples, which demands a ridiculous amount of CPU power from my machine.
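To make the concern concrete: sample-rate conversion is band-limiting, so anything above half the target rate (the Nyquist frequency) is gone for good. Here is a minimal numpy sketch of FFT-based conversion, purely illustrative (real converters use polyphase filtering, not raw FFT truncation); it shows a 60kHz component simply vanishing on the way to 48kHz while a 1kHz tone survives untouched:

```python
import numpy as np

def fft_resample(x, sr_in, sr_out):
    """Naive band-limited sample-rate conversion by FFT truncation.
    Real converters use polyphase filters, but the key effect is the
    same: bins above the output Nyquist (sr_out / 2) are discarded."""
    n_out = int(len(x) * sr_out / sr_in)
    spectrum = np.fft.rfft(x)
    kept = spectrum[: n_out // 2 + 1]   # drop everything above the new Nyquist
    return np.fft.irfft(kept, n=n_out) * (n_out / len(x))

sr = 192_000
t = np.arange(sr) / sr                  # 1 second at 192kHz
# A 1kHz tone (audible) plus a 60kHz tone (ultrasonic)
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 60_000 * t)

y = fft_resample(x, sr, 48_000)         # convert to 48kHz
spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / 48_000)

print(round(spec[np.argmin(np.abs(freqs - 1_000))], 3))   # 0.5 -> 1kHz tone survives
print(round(spec[np.argmin(np.abs(freqs - 12_000))], 3))  # 0.0 -> 60kHz is gone, not aliased
```

Note that the 60kHz content is discarded cleanly rather than folding down as aliasing; that is what proper conversion does, and it is exactly why the ultrasonic information cannot be recovered afterwards.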
Is this also how Nuendo works? Is there any workaround that doesn’t force me to set a very high project sample rate?
If you’re working with ultrasonic sounds in Nuendo for post-production or game audio, you might need to handle frequencies beyond the standard hearing range. Nuendo supports high-resolution audio, so ensure your project sample rate is set high enough (e.g., 96kHz or 192kHz) to preserve ultrasonic details. Also, consider using spectral editing in the SpectraLayers extension for precise manipulation. Are you facing specific playback or processing issues with ultrasonic content?
My audio interface doesn’t support 192kHz playback, so that’s a big problem. Also, given my machine’s power, Nuendo will likely crash very frequently.
I usually work in 48kHz sessions. However, if I create some awesome sounds from that ultrasonic material in a 48kHz project, I won’t be able to re-use/re-manipulate them in other projects, because they’ll contain frequencies only up to 24kHz (the Nyquist limit of 48kHz) and I won’t be able to pitch them down if I really need to.
The inflexibility of converting sample rates is probably what I’m afraid of the most.
In Reaper, the only perfect “export” (it’s not called that in Reaper anyway) is an offline render, but the biggest concern here is playback. If you want to process 192k files at their native sample rate, even in Reaper you cannot monitor your changes with your current setup. Playback resampling resamples on the fly, yes, and it is non-destructive (to the file) since it’s done in real time, but the processing you monitor is at 48k. So whatever “polish” you’re hoping to add to the 192k file, you can’t monitor it to know what you’re modifying. That’s what I would be most afraid of: not being able to guarantee the work because I can’t actually hear it.
Out of interest: what kind of 192kHz source material are you working with? Metal sounds, or something else you recorded or downloaded?
How do you treat these sounds / what kind of manipulation are you referring to?
I assume it’s not about “polishing”, but about keeping the ultra-HF content intact for extreme sound-design FX, e.g. transposing a sample down three or four octaves.
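That is exactly where the sample-rate ceiling bites: a pitch drop of n octaves divides every frequency by 2^n, so the highest frequency surviving the shift is the source’s Nyquist limit divided by 2^n. A back-of-the-envelope sketch (illustrative arithmetic only, not any DAW’s actual pitch algorithm; the helper name is made up):

```python
# Illustrative: why ultrasonic headroom matters when transposing down.
# Dropping n octaves divides every frequency by 2**n, so the brightest
# surviving content is the source Nyquist limit / 2**n.
def top_frequency_after_shift(sample_rate_hz, octaves_down):
    nyquist_hz = sample_rate_hz / 2
    return nyquist_hz / (2 ** octaves_down)

# A 192kHz capture dropped 3 octaves still reaches 12kHz of brightness...
print(top_frequency_after_shift(192_000, 3))  # 12000.0
# ...while the same sound bounced at 48kHz tops out at a dull 3kHz.
print(top_frequency_after_shift(48_000, 3))   # 3000.0
```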
I see “polish […] with plugins” which is certainly open to interpretation, but again, processing with plugins (plural) implies making modifications that ideally one would want to monitor before delivering to someone. At least, that’s how I operate, everyone is different.
You don’t have to convert the file when you import. Unselect Sample Rate and it will preserve the 192kHz file. It will, however, adjust the playback speed based on your project setting. You can then use the resample tool to adjust the sample rate as a sort of information stamp on the file, so it plays back at the appropriate speed without affecting the integrity of the original file content. Though when working with 192kHz recordings, it might be good practice to actually work in a 192kHz session for processing, exporting, etc. So you’re saying that with Reaper you can be in a 96kHz session and render the file back out at 192kHz? How do the plugins deliver the audio information at 192kHz if the session isn’t at that setting?
Okay, interesting to note: Reaper does have some interesting methods that I think Steinberg needs to keep paying attention to. Nuendo can essentially work with different sample rates, as I do all of the time as outlined above, although I’m not 100% sure about being able to render them back out of a session that isn’t at 192kHz. I’ll have to investigate that one of these days.
I also very much prefer Reaper’s behavior with higher sample rate files. One possible workflow improvement for Nuendo without changing too much could simply be showing the import sample rate conversion as a Process on the Direct Offline Process Window.
So, you import a 192kHz file…
SRC it to 48kHz (which now shows up in DOP)…
Realize you want to apply processing to the 192kHz version…
Either remove the SRC process or put your desired processes prior to it…
Right now, being stuck with one sample rate is a pretty annoying inconvenience in the scenario above. You have to delete the clip… remove the audio file from the pool… then re-import.
Working in a 192kHz session is of course another solution, but it has its own drawbacks. Now every 48kHz file has to be converted to 192… every video’s audio you import becomes 192.
Other than Reaper, I am unaware of any other DAW handling multiple sample rates inside one session. Maybe, then, use Reaper for those instances? Or make sure you are prepared to use a 192kHz sample rate in a separate session for the times you need it? Although it may seem convenient, and can be, using different sample rates inside the same session might cause other issues (aliasing, unwanted artifacts popping up at random).
I am a big fan of keeping things stable, and at the same sample rate / bit depth, etc.
Mixed sample rates are absolute hell as soon as you have to exchange projects with other people. For the price of a tiny quality improvement, your collaborators will be pulling their hair out in frustration. It is really not worth it…
I think there are many different ways in which users work with ultrasonic content/audio files.
In general I think it would be a great option to have in N15/16. It could be great for sound-effects producers and sound designers who create assets for games. For linear work (film etc.) I see no use and a lot of possible headaches indeed.
I am uncertain, however, whether SB/Nuendo’s ASIO framework is capable of providing it. Reaper is a different beast entirely.
I’ve been following this thread with interest, and I’m a bit confused about the overall use case here, particularly when it comes to Nuendo as the solution for the specified stage of production.
If someone could kind of “fill in the blanks” for me, I’d appreciate it. I adore experimental sound design and living in the moments of “happy accidents.” I’ve kicked around with ultrasonic, but outside the novelty, I haven’t seen or heard anything I can’t already create while working towards “intent-oriented” goals. I understand how ultrasonic sources algorithmically “stretch” and “pitch” better, but I don’t understand how that fits into what seems to be the “ask” here. Is the request that Nuendo support an entire post-production project at 192k while also supporting multiple, disparate sample rates within the project so that “yet to be determined” opportunities of manipulating ultrasonic sources can be explored?
I’m even more confused about game-audio applications where most of those creative and dev decisions should have been made long before? Not to mention the general application for audio artifacts created from ultrasonic sources in the first place?
I guess I’m grappling with scope here, and wondering what the real-world applications are. If this post is too far off subject, then I apologize.
No worries:
Imagine the following. You have recorded metallic scrapes and impacts with a Sanken C100K in a studio, and also used some contact mics and dynamic mics. You recorded with multiple recorders at different sample rates: the Sanken at 192kHz, the rest at 96kHz (since those mics don’t go beyond 20kHz).
Now you want to edit these recordings and master them into a set of assets for a game or a sound effects library.
No one is going to inherit the session, just export the sounds at the specified rate (Deliverables are needed at 96khz 24bit and 48khz 16bit).
You now have a choice:
resample the 192kHz files to 96kHz, destroying any spectral content above 48kHz (the 96kHz Nyquist limit).
resample your 96kHz files to 192kHz and edit everything in a 192kHz session. A waste of space, but possible.
Now imagine you also have some great old metal impacts you recorded at 48kHz with your iPhone (or a DAT recorder), and you want to add those to the mix. What do you do now? Resample them as well? More wasted space.
Of course one can argue that space is cheap, but why not have the choice to do it differently?
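For what it’s worth, the space cost is easy to quantify. A rough uncompressed-PCM estimate (the helper is hypothetical, plain arithmetic):

```python
# Illustrative storage arithmetic for uncompressed PCM audio.
def wav_size_mb(minutes, sample_rate, bit_depth, channels=1):
    """Approximate uncompressed PCM size in megabytes (ignores header)."""
    bytes_total = minutes * 60 * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

# One hour of mono 24-bit field recordings:
print(wav_size_mb(60, 96_000, 24))   # ~1036.8 MB at 96kHz
print(wav_size_mb(60, 192_000, 24))  # ~2073.6 MB at 192kHz
```

Doubling the rate doubles the footprint, with no audible benefit for mics that stop at 20kHz; that is the whole “wasted space” trade-off in one line of arithmetic.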
As you can see there are reasons to support mixed samplerate sessions. There can be numerous other scenarios where this is useful. A necessity it is not, but I’d love it.
Hope this helps understand the request by giving more context.
Ah, thank you. This isn’t for the project itself; this example is about building the library in the first place. That makes more sense to me. This whole time I was considering “wouldn’t it be ‘better’ (the subjective ‘better’) to use something like WaveLab for this?”, but I have now adjusted my scope accordingly. Thank you for taking the time to reply!