Nuendo in 23

I know people use this, but I never have and never missed it, and the last time I ran into another person (mis)using it, it was a nightmare undoing their crappy ‘automation’. The PT implementation is (was?) awful in the sense that once that clip gain automation has been written, it’s not subject to the same editing tools/key commands as normal editing. So a 90-degree angle in automation causing a click can’t be solved by highlighting a range across those nodes and hitting “x” to create a ‘ramp’ in PT; instead I had to use the pencil tool to edit. Much, much slower and infinitely more annoying.

I’m generally all for options though so if people get it I won’t complain.

You lost me at several places at once.
But very probably it is me being in my own zone.

OK, try this:
Take that file and drop it into your project “at origin” (or move it @ origin).
The file must snap into the correct place, no!?
Then you can trim all you want so it matches the gaps or whatever in your project.
What am I missing?

It must be that Pro Tools rounds the number to the closest frame, because origin time is expressed in samples within the BWAV header.
And for what it is worth (not bashing PT here), frame-accurate is absolutely not good enough for me.
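
Roughly what I mean by that rounding, as a quick Python sketch of my own (48 kHz and 23.976 are just example values here, this is not anything PT actually does internally):

```python
# Sketch only: the bext chunk's TimeReference is a sample count since midnight,
# so any frame-based display has to round away the sub-frame remainder.
def origin_to_display_tc(time_reference_samples, sample_rate=48000, fps=23.976):
    seconds = time_reference_samples / sample_rate      # sample-accurate origin
    whole = int(seconds)
    ff = round((seconds - whole) * fps)                 # sub-frame detail is lost here
    if ff >= round(fps):                                # carry if it rounds up to a full second
        whole, ff = whole + 1, 0
    hh, rem = divmod(whole, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(origin_to_display_tc(172_732_560))  # 3598.595 s into the day -> "00:59:58:14"
```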

I must be missing something, but where the event starts and/or ends is not part of the Broadcast Wave header. The placement and length of an event are included in an EDL (or AAF/XML).

Origin Time IN / Origin Time OUT / Destination IN / Destination OUT

Without the above information, it is impossible to know where the event starting point is.
If we simplify the above EDL and simply keep the Origin Time, then the event can be placed at the correct time/place within your project.
But the information about the “trim” of the starting point is simply not present in the BWAV header.
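
To make that concrete, a tiny sketch (the field names are mine, this is not any real EDL/AAF schema):

```python
from dataclasses import dataclass

@dataclass
class EdlEvent:                 # hypothetical, stripped-down EDL event
    origin_in: float            # source TC where the used portion starts (seconds)
    origin_out: float           # source TC where the used portion ends
    destination_in: float       # timeline TC where the event is placed
    destination_out: float      # timeline TC where the event ends

def head_trim(event: EdlEvent, file_origin: float) -> float:
    # The BWAV header only tells you file_origin (the origin of the whole file).
    # The head trim is origin_in minus that value - and origin_in is exactly the
    # piece of information that is not stored in the wave file itself.
    return event.origin_in - file_origin
```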

So there must be an additional step or additional information in your workflow.
Again, don’t get me wrong, I am not trying to prove you wrong.
I just try to understand.

And once again, the way PT imports & converts the files upon importing is great.

Greetz
Fredo


@Fredo no worries, it might be hard to follow how I explained it.

Take this scenario: DX tracks in a 2hr docu

We’re past picture lock. They want to switch out the narration, and it’s new words, so there’s no waveform matching. They give me the clip from the new VO recording and say that the replacement clip starts at Audio SRC TC 1:02:01:13. So I bring the clip into the track below. I can see the origin being 0:59:58:595 and offset 0. Now I need to trim the head so the event starts at origin 01:02:01:13. And I also know that in the NLE the clip started at 1:24:43:21 on the timeline (both data points may have come from a re-conform EDL or similar information).

Absent anything in Nuendo showing me the SRC TC of the event start, I can work out that I need to trim 2:02:23 from the front - after using two calculators outside Nuendo (.595 is 14 frames @ 23.976). So I drag the front until the offset reads 2:02.959, and then align that with snapping to the grid at timeline TC 1:24:43:21.

And I agree that a frame boundary is not precise enough for the final edit. But I want to be frame-aligned when I bring the clip in so I keep sync. Once I know it’s lined up properly, the same way it was in the NLE, I can refine the event start sub-frame to improve on that.

If Nuendo were to display the converted SRC TC in the event description or in the status line, then I could simply trim until that number matches what editorial gave me.

The math for what is displayed should be simple: take the Origin TC, add the offset, and round to the nearest frame boundary (based either on the audio metadata or the project frame rate, with a last-resort override).
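
In rough Python, something like this (just a sketch of the idea; the helper is mine, and it ignores drop-frame, which doesn’t apply at 23.976 anyway):

```python
def seconds_to_tc(seconds, fps=23.976):
    """Display as hh:mm:ss:ff, rounding the fractional second to the nearest frame."""
    whole = int(seconds)
    ff = round((seconds - whole) * fps)
    if ff >= round(fps):                       # carry if it rounds up to a whole second
        whole, ff = whole + 1, 0
    hh, rem = divmod(whole, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Effective Source Start TC = file Origin TC + current head trim (offset)
origin = 59 * 60 + 58.595                      # the 0:59:58.595 origin from my example
offset = 2 * 60 + 2.959                        # the 2:02.959 trimmed off the front
print(seconds_to_tc(origin + offset))          # -> "01:02:01:13", matching editorial
```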

Also worth noting that the BWAV metadata does contain the frame rate in the description field, so the display conversion could use that to round to the nearest frame:

Description : sSPEED=023.976-ND / sTAKE=01 / sUBITS=$00000000 / sSWVER=8.04.1993 / sSCENE=1 / sFILENAME=1T01_01.wav / sTAPE=AKGEN / sCIRCLED=FALSE / sNOTE= /
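
So pulling the rate out of that string is a one-liner, roughly like this (the regex is my own and only a guess at the variations different recorders write):

```python
import re

desc = ("sSPEED=023.976-ND / sTAKE=01 / sUBITS=$00000000 / sSWVER=8.04.1993 / "
        "sSCENE=1 / sFILENAME=1T01_01.wav / sTAPE=AKGEN / sCIRCLED=FALSE / sNOTE= /")

m = re.search(r"sSPEED=(\d+(?:\.\d+)?)", desc)
fps = float(m.group(1)) if m else None   # fall back to the project frame rate if missing
print(fps)                               # -> 23.976
```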

To clarify, I don’t need Nuendo to automate that trim based on the information. But I’d like it to help me avoid having to do the math every time (in my head or with a calculator). As long as it can display the effective Source Start TC of the event based on the current trim, I have enough information to efficiently trim the clip by hand (yes, if it’s an hour-long clip that may be a bit unwieldy, but it’s still just a few clicks/drags).

I know you’re asking for some sort of technical help from Nuendo, but I can’t help but feel your workflow is needlessly complicated. If I were in your shoes I’d first of all like to know that the time the editor used as a reference was 1hr, not the actual start TC of the file. So what I’d do is just place the file at 1hr and then type in 01:02:01:13. Then trim the start of the event to the locator and then just snap it to 01:24:43:21 or wherever it was that the VO is supposed to start.

I would also not care about one frame here or there because nobody is going to hear that difference in context of the actual mix. To the extent it’s important to be close to the first version I would simply eyeball it - visually line it up with the previous take.

And if this is for some reason really important to other parts of the production, i.e. either the editor or director or producer, then they should either sit down with me so that we can time it correctly or, more commonly, sit down with an editor, place it to their liking, and then give me an AAF and not an audio file.

So while I ‘get’ what you’re asking for I really don’t see a particularly large time saving based on it. I’ve done this for a couple of decades now and the way I do it has been fine every single time. No math.


@MattiasNYC Totally appreciate that perspective and you are right in many ways. Can we do it without this? Absolutely.

This is likely a case of ‘this is how another tool does it’ tunnel vision, where you then want to work the same way here, rather than looking at a blank slate and seeing what the best solution is.

It would be a quality of life improvement in the context of ‘what would I love to see in Nuendo 23’. And we’re 95% there since the ‘origin time’ in the description already exists; we just need a variation of the simple math. But there will be bigger fish to fry for Nuendo 23. So we can put this to rest now. But thanks for considering the possibilities :slight_smile:

‘‘If I were in your shoes I’d first of all like to know that the time the editor used as a reference was 1hr, not the actual start TC of the file’’.

Should this not be the given norm, due to the inconsistency of starting at 00:00:00:00?

Well, at least here the “norm” doesn’t really exist. There are a bunch of different ways engineers record VO and a bunch of different ways directors or editors mark takes. Unless specified ahead of time there’s no telling what you get.

In Nuendo clip gain isn’t available as an automation parameter; it’s a clip attribute. It is valuable if you have to deal with audio files that were recorded way too low, when you need more than 12 dB to level them out and don’t want to do it through a plugin or Direct Offline Processing (easier to see than having to pull up DOP and re-edit).

I like using it during the DX edit to get all DX peaks around -12 dB and into even territory, then use volume automation after that just for the mixing part. It keeps those two processes separate and easy to re-adjust, and you see along the volume lane what’s going on.

And yes, it would be nice if the ‘x’ worked in Nuendo, which has been a long-running FR. There are other solutions, so it’s another one of those tunnel vision things.

I know, that’s why I used quotes around the word ‘automation’. I know how it works. And I do the same - set clip gain to get the level in the ballpark and then automate the rest. I just split clips and adjust that way.

I was actually referring to PT. But yeah, same thing.

I would love to see:

  1. The ability to render multiple video cues as we can with audio file tasks. Put our video cue jobs in the job window and let the renders begin, instead of doing each individually.

  2. An optional “Key Signature” track for those needing it for specific midi exports for sheet music prep. Some of us composers change key signatures from time to time, and there is no function yet to support this.

  3. The ability to organize our expression maps within Nuendo either by category or alphabetically

Nuendo support for all native M1 UAD plugins from Spark

A frigging working mute button on the VCA masters.


Yes, mute for VCA masters…and please let the slaves remember their individual mute status (even if I dare to save and close the project with the VCA master muted), without the need to write automation for it first.

A little love for:
EUCON
ARA
Automation
AAF Import (source TC for mxf?)
Flexible channel width or a more reliable convert function (it seems real-world use cases are very often “too complex”)

Cheers,
Dirx


Second that.

Some of that could be solved by allowing in-place conversion. Since you can have a stereo clip on a mono track, I’d love to be able to select two or more mono events (usually the mono pairs of a stereo track when you come from Avid MC) and just have them converted in line (not on a new track!), placed on whatever the uppermost track is (it can even overlap the original), and then I can drag it to the appropriate spot.

That way I wouldn’t have to bulk-do a whole track full of clips, get an error, fix fades, try again 15 times until it’s happy, then drag the clips to where I want them and delete the temp tracks. This is way more work than it needs to be.

Short of that, add an option to the current convert function to just remove any incompatible fades automatically and/or match the length of the longer of the two clips if possible. The most common errors are fades being off by 1 frame on one track, one track having a minor trim inconsistency, and in some cases an event being split on one track but not the other. If it’s a continuous segment of the original file, just join them.


Excuse my jumping in.

Include all or most of Dorico’s traditional scoring capabilities in Nuendo, so there might be an app which a composer with needs similar to a Hollywood composer’s - someone who needs both scoring capabilities and the ability to create high-quality audio masters of their work - can use instead of having to shuffle data in and out just so they can add four bars to the score, or make last-minute edits to an audio track.

As part of making an all-in-one app, introduce some kind of universal audio/MIDI/notation file format which builds on the existing individual formats for each of the three and adds the ability to specify links between audio, MIDI, and notation components. For example, one could say in the file format that this audio track is the playback of this notation. And one could also have dependencies between the three types of data - full or partial linking between audio, MIDI, and notation. For example, an audio track could be set so that after or during audio recording a MIDI version of the track is created, and/or a notation version, or both (notation differing from MIDI in having fewer rests). Or when editing notation one could choose to change only the pitches of the MIDI data but not the velocities or durations. Or disconnect the notation once it is as desired so that it isn’t changed by audio and MIDI editing. That kind of thing.
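
To make the linking idea a bit more concrete, here is a toy sketch of what I mean (all names invented by me - this is not any existing Steinberg format):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AudioTrack:
    file_path: str                      # or an embedded/referenced audio payload

@dataclass
class MidiTrack:
    events: list = field(default_factory=list)

@dataclass
class NotationTrack:
    music_xml: str = ""                 # or whatever notation payload

@dataclass
class LinkedTrack:
    audio: Optional[AudioTrack] = None
    midi: Optional[MidiTrack] = None
    notation: Optional[NotationTrack] = None
    # How edits propagate between the three representations, e.g. a recorded
    # audio track generating a MIDI and/or notation version, notation edits
    # changing only MIDI pitches, or "locked" so nothing propagates any more.
    link_mode: str = "audio_generates_midi_and_notation"
```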

And then completely redo the interface of the mega app - the current Nuendo and Cubase interfaces feel kind of art deco more than pro app to me (these are of course contentious areas - others may disagree).

I recognise that these are two huge suggestions - but from reading here my impression is that people aren’t looking for a whole lot of new audio and MIDI features - but instead for workability. I hope that people can at least see the reasoning that in 2023 processing power isn’t an obstacle to having apps that do it all - and that it would be great if there was one company (who else but Steinberg has full featured notation, audio and MIDI apps?) who created an app which didn’t require endless importing and exporting of data.

This development wouldn’t stop Steinberg from being able to sell subsets of the combined functionality at prices reflective of what is included.

I also have suggestions for how edit and arrange windows could function so as to have no bias toward those working in minutes, seconds and frames - or in bars and beats (when one imports or adds particular data, the timeline changes to reflect the data that has been added). So my ideas, in being pro-musician, don’t have to be anti the audio engineer non-musician. You use the features you want and ignore the rest (by virtue of how the mega app is designed).

I vaguely remember a MIDI enabled coffee maker in the 90s. It would brew coffee when there was no more input on MIDI for a certain time, and when the coffee was ready, it would just play all notes on all MIDI channels to wake you up.

Now I‘m not sure anymore if this was only a proof of concept or even just an idea, too long ago :wink: (but no drugs involved apart from caffeine!)

The holy grail?

Audio is not the only field chasing that. The hunt has been going on in video for years. A current close contender is DaVinci Resolve. And there are some learnings that may temper our desire.

While there are obvious workflow benefits, there is also significant cost. Mostly that the complexity of the everything app becomes such that it is hard to keep it consistent across most aspects of UI design, and it becomes almost impossible to test a new release because there are so many use cases that have to be run through. The result is slower release cycles, more bugs, many more inconsistencies, and constantly changing keyboard shortcuts as multiple merged apps compete with overlapping meanings. Compute power is not the limitation that holds these back; it’s the ability to make it usable and reliable that’s the boundary you face.

That said, both audio and video workflows suffer greatly from too many brittle conforms and data interchanges. The fact that we still use EDLs, which go back decades, is mind-boggling. So what I would love to see is not a mega app, but several apps that share a common data model and a database where all the work is stored at a higher level of integration. That makes it easier to work on a project from different apps without undue overhead and risk, but keeps the apps themselves focused on the tasks at hand. Opening a project from both apps could be as simple as opening a Google Sheet, with live data updates (where desired). That is something that is possible in several video apps these days.

My $0.02 (as someone who had a prior career building big apps)

@allklier I consider your reply helpful - since you are addressing the problem even if the solution you recommend is different. I’m not saying that I definitely have the answer to the problem - I’m saying that there is a problem that needs addressing for a number of professional people - and shuffling data between apps isn’t ever going to address it. But as you point out - if apps shared a common file structure people could change apps like they change windows in one app.

But let me also say this - while the common data model you suggest might ultimately prove to be the better way to proceed, I am concerned by the way in which some people think about the problem we are discussing. My gut feeling is that the needs of a Hollywood composer don’t suddenly present app developers with a set of insurmountable circumstances which are unique - substantially more complex than what others developing complex pro creative apps (3D, architectural, video etc.) already have to solve. I hope that people can see why I am suspicious of any reasoning presented for why projects like I am suggesting aren’t viable.

Maybe it isn’t that they are unviable but rather that they require the most outstanding people to make happen - and enormous effort. Maybe for apps like this to work the people developing the app must be of the highest calibre - really brilliant technically - and also very much into the field in which they choose to develop. On that very issue I’m glad you bring up DaVinci Resolve, because I believe this app - which seems to be extraordinarily successful, gaining in popularity and influence by the day - demonstrates what a mega app can do. It shows how integration of features can have positive results and be workable. I cannot see why - even if, for example, the various components (particularly the Fairlight component of that app?) were to really increase their number of features - this has to cause the app to become unworkable or unstable or unusable. My sense is that Blackmagic Design - with their hands-on technical CEO (with all the benefits that brings and issues it raises) - are the perfect kind of company to deliver that kind of product.

This is why it seems to me that Steinberg - who has managed to deliver both Nuendo and Dorico - two powerhouse apps - is maybe the only company who has both the desire and capacity to make such an app. I also believe that there would be advantages for Steinberg going forward as they seek to position themselves in relation to other DAW companies - and those making increasingly capable free audio and notation apps.

When I suggested these things on the Dorico forum the reaction from those who replied was that they would be willing to pay money if any company is able to deliver a product which really does change the game for them.

The advantage of Steinberg’s position (I only realised this the other day - I had been thinking that if Dorico had existed inside a DAW from the beginning it would have been better) is that Dorico and Nuendo have each had the chance to become powerful and stable separately. Dorico as I understand it already has some integrations with Cubase/Nuendo - for example I believe it uses the Cubase/Nuendo audio engine.

This all makes sense.

From experience I can say: It’s tempting to jump in and suggest the solution. However, as a user we sometimes are better off focusing on the requirements and leaving it open ended to what solution gets us there.

What I take away from your description is that the primary use case that is currently unmet is to make score changes rapidly and with little friction, such that it does not interfere or unduly slow down your workflow, while using both features in Dorico and Nuendo.

That may be achievable in different ways; one is the mega app, one is a live database (not files) that is shared and can update in real time with both apps open. And there may be others that neither you nor I have articulated here that may be more feasible, benefit more users, etc. The product and tech guys at Steinberg can take that, prioritize it, brainstorm it, and hopefully pleasantly surprise you.

Yes - let’s hope the development teams of Dorico and Nuendo have better ideas than me!

A thought that comes to mind (I already suggested a three in one file format - audio, midi, notation) is that Steinberg make contact with Blackmagic Design and have a frank conversation:

“Would we be correct in saying that you have no aspirations to make Davinci Resolve into either a MIDI or notation editor? Because we have no plans to make Nuendo a video editor. In the light of this is there any way that we can work on a kind of quad file format for audio, MIDI, notation, and video - enabling audio professionals to work more closely with video pros?”

And then each company imports only some of the elements from the quad file format (the quad file format might only contain references to external video files - instead of including the files. And optionally the same caper with audio).

Then the separate-apps-with-a-common-file-format idea that you suggested, @allklier, would mean that a video app and an audio app are able to work alongside each other (instead of two audio/music apps).