Outside of Melodyne, how will this be useful to the rest of my plugins or Cubase?
As the universe currently exists it won’t. But the whole point is Melodyne.
I know this will sound flippant, but if you don’t already know you need it, then it won’t make any difference to you. I also realise that sounds a bit like rule number one of Fight Club. Think of it like this: regular plugins stream audio in one end and out the other – we’re all accustomed to that concept, which really dates back to the dawn of audio processing, with wires connecting boxes and waiting in realtime for stuff to get from A to B.
ARA allows a plugin to “see” all of the audio and process it coherently and in context.
Melodyne is the perfect example because it’s not possible to separate and process individual notes in realtime (that’s also why VariAudio is not a plugin). You have to “transfer” the audio in realtime to the Melodyne plugin, process it in the plugin, and then “transfer” it back. With ARA, you could process a track with Melodyne (or any other ARA plugin) just as easily as you can process a track with VariAudio today.
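To make the streaming-vs-whole-clip distinction concrete, here’s a minimal conceptual sketch in Python. The class and method names are hypothetical – this is NOT the actual ARA SDK (which is a C++ API from Celemony) – it just illustrates why a plugin that can see the entire clip can do things a block-by-block plugin can’t:

```python
# Conceptual sketch only: hypothetical names, NOT the real ARA SDK.

class StreamingPlugin:
    """A conventional insert effect: sees one small buffer at a time."""

    def process_block(self, block):
        # The plugin can only react to the samples in this block (plus
        # whatever state it has accumulated) -- it never sees the future.
        return [s * 0.5 for s in block]  # e.g. a simple gain reduction


class AraStylePlugin:
    """An ARA-style plugin: the host grants random access to the whole clip."""

    def analyze(self, whole_clip):
        # With the entire clip available up front, decisions can be made
        # in context -- here, finding the clip's overall peak level.
        self.peak = max(abs(s) for s in whole_clip) or 1.0

    def process_block(self, block):
        # Processing can now use knowledge of audio that hasn't
        # "streamed past" yet -- e.g. normalising to the global peak.
        return [s / self.peak for s in block]


clip = [0.1, -0.4, 0.25, 0.05]

streamed = StreamingPlugin().process_block(clip)

ara = AraStylePlugin()
ara.analyze(clip)                     # the analysis pass, done once by the host
contextual = ara.process_block(clip)  # scaled so the loudest sample hits 1.0
```

In the real thing it’s the host (Cubase) that hands the plugin the clip and keeps the analysis in sync with edits; the point is simply that the “ARA” version gets to look at all the audio before processing a single block, which is exactly what note-level editing like Melodyne’s requires.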
As another example, Steinberg have acquired SpectraLayers and are releasing it as an ARA2 plugin, so check out this video from NAMM – at 3:50 Robin (the developer of SpectraLayers) talks about ARA2 integration in Cubase and actually shows a demo of it in the lower zone.
About as useful as petrol is for a push bike.
…And no that doesn’t mean you can set fire to Cubase.
… or the color red to a blind man.
There are a number of scenarios where ARA2 could be useful:
- Melodyne (ok, you don’t want to hear about that - but remember Melodyne does more than just tune vocals)
- Auto-tune (ARA2 in Studio One - could be in Cubase too someday - still too similar?)
- VocAlign (ah…but Cubase also has something similar)
- SpectraLayers - spectral editing of audio
- Drum Replacement - Sonar had (and might still have) an ARA drum trigger/replacement built into the DAW
Think of where ARA could be useful: any scenario where you may want to edit the audio in a track in a way a plugin alone couldn’t facilitate. The more DAWs that support ARA2, the more likely we’ll see even more great uses for it that we haven’t seen yet. For instance, another candidate could be iZotope RX – a perfect fit for ARA processing. Hoping they add it.
RX itself, although not a plugin, does provide an actual plugin (RX Connect) to get around the limitations of VST, but it’s been broken on Windows for years. Although a hack can get it working, it’s still a clunky “transfer” process. ARA (should iZotope choose to implement it) would allow iZotope to offer RX as an in-process addition to Cubase – select a track and see it instantly in the Cubase lower zone, open in RX (should Steinberg choose to implement that).
Needless to say I expect WaveLab 10 to be ARA-compatible, giving us back the tight integration into Cubase that we haven’t had since SX3. I expect therefore that we will see lower zone integration of WaveLab and the upcoming SpectraLayers Pro rather than RX.
That SpectraLayers is insane. As if all this time we’ve been doing it wrong. I didn’t quite know what he meant by the “Photoshop of audio”, but he ain’t lying. That cutting and pasting is the ultimate kind of sidechain ducking. Having worked in graphics for a while, EQing in this way would have made a lot more sense to me.
I always loved those visualisers.
Just when you thought everything is the way it is and will be, new tech comes along that, ten years from now, will have produced a new breed of mixers who do everything completely differently. I don’t fix the track with EQ; I stroke an eraser over the bright part a few times till it’s the shade I need it to be to sound right.
Just don’t expect your eyes to be able to replace your ears. A color shade is only going to give an approximation.
Not arguing with what you say, as I agree, but in regards to accuracy it ‘could’ be argued that an objective fixed scale is more precise than a subjective ear placed within a dynamic environment (i.e. a room).
You could never create a real-life listening environment (i.e. soundproofed room, calibrated speakers etc.) that matches the absolute neutrality of digital processing tools – incredibly high accuracy, way beyond our ears… But that doesn’t make it right in a musical context, of course. It could viably be used as a very reliable tool in future, though.
As humans we build up a memory of what we hear, and therefore use A-B comparisons to verify a mix. The longer you’re listening to that track, the more fatigued the ear and the worse (potentially) the result – a computer doesn’t suffer from any of these issues.
Yes, I don’t think we are disagreeing. For certain surgical work it’s more useful than any other tool. I mostly focused on how we’re only able to perceive sound with our eyes in a very limited way. For example, it’s near impossible to understand what a voice is saying by just looking at a spectrogram of it, but our ears can pick it up no problem. In other words, I wouldn’t start mixing with it.
I was more impressed by the ability to select one and have it delete the clashes from the other, when used on separate tracks… And in most cases, for us normal folk, the color shade is more accurate than what we can hear. Obviously sound trumps, but our ears lie and the room alters. Another way to describe that point is that we subconsciously rebalance and adjust what (and how) we hear. It’s like our mind has its own mixing desk.
Check out the digital noise experiment: it plays what sounds like pure noise, then they tell you some words, and now you can’t unhear those words, even though you didn’t hear them before. It can take weeks to reset; for some it never resets. If you forget the words you might only make out some of it, but it does pop back, which is normal. This has always led me to believe the myth of golden ears lies in those who are broken, whose brain doesn’t fill in the gaps: they always hear what they hear, as if for the first time, with no previous knowledge of the sound, room, etc. accounted for (to some extent). If your brain had this forgetfulness to sound more than normal people’s, you’d be able to nail the perfect mix much quicker.
I could go on but yes, it would be silly to think ones eyes would replace their ears, it’s a moot point (if I can use that term here).
I doubt many of us will – we’re fader boys… but imagine a future where they listen and, instead of dragging down a fader, they rub over an eraser; they’re still using their ears to know when it’s right. Everything about how people used to mix could be flipped. No twiddling a virtual dial to EQ, etc.!
Cubase-Wavelab integration came back a few versions ago…but different.
My problem is I loved the SX3 and prior versions integration, but it’s been such a long time, I can’t remember exactly why I liked it so much better than the existing.
I do remember double-clicking on an audio part and WaveLab opening instead of the Cubase sample editor. Make the edits in WaveLab and they were automatically reflected in Cubase, correct? With the existing integration there are more clicks (i.e. open in WaveLab etc.), but it seems there is more to it than that. It’s been a very long time, and over those years the Cubase sample editor has vastly improved. These days I don’t use the current integration very much, and maybe that’s because it doesn’t feel like the old one.
I was told the reason for abandoning the original Cubase-Wavelab integration had to do with the introduction of unlimited Undos.
Currently the “integration” of WaveLab in Cubase is very basic and really only intended to aid workflow by tossing you into WaveLab after you render your project. If WaveLab gets ARA2 then it would be possible to see the WaveLab editor in the Cubase lower zone instead of the built-in sample editor.
Wait a sec. There are currently two different Cubase-WaveLab functions that I know of. To clarify, it seems based on what you said above that you are referring to option 2 below? Option 1 is what I’m referring to. It’s not as seamless as how I recall the original integration in SX3, and seems to have more hiccups or drawbacks, but it’s not after the project is rendered.
- Edit in WaveLab: Audio > Edit in WaveLab. This is a recent new feature… maybe C9?
- Export Audio Mixdown: File > Export > Audio Mixdown. In the “after export” section, check “open in WaveLab”. That option has been there for years.
Seeing Wavelab editor in the lower Cubase project zone, or a full window just like sample/drum/midi editors would be fine, assuming ARA brings the seamless integration that we had years ago with SX3 and prior versions.
I was thrilled when I saw this announced (C8 perhaps?) but weirdly, I find I seldom actually make use of this. I think it may be because any audio that would need to be treated in WaveLab, I would probably have treated in WaveLab already, prior to importing into Cubase. As it is currently implemented, it just starts WaveLab on the audio and then, well, you’re in WaveLab …
Yes, this is what I’m hoping for, but more – you’re already “in” WaveLab, within Cubase. What I’m looking forward to is to not have to think, OK, I was in Cubase but now I’m in WaveLab, what route is the audio taking, do I have to change some setting on the audio interface’s driver to accommodate the different applications, is there hidden sample rate conversion going on somewhere, all sorts of things. Sad, I know, but there have been cases … I suppose what I’m expecting is a single environment experience, across the entire Steinberg suite - Cubase/Nuendo, WaveLab, Dorico …
RX would be wonderful if it could be used with ARA.
But just to bring this up to date, since RX7, and the change in the Direct Offline Processing workflow, RX Connect has worked perfectly. No hacks required.
I have it as a Favourite in my Direct Offline Processing window and use it every day.
The only slight annoyance is that it doesn’t auto-apply, but that is only a small issue.
Other than that, it bounces back and forth happily. I highlight a region, hit the button, and RX opens. Edit and “send back to Cubase”. If I am unhappy, undo in Cubase takes me back to RX, and so on.
RX Connect is still broken in RX 7 on Windows and requires a hack to get it working. I originally reported it to iZotope in RX 6 but nothing has changed. They’ve hard-coded an incorrect path into the plugin, and in any case the installation doesn’t follow Windows programming guidelines and will not work at all if you run Windows using a non-Administrative account, as is best practice (which it seems most people ignore and therefore won’t encounter the problem).
But anyway, back on topic – yes, in the meantime we’ve got SpectraLayers in the lower zone in Cubase, thanks to ARA 2!