Node-Based Chaining for the Batch Processor

Node-based editing is a powerful way to handle very complex batch-processing tasks. It is used in apps like DaVinci Resolve and in AI image-generation tools such as ComfyUI. Since I started using node systems, I have come to value their immense versatility, and I naturally began dreaming up all the amazing possibilities this technology could bring to WaveLab’s Batch Processor.

What do you think?

I am not acquainted with the two applications you mention. What is the difference between “node-based editing” and a “plug-in chain”? Could it be conditional processing based on audio analysis?

I hadn’t fully considered the extent of the possibilities, but your idea is absolutely brilliant. Imagine a workflow where an “Input Node” links to an “Audio Analyzer Node,” which then feeds data into a “Conditional Node.” This could direct the flow of audio to various plugin chains (let’s call them Node A, B, and C) based on specific conditions. The processed audio or signal could then be routed simultaneously to an array of “Output Nodes,” such as MP3, WAV, or AIFF, converted to different bit rates, or even funneled through a “Dithering Node.” Unlike the current batch processor, which can only handle serial exports, a node-based system allows for parallel operations.
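To make this concrete, here is a minimal Python sketch of what such conditional routing could look like. Everything in it is hypothetical: the payload dictionary, the -16 LUFS threshold, and the chain and export stubs are illustrative stand-ins, not a real WaveLab API.

```python
# Minimal sketch of conditional node routing; all names, thresholds,
# and payload fields are hypothetical, not a real WaveLab API.

def analyzer(payload):
    payload["lufs"] = -12.5  # stand-in for real loudness analysis
    return payload

def conditional(payload, branches):
    # Route the file to exactly one plugin chain based on the analysis.
    chain = branches["loud"] if payload["lufs"] > -16.0 else branches["quiet"]
    return chain(payload)

def chain_a(payload):  # e.g. limiter + dither
    payload["processed_by"] = "Chain A"
    return payload

def chain_b(payload):  # e.g. gentle normalization
    payload["processed_by"] = "Chain B"
    return payload

def fan_out(payload, exporters):
    # Parallel exports: every output node receives its own copy.
    for export in exporters:
        export(dict(payload))

fan_out(
    conditional(analyzer({"file": "take01.wav"}),
                {"loud": chain_a, "quiet": chain_b}),
    [lambda p: print("WAV <-", p), lambda p: print("MP3 <-", p)],
)
```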

The beauty of this lies in constructing intricate signal- and file-processing architectures that not only work in parallel but can also change pathways dynamically. Existing batch processor plugins like Normalizer, Pan Normalizer, DC Remover, and others could be transformed into modular nodes. The game-changer is the flexibility to define the routing and parallel processing of signals or files. For instance, you could link a “Stereo Node” to dual “Mono Nodes,” process each channel separately through a specialized “Mono Plugin Chain Node,” and then converge them via a “Mono to Stereo Node,” eventually directing them to various “Stereo Export Nodes” for parallel conversion to formats like WAV, MP3, and AIFF. You could even add conditional routing in between!
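As a rough illustration of that stereo split-and-merge path, here is a sketch using NumPy arrays as stand-ins for audio buffers; the gain value and function names are invented for the example.

```python
import numpy as np

def stereo_to_mono_pair(stereo):       # "Stereo Node" feeding two "Mono Nodes"
    return stereo[:, 0], stereo[:, 1]

def mono_chain(mono, gain):            # stand-in "Mono Plugin Chain Node"
    return mono * gain

def mono_pair_to_stereo(left, right):  # "Mono to Stereo Node"
    return np.stack([left, right], axis=1)

stereo_in = np.random.uniform(-1.0, 1.0, size=(48000, 2))  # 1 s at 48 kHz
left, right = stereo_to_mono_pair(stereo_in)
stereo_out = mono_pair_to_stereo(mono_chain(left, 0.9), mono_chain(right, 0.9))
# stereo_out would then feed the parallel WAV/MP3/AIFF "Stereo Export Nodes".
```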

Although I’m not an expert, I envision two primary categories of nodes: “DSP-based” for Signal Processing and “File-based” for File Processing. There’s even the potential for hybrid nodes that allow seamless transition between DSP and file-based operations.

In summary, the potential is virtually limitless. Your node-based system could include:

  • Import Nodes for file ingestion (WAV, AIFF, MP3)
  • Conversion Nodes for sample-rate and bit-depth conversion
  • Analyzer Nodes for audio analysis
  • Processing Nodes compatible with VST 3 plugins
  • Conditional Nodes for decision-based routing
  • Helper and Filename Nodes for auxiliary tasks
  • Export Nodes for various output formats

And these can be connected in any sequence, constrained only by the available input and output connectors on each node (which would also define their use cases within either DSP signal or file processing lanes).
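One way to picture those connector constraints, purely as a sketch: each port carries a type, and a connection is only valid when the types match. The PortType names below are illustrative, not part of any existing design.

```python
from enum import Enum, auto

class PortType(Enum):
    DSP = auto()   # streaming audio buffers (signal lane)
    FILE = auto()  # file paths and metadata (file lane)

def can_connect(out_port: PortType, in_port: PortType) -> bool:
    # A hybrid node would simply expose ports of both types.
    return out_port is in_port

print(can_connect(PortType.DSP, PortType.DSP))   # True: valid wiring
print(can_connect(PortType.DSP, PortType.FILE))  # False: rejected by the editor
```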

Additional functionalities might include nodes that can apply alphanumeric prefixes or suffixes to filenames, based on conditions like file size or bit depth.
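For example, a hypothetical “Filename Node” might look like this in sketch form; the 100 MB cutoff and the suffix scheme are invented for illustration.

```python
from pathlib import Path

def filename_node(path: Path, bit_depth: int, size_bytes: int) -> Path:
    # Append a bit-depth suffix, plus a marker for oversized files.
    suffix = f"_{bit_depth}bit"
    if size_bytes > 100_000_000:  # illustrative 100 MB cutoff
        suffix += "_large"
    return path.with_stem(path.stem + suffix)  # Python 3.9+

print(filename_node(Path("kick.wav"), 24, 2_000_000))  # kick_24bit.wav
```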

The advantages are manifold:

  • Parallel file and signal processing
  • Conditional exports based on analysis
  • Advanced filename management
  • AI-readiness for pattern recognition nodes

Imagine the possibilities: Say you have a three-hour ambient recording featuring a mix of thunderstorms and gentle rain. Normally, you would manually segment and process these parts. With this proposed Audio Node Builder, you could construct a complex node network that automatically segments, processes, and exports files based on specific loudness conditions—ideal for crafting bespoke sample libraries.
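A loudness-gated segmenter node could be sketched roughly like this; the window size and threshold are arbitrary illustrative values, not a real WaveLab feature.

```python
import numpy as np

def segment_by_loudness(audio, sr, threshold_db=-40.0, win_s=1.0):
    # Flag each one-second window as loud or quiet by its RMS level.
    win = int(sr * win_s)
    flags = []
    for i in range(0, len(audio) - win + 1, win):
        rms = np.sqrt(np.mean(audio[i:i + win] ** 2))
        flags.append(20 * np.log10(rms + 1e-12) > threshold_db)
    # Merge contiguous loud windows into (start_sample, end_sample) segments.
    segments, start = [], None
    for idx, is_loud in enumerate(flags):
        if is_loud and start is None:
            start = idx * win
        elif not is_loud and start is not None:
            segments.append((start, idx * win))
            start = None
    if start is not None:
        segments.append((start, len(flags) * win))
    return segments
```

Each returned segment could then flow into its own processing and export nodes downstream.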

Expand this with AI nodes capable of pattern recognition—be it one-shot samples, drum loops, or guitar riffs—and you’re looking at an exponentially more versatile and dynamic system.

If you open up node development to the community, the system not only becomes modular and expandable but could also foster a marketplace for third-party nodes, thus multiplying the possibilities exponentially.

In short, the incorporation of “Conditional Processing” based on audio analysis is very likely a groundbreaking application. If developed with an open standard, it could unleash a tidal wave of community-created nodes, taking the concept to uncharted territories of innovation.

The node architecture itself is available on GitHub; a very popular implementation is chaiNNer. It’s really gaining traction in image and video processing, but I haven’t yet seen it applied to audio processing. The principle is the same, though the architecture would of course be different (DSP, file processing, and so on). I guess it is only a matter of time before it becomes available for audio processing, and whoever gets a foot in the door first would certainly gain a competitive edge.

Here is one I built for image processing (again, just for the sake of showcasing how node systems work). The “Audio Node Builder” (working title) would be a bit like a modular synth on steroids, since you can run not only DSP signal processing but also all the file-processing tasks in tandem, fully automated of course.

The node system that you see in the image above has literally saved me hundreds of hours of processing work. It is very efficient and versatile, and perfect for AI processing (which will probably find its way into audio processing, and has already done so to some extent in SpectraLayers). AI is great for pattern recognition. BTW, Meta will release its AI audio model to the public and allow full open-source and free use (also for commercial applications).

Actually, I had similar ideas as far back as 15 years ago, but they never came to fruition. However, if that’s the current trend, I’ll certainly reconsider them :slightly_smiling_face: Specifically, the batch processor is a feature of WaveLab that I particularly enjoy developing.

As Victor Hugo said, “There is nothing more powerful than an idea whose time has come.”

The real driver for these builders, in my opinion, is AI-based nodes. Once the architecture is in place, building nodes on top of various AI audio models, whether for analysis, processing, or both, will simply be the grand icing on the cake. From my experience, AI works really well with node builders.

BTW, DaVinci Resolve is the number-one video-editing platform from Blackmagic Design, which according to many has surpassed Adobe Premiere and Apple Final Cut Pro as the go-to video editor. It is used by most studios involved in broadcast and film, and one major driver is the internal node builder. There may even be B2B licensing opportunities for Steinberg, as DaVinci Resolve only has a node builder for image/video, not audio.

Here is more information on DaVinci Resolve: DaVinci Resolve 18 | Blackmagic Design

We have completely switched over from all other video-editing software because it is so powerful. It also has the “Fairlight” audio software built in. An amazing piece of software.

And it is FREE!!!

If I can recommend a new node, it would be an “Auto Trim Silence” node. When we get files for mastering, the silence at the beginning and end often varies in length (coming from the mix). Trimming each file manually seems like processing time we could eliminate. I’m imagining a plugin chain where I can…

1.) Auto trim silence (get rid of silence at beginning and end)
2.) Add 200ms silence at beginning and end (which is possible already)

I believe number 1, “trim silence,” is currently not possible in the batch processor. I am also not aware of a third-party plugin that can achieve this.

If someone knows of a third-party VST plugin that can achieve this and can be used in the Batch Processor, please let me know.

Otherwise, this would be a nice addition to a possible future “Audio Node Builder”.
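For what it’s worth, here is roughly what steps 1 and 2 would do, sketched with NumPy buffers. The -60 dBFS silence threshold is an assumption; a real node would need a smarter detector.

```python
import numpy as np

def trim_and_pad(audio, sr, threshold=10 ** (-60 / 20), pad_s=0.2):
    # Step 1: drop everything before the first and after the last sample
    # whose absolute level exceeds the silence threshold (-60 dBFS here).
    loud = np.flatnonzero(np.abs(audio) > threshold)
    if loud.size == 0:
        return audio  # all silence: leave the file untouched
    trimmed = audio[loud[0]:loud[-1] + 1]
    # Step 2: add 200 ms of silence at the beginning and end.
    pad = np.zeros(int(sr * pad_s), dtype=audio.dtype)
    return np.concatenate([pad, trimmed, pad])
```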

Cheers

On a side note (independently of this thread topic): WaveLab already has a tool for this, the Auto Split tool.

I don’t think this is a good idea. Sometimes a client wants more space around their music. Doing “topping and tailing” is part of the mastering process and does not need “automation”. MTCW

In this context, I agree that the process should be human-controlled, not machine-automated, even by the best AI algorithm :slightly_smiling_face:

Actually, WaveLab and the Batch Processor are not only used by sound engineers who master music. There are many applications, and one is sample-library production.

Try truncating 1000 samples manually, good luck :sweat_smile:

I would not trust some piece of software to do that correctly every time. You would have to check each sample to see if it cut off the beginning or end. FWIW

I have been using the truncate feature with fade in and out in the batch processor for audio preview files for years now. Never had an issue.

You learn to trust the system over time and really start appreciating the amount of time saved, especially if you have many deadlines.

If you have the time, you can of course plow through each sample individually.
