Star Trek Request: Self-Documenting Audio Files

I would like to see a new audio standard for WAV. The signal would be encoded with information from each device in the input chain, which could then be automatically read by Cubase.

Example: Guitar ----> FX Pedal ----> Mic ----> Mic Pre ----> Cubase Input

Each device, starting with the guitar, would encode a tiny ID with information such as vendor, model, pickup selection, tone control settings, gain, pad, etc.
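To make the idea concrete, here's a minimal sketch of what one device's record in such a chain description might look like. This is purely hypothetical — there is no such standard; the field names (`vendor`, `model`, `role`, `settings`) and the example device names are all invented for illustration:

```python
import json

# Hypothetical per-device record for the proposed signal-chain metadata.
# Field names and device names are illustrative, not any real standard.
def make_device_record(vendor, model, role, settings):
    """Build one link in the signal-chain description."""
    return {
        "vendor": vendor,
        "model": model,
        "role": role,          # e.g. "guitar", "pedal", "mic", "preamp"
        "settings": settings,  # free-form knob/switch positions
    }

chain = [
    make_device_record("AcmeGuitars", "AX-1", "guitar",
                       {"pickup": "bridge", "tone": 7}),
    make_device_record("PedalCo", "FuzzBox", "pedal",
                       {"gain": 6, "level": 4}),
    make_device_record("MicWorks", "LD-87", "mic",
                       {"pad": "-10dB"}),
]

# The whole chain serialises to one small text blob a DAW could store.
chain_json = json.dumps({"signal_chain": chain}, indent=2)
```

The point is that the payload is tiny — a few hundred bytes of text per track — so storage cost is a non-issue.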

As the WAV file is read in, all that info is permanently embedded in the -file-.

This is analogous to how digital cameras now encode each photo with COPIOUS info about camera/lens/exposure/GPS/histogram/date. Apparently, all the vendors got together and decided to do this and no photographer can imagine life without that self-documentation.

It’s ridiculous that, in 2015, there is no similar facility built into audio. We still have to write down the ‘settings’ for each track by hand.

The big problem with DAWs is that very few people look around at other media to realise just how far -behind- audio is. And the really maddening thing? VIDEO is -way- more complicated than audio. IOW: you’d expect audio to have gotten these features FIRST.

Steinberg should (as with so many other things in the past) take the lead and try to get other vendors on board. I, for one, would give priority to any vendor conforming to such a standard. Documenting stuff is one of my LEAST fave tasks.

Nice idea. But it seems that with access to an unlimited number of 3rd-party plug-ins, each with its own labels for its unique controls, it would be hard for a sequencer to keep track of them. Are you suggesting that Cubase would encode all the settings from the “Transmogrifier” level in one plug-in, the “Plate A/Plate B reverb time blend” from another, the “Presence” setting from another, etc., into the audio track? It seems less a sequencer (Cubase, Logic, et al.) problem than one of getting Gibson, UAD, Voxengo, Yamaha, Korg, Epiphone, … to get on board. That seems like a tall order.

( Does video have a more limited palette of tag-able features …

…camera/lens/exposure/GPS/histogram/date …

… is that how they are able to do it?)

Anyone with Photoshop and a digital camera knows that a LOT of info is encoded with each image’s RAW data. It’s just something that every vendor agreed upon. Audio vendors -can- create standards. They did it with MIDI, over 30 years ago.

WAV files already have something similar to images and video: metadata chunks. The Broadcast Wave (BWF) spec defines a ‘bext’ chunk, and there’s a widely used XML chunk (iXML) where you can put anything you want.
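Since WAV is just a RIFF container, adding such a chunk is mechanically trivial. Here's a sketch, using only the Python standard library, of appending an XML payload as an extra chunk to an existing WAV file; the chunk ID and payload here are illustrative, and a real implementation would follow the actual iXML spec for the XML schema:

```python
import io
import struct
import wave

def append_riff_chunk(wav_bytes: bytes, chunk_id: bytes, payload: bytes) -> bytes:
    """Append a chunk to a RIFF/WAV file and fix up the top-level RIFF size."""
    assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    pad = b"\x00" if len(payload) % 2 else b""   # chunks are word-aligned
    chunk = chunk_id + struct.pack("<I", len(payload)) + payload + pad
    new_size = struct.unpack("<I", wav_bytes[4:8])[0] + len(chunk)
    return b"RIFF" + struct.pack("<I", new_size) + wav_bytes[8:] + chunk

# Build a tiny mono WAV in memory, then tag it with an XML payload.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 100)   # 100 silent samples

xml = b'<signal_chain><device vendor="AcmeGuitars"/></signal_chain>'
tagged = append_riff_chunk(buf.getvalue(), b"iXML", xml)
```

Any reader that doesn't understand the extra chunk simply skips it, which is why this is backward-compatible with every existing WAV player.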

If you look at the VST plug-in format, Cubase already ‘sees’ all the info from every plug-in in the chain. So that’s already there. You could have that on each channel that was batch exported.

The part that requires vendor cooperation is analog devices (guitars, pedals, etc.). But there are chip companies that already make little chips that encode a signature (like a UPC barcode) onto the audio, outside the range of human hearing. Such chips cost about $0.25 each. They could easily be added to any active guitar pickup, pedal, synth or 48V mic.
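For the curious, here's a toy sketch of the general idea — keying a near-ultrasonic carrier on and off to carry a device ID. This is NOT how any particular chip works (I don't have a datasheet to point at); the sample rate, carrier frequency, bit rate and threshold below are all assumptions chosen to make the demo work:

```python
import math

SR = 48_000        # sample rate in Hz (assumed)
CARRIER = 19_000   # carrier near the edge of human hearing (assumed)
BIT_LEN = 480      # samples per bit = 10 ms (assumed)

def encode_id(bits):
    """On/off-key the carrier: 1 = quiet tone burst, 0 = silence."""
    out = []
    for bit in bits:
        for n in range(BIT_LEN):
            s = 0.05 * math.sin(2 * math.pi * CARRIER * n / SR) if bit else 0.0
            out.append(s)
    return out

def decode_id(samples, nbits):
    """Recover each bit by correlating its window against the carrier."""
    bits = []
    for i in range(nbits):
        seg = samples[i * BIT_LEN:(i + 1) * BIT_LEN]
        power = sum(s * math.sin(2 * math.pi * CARRIER * n / SR)
                    for n, s in enumerate(seg))
        bits.append(1 if abs(power) > 1.0 else 0)
    return bits

device_id = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for a vendor/model code
signature = encode_id(device_id)       # would be mixed into the audio
```

A real scheme would need to survive mixing with program material, filtering and lossy codecs, which is the genuinely hard part — but the signal-processing core really is this small.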

When I started working with people in video, one of the first things that stunned me was just how far ahead they are in terms of standards and tools. From acquisition to project management, they are YEARS ahead of audio; they treat their work the same way one manages large software development. Audio is still stuck in a 1985 paradigm… this romantic view of mixing desks and tape and so on.