New Track Types -> New Recording Paradigm?

Initially, I came up with these two new track types as just that: new, work-saving track types. But the more I think about it, the more I believe they also represent a new paradigm in how we record! Taking a cue from professional photography, the idea is to keep all the pieces of each instrument or performance together as much as possible, rather than scattered across MIDI, Audio, and VST tracks. I thought Instrument Tracks were a great step in that direction, but clearly I was imagining them to be much more than they were originally intended to be.

So here goes - here are two new track types, each of which tries to simplify a different type of performance by managing both the performance and the rendered audio (among other things) in one place, plus my thoughts on why this direction could be such a game-changer:

#1. “Re-amp” Track: records two logical audio signals: a mono DI signal plus a stereo processed signal. The stereo signal may be recorded simultaneously or generated/rendered afterwards, as described below. You can monitor, view, and route both independently, but the primary purpose of this track type is to seamlessly manage the two logical signals as one for cut/copy/split/paste editing, e.g., in the Project window. Normally the stereo signal is the one used (e.g., routed to mix busses), but if desired it can be regenerated (rendered) from the mono signal, e.g., by routing through internal or external FX. In other words, the Re-amp Track has two distinct modes: a mono “source” mode and a stereo “processed/amped” mode. The most common use case is electric guitar and bass through an external amp simulator such as the Kemper Profiler, Line6, AxeFX, etc.

#2. “MIDI/Audio” Track (replaces or redefines the current Instrument track type): records two logical signals: a MIDI track plus a stereo audio signal. The stereo signal may be recorded simultaneously or generated/rendered afterwards, similar to the Freeze function on the current Instrument track type. Unlike the current Instrument track type, however, a Frozen “MIDI/Audio” track is nearly indistinguishable from a plain Audio track: it can be split/copied/pasted/etc. If necessary, the track can be reverted back to MIDI, e.g., for further editing as MIDI, and then re-rendered (à la Freeze) to switch the track back to Audio mode. In other words, a MIDI/Audio track has two distinct modes: MIDI mode and Audio mode. Each mode should be virtually identical in operation and capability to its plain MIDI and plain Audio cousins; the power of having both capabilities in one track is to manage the two as one, greatly facilitating workflow with MIDI instruments.
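To make the shared idea behind both track types concrete, here’s a toy sketch of the dual-mode concept in Python. None of this is Cubase internals - the class, its fields, and the `render`/`revert` names are all made up for illustration - but it shows the core mechanic: one track object owns both logical signals, and editing operates on whichever signal the current mode exposes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Mode(Enum):
    SOURCE = auto()     # primary signal: mono DI audio, or the MIDI performance
    RENDERED = auto()   # secondary signal: processed/rendered stereo audio

@dataclass
class DualModeTrack:
    """Conceptual sketch: one track owning both logical signals as one unit."""
    name: str
    source_events: list = field(default_factory=list)   # DI samples or MIDI events
    rendered_audio: list = field(default_factory=list)  # processed/rendered output
    mode: Mode = Mode.SOURCE

    def render(self, fx):
        """Switch to RENDERED mode by generating audio from the source signal
        (stand-in for re-amping through FX, or Freezing a VST instrument)."""
        self.rendered_audio = [fx(e) for e in self.source_events]
        self.mode = Mode.RENDERED

    def revert(self):
        """Switch back to SOURCE mode. Per note 2 below, the simplest
        implementation just discards secondary-mode edits, like Un-Freeze."""
        self.rendered_audio = []
        self.mode = Mode.SOURCE

    def active_events(self):
        """Cut/copy/split/paste act on whichever signal the current mode
        exposes, so the pair behaves as a single track in the Project window."""
        return self.rendered_audio if self.mode is Mode.RENDERED else self.source_events
```

For example, a Re-amp track would start in SOURCE mode holding the mono DI, then `render()` with an amp-sim FX to flip into stereo processed mode; a MIDI/Audio track would do the same with a VST instrument standing in for `fx`.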


A few notes:

  1. Everywhere I say “stereo” above could be any supported audio format. I only work in stereo, but there’s no reason that a Re-amp Track or MIDI/Audio track couldn’t render to a surround format.
  2. The initial implementation of both track types could simply discard all changes made in the secondary mode when reverting to the primary mode, similar to Un-Freeze on Instrument tracks today. But I can imagine these track types becoming wildly popular features in Cubase, and thus future versions could actually preserve the secondary mode change history, giving the user the option to re-apply them. But hey, one step at a time. :slight_smile:
  3. The MIDI/Audio track type could potentially manage any number of MIDI lanes/tracks as the MIDI portion. Thus, you could have separate MIDI lanes for each drum in the kit, then Freeze/Render to audio when you’re done editing MIDI, visually “rolling up” a whole stack of MIDI lanes into a single audio track.
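Note 3 amounts to a mix-down of per-lane renders. Here’s a minimal sketch of that roll-up, again with invented names (`roll_up`, `render_lane`) and toy sample lists standing in for real MIDI lanes and rendered audio:

```python
def roll_up(lanes, render_lane):
    """Render each MIDI lane to audio, then mix the results sample-by-sample
    into one stream - collapsing many lanes into a single audio track."""
    rendered = [render_lane(lane) for lane in lanes]           # one render per lane
    length = max(map(len, rendered), default=0)                # longest lane wins
    return [sum(r[i] for r in rendered if i < len(r))          # sum overlapping samples
            for i in range(length)]
```

So a kit with kick, snare, and hat lanes would pass three lanes in and get one mixed stream back, which is what the single frozen audio track would display.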

Of course, some will point out that you can do all this already - by hand. And that’s exactly my point. These two new track types are all about improving productivity and speeding workflow. Whether you’re recording electric guitar, bass, vocals, synths, etc., each individual track moves through phases: recording the raw performance, editing, then rendering/bouncing to audio, ready to be mixed with everything else.

I think this represents a pretty fundamental mind shift from the way we track today. Now: the Project is just a bunch of event streams of various types: a MIDI track to record the MIDI performance, an Audio track to render the audio, and a VST track for the instrument itself. Proposed: the Project tracks become almost completely self-contained for each instrument/performance. For example, the “Drums” track has the MIDI performance, the VST instrument, and the rendered audio, all right there!

Think about how much simpler and more liberating this would be - it moves us beyond the old tape-deck paradigm to a much more modern way of working, where everything to do with the drums is on the Drums track, everything to do with each guitar performance is on its respective track, everything to do with each synth performance is on its synth track, and so on.

If you’ve used Lightroom (or the old Apple Aperture, RIP) you know that professional photography made this move already - long ago. A photo is a photo, all in one place: versions, edits, even stacks of similar photos or photos taken as a burst. I no longer have to manage each version of the photo as separate files, because that’s time-consuming, fatiguing, and error-prone. Same thing with having to manage (at least) three Cubase tracks for one MIDI performance!

OK, I’ll stop now. Time to get some real work done. But I’m very curious what others think. Does this new paradigm make sense? Would it improve your productivity, like it would mine?

– jdm