ITB Tracking?

Show of hands: how many of you with totally ITB studios actually track sessions? Since direct recording offers no FX to the artist while tracking, how many of you can actually run a regular tracking session without latency problems for the players?

Would any of you care to share your session setup for tracking a typical live band (Drums, Bass, Keys, Guitar(s) and vocals)?
How many HP cues can/do you send? How many mics vs. DIs? Are they all in the room, or is there any isolation (drum booth)? Do any of the players, say the keyboardist, play virtual instruments during the live tracking? I'd just like to know how close to a DESK-RUN SESSION experience can be obtained with just mic pres and controllers.


It’s perfectly possible. I have not used an external mixing desk with Nuendo for years.
Usually I do Foley, ADR, or voiceovers, so there will be only one microphone, but I have also recorded bands that way and it works just fine. I use software monitoring only, no direct routing, because I want to use my effects during tracking.
Make sure your computer has a fast CPU (I have a 3.3 GHz hexacore here).
PCIe is more reliable than USB/FireWire and allows lower latencies, but I have had success with RME Firefaces as well in different setups.
I have used up to 4 different mixes (usually everybody hears the main mix, though).
Keep your overall latency around/under 8 ms and everybody will be happy.
No CPU-expensive VSTis here, just an occasional sample player.
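As a rough sanity check, the relationship between buffer size, sample rate, and latency is simple arithmetic. Here is a throwaway sketch (the buffer sizes and the 48 kHz rate are just illustrative, and real drivers add converter and safety buffers on top of this):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# One-way buffer latency at 48 kHz:
for buf in (32, 64, 128, 256):
    print(f"{buf:>4} samples -> {buffer_latency_ms(buf, 48000):.2f} ms")

# Note: round-trip monitoring latency is at least twice the one-way figure,
# before AD/DA conversion and driver safety buffers are added.
```

By that math, an 8 ms overall target leaves room for roughly a 128-sample buffer each way at 48 kHz plus converter time.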

Have to disagree on the above point, as the good session players I’ve had in the studio have been sensitive to latencies as low as 4 ms and I’ve not found that giving everyone the same mix works in most cases either. When it comes to bands, my hybrid setup has been the most hassle free way to go.

Absolutely agree with fuzzydude on that! 3ms drives me nuts as a player. I didn’t get comfortable until 1.5ms and that was just me playing piano with 4 mics (2 direct & 2 room). So, the thought of trying to do 8 to 10 mics for drums, a DI for bass, a DI and mic for lead guitar, a pair of DI’s for the stereo outs of the keyboard player and 4 vocal mics for the lead and BG vocalists seemed impossible. Let alone trying to run 3 or 4 cue mixes, with FX, to satisfy everybody. On a desk, you don’t even have to give it a thought.

Admittedly, the last time I tried it was on N4, on a PC, before PCIe cards. So that's why I'm asking now. I'm trying to gauge how far the technology has come BEFORE I retire my desk. Man, I wish somebody would post a video of a tracking session with Nuage!

I have checked… again… with 3 ms when playing/recording myself (first time this year…).
It does not bother me… I might be rather insensitive, but if your system doesn't add anything really bad on top of that latency, then by the same logic you should also have major trouble playing through an amp, since 1 meter of distance equals about 3 ms.
The 3 milliseconds of latency should get nicely compensated, because headphones sit roughly 2 to 3 meters closer to your ears than your speaker cabinet does.

The only circumstance I can imagine being disturbing is singing, and even then only when direct monitoring is not used, or when the artist hears both the direct and the delayed signal. Any added FX (with latency) shouldn't cause any trouble.

We do tracking here with playback and all necessary (selected) FX on the headphone mix, since RME delivers drivers which can produce 1.5 ms (0.7 ms). We usually use 3 ms quite trouble-free.
Of course, that was not so easily workable with slower PCs on PCI buses, or if the drivers for your specific hardware weren't fast enough yet.

M2c, Big K

I still love to remember the days when we still produced on our analog console.
I kinda still miss it … and the beer was cheaper, too :wink:

I don’t believe this is correct, assuming I understand you correctly. I used to have a formula yonks ago, and the outcome was something along these lines: a 20-kilometre mic cable would produce about 1 ms of latency. I’ll try and dig up that formula when I have time.

I was referring to acoustic signals.
In electric conductors the signal travels at near light speed, which gives you about 300,000 meters per ms.
In air we have about 340 m/s, which works out to roughly 3 ms per meter.
My point was that sound takes longer to travel from the speaker cabinet to your ear than a signal with 3 ms of latency needs to arrive at the headphones.
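That arithmetic is easy to check. A quick sketch using the round numbers from this thread (~340 m/s for sound in air, ~3×10^8 m/s in a conductor, both approximate):

```python
SPEED_OF_SOUND_M_S = 340.0   # speed of sound in air, roughly
SPEED_IN_CABLE_M_S = 3.0e8   # near light speed in a conductor, roughly

def acoustic_delay_ms(distance_m: float) -> float:
    """Time for sound to cover distance_m through air, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def cable_delay_ms(distance_m: float) -> float:
    """Time for an electrical signal to cover distance_m of cable, in ms."""
    return distance_m / SPEED_IN_CABLE_M_S * 1000.0

print(f"1 m of air:     {acoustic_delay_ms(1.0):.1f} ms")   # ~2.9 ms
print(f"2.5 m of air:   {acoustic_delay_ms(2.5):.1f} ms")   # ~7.4 ms
print(f"20 km of cable: {cable_delay_ms(20000):.3f} ms")    # well under 1 ms
```

So 3 ms of headphone latency is comparable to standing about 1 m from your amp, which is the comparison being made here.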

But surely, … we know that DAWs don’t always follow the laws of physics or logic.

Cheers, Big K

Ah, I’m with you now. Thanks for the clarification as I hadn’t considered you might be talking about acoustic signals. I also see in your sig you’re using Nuendo 8, how’s that working out for you? :wink:

I can print 32 tracks with a good EQ and comp plugin on each, through software, at 32-sample buffers. See system specs below. To me, that’s the target for tracking a live band.

However, even working on what I believe to be the fastest low-latency system currently available, there is still not enough headroom on the CPU for me to call it fully professional. In my opinion, the CPU can be at no more than 50% during use, including spikes, to call a system “Pro” for a given purpose. That is 100% headroom, which is a good rule of thumb for native systems.

So while tracking, you need to be able to go nuts, opening/closing plugins, editors, mixers etc without ever having “should I do this?” cross your mind. Anything other than that is a compromise to the creative process. And if there is a hardware based solution that does not clutter creativity, why not choose that instead?

And even at that, at 32 samples, actual measured latency will be (on an RME MADI card) approx. 2.5 ms, PLUS AD/DA, which adds at least 1 to 2 ms on top of the buffers. So you’re talking about 5 ms, which will bother a very good musician. And if you’re trying to use UA plugins, they will quickly begin to play havoc with your system load as you add them at a 32-sample buffer. Using a hardware-based solution for monitoring still wins in an apples-to-apples comparison. My hybrid setup is just better than a purely native one can be at this point. All of this is at 48K. At 96K, latency will be a bit lower, but track count will take a major hit, especially with good plugins.
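To put rough numbers on that budget, here is a back-of-envelope sketch using the figures quoted (32-sample buffers at 48 kHz and a 1 to 2 ms converter estimate; this simple model undershoots real measured figures because drivers add safety buffers on top):

```python
def round_trip_ms(buffer_samples: int, sample_rate_hz: int,
                  converter_ms: float) -> float:
    """Rough monitoring round trip: input buffer + output buffer + AD/DA time."""
    one_way = buffer_samples / sample_rate_hz * 1000.0
    return 2 * one_way + converter_ms

# 32-sample buffers at 48 kHz with ~1.5 ms of combined AD/DA:
print(f"48 kHz: {round_trip_ms(32, 48000, 1.5):.2f} ms")
# Doubling the sample rate halves the buffer portion, but not the converters:
print(f"96 kHz: {round_trip_ms(32, 96000, 1.5):.2f} ms")
```

Driver safety buffers easily add another couple of milliseconds on top, which is how a measured total of roughly 5 ms comes about.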

We are getting closer, but still a year or 2 away from a fully capable console/tape machine replacement in a Native DAW. It’s going to take 16 sample or even 8 sample buffers. It will happen with more power. Remember when 32 samples was a fantasy on a single core CPU? Hey, I’ve been chasing this (and spending money to do it!) for years.

But as far as just a few inputs, yeah, I’m there.

I have some related questions for you Direct Monitoring Experts.

I have 8 pres from a digital console plus the Audient ASP880 feeding into the RME RayDat for tracking bands. I also have a Focusrite Saffire Pro 40 handling 4 external synths as Midi Devices. All that is working perfectly, no problems.

While tracking bands, I have been doing my cue mixes for artists with the console; I can send 3 separate HP mixes this way, with no latency at all. Lately I’ve been experimenting with Direct Monitoring. On the hip-hop sessions I do with 1 mic and 1 cue mix, it works pretty well. I just can’t get used to not hearing the mic all the time. Between takes the artist will say something, and I miss it because of the monitoring weirdness. (I have my preferences set to “Tape Machine” style.) With the console, on the other hand, the mic is always heard; I never miss a beat. Maybe I should try a different preference setting? Or do I need a separate talkback mic for the artist? That seems crazy.

Question 1: When using Direct Monitoring, how do you deal with not being able to hear the artist all the time?

As far as setting up different cue mixes, I am stumped. I’ve read the RME docs and tried everything I can think of, but I still haven’t been able to get 3 separate cue mixes. I have 8 channels coming from the RayDat to the console via optical, so I’m good there. Do I set up the separate mixes in Nuendo or TotalMix FX? Once I enable Direct Monitoring, I get no sends in Nuendo, and the direct routing outputs stop working too. So I’m guessing I need to set it up in TotalMix FX, but I can’t figure out how. Anyone care to shed some light on this topic so I can truly test Direct Monitoring with more than 1 mic and 1 cue mix? I just had a band in over the weekend and used the old method: my console supplying 8 of the pres, monitoring for the ASP880, and three cue mixes.

Question 2: When using Direct Monitoring, how do you set up multiple headphone mixes with Nuendo and the RME RayDat? (Please don’t tell me I need to use the “Control Room” feature just to set up separate headphone mixes)

Thanks for any info.

Yes, … Nuendo, actually.
I changed it to 7 a moment ago. Obviously I was just a little ahead of my time.
But that does not change the fact that N7 is more agreeable to me than N6 in regards to the GUI and such. N7 functions pretty well with our rigs and setups.

Hi, Getalife…
You are right. We’re not quite there yet, but compared to the dreaded Win 95 era… we live in Wonderland already. :smiley:
Let’s wait and see what we get in a few years.

Big K

I run lots of multitrack sessions and use hardware for monitoring. I have personal mixers for each musician. It beats setting up cue mixes for everyone; it’s tough when, as a musician, you want to change your headphone mix but don’t want to interrupt the session, so you just deal with it. With the individual mixers, the musicians can do what they want.
I record myself on drums and find that even small amounts of latency can be annoying, so hardware is the way to go for me. I find that effects are not necessary; in fact, I think people play better without FX, unless it’s part of their sound, which is created by them at the source anyway. If a vocalist needs verb, latency doesn’t really matter much, so that one is easy.


Wow, so no one can answer these 2 easy questions? Is nobody using Direct Monitoring for serious multitrack tracking?

Question 1: When using Direct Monitoring, how do you deal with not being able to hear the artist all the time?

Question 2: When using Direct Monitoring, how do you set up multiple headphone mixes with Nuendo and the RME RayDat? (Please don’t tell me I need to use the “Control Room” feature just to set up separate headphone mixes)

Obviously, I can’t answer your question, since I’m actually researching getting INTO ITB tracking. But I definitely feel your frustration in not having your question(s) answered! Here’s my latest: N7 not playing nice with Melodyne - Nuendo - Steinberg Forums. I have at least 3 others from weeks ago still being totally ignored. Maybe they don’t have the answer to the question, so they can’t respond. But this is the thing that makes most owners crazy: THERE IS NO MORE PRODUCT SUPPORT!

I remember vividly, BEFORE THE YAMAHA TAKEOVER, being able to call SB at 9am sharp, Mon-Fri, and getting a question answered within minutes! If I called after 9:30, I might have a 10-minute wait. But either way, we’re talking minutes, and nobody ever tried to make me feel stupid for whatever my question was!!

All that’s gone now. It doesn’t matter who the company is: Avid (the worst), RME, VSL, you name it. The best you can do now is “file a report” via their web support and wait days for a reply. The only places left, that I know of, where you can actually TALK to someone are Universal Audio, Lexicon, and ADK! Otherwise, we’re restricted to forums and the kindness of other users, and THANK GOD FOR THEM or we’d have nothing!

I’m using Cubase (8.2) and I haven’t done this for a long time (since… maybe 5), but it used to be that with my PCIe RME Digiface I was able to create multiple Direct Monitoring mixes.

That is, the RME would send a direct monitor feed from a recording track to the corresponding monitor output (as set up in VST Connections) at the level set in that monitor channel’s mix.
This was in addition to the Control Room feed at its main mix levels (or whatever was selected).

Basically, Direct Monitoring would work for multiple mix outputs and mixes appropriately to each.

I never saw this talked about, which surprised me.
Obviously you could not apply any effects to the DM signal, but I did use verb on the signal, which worked (with latency, of course).


It’s been a while since I’ve used Direct Monitoring, but when my RME Fireface 800 was the primary interface here, I live-tracked full bands all the time without any issue, using “While Record Enabled” in preferences. That keeps communication open between you and the artist.
As for monitor mixes, I kept it basic, using a stereo mix as primary, then feeding a line from the bass, guitar, vocalist, etc. to a More Me capable headphone amp for additional flexibility via TotalMix. Honestly, though, most of the musicians I’ve worked with over the years were pretty easy to please with the basics, especially the more experienced. Did not use any effects and no one missed them, not even the singers. I did keep verb resident for playback only.

Went back to a digital console in 2010 for its ease of use in live tracking, specifically for setting up monitor mixes more fluidly. Nothing like simply reaching for a knob or fader to tweak a mix or effect. Anyway, don’t know if any of this helps, but there you have it.

THAT statement right there is the major cause of my hesitation on this. I’m getting lots of “works great,” “do it all the time,” etc. statements, with a healthy dose of “somebody using some kind of little hardware mixer (cheating) to make it easier” or, as in your case, they just set up a tracking protocol (desk) and a mixing protocol (controller or mouse)!

So, maybe I’ll just go with a DM2k and the Nuage Master Controller. I want to be sure about this, because that’s a LOT of money for me, and it’ll take a pretty good while to get it back, ya know?

It comes down to where you choose to spend your mental and creative energy. Every moment spent thinking about how best to get adjustments made to a “fiddly” setup is a moment NOT spent thinking about how to make the music better.

I’m a producer, engineer and mixer and have platinum credits in all 3 categories. I am capable of producing and engineering at the same time, even on large tracking dates. It’s all very second nature after decades of doing it. However, these days I choose not to engineer while tracking if I’m also producing. I can do both at once, but not quite as well as I can when focused on only one or the other.

Same principle here regarding monitoring choices, I think. Adding another layer of paging/complexity/etc to your mission is going to reduce your available energy for creativity. And these days, so many times, the engineer is also the producer. It’s already challenging enough to try and make exceptional music. All about focus.

If you have a choice, go with some hardware. Doesn’t have to cost a lot. My method has always been to stack the deck in my favor as much as humanly possible. Making super low latency monitoring simple, quick and reliable is the smart choice, IMO. And currently, that means hardware.

But then again, this is just one man’s highly informed and thoughtfully considered opinion. :wink:

Question 1: When using Direct Monitoring, how do you deal with not being able to hear the artist all the time ?

Use a clone of the empty track and engage monitoring on it.
If you need many, put them in a folder so you can arm them all at once, and hide them.

The recording industry is full of interesting people.
What do these poor guys do when they rehearse and have to stand 3 m away from the other musicians? :laughing: