Recording an orchestra

OK, concert recorded successfully! Turns out there is a guy in the orchestra who is well versed in live recordings. He came to my defense when I was explaining to orchestra management at the dress rehearsal that I needed at least two seats reserved for me. Evidently, the guy before me put his boom stand right against the stage, so the boom did not block any seats, and there were no wires crossing the space between stage and seats. But I was not willing to put my mics right over the conductor’s head, and that giant A50 clearly took up two seats. The orchestra guy supported my arguments, so management caved in, made some seat changes, and blocked off two seats for all concerts going forward.

The dress rehearsal recording went well. My new friend in the orchestra volunteered to bring his own Zoom F8 recorder to the concert so that I would have redundancy, and he also had an ART passive mic splitter. So on concert night, in a matter of five minutes, we set up his recorder and sent my mics to both recorders (direct to my Steinberg MR816). Everything worked perfectly. One of the great features of the F8 is that you can replicate your mics to other channels, the purpose being that you can set the input levels sequentially lower on the copies. In this case, my new friend set it up with four stereo inputs (two by mic cable and two by replication), and he set the inputs lower for each pair. This is a great technique for avoiding overloads and distortion – if the first pair went over, you have three others to rely on. But I have not heard what his recorder picked up, so I can’t comment on the preamps. Reviews online seem to indicate it’s a pretty good box.
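For anyone curious how that staggered-gain safety trick pays off in post, here’s a minimal Python sketch. The 6 dB pad steps and the sample values are hypothetical, not the actual F8 settings – the point is just that you grab the hottest copy that never hit full scale:

```python
def best_take(takes, clip_level=1.0):
    """Pick the hottest copy of a signal that never clipped.

    `takes` is a list of recordings of the same source, ordered from
    highest input gain to lowest (e.g. four stereo pairs, each padded
    down a further 6 dB -- the offsets here are hypothetical).
    """
    for take in takes:
        if max(abs(s) for s in take) < clip_level:
            return take  # first (hottest) take with no clipped samples
    return takes[-1]     # everything clipped: fall back to the quietest

# Simulated source with one loud transient, captured at 0, -6, -12 dB pads
source = [0.2, 0.5, 1.4, 0.3]          # 1.4 would clip the unpadded input
takes = [[min(max(s * 10 ** (-pad / 20), -1.0), 1.0) for s in source]
         for pad in (0, 6, 12)]
safe = best_take(takes)                 # the -6 dB pair survives unclipped
```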

My 10-year-old laptop captured everything, I had no moments going into the red (having determined the levels the night before), and the backup machine also recorded everything as far as I know. Once you get insurance, you don’t really need it!

I had people in the seats on either side of my two seats, even though the concert wasn’t sold out. Turns out the front row seats are the cheapest, and the hard-of-hearing like to snap them up. Not only do they have difficulty hearing the orchestra, they have difficulty hearing themselves. Including involuntary bodily noises. At one point, the person next to me began to hum one of the more popular melodies, and he couldn’t hear me when I whispered to him. And I have a pretty good idea what he had for dinner. But the A50 goes high enough that I can’t hear it in the recording. That’s at least one advantage to putting the mic up by the stage instead of by the seats, and I can report that my recordings are much more alive and stereo than the last guy’s. Still, whatever else these aging fans do, they show up and pay, so lots of respect to them; this orchestra wouldn’t exist without them.

Actually, I’m pretty pleased with the overall stereo effect. Lots of coughs, though, further back. The mics were maybe 12 feet behind the conductor, who had his musicians around him right up to the edge of the stage. I had the A50 as high as it would go, so it was a little over the conductor’s head, but not much. I think I fit within the SRA of 96 degrees.
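For what it’s worth, the geometry seems to check out. With ORTF’s often-quoted stereo recording angle of about 96 degrees, a quick calculation (distance and angle assumed, not measured) shows how wide a stage the array “sees” at that position:

```python
import math

def covered_width(distance_ft, sra_deg=96.0):
    """Width of source that fills the stereo image for a given stereo
    recording angle (ORTF is usually quoted at ~96 degrees): two right
    triangles, each with angle SRA/2 at the mic position."""
    return 2 * distance_ft * math.tan(math.radians(sra_deg / 2))

width = covered_width(12)   # ~26.7 ft of stage fits inside the SRA at 12 ft
```

So at roughly 12 feet, a front line of musicians up to about 26 feet wide lands inside the recording angle, which matches the impression that the orchestra fit the image.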

As I mix, I’m not so happy with the bass. There were four basses at the extreme right of the stage, but the percussion was put at the extreme left. The last piece, Scheherazade, features a lot of percussion. Turning up the bass (perhaps to compensate for my cheap mics) also turns up the low percussion. Still, I think I’ve got a good representation of what they did. Long live live orchestras!

I had another complaint (from one of the organization’s board members) that my laptop was too bright, and he wanted me to turn the screen toward the stage. Of course that would have meant I couldn’t see it. But I did realize I have to turn the brightness down. And this is yet another advantage to a dedicated digital field recorder. So I am thinking my next purchase is one of those (probably a Sound Devices MixPre-3), and the ART splitter has to go with that. Then, it’s better mics.

I had anxiety about the power cord, but fortunately nobody kicked it. During intermission I stood on the cord in front of all the equipment to ensure that if the cord was kicked, it would pull from the slack on the other end. I also had anxiety about recording levels, and I began to wonder if there is a specific downside to recording too low at 24 bits. I think, based on my limited knowledge of LPs, that you can fit an orchestra into a 60 dB range or so, and with 24 bits you have something like double that, so even if you were 60 dB too low, you would still be OK. Have I understood that correctly?
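To put rough numbers on the 24-bit question – this is a back-of-envelope sketch that ignores dither, converter limits, and the analog noise floor, all of which matter in practice well before the theoretical floor does:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an N-bit fixed-point recording:
    20*log10(2**N), i.e. about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

dr = dynamic_range_db(24)    # ~144.5 dB on paper (real converters: ~110-120 dB)
# Record 60 dB too low AND fit a 60 dB orchestra: still ~24 dB above the floor
headroom_left = dr - 60 - 60
```

So on paper the arithmetic works out as described, though in a real rig the preamp and converter noise floor arrives long before bit 24 does.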

Thanks for everybody’s help on this project!

Great to hear your experiences mate :slight_smile:

Yep, I’m still enjoying this one…

I’m glad it all worked out and it sounds like you asserted what you needed. I also think it was very smart to have a backup recording and the overload protection on the backup tracks is a very wise move.

Maybe you’ll post something on “made with Cubase” eventually for us to hear a bit of?

Good luck with future projects.

Hi Stephen, maybe I can do an extract. Going to deliver the mix tomorrow, so let’s see what the management thinks!

Here’s an extract of what I recorded and mixed on Saturday. As mentioned earlier, this is a pair of very cheap AT2020 mics in an ORTF configuration into the Steinberg MR816CSX interface into a 10-year-old IBM laptop into Cubase. I did my best mix within Cubase, which consisted primarily of equalization and loudness processing. I also added a touch of reverb – the dress rehearsal didn’t seem to need it, but the concert night was full of warm bodies. Some of them were humming, but the mics didn’t pick it up. But a lot of them were coughing – waiting for the quiet parts, apparently. The loudness processing is through Ozone 5. I did have redundant recording, but I didn’t need it since the main path caught the whole thing. It could be that the other path had better quality, but I’ll never know.

Would love any comments before I take it down!

It sounds excellent to my ears. Nice balances of the parts, very clear and clean sound, no harshness, good dynamic range, good imaging. Very listenable and enjoyable. You worked really hard to make this come out so well and it really worked.

Thanks Stephen, it was an interesting project. You can hear a lot of audience in the quiet parts, but you can also hear the conductor stamp his feet at certain moments! He was quite enthusiastic, by the way. Talking to the musicians, they were happy and energized by the performance.

Hi Early,

Nice work. Imaging is great. Sound is balanced. You should be happy with that recording.

I had to chuckle when I read your comments about people talking and coughing. Welcome to live recording! It happens all of the time. Hiking the mics up helps, but nothing really cures it. You also had the benefit of using directional mics with the dead side of the cardioid pattern pretty much toward the audience. That’s another benefit of the ORTF technique, and part of the reason that array almost never sounds bad. It picks up what it’s aimed at without pulling in much of the room sound.

Over the years I’ve had to deal with sounds from coughs, doors being closed, sneezes, talking, phones ringing, babies crying, and more. Usually I let it go and don’t try to edit it out, as it’s part of a “live” performance. However, if it’s really bad, I might try to do some editing. I usually accomplish this using spectral editing in Wavelab. It’s tricky to do, but can be effective. The goal is to minimize the offending frequencies without messing with the sound. Sometimes if the noise is in the same frequencies as key elements of the music, you’re simply screwed and have to live with it.

I’ve been sorely tempted to leave a bowl of cough drops out at the entrance for people to take to reduce coughing.


Ha, good idea about the cough drops! Thanks for listening and commenting.

A sharp, swift blow to the back of the head with a large, dense pointy object works best! :laughing: :smiling_imp:

But then you need a scream filter on the mix.

I once recorded a group of Tuvan throat singers with a Schoeps CMC 6 MK/21. It was very good at minimizing the audience noises. Another time I added basketball bouncing because an interview had started without anyone on the court and during it, the players came out and started warming up – bounce, bounce, bounce. So we took that as “room tone” to put under the earlier quiet part of the interview – sometimes worse is better. :wink:

I’d also consider handing out a little slip or even making an announcement explaining that a recording is being made tonight – so please try to keep things quiet, particularly during softer parts of the music.

I’d like to see some screen shots of the Project just to see how it all looked inside Cubase. It really is a nice recording. I’m glad to see the positive comments about it.

Tom, thanks for all your comments on this thread. I’m glad you took the time to post such long, detailed and useful comments here. Great job.

Yes, I’ll second that! Rather than post a bunch of screen shots, I’ll just describe what I did, based on guidance here and some other articles I found on other forums.

Based on Tom’s advice, as previously discussed, I bought the very tall stand and placed it about 12 feet from the conductor, and some of the musicians were actually in front of the conductor on the stage. It also seemed to be about the right distance to get a good Stereo Recording Angle based on the ORTF mic configuration. So that’s basically the input side. Mics into the MR816CSX into my old laptop running Cubase 4.5. I had set up a template with two mono tracks going to left and right on the output track. So I just had to hit the record button. I tried to peak somewhere around -12 dB, based on the dress rehearsal, but I think I hit about -9 dB at the highest point, so I had plenty of headroom.

I then copied the whole Cubase project from the laptop to my desktop machine running Cubase 8. I did not change anything about left and right panning - the left mic is hard left and the right mic is hard right. I cut the tracks so the applause was in separate segments. Then I selected all of the audio segments with the orchestra in them on both channels and normalized to 0 dB. I’m now thinking I should have normalized left and right separately. Maybe somebody has a thought on this. After all, the gain was set manually by me looking at the input levels I was getting in dress, so they weren’t necessarily equal in any way.
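On the linked-versus-independent question, the difference is easy to see in a toy sketch (the channel values below are made up): one shared gain keeps the inter-channel level differences, and therefore the ORTF image, intact.

```python
def normalize_linked(left, right, target=1.0):
    """Normalize a stereo pair with ONE shared gain, preserving the
    level difference between channels (and so the stereo image)."""
    peak = max(max(abs(s) for s in left), max(abs(s) for s in right))
    g = target / peak
    return [s * g for s in left], [s * g for s in right]

# Example: right channel captured ~3 dB hotter (hypothetical numbers)
left, right = [0.25, -0.4], [0.35, -0.566]
nl, nr = normalize_linked(left, right)
# The inter-channel ratio is unchanged: nr[i]/nl[i] == right[i]/left[i]
```

Normalizing each side to its own peak would instead force both peaks to the same level, which shifts anything that wasn’t already centered.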

Processing after that was basically equalization, loudness, and reverb. With equalization, I was primarily trying to boost the bass a little. They had four basses; probably could have used more in the loud sections of Scheherazade. I do most of this in Ozone 5, and my first try was to boost the bass in Ozone, but I wasn’t happy with the fact that I was also boosting the timpani, which were really too loud in the peaks of Scheherazade. I thought it was strange that they placed the timpani right behind the first violins, close to the front of the stage, but according to the Executive Director, that’s just the custom for this orchestra. It was a guest conductor who didn’t want to break their normal routine. So I gave up on that approach and instead boosted the bass in the right channel only, as the basses were on the far right, and it had less of an effect on the timpani.

For loudness, I first tried using a hard limiter on the master channel. Looking at the waveform displays, it was obvious that the peaks were all coming from the percussion (and this is always the case), so I just wanted to get those down without affecting other dynamics. But something about the hard limiter sounded harsh to me. So I instead used the Maximizer in Ozone. The manual describes it as an algorithm that anticipates peaks by looking ahead, then applies psychoacoustics to lower the peaks in such a way as to make the lowering less noticeable. If you think about it, a limiter in the digital world is just a model of the circuits developed to do that job before we even had digital. And it now makes sense to me that, now that it’s all digital, it could be done in entirely different and more satisfying ways. In any case, the Ozone plugin sounded a lot better to me than a plain limiter. And I set the threshold at -7 dB, which seems extreme, until you look at what is in those last 7 decibels, which is all percussion transient peaks.
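A stripped-down illustration of the look-ahead idea (a toy sketch, not Ozone’s actual algorithm – the real Maximizer layers psychoacoustic smoothing on top of this): the gain reduction needed for a peak is applied a few samples early, so the clamp is already in place when the transient lands instead of reacting after it.

```python
def lookahead_limit(samples, threshold=0.447, lookahead=64):
    """Toy brickwall limiter with look-ahead. 0.447 is roughly -7 dBFS.
    Each output sample gets the strongest gain reduction needed
    anywhere in the next `lookahead` samples."""
    gains = [min(1.0, threshold / abs(s)) if s else 1.0 for s in samples]
    return [samples[i] * min(gains[i:i + lookahead + 1])
            for i in range(len(samples))]

# One hot percussion transient in otherwise quiet material
loud = [0.1] * 10 + [0.9] + [0.1] * 10
limited = lookahead_limit(loud, lookahead=4)  # peak clamped to ~0.447
```

Samples far from the transient pass through untouched; the few samples just before it are pulled down along with the peak, which is what makes the reduction less audible than an instantaneous clamp.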

And last, for reverb, since the hall was largely full, and my mics were not so far from the stage (and cardioid), I added some reverb. I tried all the versions in Cubase, but it sounded best with the reverb supplied by Ozone. As someone said in another forum, you don’t really need to put Hall convolution on something that was already recorded in a hall; you just need a light simple reverb.

In the end, I still felt the recording was missing “body”, and I added a small wide hump of midrange in the equalizer of the Cubase master channel.

No compression anywhere, but limiting as I’ve described.

So that’s it; would love any comments.

The best approach here would be to set both channels to the same gain (on mic pres) when capturing the event. Then when you normalize - do it as a stereo file (as you did).

If you normalize channels independently, you mess with the stereo image. ORTF is designed so that both channels have the same gain in capture and replay. Most portable recorders (including the Zoom and Sound Devices units mentioned earlier in this thread) have the capability of linking channels so their gains move together.

I almost always add reverb when recording with ORTF as you capture some, but not enough of, the hall. I thought your reverb was tasteful.


I’ve been helped many times by generous and knowledgeable people on audio forums. I’m glad I could return the favor and provide some useful feedback. I’m also glad Early took the time to post files and explanations of what he did. This was a worthwhile thread.


Hi Tom, thanks for good advice, as usual. Back to the subject of channel levels. My left and right were not linked when I recorded them. Maybe there’s something in Cubase that does that; I’ll have to investigate. What I did was adjust left and right inputs to be approximately the same based on the dials on the MR816. But I did surmise that the tracks were not at equal levels; maybe I wasn’t really at center, or maybe these cheap mics are not so equal as they should be. I do suspect that this is the case with my mics. One of them has fallen to the floor at least once in a band recording with wobbly stands (I use them as drum overheads). Anyway, based on that, I felt free to adjust the left and right tracks to my taste. When I mixed, I went to sections where the whole orchestra was playing and adjusted left and right to get a good center stereo sound to my ears. Possibly I violated an ORTF principle?

Also, you didn’t hear it, but I did increase the left channel ever so slightly in the cello concerto, because the cellist was on the left, and he was not coming through as much as I thought he should. So I definitely took liberty with left and right mic volumes. I’m wondering if I broke rules!

Appreciate any thoughts on this.

Thanks for the good discussion!

You did violate ORTF principles. But this is probably OK.

When recording ORTF, the best approach is to have the gain on the interface or recorder the same for both channels. Then keep the gain for both channels the same in Cubase when playing back. This is how ORTF was designed to work. Other settings can distort the stereo imaging. When I wrote that I link channels, I meant that I link the inputs on my 788T. This recorder has stereo linking functionality, so one gain knob controls gain for both the left and right channels of the stereo pair. This is a convenient feature. In Cubase, I set up a group channel and route the left and right ORTF channels to the group. I do all processing on the group - to keep the linked stereo image intact.

Mic positioning changes would be my first choice to adjust relative volume concerns. This could include adjusting players’ positions (relative to the stereo array) to the extent possible.

But sometimes it’s not until post that you realize adjustments are needed… if you’re being careful with what you’re doing when you make adjustments, and you’re layering on 'verb after the fact, it’s probably not a big deal to adjust relative channel gain. Listening to the end result is the ultimate test of whether it works.

I’ve adjusted relative channel gain when the image seemed lopsided. The lopsidedness was often caused by the group playing louder on one side than the other. Not adjusting makes it feel like you’re carrying a bucket of water with one arm. Better to carry two buckets (one in each hand) and be balanced. :smiley: When I do this - I typically use a similar gain offset for all songs so that players don’t wander around in the stereo image.


Hi Tom, I’m still thinking about that… but only because I think my two mics were not providing the same level of signal, possibly because one of them fell to the ground during a recording of a drum set. Enough to break the mic mount. I never replaced that mic, because it still works, but I did order a new mic mount from Audio-Technica. I mean, I’ve really been going low budget! LDCs can sometimes pull the mic stand down if you’re not careful!

Before the concert, I positioned myself in front of the array and snapped my fingers to see where the snap would register in Cubase. One of the mics was lower than the other, so I increased the input on the lower one until the two registered about the same. (I routinely do the same for drum overheads, and always place the mics equidistant from the center of the snare. Previously I had recorded the overheads on one stereo track, but I found that it was better if I balanced them by ear.) But, as I said, given the timpani situation, and the relative volume of the cello soloist, I made some minor volume changes per performance. And I suppose the timing differences still produce good stereo for ORTF, but I might have compromised the volume differences that are also supposed to help create the image.
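The snap test can even be reduced to arithmetic if you read the peaks off the Cubase meters (the peak values below are hypothetical): the trim is just the dB ratio of the two peaks.

```python
import math

def trim_db(peak_low, peak_high):
    """Gain in dB to add to the quieter mic so a centered snap
    registers equally on both channels."""
    return 20 * math.log10(peak_high / peak_low)

trim = trim_db(0.30, 0.42)   # boost the quieter channel by ~2.9 dB
```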

So still learning, and thanks for teaching!