Help with realistic mock-ups

@mducharme I apologize for that. The mistake came from my trouble with the Quote option; obviously I did something wrong.

Okay: just as with all Synchron Player instruments, with the Synchron Strings Pro instruments you see in the Perform tab the Expression (CC11) slider, the Vel.XF (CC1) slider, and a button to enable (blue) or disable (red) Velocity XF.

When you click on the Edit tab and then select a patch by clicking on it, you see the Velocity Crossfade option. Here you can choose between Global, Off, and On.
On = this patch always uses Vel.XF (independent of the on/off button in the Perform tab)
Off = this patch never uses Vel.XF (independent of the on/off button in the Perform tab)
Global = if the Vel.XF button in the Perform tab is on, the patch uses Vel.XF; if the Vel.XF button is off, the patch does not
The choice you make here applies only to the patch you selected.

With Synchron Strings Pro there are presets (the so-called XF sus presets). In these presets all long notes are set to Velocity Crossfade “Global” and the short notes to “Off”.

I hope this makes things a bit clearer.

You can read about this in the Synchron Strings Pro manual: SYNCHRON Strings Pro | VSL - Instruments
You’ll find it under “Preset Types”.

Curiously, I was told off by a friend for using cuivré in my latest symphony, since you can’t stop a trumpet by sticking a hand inside it; that is only for horns. I had to explain that cuivré appears to be a somewhat unclear early 20th-century term which, as interpreted by Spitfire, simply means brassy, just as you say, but is frequently misused or misunderstood. For this reason I said I would hide it in a score intended for real musicians, but as an articulation it’s great when used with care.

Yes, that’s perfectly clear, thanks. I was simply generalising from the top-level PERFORM defaults, whereas you were drilling down to the patch options under the EDIT button. I haven’t yet seen the new Synchron Strings Pro, so that would explain my confusion. How do you find them, by the way?

Good to hear that it’s clear.
As for Synchron Strings Pro, I like them a lot.
I am a fan of Synchron Strings 1. They are not perfect, but I can do a lot with them, especially when layering them with Dimension Strings.
I can also handle the legato patches of SS1, although there was much criticism on that point.
But Synchron Strings Pro are definitely better: I don’t need to layer them with another library, and the legato is also improved.
Yesterday I finished a piece for them; here is a link.
It’s my first serious rendering of a piece made entirely in Dorico, without using a DAW.

Serenade for Synchron Strings

Your serenade might not break any originality records, but it is a sheer delight, to the extent that initially I hardly paid attention to the rendering. Both the more detached and the more expressive lyrical passages work very nicely; my only question would be whether, in a few places in the inner parts, more of a long détaché wouldn’t give more textural clarity. But this is subjective: since Dorico 3.5 introduced note-length automation, my own renderings have become more broken up than before, perhaps too much. I hope you’re going to add more movements to this piece?

Anyway, I thought I’d add a movement from my own String Serenade, the 4th, which is an Elegy. It’s the only thing I’ve rendered so far using exclusively the Dimension Strings SE con sord. patches, which to me give the sort of innocent, almost child-like sound I want. It certainly won’t be to everyone’s taste, but I’m curious whether anyone does like it! It can be found here:
https://app.box.com/s/jmr6lkgjdb61rc618phqbrzsk05gnckg

Thanks for your kind words; it’s nice to hear that you like the music. Indeed, all the patches (short, long, legato and others) sound very good, and it is possible to shape them in such a way that the result is a very natural, musical flow. And the way they are recorded means they remain convincing even at louder levels.
What I like so much about working in Dorico (in combination with a good library, of course) is that when you write some notes, you can immediately make them sound more or less as they should. That often gives inspiration for the next notes. I admire all those composers, past and present, who had just a piano or another keyboard instrument and composed for a whole orchestra without hearing those sounds in reality, only in their inner ear. That is a skill I have only for a melody and some basic harmony, certainly not for the sounds of all the instruments. So I’m glad that these days I can express my musical ideas with the technology that is now available. (And besides: most, if not all, of my orchestral music will probably never be played by a real orchestra, so this way I can still share it.)
Concerning the few places where the inner parts don’t sound so clear: I’m aware of that, and I tried to improve it. The tenuto patch didn’t always make things better, so I thought: this is what it is for now.
For me the note-length automation didn’t work so well. After trying it, I concluded that I prefer to make my own choices with dedicated playing techniques. It may not be the fastest way to work, but I don’t have deadlines. I had a professional music education, but I’m not a professional composer. And working with playing techniques goes quite fast; I really have nothing to complain about here.

Concerning more movements: you are not the first to suggest that. I have some sketches that I moved out of the existing movement into a second flow, which I can still work out. So we’ll see what follows.

I listened to your piece three times. I really wanted to understand your music, and the first time I had some trouble with that. (That is not down to your composition but to my, let’s say, limited ability to follow harmony the way you and many others use it. I’m sure there is nothing wrong with how you use it; any criticism here is of myself, not you.) Then I noticed that it is an Elegy, and once I realized that, the notes opened up for me and I felt much more comfortable with it. So in the end I can honestly say that I enjoyed your music!

And thanks to you as well for your kind words. Harmonically, I think my Serenade is in general fairly straightforward compared with quite a few other works, though it doesn’t follow any particular convention. On techniques: it’s true that NoteLength is a great time-saver for me and automatically chooses the right articulation for the context much of the time, though of course there are quite a lot of occasions when you need to go in and override it. When the note lengths become more user-configurable, I think it will be better still, and perhaps some of those who, like you, don’t get on with it right now will find it more useful in the future.

Anyway, I hope you’ll post any future movements on the forum.

TUNING!

Apologies if someone’s already covered this; I’ve only briefly scanned a rather long thread and didn’t notice anyone covering it.

Just going with equal-temperament tuning across the board, from start to finish, drives me nuts. Real orchestras inherently don’t work that way! Outside of instruments like harps, pianos, organs and tuned-bar percussion, the musicians are constantly adjusting intonation to lock chords without the ‘beats’ of out-of-tune intervals, producing ghost tones and reinforced harmonics in the process. Plus, most wind instruments, fretted strings and so forth each have oddball tuning characteristics that real orchestras are always dealing with (even fighting) throughout musical phrases.

Experiment with just-intonation scales. It might take more instances and some hopping around after key changes, or learning to use things like RPN messages, pitch bend, or note-expression events to redefine a scale’s tuning.
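To make the arithmetic concrete, here’s a minimal Python sketch (not tied to any particular sampler or library) of the just-intonation idea: it computes how far 5-limit just major-scale degrees deviate from equal temperament, and turns those offsets into 14-bit MIDI pitch-bend values, assuming the common default bend range of ±2 semitones.

```python
import math

# 5-limit just-intonation ratios for a major scale, relative to the tonic,
# paired with the equal-tempered semitone steps they replace.
JUST_RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]
SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]

def cents_offset(degree):
    """Deviation of a just-intoned scale degree from equal temperament, in cents."""
    just_cents = 1200 * math.log2(JUST_RATIOS[degree])
    return just_cents - 100 * SCALE_STEPS[degree]

def pitch_bend_value(cents, bend_range_cents=200):
    """14-bit MIDI pitch-bend value for a cent offset (8192 = no bend),
    assuming the bend range covers ±bend_range_cents."""
    return 8192 + round(cents / bend_range_cents * 8192)

for d in range(7):
    print(d, round(cents_offset(d), 1), pitch_bend_value(cents_offset(d)))
```

The well-known result pops out: the just major third (degree 2) sits about 14 cents flat of equal temperament, which is exactly the beating-free locking described above. In practice you’d send these as per-channel (or per-note) bends on top of the played pitches.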

Bottom line: a brass quintet in the real world is NOT EQUAL-TEMPERED! Nor is a woodwind section. It’s not quite as noticeable with strings when there’s heavy vibrato going on, but it still makes a difference in a mock-up; a real string quartet isn’t equal-tempered either.

Ambient NOISE

That’s right: stick a mic in an empty concert hall and let it record all night. Study the result a bit when it’s done; those mics will pick up all sorts of interesting things over the course of a night, but mainly you want the room’s natural noise floor. Put that track of natural ‘room noise’ in your mix. If you’re going for a truly ‘live’ sound, you could also mix in other random but subtle noises typical of a concert environment (bumps, thumps, feet shuffling on stage, valve/key clicks, pages turning, etc.). Don’t overdo it, but having some of this ‘noise’ in the mix can help mask flaws (or over-perfections) in our sample libraries.
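If you don’t have a hall recording handy, the idea can be sketched with a synthetic noise bed; the point is mostly the level. A rough stdlib-Python illustration, where the filter coefficient and the -60 dBFS target are arbitrary assumptions for the sketch, not recommended values:

```python
import math
import random

SAMPLE_RATE = 44100

def noise_floor(seconds, level_db=-60.0, seed=0):
    """A synthetic stand-in for a recorded room-noise bed: white noise,
    gently low-passed to dull the top end, scaled to a target RMS level
    in dBFS. (A real recording of your hall is better; this only shows
    how such a bed is levelled.)"""
    rng = random.Random(seed)
    raw = [rng.gauss(0.0, 1.0) for _ in range(int(seconds * SAMPLE_RATE))]
    # One-pole low-pass: soften the hiss toward a duller room rumble.
    smoothed, acc = [], 0.0
    for s in raw:
        acc += 0.1 * (s - acc)
        smoothed.append(acc)
    rms = math.sqrt(sum(x * x for x in smoothed) / len(smoothed))
    gain = 10 ** (level_db / 20) / rms
    return [x * gain for x in smoothed]

bed = noise_floor(0.5)
rms_db = 20 * math.log10(math.sqrt(sum(x * x for x in bed) / len(bed)))
print(round(rms_db, 1))  # → -60.0 (far below the music, audible as a floor)
```

You’d render this to a track and leave it running under the whole mix; whether you later pull it back out is, as the post says, up to you.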

Some tricks with brass to experiment with…
It’s hard to get a natural-sounding digital brass instrument. One trick I learned that can save time, and even make inexpensive libraries sound far more realistic, is to play your brass track DRY and use a real microphone to record it as your speakers play it back. THEN apply your reverb and so on to that audio track in the DAW. Even if you don’t have a great studio/room for this, and your speakers aren’t great, give it a try anyway. Chances are good that your ears will learn some things, and you’ll end up making much more natural-sounding brass tracks.


I’m surprised you missed this thread which started out about a Tannhäuser mockup but quickly became mainly about tuning.


One curious caveat to your noise observation is that recording technology is incessantly striving for quieter mics and preamps that reduce any noise print. Concert halls are designed to be acoustically favorable and as isolated as possible from the outside world, to minimize outside noise factors. Special silent HVAC systems are installed, with the equipment remotely located and fed via padded trunk lines, undetectable with a decent high-pass filter even on a recording of bare room noise. So adding noise to a recording doesn’t make much sense to my brain. Noisy recordings are usually an indicator that the recording equipment isn’t quite up to snuff. It’s a bit like the people in the digital-organ world who introduce fake wind sag that makes the tuning dip with large chords “because it’s more realistic!”. Wind sag is a deficiency of the instrument and (at least in my opinion) not to be emulated or lauded, but fixed with more stable winding; even Bach complained about it over two and a half centuries ago. One reason I don’t particularly enjoy watching old VHS tapes (apart from the nostalgia) is the audible hiss, and the way worn tape tracking can vary the pitch. Same with the crackle on an EP.


Yes, and the more they filter out the natural world (i.e. the natural ambience of a room), the more ‘fake’ it sounds when dealing with acoustic ensembles and soloists.

I’m not so much talking about faking hum, buzz, signal leakage, electronic distortion, or wow-and-flutter artifacts from old recording media. I’m talking about natural, real-world noise, tuning issues, performance phasing, etc. How much of it to add, and when and where in the mix, can take experimentation (and a lot of listening to live performances, sitting in real ensemble rehearsals and moving around the room as they practice, judging bands and orchestras in a variety of halls and auditoriums from different vantage points, and so on), but it really can help warm up a mix, hide issues in instrument libraries, etc.

Each room, and all the objects in it, have a natural resonance, considerable portions of which are detectable by the average human ear. It sets our ears in motion as the air moves around the room, and it colors audio perception in significant ways. And it’s not ‘just the reverb’: room temperature and so forth affect the overall course of a performance, and having that natural noise floor in there gives the human audio engineer’s ears a ‘sonic floor’ to mix against, something virtually every REAL ROOM is going to have.

Musicians pick up on this and it affects how they play and blend, and the conductor’s interpretive choices as well. Those things are intrinsically part of music made by human musicians on acoustic instruments; when they’re missing, it’s pretty obvious. The ambience of a room can also serve as a reference point for the mix, giving the ear a comparative base for perceiving dynamic changes and emulating directional characteristics; overall staging can be improved (even if you later decide to take most, or all, of the ‘ambient room noise’ out of the mix).

Case in point: put on a blindfold and wander around a dead recording room (say, every surface covered in six inches of pink insulation, plus a well-padded, diffused floor) and you have NO IDEA where in that room you are. Clap, stomp, whistle: directional reference points are much harder for most human ears to find in such a room. In contrast, put on a blindfold and wander around Carnegie Hall. Within an hour or so you could probably detect, with a surprising amount of accuracy, where you are in that room, just from your ears and brain making use of that ‘ambient noise’. The resonance and characteristics of the hall give your ears and brain reference points for where a sound is coming from, and the milliseconds of difference between the direct sound and the reverb give even more hints. So putting some kind of ‘noise canvas’ in the mix can really help you sort out a more realistic mix (and you still have the option of reducing or removing the noise later).
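Those milliseconds are easy to put numbers on: sound travels at roughly 343 m/s, so every extra metre of path length delays a reflection by about 3 ms relative to the direct sound. A tiny sketch (the geometry here is invented purely for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def predelay_ms(direct_path_m, reflected_path_m):
    """Gap between the direct sound and one reflection arriving at the
    listener, in milliseconds."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

# Player 10 m from the listener; the same sound also bounces off a wall
# 17 m behind the player (17 m out, then 27 m back to the listener = 44 m).
print(round(predelay_ms(10.0, 44.0), 1))  # → 99.1 ms after the direct sound
```

That gap is exactly the kind of localisation cue the ear exploits in a hall, and it’s what the pre-delay parameter on a reverb is modelling.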

The ‘room’ (or lack thereof) is the ‘sonic canvas’ on which we paint; if there is no authentic background, it sounds much more FAKE.

Acoustic ensembles aren’t like pop music, where every musician has a personal monitor crammed in each ear, molding their performance inside a virtual soup of artificial effects. Trying to mix and virtualize them as if they were is a big part of why people say even some of the best mock-ups (especially in modern movie scores) sound FAKE.

And the more close microphones they use, the less natural it sounds, also!

I am amazed when I look at YouTube videos and see more microphones than performers, and that in venues with good acoustics…

It seems that some engineers have lost the plot.

David


I both agree and disagree…

Sometimes sounding ‘real’ isn’t as important as being creative and sounding ‘good/cool/interesting/whatever’. Quite a few engineers don’t care as much about authenticity as about an overall pleasing performance, whether through cheap earbuds or some freaky new surround-sound system.

Good mock-ups can and do trade authenticity for aesthetic preferences. Something can sound ‘fake’ compared with a real orchestra, but still sound really good in its own right :slight_smile:

When learning to mix, I personally found it really helpful (and still do) to have some room noise in there; again, even if it gets removed later.

How loud is louder? Louder relative to what? PPP often has more to do with timbre and articulation style than with actual dB levels. Where is that soloist sitting? Who cares what the meter says? With some base reference points that stay consistent throughout the mix, it’s a bit easier for me to wrap my ears around it.

This is why particularly good VSL libraries position the players on a sound stage, and vocal libraries often include many mic positions; you can choose a “perspective” as a means of sculpting the sound.

Yes, I’m aware of them, and they’re gradually getting better, but there’s still no constant canvas to mix against.

With 90% of the stuff made with those libraries, artists plug in and play whatever the defaults were out of the box, and it all sounds the same. The dynamics and phasing aren’t realistic (perhaps mathematically near-perfect, but not realistic). The tuning doesn’t jibe with the natural flow of real musicians playing the same phrase. Things that would be masked or covered by noise in the real world are overly exposed, and things that should be there in a real-world performance are missing.

In no way am I saying those technologies are bad. They’re incredible, and sound really good, sometimes even BETTER than their real acoustic equivalents in terms of psychoacoustics and theoretical ‘logic’. But they still sound (rather obviously) like a computer orchestra from the first note to the last. In my experience, you’re pretty much stuck in the can they made for you unless you build some of your own presets and envelopes (and noise, again, serves as a constant reference point while shaping them), and one score made with such a library typically sounds like every other lifeless score made with it. Applying a different room canvas, and mixing to fit it, can change that.

Plus, a lot of people don’t have those libraries, but they do have a good bread-and-butter baseline palette of less complicated instruments in that sub-$500 price range. They can also whip out an iPhone and make samples themselves. They can get really good mock-ups too, if they learn to stage and mix.

Each year, fewer and fewer mixes have DEPTH. They sound one-dimensional, almost dynamically ‘compressed’. Part of the reason is that all the natural ambient noise of a given performance space has been filtered out, so the brain has no constant reference point for judging dynamics and directional origins.

I’ve always been torn between recording my live playing to enter notation, or typing it in and then making manual adjustments.

Playing is supposed to be more human. But my playing might simply be considered dirty and inaccurate.

Good players always have very precise timing. They don’t anticipate or delay note starts relative to a given tempo; they adapt their tempo while still keeping accurate timing.

My preference now is to enter notes precisely, make local adjustments to shape the phrasing, and do detailed work on the tempo map. That’s where, I think, the magic happens.

Paolo


I agree wholeheartedly (but each to their own).

I might use MIDI step input to get the general harmonic shape of phrases, then Dorico’s extensive tools to stretch or shrink the timeline as needed. It’s so much more efficient than Sib or MuseScore - though I do still use them occasionally.

What do you mean by this? I am aware of adjusting note start and end, but no notion of “stretching the timeline.” The only mentions of stretch in the documentation are about notation / engraving.

Probably the gold standard is comparing a piece directly with a live recording, to know for sure whether you’ve achieved it.

Recreate that cough you heard in the audience in between two movements. : }