Why a DAW?

As if I haven’t used way too many words already, I’d like to add one hopefully clarifying thing to my recent reply:

I learned a long time ago that there is indeed a ‘gap’ between notation and sound. First of all, I bought a study score of Sibelius’s 5th Symphony and discovered that what he wrote down didn’t at all match up with what it actually sounds like. As you said, tenuously connected.

Then one day I was suddenly asked to conduct a rehearsal of Schubert’s Unfinished, and discovered that the opening didn’t even have the time signature I thought it had. I heard it in 6/8, whereas of course it’s actually in 3/4, but with the measure length exactly the same as I thought. This was a shock, and a result of the ‘baggage’ you mention.

So I do get that by composing ‘in my head’ in terms of notation I am indeed limiting myself. But I can’t help it, that’s just the way my head works. And I do love ragas, even recorded ones, and I do understand that a recording of one has very little to do with the essence of what it’s ‘really’ about.

All that being said, I’m simply trying to get my head around HOW a composer who uses a DAW as the means of achieving a composition does so. Since notation is being ‘bypassed’ (from my point of view), what’s being used INSTEAD of notation? Is it simply a matter of playing notes into the process and then asking the DAW to play them back? Or is it more complex than that, and, if so, in what way?

1 Like

I think there are degrees of control that people are comfortable with and this influences the way they work.

On one side you have some composers of western classical music. They see themselves as the controller or master of the music, and in an attempt at complete control they simply produce a well-notated score, like I talked about in my post above. In extreme cases a composer’s most enduring work turns out not to be music at all, but a treatise on musical composition or theory instead.

On the other side of things you have some Japanese noise musicians who purposefully don’t learn their equipment, and who swap it out and rearrange it frequently in an attempt to have absolutely no control over the outcome. If you asked them how they get their thoughts into their compositions, they would laugh at you. They don’t want their thoughts to get in the way of their compositions; they want the compositions to be at the mercy of a power beyond themselves.

Most people are somewhere in between these two extremes, and in my opinion it is somewhere in between where the best music is made. I myself was on the side of less control for many years, so to pull myself more towards the center I learned music theory, started writing scores, and trained in classical piano.

However, it sounds like you are closer to the full-control side of the spectrum, and this might be causing you to ask the wrong question. The question is not simply HOW a composer gets their thoughts into a DAW (or turns them into music more generally), but to what extent that happens at all.

If I could recommend a little exercise to help shift your perspective on the possible ways music can be made I would suggest you get an instrument that is impossible to completely control. Something like a Lyra-8 maybe. With an instrument like this you will be forced out of your comfort zone. You will be able to have some general ideas about what you want the music to sound like, but you will never be able to have complete control over it and you will be forced to work with your instinct and emotions in real time and in collaboration with the instrument itself. Some people work this same way in a DAW :wink: .

2 Likes

When you compose a score, you don’t have control over the outcome, nor should you expect to. You can only communicate the music that exists in your head to the musicians through the score, more or less successfully. The conductor, the musicians, the venue, the instruments themselves are factors beyond the composer’s control, and that’s how it has always been. All music that preceded recording was written to be performed live; the idea that recorded music sets a precedent for a composition’s performance “fidelity”, or “quality”, or whatever, is only about 100 years old.

There is a spectrum of how much “entropy” those factors (musicians, conductors etc.) inject into the composition, on top of whatever is NOT set in stone or demanded even in the most rigidly marked score. A recording, on the other hand, always (unsurprisingly) plays back the same. Thus the composition is not a composition anymore, but a rendition. (Kind of how one comes up with a motherboard chipset, but then companies produce different models on the same chipset, with different strengths and weaknesses.)

Moreover, within a DAW one can easily have 8 trombones screaming at fff and a flute playing a ppp C below the staff in perfect balance, even having the single flute overpower the 8 trombones if needed. Of course, this would not fare well played live. But it is now a reality that one has to accept and incorporate into orchestration.
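To make that concrete: a fader move in a DAW mixer is essentially a gain multiplied into the track. Here is a rough Python sketch, with stand-in signals and gain values of my own choosing (not any particular DAW’s internals):

```python
import numpy as np

def db_to_gain(db):
    # Decibels to a linear amplitude multiplier.
    return 10 ** (db / 20)

# Stand-in "stems": one second of audio at 44.1 kHz.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
trombones = 0.9 * np.sin(2 * np.pi * 110 * t)   # 8 trombones at fff: a hot signal
flute = 0.05 * np.sin(2 * np.pi * 523 * t)      # flute at ppp: a quiet one

# Pull the trombones down 24 dB and push the flute up 18 dB,
# and the single flute now overpowers the brass in the mix.
mix = db_to_gain(-24) * trombones + db_to_gain(18) * flute
```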

So, it depends on what kind of music we’re talking about here. A DAW most certainly allows for unprecedented degrees of freedom when it comes to shaping sound, but this comes at the cost of relinquishing what freedom exists in notation and rigidly binding the composition to a specific performance.

By the way, isn’t it entertaining how things have changed? We feed a certain amount of data to autonomous ‘AIs’ (the performers) who interpret that data and introduce playback and performance irregularities according to each unique AI’s self-programming, yet we consider this a very deterministic method; whereas having one person enter the data themselves into a completely deterministic system, interpolating the data of those would-be AIs at the same time, is considered more free and unrestricting.

1 Like

To answer the ‘how’ of a DAW: I mostly get my initial ideas by playing until I have something worthwhile, and a DAW is a good environment to do that in. I also use a lot of pure improvisation to generate material, and use the ‘retrospective record’ feature of Cubase to capture improvisations I like.

Then I use the DAW’s arrangement features to generate a draft structure. Cubase has the concept of an ‘alias’ which is a linked copy of a section of music. This is great because I can edit a section and the edit will be applied wherever that section occurs.

Then I refine things using MIDI note editing, velocity editing, quantisation, automation, tempo tracks, and transposition. I use the ‘piano roll’ editor extensively - it makes the editing process much more transparent than the score editor.
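Since ‘quantisation’ may be the least self-explanatory item in that list: it snaps recorded note timings toward a rhythmic grid. A minimal sketch in plain Python (my own illustration, not Cubase’s actual algorithm):

```python
def quantize(start_beats, grid=0.25, strength=1.0):
    """Snap a note's start time (in beats) toward the nearest grid line.

    grid: grid size in beats (0.25 = sixteenth notes in 4/4).
    strength: 1.0 snaps fully; 0.5 moves only halfway, keeping some feel.
    """
    nearest = round(start_beats / grid) * grid
    return start_beats + (nearest - start_beats) * strength

played = [0.03, 0.27, 0.52, 0.74]                   # a sloppily played run
print([quantize(s) for s in played])                # [0.0, 0.25, 0.5, 0.75]
print([quantize(s, strength=0.5) for s in played])  # only halfway to the grid
```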

I also use group tracks to allow me to control multiple tracks at once, and use effects plugins (reverbs, compressors).

If the arrangement is getting heavy on CPU, I often render the midi to audio to reduce the load.

If you have not already, you might try looking at the Cubase channel on YouTube, which gives a bunch of “HowTo” videos as well as other looks at that specific DAW. Other YouTube channels (which would likely come up in your feed subsequently) would give you leads on how others approach DAWs.

This is clearly beyond my comfort zone, but the processes I have seen suggest that many use layering (in sections), at least where MIDI input is concerned: one lays down a chord track or bass line and then starts adding to it: chords, then bass, then melody, then pads that realize or sweeten the chords or voicings, etc.

But there’s still a kind of notation going on under the hood - MIDI messages are created and stored inside the DAW project; these messages record the start of each note, its pitch, its duration, its dynamics, and so on. Are you asking about MIDI?

When I first read this post, my first reaction was to write:

Why a digital advanced music notation system? Mozart and Beethoven used wonderfully cut quill pens, and look at the results…

Ok forgive my joke… :wink:

I hope I’ve understood the underlying question, and I admit I’m not much of a DAW user!

For a basic example of what MIDI stores, take a look at Play mode in Dorico.

Each track is given note information, commonly shown on a piano roll. Each note has (at a minimum) a pitch, an on (start) and an off (end), and a velocity.

Discrete changes (Program Changes) or continuous ones (Control Changes) can be made on the track. Each track is assigned to a playback instrument. Playback instruments turn the MIDI notes into audible pitches, and react to the MIDI PCs and CCs. For instance, a Program Change may tell the instrument to switch from a single violin sound to a tutti violin sound, and a Control Change may tell a piano instrument to activate or deactivate (or half pedal) the sustain pedal, or to pitch bend, or to increase or decrease vibrato.

As to getting data into the DAW, common methods involve recording from a MIDI keyboard, MIDI guitar or breath controller, or using grids of buttons (think drum machines). PCs can be performed with buttons, and CCs can be performed with pedals (some of which support continuous data rather than just on/off messages), mod wheels, knobs, ribbon controllers, joysticks etc.
Given the underlying data is mostly numeric values (0-127), it can also be drawn in with a mouse, or even typed as text.
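If you want to see that raw data outside a DAW, here is a small sketch using the Python library mido (my choice of tool for illustration; no DAW requires it) that writes the same kinds of events described above. The program and controller numbers follow the usual General MIDI conventions:

```python
import mido

mid = mido.MidiFile(ticks_per_beat=480)  # 480 ticks per quarter note
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=40, time=0))             # PC: GM violin
track.append(mido.Message('control_change', control=64, value=127, time=0))  # CC64: sustain on
track.append(mido.Message('note_on', note=60, velocity=96, time=0))          # middle C, fairly loud
track.append(mido.Message('note_off', note=60, velocity=0, time=480))        # released one beat later

mid.save('example.mid')  # everything above is just small integers, 0-127
```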

As to the paper, the minimum you need to tell a DAW is tempo, which exists as a global track. You can record live with no relation to the project tempo, but if you want to be able to take advantage of e.g. quantisation then your project tempo (which can again change continuously) must match your recording (or you must record to a click). Other than that, the paper isn’t really relevant; the project expands to house whatever you put in it.
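To show what the tempo track actually changes: MIDI events are stored in ticks, and the tempo decides how long a tick lasts in real time. A tiny sketch (plain Python; the 480-tick resolution is just a common default, not a requirement):

```python
def seconds_per_tick(bpm, ticks_per_beat=480):
    # One beat lasts 60/bpm seconds; a tick is one subdivision of it.
    return 60.0 / bpm / ticks_per_beat

# The same 480-tick quarter note lasts a different amount of real time
# depending on the tempo track, which is why quantisation only lines up
# when the project tempo matches the performance:
for bpm in (60, 120, 180):
    print(bpm, round(480 * seconds_per_tick(bpm), 3))  # 1.0, 0.5, 0.333 s
```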

This doesn’t begin to describe potential advantages or disadvantages to modern DAW, notation programs or hybrids of the two, but hopefully it goes some way to describing the paper and the holepunches.

In a DAW the important letter is the A, for Audio - not an M, for MIDI. If you have to record or master audio tracks, you simply need a DAW. That’s why…

Then you may need something else. This is the case with Cubase, which has a scoring function I’ve used for years, and with Dorico, which has something similar to a sequencer. But these are just additions to their primary functionalities. For you, they may be enough (or not)…

Speed: it’s much faster to get to a listenable track using a DAW than via notation, which is one reason why much of Hollywood does it. There you usually get minimal time and budget, so a DAW is the quickest path (and many people working in the field don’t even know notation, it seems, unfortunately).

I write for games and do most of mine in Dorico, but I also work in a DAW, especially for electronic work, and it is faster. Still, I try to avoid it for the following reason: the DAW encourages you to write the same boring tracks over and over, because you draw on your piano improv skills rather than your imagination. Worse, it trends more toward sound design than music these days.

You can easily hear this: most media music is derivative and recycled, partially because of this, I think. Anybody seen the latest Dune movie? It’s really good (coming from a guy who has read the books at least 50 times), but Hans Zimmer’s score can’t properly be called a score. It’s so devoid of a tune that my ear finally grabbed on to a single interval as the closest thing. It got repeated enough to serve the purpose, but it’s more audio than music. Except, oddly, a bagpipe thing in the middle that stuck out like a sore thumb.

Anyhow, I use a DAW on all my music for mixing; here’s looking forward to Nuendo/Cubase integration!

2 Likes

I may be missing something crucial, but isn’t the visual element of a DAW (piano roll, audio peaks etc.) also a form of notation? The computer doesn’t play it back; whatever sound system you have does, and it all goes through a great deal of mixing and processing before it’s pushed out of whatever hardware you have. While you don’t get performance instructions in a DAW (at least not for a human), you do get a huge array of other (cough) metadata, displayed for the computer and hardware to interpret.

So - why DAW? Well, if you would like your music interpreted by the computer (which has advantages), then there you go - a useful translation tool. Like paper notation, a wholly different type of tool, which some get on with and some don’t. Others yet use Csound, Max/MSP or Pure Data to interface with that medium. I should also point out there are plenty of different ways of notating music on paper…

When I was introduced to software audio workstations, it was in the light of musique concrète and similar audio processing - the high-art avant-garde type - and it just didn’t appeal to me as a medium. It never occurred to Peter Manning (our lecturer and an early pioneer of the medium) that it would be used for anything else.

I do see its value as a tool for working composers, though - it has its own limitations and idioms, and it spells the death of live performance in certain industries…

Aha, you have seen through my ignorant question and gotten to the heart of the answer: I am very well aware of what MIDI is, and use MIDI signals from my electronic piano to input my notes (and hence my notation) into Dorico, and before that Sibelius, and before that Finale, and before that other more primitive applications, etc., back to what seems now like the dawn of time.

What I was not aware of was that MIDI signals themselves can be recorded just as audio can, and that a sort of ‘MIDI recording’ lies at the heart of what a DAW does. That was the missing piece, and suddenly the whole puzzle makes sense to me. Thank you VERY much, to you and to all the others who took the time and trouble to reply.

1 Like

As often happens with my queries, Leo, you have been the one who provided the details that finally get through my thick skull. Poster ‘ebrooks’ provided the basic fact when he said that there was still some sort of notation going on ‘under the hood’ in a DAW, and then explained that that ‘notation’ is really a way of recording MIDI input. That was the core answer I needed, but I was still vague on details. Now you have provided those, and I can’t thank you enough. I think I finally understand the answer to my question. Thanks to you and everybody else who responded.

3 Likes

It is funny you bring this up… I am actually really tired of the kind of overdone orchestral and almost ridiculously epic stuff you hear in 95% of new movies and shows, which could just be cut and pasted from another movie with no one the wiser. The new Foundation series on Apple TV+ is a perfect example of this kind of generic music that adds absolutely nothing to the show. My wife even laughed about it when we started watching and said it sounded so boring, and it’s true! The series has so much potential for new and exciting music (I would love to rescore it with just a Lyra-8 and a Pulsar-23), but it is just the same old stuff you hear everywhere. I imagine it is a combination of the studios playing it safe and requiring that type of music, a lack of diversity in the composers who are writing it, time constraints and deadlines, and, as you said, the programs, tools, and techniques being used.

I actually don’t remember the music from Dune except for the bagpipes during the battle scene, which were so badly out of place that they made me uncomfortable lol. So another huge music budget squandered on some rich guy resting on his laurels, instead of going to someone new with more passion and better ideas. I hope Hans is getting good use out of his Knifonium :-/. It is sad that all the money from these big movies goes to only about five different media composers. Have you seen Junkie XL’s studio? It makes me sick; I wish it would get spread around more.

The opposite of this is the music on HBO’s White Lotus, which was amazing. Unique, and it added so much to the show.

The main overdone idea is what I call the “Epic Drum”. I think Zimmer started it ages ago; he has certainly done it to death. But every movie out there has to have the Epic Drums (I’m purposefully using the overused word Epic here - the music is anything but). Anyhow, it’s some mishmash of big drums, with taiko playing a big part of it. And there’s nothing interesting about the rhythm; it’s always a pattern of “duh-duh duuuuuuhhhhhh”. And it has to be ear-splittingly loud.

Even the latest favored-boy composer, Ludwig Göransson, has fallen for it in the Boba Fett miniseries trailer; nobody is immune. (He’s got some good ideas, but coming from hip-hop he isn’t able to keep a firm grip on them in his music, it seems.)

Anyhow, nobody has taught these people that loud is not full, and that a grand sound comes not from jackhammering volume but from contrast.

That’s because the person producing the music is using those trombones at fff for the timbre, not the dynamics. So they use those samples and then turn the level down to balance them against one flute.

This definitely is a selling point of sample libraries, as not all music is intended to be wholly and completely realistic… and not everything composed in the 21st century is intended to be played by an orchestra.

The fact that people in the 16th century lacked our technology does not mean that their way is the only right way to think about these things.

Music was written to be performed by real musicians back then because there were no other alternatives.

These days, we have alternatives: computing technology, sample libraries and synthesizers. I think people need to recalibrate their idea of what is normal. Normal changed a long time ago; people are just slow to catch up (and it seems it’s always cliché to be an old thinker - not just in the realm of music, either).

2 Likes

An encapsulation of the tragedy of our times.

3 Likes

Yes, we no longer need to consider music a ‘concentus’ of musicians coming together to play a ‘concerto’, after having come together for rehearsals and discussed the message and the craft of that music. And there is no need to listen to them in the same public or private space, creating a living community.

We have virtual communities, where lonely people in their bubbles can watch musicians staring at their laptops on YouTube, through low-grade listening devices mainly conceived as boom-boxes. And we can even send them a Like, replacing the warmth and energy of an applause, or the de visu appreciation after the concert. All while waiting for that unemployed graduate boy to bring us our cheeseburger.

It’s old-fashioned and maybe a bit conservative to complain about this lack of flesh, blood, real smiles and human proximity. Yet I still feel an attraction to those old black-and-white movies…

Paolo

3 Likes

It was not my intention to steer the topic in such a direction. I love Cubase, I use Cubase, I appreciate Cubase. I would also buy Dorico in a heartbeat if I had money to spare.

I just misjudged where the OP was coming from when he said he starts with the music in his head as notation. I thought that notation software with NotePerformer would click with him more than the more analytic approach of a DAW.

I am not saying people should not use the “force” to create unrealistic orchestration, either. Everyone is free to use the tools as they please.