Why a DAW?

This question might be better asked in the Cubase Forum. If that’s the case, I’m sure someone here will tell me!

Anyway, the background to my question is this: I can’t imagine ever buying Cubase or any other DAW. To me, music IS notation. Well, let me correct that: to me, music is the SOUND that is suggested by notation. I can’t think of it in any other terms. I learned to read music notation before I learned to read English, my native language, and when I compose I see the notes in my head, before I try to write them down or play them into Dorico. Or to put it another way, I TRANSLATE the sounds into notation, in my head, as my first step.

So my question is this: How do composers who do so in a DAW do it? What is the thought process? Obviously they’re thinking in terms of sounds, but how do they translate those sounds into a DAW environment? How do they know when they’ve got what they intended to get? Only by playing it back? Or is there some other way?

This is just idle curiosity. If you’re offended by idle and/or pointless curiosity, please don’t bother replying! I simply don’t understand how composing works if the process is done via a DAW. I’ve read many descriptions of DAWs and how they work, including the Wikipedia article on the subject. It’s all Greek to me, as the saying goes.

If you feel like indulging me, I look forward to your reply.


Take a look at media composer Guy Michelmore’s YouTube channel for an amusing view of his composing process.

Much of the DAW jargon (very different from traditional music jargon) has its roots in the recording industry, often leading to confusion.


A very interesting topic. This dilemma still remains for me.

You can still compose by notation in a DAW. But I strongly suspect that performance… would not be what you envision. Even when composing on a sheet of paper, no matter the accuracy of instructions, there is always room for interpretation, and different performances. Valentina Igoshina and Artur Rubinstein play the same waltzes of Chopin, from the same sheets even, but they are very different performances. That’s lovely and magical. Personally, I find this thought, that a musical idea resides on a piece of paper (or a digital file) as “instructions” and then it’s up to the performer to bring it to the world as sound, liberating.

On the other hand, when using a DAW, one has to immediately decide on a performance. What was a “poco rubato e leggero” on paper (and a very specific expectation of sound in my head) needs to be input as hard facts, data against a timeline. Now, there’s the option of a performer playing from the sheet of paper. Then it’s just a matter of setting up a microphone and pressing the record button (sort of). But what if the performer does not share the exact vision of my “poco rubato e leggero”? There are many possibilities then.

For one, I could try to communicate my expectation of the performance to the performer. Another one is that the performer could perform in such a way that sways my own expectation of performance! Many possibilities! The common thing here is communication. Interaction between humans.

But if I sit down with a sample library, a DAW, and myself, things change. I must capture the performance myself. Trivialities that a performer could take for granted (e.g. bowings, bow position, string choice) are all parameters that have to be decided on by me FOR EACH SINGLE NOTE. And while in the previous example the watchwords were interaction and communication, here they are programming and dictation. There is no interaction with a computer, just execution of my own commands, and a feedback loop of trial and error.
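To make the contrast concrete, here is a hypothetical sketch of what “poco rubato e leggero” becomes once it is data against a timeline: every note an explicit set of numbers. The field names and tick values here are illustrative, not any particular DAW’s actual format.

```python
from dataclasses import dataclass

# A single note, as a DAW ultimately stores it: nothing but numbers.
@dataclass
class NoteEvent:
    pitch: int        # MIDI note number (60 = middle C)
    start_tick: int   # position on the timeline
    length: int       # duration in ticks
    velocity: int     # how hard the key was struck, 1-127

PPQ = 480  # ticks per quarter note (a common resolution)

# "poco rubato e leggero" spelled out by hand: notes nudged slightly
# off the grid (rubato) and given low velocities (leggero).
phrase = [
    NoteEvent(pitch=72, start_tick=0,            length=450, velocity=48),
    NoteEvent(pitch=74, start_tick=PPQ - 15,     length=430, velocity=44),  # a touch early
    NoteEvent(pitch=76, start_tick=2 * PPQ + 20, length=460, velocity=50),  # a touch late
]

for note in phrase:
    print(note)
```

Multiply that by every note of every instrument, plus continuous controller curves for dynamics and expression, and you have the “quantum world” described above.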

The two worlds are quite far apart for me: this quantum world of zooming in to paint the dynamics of each attack or purposefully placing each event early or late to create momentum, and the galactic world of writing down “ben misurato” and having another person immediately understand what it’s about. I don’t even want to go into full score orchestration and how distracting it becomes to orchestrate directly into a DAW.

When I compose a sketch directly in Cubase, I know that what comes out of the speakers is not what I envision. Not by a long shot. The thought that by spending many hours moving faders, painting CCs, layering samples and performing many other tasks I could come closer to what I envision is simply frustrating to me. I can’t bring myself to accept the fact that I have to direct an orchestra of perfect-sounding imbeciles (I’m not talking about the stellar musicians that recorded the samples, of course!) whom I have to teach how to phrase even the simplest of musical phrases! It’s like a mild insult I’ve heard at times: “You give him/her a whole note and he/she doesn’t know what to do with it.”

That was big, and it was a rant. I’m so sorry, but this topic always touches me.


For most of my life I composed mostly in my head - this was my training. I used a piano rarely (only if I had to, and even then I still needed to conjure the instrument blendings in my head - tough school). Then I’d write it down.

When I started using computers (and Proteus II - if anyone remembers that!) the sounds were so laughable compared to my imagination that I gave up trying to mock up my music and just used piano sounds instead for playback, to help ‘visualise’ it so to speak. So, essentially, nothing had changed for me.

The huge moment for me was NotePerformer. Suddenly, composing became hands-on and immediate, the playback was no longer laughable (except, to this day, for choral sounds) and the process much more akin to moulding clay.

I make no bones about this, and no apology either: good playback has changed the way I write, much for the better. And made the process fun too. Thank you, Arne Wallander (I’d love it if he saw this).


Comparing a DAW with notation programs is comparing apples to oranges.
You use a DAW to make an audible product.
You use a notation program to make scores/parts for musicians.
Both cases can be used for composing or arranging.
With a notation program you actually make a semi-finished product. After all, it has yet to be played.
With a DAW you generally work with sounds or with MIDI right away. Most DAWs also have a notation editor, but in general you have far fewer options. This certainly applies to making scores and parts.
Both types of programs are increasingly converging in terms of possibilities and there are many similarities.
So depending on the desired production, you choose a DAW or notation program.


To echo @Marien , DAWs are really heavy-lifting audio editing suites. Yes, they are perfect for detailed mockups (best results garnered by playing in each part manually one-by-one) but that’s a horse of a different color.

Really, I think, in this context at least, it comes down to how you want to compose. Many people (like myself) often improvise and need to hear their compositions outside of their heads, and DAWs are wonderful for allowing you to play what naturally flows out of you and make sense of it later. Other people think notation first. (I find myself somewhere in between.) Ultimately, I prefer to noodle around at a piano and then notate it, and make a detailed mockup later in a DAW if it suits me. Other people love to play it all out into the DAW, and essentially transcribe it later. Just depends on how you want to work. In the end, there are people who compose and publish their work entirely in one environment or the other. For me, I need publisher-quality scores to put in front of live musicians, but I also need multi-track mockups recorded by live humans too, so I use both.


To me those are two totally different things in my head.

When composing in Dorico I see notes and I think in notes. This process is much more a “head-first” thing.
When working in Cubase, the process involves much less thinking about the specific notes that will eventually end up on a musician’s sheet. It is somewhat an “ear-first” thing that happens on a gut level much more than in the head. Less overthinking here.

It’s more like “talking” vs. “writing”:
The creative process in Cubase is more like “talking”: I say something, correct it on the fly, stutter a bit, use different words, start a sentence again. When talking is done, I need to have got my point across.
Dorico to me is more like “writing”: This involves many things like thinking about what specific words I might use, the correct spelling of foreign words and the grammar of complicated sentences. When writing is done, everything needs to be perfect.


I use both Dorico and Ableton to compose. Dorico to write and Ableton to try and experiment. In addition, I use StaffPad as a sketchbook, which I always have with me.
Apart from that, I finish my work in Dorico when I have to make written parts and in Ableton/Pro Tools when I have to make sound files. For me, the difference between working in Dorico and a DAW mainly lies in the desired end result.


I think that is a 2010 way of looking at things. We should be looking at convergence. There is no good reason (other than the programming required, which is a big reason) that these should be independent activities.

If we go back to the late-80s or 90s, there were MIDI sequencers and there were audio recording systems. Forward thinkers wanted to blend these activities and that is what became DAWs. DAWs literally are the combination of MIDI sequencing and digital audio recording/editing.

The next natural point of convergence is to blend the notation environment with the DAW environment, recognizing that many composers begin their work in a DAW and end up in a notation application, and many do the opposite. For both camps, there are huge potential gains in productivity, creativity, and quality of results by making these two environments work more seamlessly.


No. This is the way my body feels and reacts when working with those pieces of software.
Don’t tell my body how to “look at things” :wink:

In my ideal world, I would use both Cubase and Dorico as a hybrid DAW/Notation package.

Cubase is perfect for iterative, sketch- and revision-based compositional process, which is used by many composers who work through improvisation. A couple of examples:

  • You can have multiple versions of any instrument part without losing sync and without having to copy-paste the rest of the entire score endlessly (the Cubase track versions)

  • You can break down the form of your piece (even sonata form!) into component parts, play them in and work on them in any order, with multiple in-place versions - and still have it play back continuously (the Cubase arranger track). I used it to put repeating sections back to back in a project, orchestrate them differently and play them back in correct order. I really miss this in Dorico!

When you stop thinking about DAW workflow as a linear flow, Cubase is an incredible tool. Unfortunately, it looks at notation as the final step of the process when everything else is said and done. It doesn’t allow notation to become a part of this iterative, modular process.

Dorico, at this current stage, is the reverse - it excels at notation but it is still completely “beginning/end” orientated, for obvious reasons, and it’s not yet very good for iteration, sketching and modularity that is in-place rather than based on copying and extending.

I hope that as the Dorico team is building the Cubase integration they keep this modularity in view. Extending it is what would make Cubase/Dorico an unbeatable combination.


Flows can be used to circumvent this “problem.”

Because setting up instruments, articulations, etc. is easier in a DAW for me.

And if the music won’t be played by real humans the DAW is superior for editing MIDI, mixing and mastering.

If the goal is composing music to print to sheet music and hand to musicians, that is where I’d use a notation application.

I find editing music in notation software very slow. I’d rather use paper and a pencil, then just scan it in using one of those applications. That would still be 2x faster than using notation software.


As others have said, different projects have different needs. In a lot of film/media work the majority of it is a hybrid approach where you are building mockups in the DAW with the best and most realistic sound possible. This is easier in a DAW because you can automate expressions and envelopes, use plugins, etc., etc. But often at least a few of the parts will be played by real players, so you need the notation for that. So I use both Cubase and Dorico: I usually mock up in the DAW and make it sound as good as I can, create a duplicate project and clean up all the MIDI for notation, and then export MusicXML into Dorico to do final tweaks, add articulations and dynamics, etc.

As for my thinking process with composing, I like to switch it up, but often I’ll just load a piano VST and then figure out my tempo and start playing and then cut and paste what I like and build from there.


This is one of those areas where people have different innate capabilities and experiences that lead them to prefer notation or DAW for composition. There is no “right” for everybody.

My take? Let notationists notate, let DAW’ers draw in the piano roll, and let those who hope for their convergence continue to hope and to express their hope.

I myself simply cannot compose in a DAW. It’s not as if I haven’t tried at length. It just doesn’t work for me. And yet all my compositions are for playback only; they never get handed to any person to interpret and play on a real instrument. Dorico has made a great start in its still-young life with virtual instrument playback functionality.

The setup and playback of virtual instruments continues to be much easier in a traditional DAW. Those who share my point of view hold out hope that Dorico can narrow that gap over time. Those who don’t see no compelling reason for it. Some see it as a diversion from development time that could be devoted to their own hopes.

The Dorico development team will allot their development time based on the market the way they see it at any given time.

While I understand the attitude that these are simply two separate work activities that do not need to be merged, personally I sympathize more with the idea that this is thinking rooted in the past of the two disciplines. But that is not to say I disparage either the people who disagree or their ideas. The market and time will tell, not me.


With the increasing tools in Dorico for cleaning up recordings, and the ability to precisely set the latency offset, I find myself real-time recording as a compositional technique more than ever before. It’s similar to what Guy does in his videos, except 1) his sounds better, haha, and 2) I remain strongly connected to the notation, not just the sound.

I find that in many cases, the result is music that flows more naturally, since it was partially improvised in real-time. It’s fascinating to see the differences in compositional choices. Also, if I get a good notation out of it, playback is much more natural.

I’m looking forward to seeing more refinement in regards to these tools (like smart split points). Also, I would like to see more options for humanization that allow better playback of sample libraries that are intended for DAWs, like Aaron Venture Infinite series.

If these refinements continue, I wouldn’t need a DAW at all in the writing process.


In my opinion composing in your head is a very limiting exercise. It is like imagining a chair in your head vs actually building one. No matter how good your imagination is, a physical chair is a different thing, and if you don’t work together and in collaboration with the raw material (wood in the case of the chair or sound in the case of music) the end result is lifeless. To me writing down notation is a step like sketching a chair. It can give you a better idea of the final thing and serve as a guide, but you have to be willing to totally change your plans if the material demands it. In my opinion the major problem with western music is that because writing it down has such a long history and has become so ingrained in us, we see a score as some sort of musical Platonic ideal, and we mistake the written score for actual music, when they are only tenuously connected. We feel like we are in complete control, and when we feel that way we close ourselves off and our music becomes as limited and feeble as our own selves. In the extreme we have even disconnected the composer from the musician, with the former believing he is the latter when he has never made music in his entire life.

You say that to you music IS notation and you can’t think of it in any other terms. This is the baggage our western musical heritage has left us with. Consider Indian music for instance: it has just as rich a history as our own, but it has never been written down. A raga has no composer. It has simply grown and evolved as a living thing by being played by musicians over hundreds of years. It is never the same thing twice. It is impossible to notate a raga and it is much better off because of this.

Hope that didn’t go too far off the deep end lol.


I agree with you. That’s precisely WHY I want to be able to use notation to compose while using immediate aural feedback. I do it all the time. I don’t just whiz through hundreds of notes confident it will all sound good in the end. I don’t even imagine more than the general sound of the passage I’m working on, constantly using playback to hear how it sounds in actuality. Then I go to work trying to experiment, listen, redo, etc., etc. until it starts to sound good to me. Not sayin’ everyone works that way, but I do.


My attitude is very similar. For a large-scale orchestral work the chance of performance by a professional orchestra is remote. I don’t know of a single one of the finest living symphonists who has achieved more than the occasional performance or recording of a work. I’m not talking about commissioned commercial music but rather that which arises out of an inner need for creative expression. Therefore it’s essential to achieve the best possible mock-up, partly for personal satisfaction but also for others who may be interested in hearing the works. Even if you have a reasonably good inner ear, it’s still much easier to get instant feedback from top quality sample libraries. For about a decade I used to write into Cubase or similar, but the minute I switched to notation software the technical standard of my work went up immediately, as I could really see what I’d done.

In other words, other than for those who don’t read music or are not very interested in doing so, a combination of playback and notation is essential – it is not one or the other. Dorico is already clearly the best at this and will only improve.


I really appreciate your attempt to answer my question. You’ve come the closest of all to understanding what I was trying to ask in the first place. The subject heading of my original post was probably poorly worded. Instead of saying “Why a DAW?” I should probably have said “How a DAW?”

But that wording sounds rather nutty, and so I chose the word Why when what I really meant was How. I do understand that if I need a good virtual performance of my music, a rendition from a DAW is probably going to give me a more realistic sound. But that’s rather beside the point. What I’m really trying to ask here is HOW a composer gets her or his thoughts into the DAW to begin with, since notation isn’t being used. Yes, I know there’s something called a piano roll, and I even get what that is, since at a very young age I did operate a pedal-pusher player piano. Holes in paper and all that. So I do understand that the ‘representation of the sounds’ in a DAW is sort of a visual picture of what the sounds ‘look like’ as a result of being played on a keyboard. What I fail to get is how the sounds the composer wishes to ‘record’ get ‘punched into the paper’ to begin with. (Since there is no paper and no hole-punch.)
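To stretch the player-piano analogy: when you play a keyboard into a DAW, the instrument sends timestamped “key down” / “key up” messages (MIDI note-on and note-off), and the recorder pairs them into the note rectangles you see on the piano roll - the modern equivalent of punching the holes. A simplified sketch of that pairing (real MIDI recording handles more cases, such as overlapping notes on the same pitch):

```python
def pair_events(messages):
    """messages: list of (time_tick, kind, pitch), kind is 'on' or 'off'.
    Returns (pitch, start_tick, duration) tuples - the 'holes in the paper'."""
    open_notes = {}  # pitch -> tick at which the key went down
    notes = []
    for tick, kind, pitch in messages:
        if kind == 'on':
            open_notes[pitch] = tick
        elif kind == 'off' and pitch in open_notes:
            start = open_notes.pop(pitch)
            notes.append((pitch, start, tick - start))
    return notes

# A two-note phrase as the keyboard transmitted it, in timeline ticks:
stream = [(0, 'on', 60), (400, 'off', 60), (480, 'on', 64), (900, 'off', 64)]
print(pair_events(stream))  # [(60, 0, 400), (64, 480, 420)]
```

So the answer to “how do the sounds get punched in?” is usually: the composer plays them on a keyboard (or draws them in with the mouse), and the DAW captures every keystroke as this kind of timestamped data.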

Hopefully that makes the intent of my original question clearer, though I have some fear I may have instead just made matters worse!!!
