Support for Drawing Tablets?

After 15 years with Cubase, I think I’m getting sick of MIDI keyboarding… especially WRT VSL and sample libs.

I -think- I’d like to explore working directly with a notation program via a drawing tablet for data entry if that’s possible. I’ve used drawing tablets for design work and if I could actually ‘input’ via tablet as quickly as I can use paper/pencil I’d be a happy guy…

I’ve only ever used Cubase ‘Score’ in any serious way… I stuck with it because I do mostly mixing ‘live’ audio with MIDI tracks and despite its obvious limitations it’s very convenient to be able to work with the mix of audio/midi rather than constantly ‘exporting and importing’ MIDI from a notation program into a DAW as I see others do.

Anyhoo, my question is: what are my options?

Does Cubase Score or Finale or Sibelius support a drawing tablet in any serious way? Is it faster to work with than a MIDI keyboard? Do they support nice palettes of tools (hairpins, symbols, etc.) like a CAD program would?

My main enduring frustration with the keyboard is that after I play, I still have to do all the crap with tweaking controllers/keyswitches and the like, and I am SICK of it.

I’m thinkin’ it’s at least worth a try to skin the cat from the other direction: entering the notation first and then working on the MIDI. The performances probably won’t be as cool, but if it’s less frustrating, it’ll be worth it.



No, none of them support tablets in a meaningful way, nor do they really support the pen input mode in Windows (for graphics tablets and pen tablets). I do run Cubase and Energy XT from time to time on a Fujitsu pen tablet because it has a daylight-readable screen, and it does work with pen-only input. But with the exception of drawing curves it is not a great way to control a sequencer; it is superb for working with additive synth plugins, though…

The only notation program that supported pen input was from Microsoft, and it worked rather well, but it is no longer available and was nowhere near professional standard anyway. There is a freeware experimental effects VST out there somewhere that was expressly designed for pen usage, and it is a hoot to use with a pen.

I use a tablet, however only in a mouse emulation mode (I know of no music specific software pen/tablet control apps like one sees for CAD programs).
I find it great for working in a wave editor, also for some automation - Basically, anything involving “drawing” in the conventional sense.


I’ve ranted about this on the VSL forum a lot and get a variety of shrugs or contempt. But I find it frustrating that I can use pencil/paper =much= faster than ‘MIDI orchestrating’. Or rather, I can play something in 5 minutes, but then spend the next -hour- setting up keyswitches and controller data to make a ‘performance’.

And when I see guys doing really -great- illustrations with a tablet, I get -really- jealous. It seems like it -should- be possible to ‘draw’ music as elegantly as one can paint.



I can play something in 5 minutes, but then spend the next -hour- setting up keyswitches and controller data to make a ‘performance’.

It really makes doing music so frustrating… somehow a revolution should be made in this area.
Yes, playing string, woodwind, etc. instruments with a keyboard is not ideal, but there should be some easier way to set up that controller data when editing.

There actually -is-:

The guy behind this created the Garritan ‘Stradivarius’ and the -idea- was fabulous… It allows you to create a fairly decent ‘performance’ in real time with a MIDI keyboard, wheel and 1 or 2 expression pedals.

The problem is that:
a) no one else got on board with the idea. When I’ve mentioned the idea on various fora (VSL, East West) people guffaw. My -guess- is that most people simply don’t play ‘piano’ well enough to benefit from this and so -must- painstakingly slog through hand MIDI data entry. So VSL, et al have no incentive to improve the situation.

b) the guy doesn’t support Note Expression, which to -me- is a key.

But it is my opinion that doing everything by hand tends to lead to mechanical writing. At some point, the reason most MIDI music is mechanical sounding is because that is what is easiest to input.

And what drives me NUTS is that no -visual- artist would tolerate that. They insist on smooth tools… so they get them.


Have you seen the Seaboard?

I use a Wacom graphic tablet for note insertion in Sibelius. Granted it’s nothing more than mouse emulation (read: rudimentary and nothing particularly sophisticated), it actually works much better than using a mouse, because it’s more natural and more like writing with a pencil on paper. Incidentally, I was doing the same years before, when I was using Finale. I haven’t used any recent version of Finale, but I’m absolutely sure that graphic tablet mouse emulation works with that software too.

I hope Steinberg’s new notation program will include native touch and graphic tablet support.

Thanks for the input. :smiley:

I got a Wacom… It’s not just hooking it up. I’d also have to do some real work to make my DAW desk accommodate it ergonomically; my setup is currently all oriented around a MIDI keyboard and controller sliders. I wonder how one fits all this hardware around ya so it’s easy to use and not more of a PITA.

For example, I’ve had as many as 4 screens, but then I realised they were making my -monitors- sound like crap.

Always trade-offs.


Has nothing to do with what I’m trying to achieve, but I’m sure it’s very ‘expressive’.


I keep the Wacom (it’s a 20" model, i.e. fairly large) in a slot on a custom-built desk that holds computer, monitors, keyboards, near-fields etc. and only pull it out when I use it. When in use, I typically keep it on my lap. It doesn’t need to be ON your desk.

I use these in my day job everyday, they aren’t all that if you aren’t actually drawing.

The right click is gonna be what gets ya! Also “in-between fingers” fatigue, cramping, wrists… We have evolved into mouse-using people for PC tasks. I sit here and try to use the stylus for everyday tasks, and it never lasts too long. Switching from stylus to mouse over and over again ain’t too fun either, but you can get very good at it over time.

I agree with this. It took me -forever- to warm up to a Wacom simply because it’s so -annoying- (for me anyway) for anything -except- real drawing. I’ve seen guys demo it and it reminds me of vacuum cleaner salesmen—they do magic, but in the real world? Eh… not so great. :smiley:

That said, for -drawing-, a tablet became FANTASTIC after about 100hrs of fighting it. And -if- that could be transferred to a music notation metaphor, it would be -wonderful-. But as I began by ranting… getting the notes onto a ‘stave’ is only half the battle. -Then- the palette has to be smart enough so that when you put in a ‘>’ or ‘.’ or hairpins, it ‘tells’ VSL… ‘switch to marcato’… now cresc… now stacc… OK, now slur these two notes… The symbols have to be tightly integrated with all the controller and keyswitch junk I HATE.

For -me-, my ‘Starship Enterprise’ would be:
a) a notation program that worked with a tablet well
b) and then note-expression stuff that auto-magically converts the symbols into a universal format that all big sample libs understand.

c) sample libs that work like Samplemodeling, so you can capture a real-time performance and all the controller junk gets automagically converted to all the relevant articulations.

As it stands, DAWs remind me of fancy compositing and 3D tools like After Effects or Blender. You sketch out a ‘wireframe’ and then spend a TON of time tweaking it to get the final ‘rendering’. And it’s my belief that this is why so much digital ‘art’ is so mechanical. You need the real-time feedback.

Ironically, this is not a problem for me with pencil and paper. Since you’re just ‘hearing it in yer head’, I feel zero frustration. Yeah, I can’t hear it, but I know that when I -do- get it in front of players they -magically- take very simple ‘instructions’ (notes, dots, hairpins) and turn it into ‘music’.

With DAWs, it still feels like more -programming- than anything else.


True, but entering notes with a graphic tablet and durations/rests with the computer keyboard still beats entering notes with mouse & computer keyboard or music keyboard & computer keyboard. Like I said, it’s rudimentary and far from perfect, but if you’re used to writing music with pencil and paper, that’s as close as you’ll ever get today. Of course I totally wish that Steinberg’s new notation software will be far more advanced in that regard.

You’re lucky you get a response! :wink: :wink:

Generally, people can only see a solution that is just a step in front of where they are. Quantum leaps or left field ideas generally don’t get traction, regardless of how much time/effort/money they would save or how many opportunities they would open up.

Look at how many centuries it took to get something like Leonardo’s helicopter to actually fly.

But when an idea’s time has come, it flies!

The problem with trying to do such ‘playing’ indirectly by keyboard/tablet/whatever is that:

a) sample libraries only model discrete scenarios of the continuous spectrum that real performances can transition freely between.

b) the controller action repertoire is very generic and tends to isolate parameters that are interacting dynamically within an actual performance.

For example, on a SoundsOnline thread, someone was trying to model a classical violin performance from a video. Their first attempt was good, but exhibited some of the stiffness of a lot of sample-based stuff.

When I looked at the video, I noticed that:

a) during the stronger sections, the notes were not only louder, but the performer took shorter and more abrupt bow strokes, probably reflecting the higher tension in their arms, so that the notes were slightly ahead of the orchestra.

b) during the quieter sections, the performer drew the bow longer, and seemingly more relaxed, so the notes were not only softer, but slightly behind the orchestra.

I pointed these out to him, and his second attempt required a lot of tweaking, but also sounded more natural.

To me, this says we are only going to get good sampler performances if we can:

a) set up parameters so that they interact in the same way a true performer’s physiology/temperament/emotion would have them in relation to the actual instrument’s dimensions/inertia.

b) control the interaction by just a couple of abstracted meta-parameters, making it easier to perform in real time, or using automation curves.

For example, to get a more realistic violin performance, an ‘intensity’ parameter, perhaps controlled by foot pedal or automation curve, could:

a) with increasing ‘level’, simultaneously:
___1) increase the level of notes.
___2) move the notes more forward in time.
___3) blend-in/select the more staccato patches.
___4) increase the initial bow bounce.

b) with decreasing ‘level’, simultaneously:
___1) decrease the level of notes.
___2) retard the notes more in time.
___3) blend-in/select the more legato patches.
___4) soft start the notes.
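As a toy sketch of how such an ‘intensity’ meta-parameter could fan out into those interacting note parameters, here is one possible Python version. The function name, value ranges, and constants are all hypothetical illustrations, not any sampler’s actual API:

```python
def apply_intensity(intensity, note_velocity=80):
    """Map a single 0-127 'intensity' meta-parameter onto several
    interacting note parameters, per the list above.
    All names, ranges, and constants here are hypothetical."""
    t = intensity / 127.0  # normalise to 0.0 .. 1.0
    return {
        # a1/b1: scale the note level up or down with intensity
        "velocity": max(1, min(127, round(note_velocity * (0.6 + 0.8 * t)))),
        # a2/b2: push notes ahead (negative ms) or behind (positive ms)
        "timing_ms": round((0.5 - t) * 40),  # -20 ms .. +20 ms
        # a3/b3: crossfade from legato (0.0) toward staccato (1.0) patches
        "staccato_blend": round(t, 2),
        # a4/b4: harder initial bow 'bounce' when hot, soft start when quiet
        "attack": "bounce" if t > 0.5 else "soft",
    }
```

A single pedal or automation curve driving `intensity` would then move all four parameters together, the way a player’s arm tension does.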

Now also imagine another meta-parameter for feel/genre that changes the bias amongst the patches, in much the same way that ‘volume’ selects between patches that match the timbre for different playing levels.

For guitar samples, tempo would have to inversely vary the time between individual strings in a strum.
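For the strum idea, a minimal sketch of per-string onset offsets that tighten as tempo rises; the constants are purely illustrative, not measured from real playing:

```python
def strum_offsets(num_strings=6, tempo_bpm=120, spread_ms_at_60bpm=60):
    """Per-string onset offsets in ms for a downstrum: the faster the
    tempo, the tighter the strum (inverse relationship). The reference
    spread of 60 ms at 60 bpm is an illustrative assumption."""
    total = spread_ms_at_60bpm * 60.0 / tempo_bpm  # inverse with tempo
    step = total / (num_strings - 1)               # even per-string gap
    return [round(i * step, 1) for i in range(num_strings)]
```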

I see that while artistry shifts the upper boundary of what can be manifested, analysing and quantifying what makes great performances great helps shift up the lower boundary for everyone else.

One just has to see how difficult it was even for a trained professional to use photo editing programs to touch up portraits compared to what an untrained person can do with Portrait Professional in a few simple keystrokes in 10 minutes. That was because someone distilled all the complexity into a few simple key parameters and made a program that made it easy for ANYONE to do it.

Keep making suggestions suntower. One day someone will take up the challenge!

To follow on from what I wrote above, maybe we won’t really get more realistic and easily played sampler-based instruments until samples are more micro-adjustable depending upon meta-parameters.

Imagine a sample instrument consisting of a whole lot of very short impulses (from each stage of several notes), which, according to a couple of meta-parameters, are dynamically selected and morphed (in time and timbre) between.

Just maybe these huge multi-GB sample libraries, sampling complete notes, could be substantially reduced, at the expense of increased CPU, to just a few MB, while being a whole lot more versatile!
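As a toy illustration of that kind of morphing, linear interpolation between two equal-length impulse frames might look like the snippet below. A real implementation would morph in time and spectrum, not just sample amplitude, so treat this as a sketch of the selection/blend idea only:

```python
def morph(impulse_a, impulse_b, x):
    """Linearly morph between two equal-length impulse frames.
    x = 0.0 returns frame A, x = 1.0 returns frame B; a meta-parameter
    would sweep x continuously. A deliberately simplistic stand-in for
    the dynamic select-and-morph scheme described above."""
    return [(1 - x) * a + x * b for a, b in zip(impulse_a, impulse_b)]
```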

@Patanjali… all that is way too up in the clouds for =me=. All I can tell you is that the -performance- part of the equation is mostly possible NOW. Samplemodeling gets =much= closer, using just Kontakt scripting. VSL can create ‘auto-switching’ patches that are pretty good. Not perfect, but pretty good.

The problem is that all that controller data doesn’t get translated to notation properly. And/or there’s no support for ‘expression’.

It’s -possible-, it’s just that sample lib makers don’t choose to support Note Expression and notation programs don’t translate the symbols into universally understood MIDI/controller stuff… again Note Expression could probably handle this. The reason they don’t support it? Lack of demand by… as you say… people who can’t see beyond the status quo.

If notation could hold all the levels of detail that can be expressed in MIDI, it would probably be too cluttered, or have too many symbols to remember. After all, the few level symbols (ppp pp p mp mf f ff fff) don’t really cover the 127 levels of MIDI. Or we would have to depart from the traditional symbols and go for explicit numeric levels like ‘L54’ or ‘L114’.

I remember one particular MIDI rendering of a recording of ‘Somewhere Over the Rainbow’ that was far too complex and contained many shifts in tempo; notating it specifically enough to ensure it is reproduced exactly would take far too much time to encode.

Unfortunately, MIDI is still the #1 bottleneck we have to deal with. And it’s starting to make less and less sense to cling to a 1982 standard. But then again, no one has the courage to just discard it and rebuild it from scratch, because everything third-party (not to mention all MIDI hardware) would become instantly incompatible. But maybe it’s possible to develop a new, truly modern standard that maintains MIDI backward compatibility?