Feature Request: 3D Notation

BIG Feature request: 3D Notation

I want to be able to specify a position in three-dimensional space for each note or for a group of notes.
Likewise, I would like to introduce an expression map that describes the movement of a sound in space.

The goal of this feature is to write music for the cinema.

For this purpose, Dorico’s output would have to be expanded to surround or binaural formats. If you selected a note, you might see a panner with which you could set the position of that instrument in the room.

Dorico should be able to generate spaces in which one can move the notes.

Best regards, and thank you again - version 1.2 of Dorico is very good, I like it.

Andres

I can’t tell whether you are joking or a serious crazy visionary.

At the core I am a musical instrument maker, and now also a systems analyst and programmer. It may be that I’m crazy too :mrgreen: - I already exhibited at Bitmovie in Italy in 1986 as a computer graphics artist (with awards), and designed and animated black holes.

For me, such a feature would be relatively normal. :unamused:

If this is to be considered at all, we have first to clarify concepts. In this case, we have to separate the notation from its implementation. You can already specify a position in any adequate coordinate system and notate it, i.e. write it down — though I’ve only come across one attempt to do such a thing in my years reading TENOR abstracts, despite the fact that most of the philosophical and artistic grounding of spatialization as a musical parameter was produced by composers, exactly the sort of people who write scores.

Still, this doesn’t get you any closer to playback, which seems to be what you really want. First off, all an expression map does is, well, map an input to an output. In theory, I could already send Dorico’s MIDI output to Max, for example, and control something with a dynamic hairpin read from the score.
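
To illustrate that routing idea - purely as a sketch - here is the kind of glue one might write on the receiving end: it takes a controller value such as an expression map might emit for a dynamic hairpin and maps it onto a pan azimuth. The CC number and the angle range are assumptions chosen for illustration, not anything Dorico or Max prescribes.

```python
# Hypothetical sketch: map a 0-127 MIDI controller value (e.g. the CC that
# an expression map emits for a dynamic hairpin) onto a pan azimuth.
# The CC number and the -90..+90 degree range are illustrative assumptions.

def cc_to_azimuth(cc_value: int, lo_deg: float = -90.0, hi_deg: float = 90.0) -> float:
    """Linearly map a 0-127 controller value to an azimuth in degrees."""
    cc_value = max(0, min(127, cc_value))
    return lo_deg + (hi_deg - lo_deg) * cc_value / 127.0

if __name__ == "__main__":
    # Pretend these values arrived as CC 1 during a crescendo hairpin.
    for value in (0, 32, 64, 96, 127):
        print(f"CC 1 = {value:3d}  ->  azimuth {cc_to_azimuth(value):+6.1f} deg")
```

In practice the values would arrive over a virtual MIDI port and be forwarded to whatever does the actual spatial rendering; the mapping itself is the trivial part.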

I’d like to counter again this idea that everything should be a Swiss army knife (no pun intended). On one hand, because Dorico is first and foremost notation software, and musical notation as it is understood here will always be symbolic and human-oriented rather than discrete and machine-readable. Describing something to a human and describing it to a machine are rather different tasks, or else we wouldn’t need both the notated and played duration views we already have in Play mode. On the other hand, as the team gets to flesh out their vision for Play mode, all the tools will be there to properly interface with another tool, something robust, designed not only for spatialization but to flexibly accommodate everything one needs to do in this kind of work.

In closing, what I think would suit most users the best would be a) the implementation of the possibility to create custom playing techniques — as well as the possibility to specify gradual changes in these — and b) the fleshing out of Play mode, adding the possibility to draw envelopes that could be mapped to certain outputs. And this not even mentioning ReWire, which I hope will be implemented soon-ish.

The simplest interpretation is to fix or animate the position of the musicians on the stage.

But it is now 2018. More and more music is being composed purely for electronic virtual instruments, including as notation in Dorico.
And these virtual instruments can move freely in three-dimensional space. There are more possibilities now - why shouldn’t we be able to realize this via musical notation on the computer?

Oh, but you can do that. For quite some time now. Spatialization is nothing new. Also, you can even do it in Dorico right now: the simulation of a space or stage and an instrument’s position in it is the whole deal of Vienna Symphonic Library’s MIR, no?

And I agree that it would be useful, if for only a limited portion of users, to expand Dorico’s output from stereo to n number of channels, though I must admit I have no idea how feasible this is. As long as we’re dreaming big, here goes nothing: if one could, for example, push each instrument individually in its own channel into a DAW through ReWire, that would be great to achieve what you want — with the appropriate tools.

@aaandima: it seems to me that what you are asking for is spatial sound rendering and not “3D notation”. I am not aware of any even remotely standardized approach for the latter, so the question would be – if idiomatic notation is the issue – what an implementation outside of Play Mode should look like.

On a side note, check out ina/grm Spaces and flux Spat Revolution … :slight_smile:

Doesn’t CuBase have a way to render surround sound? Isn’t sound production (especially dynamic sound positioning) more a function of a DAW than a notation program?

If you write modern music compositions - definitely not only within a DAW.

In the past, for example, Mr. K. Stockhausen had to write extra papers to describe such spatial movements, because notation software had no such features.

(Example: (German language))

At a time before composers like K. Stockhausen wrote notation for human performers AND electronic instruments, conductors like Lorin Maazel drew up their own seating plans: The Orchestra: A User's Manual - Seating

Unfortunately, composers of film and modern music do not yet have any integrated tools in notation software to place tones in space or even to move sounds through space. I see the movement of sounds in space becoming more and more important; now and in the future it belongs in the hands of the composer, and only then does the conductor work out the seating plan.

As a modern composer, I would like to be able to define the positioning of sound events in space myself - in the music notation program, and not only later in Cubase. When writing for human musicians together with electronic instruments sounding at the same time, I would like to be able to indicate where in the room each instrument should be positioned.
In addition, it is now possible to integrate a seating plan into the notation on the computer without this disturbing the individual musician’s part later on.

Fingering can also be specified in Dorico now. When positioning instruments or sounds in space, however, Dorico should, if possible, be able to make that space audible during playback. 3D rendering in Dorico itself is not strictly necessary, but it would be desirable.

Okay, from your answer it seems you mean to focus on notation. Very well.

The Stockhausen is not a good example. First off, because of temporal distance: he had no software tools to work with, and, when it came to hardware, his focus was on developing control interfaces for human performance instead — unlike, say, Xenakis, who one can argue did try to find ways to score every parameter. I’m sorry if this comes off as mean, but even though Stockhausen passed away as recently as 2007, I have to laugh at the idea that he couldn’t do this or that because of notation software.

And secondly, because of the nature of the concept of musical notation itself. Musical notation will always be an abstraction, one that’s optimized for human performance and analysis. Notation captures the structural, that which can’t be fixed and which will be actualized only in the moment of performance. This means that notation captures working relations that can’t be accurately described positively. In matters of pitch: 440 Hz will be as much an A as 442 (and I’m not talking about diapasons and tuning systems): the concept “A” is an abstraction that subsumes all adequate frequencies. In musical notation, we don’t usually describe rhythms by way of absolute durations — and when we do, the aim is almost never to increase precision but to decrease it.

So, if you really do mean to talk about notation instead of actualization/computer rendering, you have to be very much aware of what kind of data you’ll be inscribing in the score, and why. Alexander’s question was, as usual, absolutely critical (or crucial). There is no notation to describe movement in space, for a myriad of reasons. And, as I said, space as a compositional parameter was something developed very much by people in the tradition, the sort of people who write notation, and there must be a reason why no one considered the idea seriously before. From your posts, no one can even surmise what you want to put into notation, how, or what is keeping you from doing it.

Why is it that so many musicians and others consider two-channel stereo to be the ne plus ultra of sound reproduction? I record music professionally, and do so in surround as a matter of course these days. If synthesis is so important to a notation program – and I am playing devil’s advocate here, since I do not need it – why should it be limited to two channels? Ambisonics has been around for about 50 years.

David
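
Since Ambisonics came up: as a rough illustration of what a multichannel output stage could mean technically, here is a minimal sketch of first-order Ambisonic (B-format, FuMa-style) encoding of a mono sample at a given direction. This is only a textbook illustration under stated assumptions, not a suggestion of how Dorico or Cubase actually handle it.

```python
# Minimal sketch: first-order Ambisonic (B-format, FuMa convention) encoding.
# One mono sample s at a given azimuth/elevation becomes four channels
# W, X, Y, Z. Illustrative only, not production code.

import math

def encode_foa(s: float, azimuth_deg: float, elevation_deg: float):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = s / math.sqrt(2.0)               # omnidirectional component
    x = s * math.cos(az) * math.cos(el)  # front-back axis
    y = s * math.sin(az) * math.cos(el)  # left-right axis
    z = s * math.sin(el)                 # up-down axis
    return w, x, y, z

if __name__ == "__main__":
    # A unit-amplitude sample placed 45 degrees to the left, 30 degrees up.
    print([round(c, 3) for c in encode_foa(1.0, 45.0, 30.0)])
```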

I have no idea who that would be directed to, since everyone who popped by this thread seemed to be more or less involved in the issue, most contributing alternatives. Personally, I’ve been trying to understand what aaandima is trying to get across, and he seems to be talking about notation and not audio. After all, if you need notation software for its notation and not the playback, surely you understand that these are two different things.

3D notation could be kinda cool, like looking at Google Earth. Just think how Appalachian Spring would look projected onto the mountains.

If you rotate the viewing angle about a horizontal axis, music could transpose automagically. Unless the system ends up being the higher the note, the louder? Then you can get rid of expressions which might really clean up the page. Might have to wear special glasses (which I do anyway because of old age).

If you view from under the notes and looked up, you could do inversions. Conductor scores could be a step above the player’s parts. Organ music could have the notes way above in the organ loft. Bass players and percussionists would be between the conductor and sitting players. Opera scores would obviously be deep in the pits, but the singing parts would be very visible. Similarly for ballet. For marching bands there could be some good use of 3D, maybe incorporating the movement as well.

I’m having trouble visualizing what a page of 3D notation would look like. If the result is meant to be printed or on a screen then a sketched example, however basic, ought to be possible.
Or am I completely missing the point?

You are absolutely right, because there is no standard yet for the positioning of single notes or sounds. But there are positions for soprano, tenor, bass, and alto, written like SSTBBA in a choir, for example.
There are no movements, except perhaps with composers like Thomas Tallis. What is new now is the influence of electronic music. And here, in modern programs, the position of a sound is described as a vector in space (as in Dolby Atmos).

If you want to write music for a human choir and involve electronic instruments, then you should be able to position an electronic instrument so that it does not acoustically disturb the chosen choir arrangement - for example SSTTBBAA - but supports it.

But if you write music only for electronic instruments, it could be done with vectors, as in Dolby Atmos: sound objects that can be positioned at specific coordinates in the listening room. Coordinates transfer relatively well into notation - “y2x3z5”, for example.
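
To make that ad-hoc idea concrete, here is a minimal sketch of how a coordinate token like “y2x3z5” could be parsed into numeric x/y/z values. The token format is the informal one from the post above, not any standard.

```python
# Hypothetical sketch: parse an ad-hoc coordinate token such as "y2x3z5"
# into numeric x/y/z values. The token format is informal, not a standard.

import re

def parse_position(token: str) -> dict:
    """Turn e.g. 'y2x3z5' into {'y': 2.0, 'x': 3.0, 'z': 5.0}."""
    pos = {}
    for axis, value in re.findall(r"([xyz])(-?\d+(?:\.\d+)?)", token.lower()):
        pos[axis] = float(value)
    missing = {"x", "y", "z"} - pos.keys()
    if missing:
        raise ValueError(f"token {token!r} is missing axes: {sorted(missing)}")
    return pos

if __name__ == "__main__":
    print(parse_position("y2x3z5"))  # {'y': 2.0, 'x': 3.0, 'z': 5.0}
```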

I don’t think it’s handy; surround is a mixing decision at the instrument level, not a composing one at the note level, IMHO.
It might be handy to have an indicator like LF C RF LM RM LR RR SUB (expand for Atmos and all other codecs :slight_smile: ).
Also take a look at how Roland did this with motional surround.

All this 3D stuff is worthless if it’s not understood downstream.

This is a very interesting idea. But I think there would have to be a standard, or at least widely used, notation technique for this before Dorico could implement it. In DAWs it might be possible to implement it using three automation lanes representing the three spatial coordinates, with an appropriate rendering process to map this onto various speaker configurations as accurately as possible.
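
As a rough sketch of what that rendering step could look like under very simple assumptions: an (x, y, z) position sampled from three automation lanes is converted into per-speaker gains with normalized inverse-distance weighting over a hypothetical speaker layout. Real renderers (VBAP, Atmos, Ambisonics decoders) are considerably more sophisticated; this only illustrates the data flow.

```python
# Hypothetical sketch: turn an (x, y, z) position from three automation lanes
# into per-speaker gains via normalized inverse-distance weighting.
# The speaker layout and weighting law are illustrative assumptions.

import math

# Hypothetical 5.0-style layout, positions in metres around a listener at (0, 0, 0).
SPEAKERS = {
    "L":  (-2.0,  2.0, 0.0),
    "R":  ( 2.0,  2.0, 0.0),
    "C":  ( 0.0,  2.5, 0.0),
    "Ls": (-2.5, -2.0, 0.0),
    "Rs": ( 2.5, -2.0, 0.0),
}

def gains_for_position(x: float, y: float, z: float, power: float = 2.0) -> dict:
    """Return gains that sum to 1.0, favouring speakers closer to the source."""
    weights = {}
    for name, (sx, sy, sz) in SPEAKERS.items():
        dist = math.dist((x, y, z), (sx, sy, sz))
        weights[name] = 1.0 / max(dist, 1e-6) ** power
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

if __name__ == "__main__":
    # Position read from the x/y/z automation lanes at one point in time.
    for name, gain in gains_for_position(1.5, 1.0, 0.0).items():
        print(f"{name:2s}: {gain:.3f}")
```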

No doubt once holographic computer displays become standard, the (then) Dorico or CuBase Team will be on it in a flash.

Notation for 2D panning already exists (2013), for example “Sýnthesis” by Dr. Christian Dimpker:
http://christiandimpker.de/projects/synthesis/
http://www.digifaktum.de/downloads/pdf/286-9.pdf
He created vector symbols for panning too, but 2D is not 3D. I think Mr. Dimpker will create such symbols later in his work.

But it is possible that such symbols are still sought by only a few people - of course, only until the possibilities are recognized. Then everybody will want them. I will now leave the discussion; I am a bit dusty here.