This is something I have never really understood.
If you want to control an instrument via CC messages, I have usually done this in the Key Editor, as per the manual.
There are also automation lanes in the project window. Buried within these lanes under More/MIDI channel/All CC there is a list of CC controllers.
I would have thought these were the same MIDI controllers that appear in the Key Editor, but they are not (!)
I don’t see why there are two types of CC per track. Formerly I thought that there was track CC automation and ‘Part’ CC automation, but the more I think it through, the less sense such a distinction makes.
They are indeed the same MIDI data, but two different (and mutually exclusive) ways of visualising and recording/editing it. You choose your preferred method in MIDI Menu > CC Automation Setup. When set to “Automation Track”, enabling Write on the MIDI track records the CC as automation data (if you don’t enable Write, the CC data gets recorded into the MIDI Part, as per usual).
When set to “Automation Track”, and you have recorded some, e.g., volume automation, you can still “see” it in the MIDI Part, but with no nodes visible, so it is not editable from there.
You can use MIDI Menu > Functions > Extract MIDI Automation to convert CC# data in a MIDI Part into an automation lane; for the converse, you have to use “Merge MIDI in Loop”.
Your explanation is clear, but the process is about as transparent as a brick wall and needlessly complex, IMO. It does not seem to work as you describe.
At the moment, I have a fresh project with one four-bar track. I have written some CC11 in the Key Editor and then gone to find out where it might be in the automation lanes. Drilling down, I find the lane labelled CC11, but there is nothing in it. Then I try writing some automation data in this lane, and nothing shows up in the Key Editor (where the first CC11 data was written) UNTIL the track is Read-enabled. Setting MIDI Controller Automation Setup to either “MIDI Part” or “Automation Track” makes no difference.
Personally, I don’t see the logical difference between an automation lane and a controller lane: they both show a linear representation of numerical data (0–127), and they both have point-to-point structures. The graphical display is slightly different, but the data is the same.
Why can’t they just display the same thing? Where is the problem, if they are the ‘same’ data?
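To underline the point that they are the ‘same’ data: at the data level, a CC “curve” is just an ordered list of timestamped (channel, controller, value) triples. Here is a minimal, stdlib-only Python sketch modelling a CC11 ramp; this is purely illustrative and is not Cubase’s internal format.

```python
from dataclasses import dataclass

# A CC event is just a timestamped (channel, controller, value) triple.
# Illustrative model only - not Cubase's internal representation.
@dataclass
class CCEvent:
    tick: int        # position in MIDI ticks
    channel: int     # 0-15
    controller: int  # 0-127 (11 = Expression)
    value: int       # 0-127

# A "curve" - whether drawn in a controller lane or an automation
# lane - is just an ordered list of these points.
cc11_ramp = [CCEvent(tick=i * 120, channel=0, controller=11, value=v)
             for i, v in enumerate(range(0, 128, 16))]

for ev in cc11_ramp:
    print(ev)
```

Whether a host draws this list as a controller lane or an automation lane is a display decision, not a data one.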
One thing I observe is that the automation data is turned on and off by the R and W buttons, whereas the CC written in the Key Editor remains ‘on’. This seems an obscure point, but it might be relevant.
As for “Merge MIDI in Loop”: it does nothing that I can detect here, and I am puzzled as to why “Loop”? If “loop” means “part”, it should be called Part; if it means the whole track, it should be called Track.
OK, so I just tried “Merge MIDI in Loop” with the Read button on. Something happened, but I know not what. In the Project window, using Edit In-Place, there are now three curves: the original blue line from the automation (seen at the bottom of the mini Key Editor) and TWO different lines of Key Editor points! In the Key Editor (the real/normal one) there are only two lines! The original line I drew seems to have altered (I assume this is summed), and then there is the faint blue line representing the “automation data”.
I do feel a bit further forward, but why this is so complicated escapes me. I do NOT feel that I understand why it is designed this way.
Well, normally, you wouldn’t “mix” the two… it is intended as a preference for how you basically want to work with MIDI CCs, but with the means of converting one to the other if necessary. If using automation lanes, it works like using automation for audio, so the habit is to use Write/Read (and not bother with CC lanes inside the MIDI Part).
As regards “Merge MIDI in Loop” (as opposed to “Part”), it is telling the truth… it performs the operation between the Left and Right Locators.
Thank you, Vic.
Thanks to your info, I am beginning to understand the concept behind the design, though I don’t think much of it.
I wanted the two processes to work symbiotically; they don’t, and it is awkward. However…
Today, I realised that if you stick with MIDI CC in the Key Editor (forget automation lanes), you can view the controller lanes in the Project window by using “Edit In-Place”, which is nearly the same thing. It achieves what I wanted: seeing controller lanes in the Project window. For the benefit of future viewers of this thread: I had some trouble getting the controller lanes to appear until I hovered just underneath the piano keyboard and right-clicked (easy when you know).
I have been trying to create a Generic Remote in Cubase Pro 9 using my Korg nanoKONTROL (2010) for writing my MIDI CC data.
I have used the Learn function and assigned the faders in Cubase, but I am not sure what needs to be selected under Device, Channel/Category, and Value/Action for my nanoKONTROL faders to control them as below.
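Before mapping anything, it can help to confirm exactly which CC numbers the nanoKONTROL faders transmit (a MIDI monitor will show you the raw bytes). As a hedged illustration of what to look for, here is a stdlib-only Python sketch that decodes a raw 3-byte Control Change message; the example bytes below are assumptions for illustration, not the nanoKONTROL’s actual factory mapping.

```python
# Decode a raw 3-byte MIDI Control Change message:
# status byte 0xBn (n = channel), then controller number, then value.
def decode_cc(data: bytes):
    status, controller, value = data
    if status & 0xF0 != 0xB0:
        raise ValueError("not a Control Change message")
    return {
        "channel": (status & 0x0F) + 1,  # 1-based, as Cubase displays it
        "controller": controller,
        "value": value,
    }

# Hypothetical example: a fader sending CC2 on channel 1 at value 100.
# (Check your own unit with a MIDI monitor; factory maps vary.)
print(decode_cc(bytes([0xB0, 2, 100])))
```

Knowing the channel and controller number each fader sends makes filling in the Generic Remote rows far less of a guessing game.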
First question: are you certain you’d rather use a Generic Remote Device than Quick Controls? (Especially if there are no more than eight parameters you want to control.)
You could use either VST Quick Controls, which would pilot the first eight knobs in the VST Instruments rack for the selected instrument,
or you could use Track Quick Controls, which would pilot the eight slots in the Quick Controls section of the track’s Inspector. (And you can right-click each slot to select the desired parameter… you might have to navigate a bit to get there, though.)
If you really would prefer to use a Generic Remote Device, then please bear in mind that you cannot do this globally (i.e., common to all VST Instruments)
So, for example, if you want to control “Synth xxx”,
in the bottom section of the Generic Remote Editor window, for your first fader, you would set…
Control Name= (Whatever you had named it in the upper section)
Device= [the name of the Instrument (“Synth xxx” in this example), which you should see in the dropdown menu]
Channel /Category= Device
Value/Action= [whatever parameter you choose in the dropdown list… presumably “Modulation” in this instance, if available]
Flag= [Probably just leave blank]
Then of course you’d do similar for the remaining desired incoming faders.
If you want that for a second VST Instrument in the same project, then, personally, I’d instantiate a new Generic Remote for that (you can create as many as you need… well, there’s probably a maximum, but I doubt you’d reach it).