MIDI Remote API. Map a set of commands to a physical MIDI button

The current downside of traditional macros is that the user can’t add a delay between two commands. Through scripting, though, I know that so many things are possible… I know this because I’ve played a bit with AppleScript, JavaScript and Bash. But mostly AppleScript.

Anyway, my question is as follows: is the MIDI Remote API able to assign/map multiple commands (like in a macro) to a single physical MIDI button?

EDIT: through actual coding, not through the Mapping Assistant.

I presume that going through the API you can use whatever is available via JS to send or delay the commands required.

However, placing it in the script would, I guess, result in fixed mappings that aren’t remappable via the Mapping Assistant? So you may lose some flexibility there, unless you maintained a list that the JS code could read, perhaps?

I’ve not had a chance to play with it yet, but that’s how I imagine it to be.

1 Like

This is what I have tried to do over the weekend, so that my PLE monstrosity is coded in JS instead.
I think I got too ambitious.

Will be revisiting this soon though.

What I found was that one could map a binding:

makeValueBinding()
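
As far as I can tell from the factory examples, a value binding ties exactly one surface value to one host value on a mapping page, roughly like this (knob1 is a placeholder knob here, and the mixer path is written from memory, so treat it as a sketch):

// Rough shape of a value binding, as I understand it (placeholder names).
var channel = page.mHostAccess.mMixConsole.makeMixerBankZone().makeMixerBankChannel();
page.makeValueBinding(knob1.mSurfaceValue, channel.mValue.mVolume);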

What I was expecting was that one could make a binding with a functor/callback, and put whatever you wanted to happen in the callback, which would include sending SysEx to the device.

Then I found this from Novation:

button.mSurfaceValue.mOnProcessValueChange = function (context, value) {
  const offset = this.offset;
  const state = context.getState('subpage.faderButtons');
  if (state === 'Arm') {
    sendSetButtonColor(context, 0x25 + offset, 'recReady', value);
  } else if (state === 'Select') {
    sendSetButtonColor(context, 0x25 + offset, 'select', value);
    sendSetRGBButtonColor(
      context,
      0x25 + offset,
      [this.r, this.g, this.b],
      value
    );
  }
}.bind(channelSettings);

Then stuff like this:

var masterFader = s.makeFader(xOffset + 16, yOffset + 1, 2, 5);
masterFader.mSurfaceValue.mMidiBinding
  .setInputPort(midiInput)
  .setOutputPort(midiOutput)
  .bindToControlChange(0x0f, 0x3d);
masterFader.mSurfaceValue.mOnTitleChange = function (
  context,
  objectTitle,
  valueTitle
) {
  sendSetParameterName(context, 0x58, valueTitle);
};

Again, this looks like it only binds to the control change, which would limit it to one action. And that’s when I realized that I was confused.

Still, it would be nice if it were more like:

myFunctor = function () {
    doActionA()
    doActionB()
    sendSysex()
}

mySurfaceElement.setCallback(myFunctor)
surface.addSurfaceElement(mySurfaceElement)

I am guessing this MVC/VPA style expectation is what is hindering me from understanding.

This was probably easier to read in TS… but again it’s not documented as well as I would hope, and will take some time to figure out. I could keep trying to figure it out, but documentation > trial and error.

In the end, it’s a lot easier than trying to reverse engineer PLE Hex. :smile:

Sorry, but this post is more of a display of my ignorance than any solution, but maybe others will come to our aid.

3 Likes

I did something similar in my CC121 script.
When you change the Low or High Cut frequency, it will automatically activate it.
I used mOnProcessValueChange for this, plus a code snippet from Jochen with the custom triggers. However, mOnProcessValueChange is called for every change of the value, including when channels are switched or mapping pages are switched. It is bound to the control and not to the host value. Hooking into the host value would be a good option for this, to allow multiple bindings.
I implemented code to detect changes of pages and channels so the high or low cuts are not activated by those events.
You can find my js file over at the CC121 forum.
What I miss is some kind of ‘core’ object (I’m new to JavaScript; I come from C/C++/C#/Python). I tried to make things blink, but the timer functions (setInterval) are not available, so maybe a Delay(ms) is also not available.
Thanks for that code snippet. I missed the getState function. This is great.
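
In case it helps, the core of that auto-activate idea looks roughly like this. It is only a sketch: knobLowCut is a placeholder, the mPreFilter member names and the page.mOnActivate hook are written from memory, and the real script also guards channel switches.

// Sketch: enable the low cut automatically when its frequency knob moves.
// mPreFilter / mLowCutFreq / mLowCutOn and page.mOnActivate are from memory.
var channel = page.mHostAccess.mMixConsole.makeMixerBankZone().makeMixerBankChannel();
var lowCutOn = surface.makeCustomValueVariable('lowCutOn');

page.makeValueBinding(knobLowCut.mSurfaceValue, channel.mPreFilter.mLowCutFreq);
page.makeValueBinding(lowCutOn, channel.mPreFilter.mLowCutOn);

page.mOnActivate = function (context) {
  // A page switch also fires mOnProcessValueChange, so ignore the next change.
  context.setState('ignoreNextChange', 'yes');
};

knobLowCut.mSurfaceValue.mOnProcessValueChange = function (context, value) {
  if (context.getState('ignoreNextChange') === 'yes') {
    context.setState('ignoreNextChange', 'no');
    return;
  }
  // The user really moved the knob: switch the low cut on.
  lowCutOn.setProcessValue(context, 1);
};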

1 Like

Your code is the most readable of any MR script I have seen so far, thank you. Taught me a few things. Head is not in the game at the moment…

Still can’t convert your PLEs directly to JS though, as I still can’t find a way to bind multiple commands like

page.makeCommandBinding(button3.mSurfaceValue, /*[PLACE TO PASTE COPIED SNIPPET]*/)

I want something like:

// Imaginary Code !!!!
myFunctor = function (context) {
    host.doAction( /*[PLACE TO PASTE COPIED SNIPPET]*/ )
    host.doAction( /*[PLACE TO PASTE COPIED SNIPPET]*/ )
    host.doAction( /*[PLACE TO PASTE COPIED SNIPPET]*/ )
    var sysex = makeSysex([0x01, 0x53, address, r, g, b]);
    midiOutput.sendMidi(context, sysex);
}

page.makeCallBackBinding(button3.mSurfaceValue, myFunctor)

Please create a Macro Key Command and map that instead. MIDI Remote is “only” for remote controlling things that already exist in Cubase.

1 Like

Some operations take longer to complete. This is why I’m looking for macro alternatives. So far, I’ve been using Bome MIDI Translator Pro, AppleScript and SendMIDI to delay different Generic Remote entries/mappings.

A script example (that I use as translator output in BMTP) looks like this:

AppleScript code
-- When the script runs and by the help of a MIDI virtual port (named "SessionKiano"), send to Cubase a MIDI message that has the following characteristics: note-on, channel 2, pitch 109, velocity 127.
do shell script "eval $(/usr/libexec/path_helper -s); sendmidi dev SessionKiano ch 2 on 109 127"

-- Wait half a second (before sending the next MIDI message).
delay 0.5

-- Send another MIDI message (perform something else in Cubase).
do shell script "eval $(/usr/libexec/path_helper -s); sendmidi dev SessionKiano ch 5 on 10 127"

I thought I could just use JavaScript…

1 Like

Is that to avoid infinite loops? cuz I think one can do that anyway.

I hope the “infinite loop detection guard” works. If the script gets stuck for (I think) 2 seconds, it will automatically be switched off.

2 Likes

As I explored (you need to explore, given the poor documentation) from the real-world example, there are multiple ways to achieve this. I used the concept of subpages to give the controls multiple functions, switched by one button. Let me try to explain it.
You can have multiple subpages within one mapping page. For this you need to create a subpage area, which contains the subpages. A subpage collects host value bindings to controls. You can use the same controls on each subpage, but only one subpage can be active at a time. So you can switch between the subpages, and each subpage offers different host value bindings. I used a single button to switch between them with a full press (toggle). You can also detect e.g. note on and note off separately and program it like a shift button: activate one subpage as default, activate the other subpage by pressing the “shift” button, and activate the default subpage again by releasing the button.

See this picture of how I achieved this with my CC121:

The bottom rotary knobs are assigned to different subpages, and I can toggle between EQ gain and EQ type with the bottom left button.
Another solution is to define multiple mapping pages. I have three mapping pages in my CC121 script and step through them with the bottom right button. So I have actually assigned 4 different host parameters to each of my button-row rotary knobs.
But you can’t change 2 parameters (do two actions) with one control with this approach.
To do that, you can use a custom value variable, as I do to enable the high and low cut pre-filter automatically by just changing the frequency: I detect the change of the frequency and enable the filter.
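
To make the subpage part concrete, a minimal sketch might look like this (knob, toggleButton and channel are placeholders, and the EQ member names and the mAction part are written from memory, so please double-check them against the API):

// Sketch of the subpage pattern (placeholder names).
var subPageArea = page.makeSubPageArea('Knob Functions');
var subPageGain = subPageArea.makeSubPage('EQ Gain'); // the first subpage is active by default
var subPageType = subPageArea.makeSubPage('EQ Type');

// The same knob gets a different host value binding on each subpage.
page.makeValueBinding(knob.mSurfaceValue, channel.mChannelEQ.mBand1.mGain).setSubPage(subPageGain);
page.makeValueBinding(knob.mSurfaceValue, channel.mChannelEQ.mBand1.mFilterType).setSubPage(subPageType);

// One button toggles between the two subpages (full press = toggle).
page.makeActionBinding(toggleButton.mSurfaceValue, subPageArea.mAction.mNext);
// For "shift"-style behaviour you would instead check press/release in the button's
// mOnProcessValueChange and activate subPageType on press, subPageGain on release
// (via subPageType.mAction.mActivate / subPageGain.mAction.mActivate).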

Hope this helps!

3 Likes

So… if you want a set of buttons that behave like regular, ordinary MVC or VPA and properly trigger callbacks with events, rather than being bound directly to single actions, JavaScript seems like a poor choice of language. Anyway, keep reading for a possible way to accomplish this.

It’s true that Logic uses JavaScript, but even they use an event-based architecture, and you can script all of the MIDI, not just Host (DAW) control focused events. And their way of making Host changes is “SetParameter”. Simple and clean! It’s baffling to me how you could read what Logic has, for even 10 minutes, and then not think that you needed to be at least at parity.

Here is an example from Logic:

function HandleMIDI(event) {
  event.send();                        /* send original event */
  if (event instanceof Note) {         /* if it is a note */
    event.pitch += 12;                 /* transpose up one octave */
    event.sendAfterMilliseconds(100);  /* send after delay */
  }
}

function Reset() {
  SetParameter('slider', 0);
}

What you need is a trigger for the callback. It is beyond me why they didn’t just provide callbacks and a host class with methods or even the raw functions that Logic has. It’s a tried and true architecture after all, used by applications for decades.

But it seems that it might be possible to construct this behavior with a little workaround.
To get the same in Cubase without MIDI scripting, I came up with the following cockamamie scheme.

Just brainstorming here but…

What you need is something to base the callback on, like a “dummy control track” (DCT). I’m thinking of a PLE that will set a value on the DCT, make a change, then select the tracks with the return tag. You would need one Before PLE and one After PLE, one for every callback you wanted to trigger, named appropriately of course. They won’t do that callback; they will set some value on the DCT that will trigger the callback… eventually.

In the callback for the change from the PLE, it calls a function that is concerned with triggering; call it checkCallbackTrigger(). This sets the trigger to the one you want to do.

Then, in the callback for when the track changes (mOnTitleChange, I think), if the trigger is set but the title is not the DCT, you do the specified callback via a lookup.

If you just did the triggered callback in checkCallbackTrigger, it would trigger on the DCT, so you have to wait until it comes back from the PLE and the track changes back to the one you wanted.
Of course, this is on a callback-by-callback basis.

This convoluted mess is because I can’t figure out how to watch for the event on the specific track without changing tracks.

Now for the functions in the callback.

You can make a set of CustomValueVariables that are bound to the actions you want to trigger in the callback, and you make functions that match the name of the behavior. These then set the CVV, but the rest of the code will read like a function call to the host. Maybe even make it HostWrapper.doAction.

You could write the HostWrapper one time. You could even number the DCT actions and the callbacks, so you wouldn’t need to worry about what those are after it was written just once.
Save an array of whether it is immediate, or on track return, and build that functionality in.
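
A rough sketch of what I mean by that wrapper (completely untested; in particular, whether setProcessValue on a bound custom variable actually fires the command binding is exactly the part I haven’t verified, and the command category/name are placeholders):

// Untested sketch of the HostWrapper idea (placeholder command category/name).
var cvvDoThing = surface.makeCustomValueVariable('doThing');
page.makeCommandBinding(cvvDoThing, 'Some Category', 'Some Command');

var HostWrapper = {
  doThing: function (context) {
    cvvDoThing.setProcessValue(context, 1); // "press"
    cvvDoThing.setProcessValue(context, 0); // "release"
  },
};

// Then the callback code just reads like a function call:
//   HostWrapper.doThing(context);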

It’s convoluted, and I haven’t tried it yet. But if you can display the name of the focused track on a display on a device, then I can’t see any reason this wouldn’t work. Which begs the question again.

Why didn’t they just provide events and callbacks? Just this thought experiment suggests that there isn’t a technical reason. It would have been a far superior architecture, and would have provided a simple path to something even more powerful.

Why does the surface representation on screen, in the lower zone or popped out to a separate window, not accept input? I can’t imagine writing that code and not thinking that it should be included. Someone would have to come along and tell the developer to leave it out specifically.

Are these related somehow?

I don’t get it.

1 Like

Did not read the full post, but you guys should check out SoundFlow. I’m not paid by them, but I have used it for several years and love it.

You can trigger scripts (written in JavaScript) that can send MIDI, add delays, interact with the UI, etc.

It’s very easy to route a MIDI device through SoundFlow and let the output be manipulated.
I’ve been working on having my Icon Platform M+ be a CC controller, a Mackie controller and Quick Controls all at once. In this video I’m routing it through SoundFlow, which then sends MIDI to Nuendo. When in CC mode I’m also sending SysEx messages to the Icon to get feedback on the screen.

3 Likes

I’ve always thought so, and it’s so good to hear someone say it in black and white. I’ve tried; you can’t really get it to do anything clever on its own, etc.

Hi oqion,
I’m a beginner developer with the MIDI Remote API.
I’m trying to find as many examples as possible, because the Programmer’s Guide is very poor on explanations…
So, in your code above, there is a variable “context”. I don’t understand how it works; in fact, I don’t understand which value this variable takes. Can you give me an explanation please?
Thanks
Phil