Does anyone know if it's possible (and if so, how) to map a MIDI knob to the timeline jog function? Using the legacy Mackie method works, but is it possible to do the same with the new MIDI Remote mapping system?
Depends on what “possible” means.
The new MIDI Remote is very primitive in its approach, as it only interfaces by emulating key presses.
It is not directly aware of actual jog wheels; you have to painfully emulate one yourself, through scripting, by firing repeated key presses whenever a knob is turned one way or the other.
So it is technically possible but with more effort and much worse results than using Mackie.
If the hardware controller has an endless encoder capable of sending different notes or different CC numbers for turning left vs. turning right, it gets pretty close to Jog Wheel behavior or am I missing something?
That’s a big “if” and it shouldn’t be necessary.
There should be functions in the API that connect directly to knobs and return to the user the required data (left, right, amount and so on), without any key command layer.
Yeah, they should just offer a Jog (Relative) mapping, so that depending on whether you increment or decrement within the same CC#, the jog moves forwards or backwards.
So you end up with the following mappable destinations in the mapping assistant:
Jog Left
Jog Right
Jog (Relative)
Otherwise it’s down to us to keep recreating the wheel for any controllers where we want such control.
Cubase can only interpret what the controller is sending. If the controller has encoders that can send some form of inc/dec values, Cubase’s MIDI Remote is equipped to handle those. Via the API you can connect such encoders to Jog.
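For example, this is roughly how an encoder that sends inc/dec data can be declared in a script. This is an untested sketch: the device name, channel and CC number are placeholders, and which setTypeRelative… call you need depends on how the hardware encodes left vs. right turns.

// Untested sketch: declare an endless encoder whose CC is interpreted as relative data.
var api = require('midiremote_api_v1')

var driver = api.makeDeviceDriver('ExampleVendor', 'ExampleController', 'ExampleAuthor')
var midiIn = driver.mPorts.makeMidiInput()
var midiOut = driver.mPorts.makeMidiOutput()
driver.makeDetectionUnit().detectPortPair(midiIn, midiOut)
    .expectInputNameEquals('ExampleController')
    .expectOutputNameEquals('ExampleController')

var knob = driver.mSurface.makeKnob(0, 0, 1, 1)
knob.mSurfaceValue.mMidiBinding
    .setInputPort(midiIn)
    .bindToControlChange(0, 60)        // channel 0, CC 60 (placeholders)
    .setTypeRelativeSignedBit()        // or setTypeRelativeBinaryOffset() / setTypeRelativeTwosComplement()

The catch, as noted above, is that once the value is in the script there is still no direct "Jog" destination for it; you end up routing it through the Jog Left / Jog Right key commands (see the script further down).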
Yes, this is the type of controller I'm trying to map. It's the large wheel in the middle of an Arturia Keylab 88 MKII, which works in Mackie mode, but I can't seem to get it to work on the jog function via the MIDI Remote mapping editor.
The Mackie protocol doesn’t have this problem.
The problem is Cubase, not the hardware somehow failing to send good info.
And the problem is not what data the API gets, but how the API is able to drive Cubase, which is through emulation of Key Commands.
When the API gets the info from an encoder, it doesn’t tell Cubase “now, smoothly zoom in”, it tells it “I’m repeatedly pressing the Zoom In key”.
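Something like this, as a rough sketch (channel and CC are placeholders, and the exact command category and names should be checked against the Key Commands dialog): every tick of the encoder ends up triggering the Zoom In or Zoom Out command once, exactly as if the key had been pressed.

// Rough sketch: an encoder "zooming" by repeatedly firing the Zoom In / Zoom Out key commands.
var api = require('midiremote_api_v1')

var driver = api.makeDeviceDriver('ExampleVendor', 'ExampleController', 'ExampleAuthor')
var midiIn = driver.mPorts.makeMidiInput()
var midiOut = driver.mPorts.makeMidiOutput()
driver.makeDetectionUnit().detectPortPair(midiIn, midiOut)
    .expectInputNameEquals('ExampleController')
    .expectOutputNameEquals('ExampleController')

var zoomKnob = driver.mSurface.makeKnob(0, 0, 1, 1)
zoomKnob.mSurfaceValue.mMidiBinding.setInputPort(midiIn).bindToControlChange(0, 61)

var zoomIn = driver.mSurface.makeCustomValueVariable('zoomIn')
var zoomOut = driver.mSurface.makeCustomValueVariable('zoomOut')

// Assumes the encoder sends values above the midpoint for right turns and below it for left turns
zoomKnob.mSurfaceValue.mOnProcessValueChange = function (activeDevice, value) {
    if (value > 0.5) {
        zoomIn.setProcessValue(activeDevice, 1)    // one "press" of Zoom In per tick
    } else if (value < 0.5) {
        zoomOut.setProcessValue(activeDevice, 1)   // one "press" of Zoom Out per tick
    }
}

var page = driver.mMapping.makePage('Zoom')
page.makeCommandBinding(zoomIn, 'Zoom', 'Zoom In')
page.makeCommandBinding(zoomOut, 'Zoom', 'Zoom Out')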
In what other DAW would you need to learn JavaScript and recreate in code what you've already done in the UI, just to get a working jog wheel? That's what goes through my mind.
Maybe I’m in the minority here, but I consider the jog wheel to be a core transport function. Very rarely do I use fast forward/rewind buttons - I tend to set them to next/prev marker.
Furthermore, the MIDI Remote and Mapping Assistant have been designed to make things easier for users… if we can do it via JS then it would be a walk in the park for SB to make this a stock function that reads the relative value.
I never said anything about sending good or bad information. I think you have the wrong idea though. Mackie protocol can only be used if both the sending and receiving devices support it. Same with any protocol. They have to speak the same language.
I don’t disagree with you. The mapping assistant and the API have the potential to become fantastic features of Cubase. From everything I have read, they are in their infancy and are expected to grow in coming updates.
You said “Cubase can only interpret what the controller is sending”, which is an inaccurate description of the problem.
The problem lies in the way the new API is able to interact with Cubase, not with the controller.
One can easily add a knob to the MIDI remote, and have it react to the motion of the hardware knob.
But there isn’t a good way to connect the received data with various areas of Cubase (zoom, scroll, transport, slide etc).
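To illustrate the contrast with a rough sketch (names are placeholders): something like the selected track's volume can be bound directly to a fader as a host value, but there is no comparable host value for jog, zoom or scroll, so those have to be faked through command bindings.

// Sketch: direct value binding works for things the API exposes as host values...
var api = require('midiremote_api_v1')

var driver = api.makeDeviceDriver('ExampleVendor', 'ExampleController', 'ExampleAuthor')
var midiIn = driver.mPorts.makeMidiInput()
var midiOut = driver.mPorts.makeMidiOutput()
driver.makeDetectionUnit().detectPortPair(midiIn, midiOut)
    .expectInputNameEquals('ExampleController')
    .expectOutputNameEquals('ExampleController')

var fader = driver.mSurface.makeFader(0, 0, 1, 4)
fader.mSurfaceValue.mMidiBinding.setInputPort(midiIn).bindToControlChange(0, 7)

var page = driver.mMapping.makePage('Mixer')
// The selected channel's volume is available as a host value, so this is a one-liner:
page.makeValueBinding(fader.mSurfaceValue, page.mHostAccess.mTrackSelection.mMixerChannel.mValue.mVolume)
// ...whereas there is no equivalent host value for jog/zoom/scroll to bind a knob to.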
It is an accurate description of how one unit communicates with another.
What exactly do you mean by “connect directly”?
Have you actually used the MIDI Remote API?
Did you try to make an actual Remote for a device, with basic functionality like zoom, scroll, transport/jog?
I have one device with endless encoders that I have made a JS script for. I haven’t tried tying any to the functions you mention, no. I’ll give it a go this weekend and let you know my findings!
@digitallysane I played around a bit with the API and was able to successfully map an endless encoder (on a DJ TechTools MIDI Twister) to the Jog Left and Jog Right functions.
I’m posting the script as-is below if you want to have a look.
Remember you will have to make adjustments to have it work with your hardware.
JS Script
// Entry point for the Cubase MIDI Remote API
var midiremote_api = require('midiremote_api_v1')

var expectedName = "Midi Fighter Twister"

// Driver for the DJ TechTools Midi Fighter Twister
var deviceDriver = midiremote_api.makeDeviceDriver('DJ TECHTOOLS', expectedName, 'WOLAND')

var midiInput = deviceDriver.mPorts.makeMidiInput()
var midiOutput = deviceDriver.mPorts.makeMidiOutput()

// Auto-detect the device by its MIDI port names
deviceDriver.makeDetectionUnit().detectPortPair(midiInput, midiOutput)
    .expectInputNameEquals(expectedName)
    .expectOutputNameEquals(expectedName)

var surface = deviceDriver.mSurface

// Builds one jog "pair": two surface knobs bound to the same CC, plus two custom
// value variables that get triggered for left and right turns respectively.
function makeJog(knobIndex, row, channel) {
    var knob = {}
    var cc = knobIndex

    // Hidden variables that will be bound to the Jog Left / Jog Right key commands
    knob.jogLeft = surface.makeCustomValueVariable("JogLeft")
    knob.jogRight = surface.makeCustomValueVariable("JogRight")

    // First knob: values below the midpoint (encoder turned left) trigger JogLeft
    knob.encoder = surface.makeKnob(knobIndex, row, 1, 1)
    knob.encoder.mSurfaceValue.mMidiBinding.setInputPort(midiInput).bindToControlChange(channel, cc)
    knob.encoder.mSurfaceValue.mOnProcessValueChange = function (activeDevice, value) {
        if (value < 0.5) {
            knob.jogLeft.setProcessValue(activeDevice, 1)
        }
    }

    // Second knob bound to the same CC: values above the midpoint (encoder turned right) trigger JogRight
    knob.encoder2 = surface.makeKnob(knobIndex + 1, row, 1, 1)
    knob.encoder2.mSurfaceValue.mMidiBinding.setInputPort(midiInput).bindToControlChange(channel, cc)
    knob.encoder2.mSurfaceValue.mOnProcessValueChange = function (activeDevice, value) {
        if (value > 0.5) {
            knob.jogRight.setProcessValue(activeDevice, 1)
        }
    }

    return knob
}

function makeSurfaceElements() {
    var surfaceElements = {}
    surfaceElements.knob = {}
    // Encoder 0, row 1, MIDI channel 0
    surfaceElements.knob[0] = makeJog(0, 1, 0)
    return surfaceElements
}

var SE = makeSurfaceElements()

// Map the two custom variables to the Transport > Jog Left / Jog Right key commands
function makeBindings() {
    var page = deviceDriver.mMapping.makePage('JOG')
    page.makeCommandBinding(SE.knob[0].jogLeft, 'Transport', 'Jog Left')
    page.makeCommandBinding(SE.knob[0].jogRight, 'Transport', 'Jog Right')
    return page
}

var page = makeBindings()
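For what it's worth, the same split could probably be done with a single surface knob instead of two; here is an untested variant of the makeJog function above (it assumes the same surface/midiInput setup and the same kind of values from the hardware):

// Untested variant: one knob, one callback, driving both custom variables.
function makeJogSingleKnob(knobIndex, row, channel) {
    var knob = {}
    var cc = knobIndex

    knob.jogLeft = surface.makeCustomValueVariable("JogLeft")
    knob.jogRight = surface.makeCustomValueVariable("JogRight")

    knob.encoder = surface.makeKnob(knobIndex, row, 1, 1)
    knob.encoder.mSurfaceValue.mMidiBinding.setInputPort(midiInput).bindToControlChange(channel, cc)
    knob.encoder.mSurfaceValue.mOnProcessValueChange = function (activeDevice, value) {
        if (value < 0.5) {
            knob.jogLeft.setProcessValue(activeDevice, 1)
        } else if (value > 0.5) {
            knob.jogRight.setProcessValue(activeDevice, 1)
        }
    }
    return knob
}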
Cool, I’ll take a look.
Does it use key commands for doing that?
It binds to key commands, yes.
page.makeCommandBinding(SE.knob[0].jogLeft , 'Transport', 'Jog Left')
page.makeCommandBinding(SE.knob[0].jogRight , 'Transport', 'Jog Right')
Well, yes.
This is what I meant.
The communication between the MIDI Remote API and Cubase is, at this stage, bad design (or just a hack).
You are able to get the knob information easily into the MIDI Remote and then, in order to actually use it, you need to do those programming acrobatics to split the rotating knob data into key presses.
This should be as simple as MIDI learn in a VST: this knob does Jog, and that’s it.
But for this, functions like Jog or Zoom or Slide (or anything that can be logically mapped to a knob or slider) should be directly, natively available to the MIDI Remote.