Full Scripting API for Cubase - The AI Integration Gap Is Now a Competitive Threat

I’m a developer and Cubase 14 Pro user. Over the past few weeks I’ve been building an AI mixing assistant - a local agent that analyzes audio, detects frequency masking between tracks, suggests EQ/compression changes, and executes them in the DAW through natural language commands. Think of it as an intelligent junior engineer that holds the entire session context in memory and helps you iterate through mix problems.

I started this project targeting Cubase. After extensive research into the MIDI Remote API, I had to abandon Cubase entirely and switch to REAPER. Not because REAPER sounds better or has a better UI - but because Cubase literally cannot be controlled programmatically at the level required for modern AI-assisted workflows.

I’m writing this post to lay out the technical reality, show what the competition already enables, and propose what Steinberg could build to close the gap. This isn’t a wish list - it’s a competitive analysis with specific technical requirements.


The Current State: MIDI Remote API Limitations

The MIDI Remote API (introduced in Cubase 12, currently v1.1) was designed for hardware controller mapping. It runs sandboxed ES5 JavaScript inside Cubase’s process and communicates exclusively through MIDI ports. Here’s what it can do:

  • Transport control (play, stop, record, cycle)

  • Mixer operations (volume, pan, mute, solo, send levels)

  • Channel strip parameters (gate, compressor, EQ, saturation, limiter)

  • Plugin parameters via Quick Controls (limited to 8 parameters per focused plugin)

  • Key command execution

  • Value callbacks for parameter read-back

Here’s what it cannot do - and this is where the problem lies:

  • Cannot create tracks, buses, or sends

  • Cannot insert or remove plugins programmatically (the Direct Access workaround requires parsing XML config files and breaks whenever plugins are installed or removed)

  • Cannot read full project state - no way to iterate over all tracks and their properties

  • Cannot access send bus routing

  • Cannot manipulate MIDI events, audio regions, or arrangement

  • Cannot query project structure (track count, track names, track types)

  • No HTTP, WebSocket, or file I/O - completely sandboxed

  • No network access - cannot communicate with external applications except through virtual MIDI ports

This means an external application must impersonate a MIDI hardware controller via virtual MIDI ports (IAC Driver on macOS) just to send basic mixer commands. Structural operations like creating tracks or inserting plugins are simply impossible through any supported API.
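For a concrete sense of what that workaround looks like from the outside, here is a minimal sketch using Python and the mido library. The IAC bus name and the CC-to-fader mapping are assumptions: they depend entirely on what the MIDI Remote script on the Cubase side happens to bind.

import mido  # pip install mido python-rtmidi

# Pretend to be a hardware controller on a macOS IAC bus. The bus name is an
# assumption; it must match a bus created in Audio MIDI Setup and mapped in Cubase.
port = mido.open_output("IAC Driver Bus 1")

def set_fader(value: int) -> None:
    # Sends CC 7 on MIDI channel 1 (mido channels are 0-indexed). The Cubase-side
    # MIDI Remote script must have bound this CC to a fader, or nothing happens.
    port.send(mido.Message("control_change", channel=0,
                           control=7, value=max(0, min(127, value))))

set_fader(96)

Note what is missing: there is no way to ask Cubase what tracks exist, or what the fader currently reads, beyond whatever MIDI feedback the script echoes back. The external process fires blind.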


The Competition: What REAPER and Ableton Already Enable

Here’s a direct comparison of API capabilities:

| Capability | Cubase (MIDI Remote API) | REAPER (ReaScript) | Ableton (Live API) |
|---|---|---|---|
| API function count | ~50–100 | 900+ | 200+ |
| Scripting languages | ES5 JS (sandboxed) | Lua, Python, EEL2, C++ | Python, Max/MSP |
| Create tracks | ❌ | ✅ | ✅ |
| Delete tracks | ❌ | ✅ | ✅ |
| Insert plugins by name | ❌ | ✅ | ✅ |
| Remove plugins | ❌ | ✅ | ✅ |
| Read all plugin parameters | 8 via Quick Controls | ✅ (all params, any plugin) | ✅ |
| Set plugin parameters | 8 via Quick Controls | ✅ | ✅ |
| Get plugin parameter names | ❌ | ✅ (TrackFX_GetParamName) | ✅ |
| Get formatted param values | ❌ | ✅ (TrackFX_GetFormattedParamValue) | Partial |
| Iterate all tracks | ❌ | ✅ | ✅ |
| Read project structure | ❌ | ✅ | ✅ |
| MIDI event manipulation | ❌ | ✅ | ✅ |
| Audio region manipulation | ❌ | ✅ | ✅ |
| Render tracks/regions | ❌ | ✅ | ✅ |
| External process control | MIDI bridge only | ✅ (python-reapy via TCP) | ✅ (socket/OSC) |
| Network/HTTP access | ❌ | ✅ | ✅ |
| MCP servers available | 0 | Multiple | Multiple |

The gap is not incremental - it's structural. REAPER exposes roughly 9-18x as many API functions as Cubase, and critically, it allows external applications to control the DAW over TCP without the MIDI bridge workaround.


What’s Already Being Built on Competitors

This isn’t theoretical. AI-DAW integration is happening right now, and Cubase is being left behind:

REAPER:

  • DAWZY (presented at NeurIPS 2025) - an open-source AI assistant that converts natural language to reversible ReaScript actions. Achieved 100% task success rate with GPT-5 on production tasks. Published academic paper with user studies.

  • reaper-mcp by itsuzef - a comprehensive MCP server with 25+ tools for project management, track operations, MIDI composition, mixing, and mastering.

  • reaper-mcp-server by dschuler36 - MCP server for project analysis and audio mixing feedback.

  • reaper-mcp-ai-analyzing - MCP server with AI-powered audio analysis, supporting 25+ tools for REAPER control.

  • Multiple community MCP servers on GitHub, all built on top of REAPER’s rich ReaScript API.

Ableton:

  • ableton-mcp by ahujasid - MCP bridge with tools for track creation, instrument loading, MIDI clip generation, transport control.

  • ableton-11-mcp by cafeTechne - 220+ MCP tools including music theory generators and agentic workflows.

  • Active community building AI integrations through Live API + Max for Live.

Cubase:

  • Nothing. Zero MCP servers. Zero AI integration projects. Not because no one wants to - because the API doesn’t allow it.

What I Was Trying to Build (And Why I Left Cubase)

My project - a local AI mixing assistant - requires these core capabilities:

  1. Read session state: list all tracks, their names, types, routing, and FX chains

  2. Read plugin parameters: get all parameters of any plugin (not just 8 via Quick Controls), including parameter names and human-readable values

  3. Set plugin parameters: modify any parameter on any plugin

  4. Insert/remove plugins: add an EQ to a track, remove a compressor

  5. Create tracks and buses: set up routing, create group buses

  6. Render audio regions: export a Time Selection of specific tracks for spectral analysis

  7. Communicate with external processes: send analysis results to an LLM, receive commands back

REAPER’s ReaScript API covers all 7 requirements natively. Cubase’s MIDI Remote API covers only a fraction of #2 and #3 (limited to 8 Quick Control parameters), and none of the others.
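For comparison, here is how requirements 1 through 4 look through python-reapy. This is a sketch, assuming reapy's bridge is already configured inside REAPER; "ReaEQ" is just an example plugin name.

import reapy

project = reapy.Project()                  # 1. read session state
for track in project.tracks:
    print(track.name, [fx.name for fx in track.fxs])

track = project.tracks[0]
eq = track.add_fx("ReaEQ")                 # 4. insert a plugin by name
for param in eq.params:                    # 2. all parameters, with names
    print(param.name, float(param))
eq.params[0] = 0.5                         # 3. set any parameter

Requirements 5 through 7 are equally direct: track creation is project.add_track(), rendering goes through render actions, and the whole script runs in an external Python process.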

I had to switch to REAPER. Steinberg lost this user not because of sound quality, workflow, or features - but because of API limitations.


What Steinberg Could Build

In order of impact and (estimated) implementation effort:

Option A: Full Scripting API (Best, Most Impactful)

A ReaScript-equivalent for Cubase - a comprehensive API accessible from Python/Lua/JavaScript that can:

  • Enumerate and query all project objects (tracks, buses, sends, plugins, MIDI events, audio regions, markers)

  • Create, modify, and delete project objects

  • Read and write all plugin parameters (not just Quick Controls)

  • Render audio regions programmatically

  • Be accessible from external processes (TCP/IPC, not just MIDI)

  • Maintain backward compatibility as features are added

This is what REAPER shipped in 2006 and has expanded to 900+ functions over 20 years. It doesn’t need to be 900 functions on day one - even 100 well-chosen functions covering tracks, plugins, rendering, and project structure would transform what’s possible.
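To make this concrete, here is a sketch of what that first cut could look like from Python. Every name below is hypothetical; no such module exists today.

import cubase  # hypothetical module, does not exist

project = cubase.current_project()
for track in project.tracks:                      # enumerate project objects
    print(track.name, track.type, track.output_bus)

vox = project.tracks["Lead Vox"]
eq = vox.insert_plugin("StudioEQ")                # structural operation
eq.set_param("Band 3 Frequency", 2400.0)          # full parameter access
print(eq.get_param_display("Band 3 Frequency"))   # e.g. "2.4 kHz"

project.render(tracks=[vox], time_selection=True, path="vox_solo.wav")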

Option B: Expand MIDI Remote API (Medium Effort)

Keep the existing architecture but add critical missing capabilities:

  • midiRemote.getHostValue('project/trackCount') - project structure queries

  • midiRemote.createTrack(name, type) - structural operations

  • midiRemote.insertPlugin(trackIndex, pluginName) - plugin management

  • TrackFX_GetParamName / TrackFX_GetFormattedParamValue equivalents - full plugin parameter access

  • HTTP/WebSocket listener - allow external applications to send commands without MIDI bridge

  • File I/O - read/write files for data exchange

Option C: Official MCP Server (Fastest Path to AI Integration)

Ship a Cubase MCP server that exposes DAW operations as Model Context Protocol tools. This is the direction the industry is heading - MCP is becoming the standard interface between AI models and applications. Anthropic created it, and it’s now under the Linux Foundation. REAPER and Ableton already have community-built MCP servers. An official Steinberg MCP server would:

  • Immediately enable AI assistant integration

  • Not require changes to Cubase’s internal architecture (it’s an external bridge)

  • Position Cubase as AI-forward rather than AI-behind

  • Give the community a foundation to build on
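To show how thin such a bridge could be, here is a sketch of an MCP server skeleton using the Python MCP SDK (FastMCP). The cubase_bridge module is hypothetical; it stands in for whatever transport into Cubase eventually exists.

from mcp.server.fastmcp import FastMCP
import cubase_bridge  # hypothetical transport into Cubase

mcp = FastMCP("cubase")

@mcp.tool()
def list_tracks() -> list[str]:
    """Return the names of all tracks in the active project."""
    return cubase_bridge.get_track_names()

@mcp.tool()
def set_track_volume(track_index: int, db: float) -> str:
    """Set a track's fader level in dB."""
    cubase_bridge.set_volume(track_index, db)
    return f"Track {track_index} set to {db:.1f} dB"

if __name__ == "__main__":
    mcp.run()  # stdio transport; an MCP client connects here

Once a server like this exists, any MCP-capable client (Claude Desktop, IDE agents, custom tooling) can drive Cubase through natural language, without Steinberg shipping any AI features itself.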


I’m Ready to Contribute

I’m not just requesting features - I’m offering to help build them. I have:

  • Deep research into REAPER’s ReaScript API, Ableton’s Live API, and Cubase’s MIDI Remote API

  • A working prototype of an AI mixing assistant (currently targeting REAPER) with audio analysis, plugin vision, and iterative mix problem solving

  • Experience with MCP server development, Python, TypeScript, and audio analysis (Essentia, pyloudnorm)

  • A detailed architectural plan for a Cubase AI assistant that I had to shelve due to API limitations

If Steinberg provides even a basic expanded API or an MCP framework, I will build and open-source a Cubase MCP server and AI integration toolkit for the community. I suspect other developers feel the same way.


Summary

Cubase has world-class audio quality, a mature workflow, and a loyal user base. But it is falling behind on programmability - and in 2026, programmability is what enables AI integration. Every month that passes, more AI tools are built for REAPER and Ableton, and the ecosystem gap widens.

The request is specific and technical:

  1. Expose project structure (tracks, routing, plugins) to external queries

  2. Allow full plugin parameter access (all parameters, not 8 via Quick Controls)

  3. Enable structural operations (create tracks, insert plugins) via API

  4. Allow external process communication (HTTP/TCP, not just MIDI)

  5. Support audio region rendering via API

These five capabilities would transform Cubase from a closed system into a platform — and let the community build the next generation of tools on top of it.

I’d love to hear from Steinberg and from other developers who share this need.

22 Likes

I, too, think that this is of utmost importance.

3 Likes

Hi, I will comment only on some things that are actually doable:

Already doable. Not sure where you got the 8-controls limit from - perhaps you're talking about the Track Quick Controls? Generally, for every instrument or FX slot, we can read/write every single parameter exposed to the DAW.

Creating tracks is not doable, correct. However, selecting another VSTi (for the instrument slot) or a VST for an FX slot is currently doable in CB15, using Direct Access, with no need to parse XMLs.

1 Like

Personally, I think it’s better if a developer themself just integrates features that are needed rather than opening everything up. So I’d rather look at your very interesting post here as a list of feature requests, rather than a change of development structure.

There’s definitely some interesting stuff here though, and I appreciate the level of technical detail and insight. I’m sure the devs will take notice.

Perhaps also, much of this could be implemented into/through the Project Logical Editor, i.e., a new custom Script element for both the Target and Action fields.

@Glorious Have you delved into Project Logical Editor much? There are some amazing things that can be achieved with it, in combination with Macros. I would maybe add the PLE tag to the post.

1 Like

Totally agree. 'Modern' software is built from an API at its base.

I understand the sandboxed MIDI Remote solution is primarily there to support hardware devices, but even that is severely limited.

A basic read/write track model that exposes all the properties and methods associated with a track would seem to be the very least an API should provide.

I tried writing a hybrid solution that used a virtual MIDI driver and a SendKey-type interface (yes, really, SendKeys!) to amalgamate Cubase macro and MIDI Remote functionality. That was wrapped in a very simple (interpreted) scripting language, all hosted in an HTTP/WebSockets service API. It sort of works, but it's slow and prone to errors.

Even something as simple as selecting a number of tracks is monumentally difficult: renaming the required tracks, then using a PLE preset assigned to a key... blah blah... renaming them back.

I guess because Cubase was not originally written from an API outwards, it would be almost impossible to reverse-engineer that sort of functionality - hence MIDI Remote.

1 Like

The parameters of the channel, instrument slot, and FX are already covered, no? Or perhaps you're talking about the content side, i.e. parts/events etc.?

Hi,

What I'm talking about is an API I can consume - like a NuGet package - that lets me connect to the Cubase core and manipulate Cubase from another programming language.

So rather than creating a JS program and using MIDI commands to communicate with it: a proper two-way interface that lets me control Cubase in real time.

I realise it's highly unlikely, but something like:

using Cubase.Api;

var cubaseApi = Cubase.Api.Create();

var tracks = cubaseApi.GetTracks();

var myTracks = tracks.Where(x => x.Name.ToLower().Contains("drum"));

cubaseApi.Select(myTracks);

... that sort of thing!

2 Likes

We can already get all tracks in Cubase (after CB13.0.50).

This is correct, not currently doable, though the API supports the “selected” property. Not sure why the multiple selection is not implemented or if it’s just a bug, or even simply something to do with the “first selected track” property.

EDIT: Just wanted to share a snippet concerning tracks’ iteration. Suppose we want to mute all tracks that have “bass” in their names, without using a PLE. Here’s a snippet, just to show partially how this can be done:

var daMixer=page.mHostAccess.makeDirectAccess(page.mHostAccess.mMixConsole)

var buttonMute=surface.makeCustomValueVariable("buttonMute")
var hostValueMuteBass=page.mCustom.makeHostValueVariable("hostValueMuteBass")

page.makeValueBinding(buttonMute,hostValueMuteBass).mOnValueChange=function(activeDevice,activeMapping,value,diff){
   if(value==1){
      buttonMute.setProcessValue(activeDevice,0)
      muteAllBass(activeMapping)
   }
}

function muteAllBass(currentMapping){
   var mixerID=daMixer.getBaseObjectID(currentMapping)
   var channelsNum=daMixer.getNumberOfChildObjects(currentMapping,mixerID)
   for(var i=0;i<channelsNum;i++){ // iterate all mixer channels
      var channelID=daMixer.getChildObjectID(currentMapping,mixerID,i)
      var channelName=daMixer.getParameterDisplayValue(currentMapping,channelID,1024) // 1024: Name Tag
      if(channelName.toLowerCase().indexOf("bass")!=-1){
         daMixer.setParameterProcessValue(currentMapping,channelID,1027,1) // 1027: Mute Tag
      }
   }
}
2 Likes

Cubase has another interface/API that is used by some vendors. However, it requires an NDA with Steinberg/Yamaha, and I don't know if it's still something you can even get these days. It's the same one that EUCON and Nuage use. It's on another level.

Thanks for the summary! Another thing you cannot do is add a channel to the current selection - if you select a channel, the others are deselected. You can do this in the GUI. I think it would be useful for controllers, for VCAs etc.
You will get my vote on this.

2 Likes

Thanks m.c. I see that, but I think my point is that the MIDI Remote JS is sandboxed and static. Having to write reams of JS for a particular use case is not a viable option for a dynamic development approach. As I say, I suspect a true API is too hard to reverse-engineer.

An interesting discussion nonetheless!

Can you show us a (short) video where we can see how your project works in practice?

1 Like

This is an excellent overview @Glorious , thank you.

I have gone to great lengths to try and stick with Cubase despite the lack of an API. I've written my own plugin to get around certain annoyances to do with loading audio from other apps. I've got some horrible AppleScripts that I trigger from Python to automate things like clicking through the MIDI export dialog. And of course I've got a bunch of macros and Project Logical Editor stuff. I feel I've pushed it as far as it can go!

The danger here is that Cubase becomes obsolete over time as new tools and infrastructure get built around the more flexible DAWs.

Big picture view - the cost of developing software is trending towards zero. This week I replaced an iPad audio app I use regularly with a vibe coded version more tailored to my needs and with only the 10% of features I wanted from the original. I’ve developed apps like this before that took me weeks. I had it going in a morning with occasional prompting while I did other things.

Every time I meet a producer or read this forum people are seeking ways to tailor the software to their own music making process. In the new software world people will be able to create their own audio apps easily and leverage existing powerful tools in ways not seen before. I’d really like Cubase to be available in that world as once all the workflow things are out of the way, it’s still my tool of choice creatively.

1 Like

Thank you all for the thoughtful responses — there’s a lot to address here, so I’ll go through each point.


@m.c — Thank you for the corrections, this is exactly the kind of detail I needed.

You’re right, and I need to update my comparison table. I overstated the Quick Controls limitation — I was conflating the Track Quick Controls UI (which exposes 8 parameters) with what the MIDI Remote API can actually access programmatically via Direct Access. Your code snippet iterating channels and accessing parameters by tag IDs is genuinely useful and shows more capability than I gave credit for.

A few follow-up questions for you, since you clearly have deep experience with this:

  1. Through Direct Access, can we read the name of each parameter on a third-party VST (equivalent to REAPER’s TrackFX_GetParamName)? And can we get the formatted/human-readable value (like “2.4 kHz” instead of a normalized 0–1 float)?

  2. Your track iteration example uses getNumberOfChildObjects and getParameterDisplayValue — is there a reference for all available tag IDs (1024 for Name, 1027 for Mute, etc.)? I haven’t found this documented anywhere.

  3. You mentioned plugin selection via Direct Access in CB15 without XML parsing — could you share an example of how this works? This was one of the biggest blockers I hit.

I’ll gladly correct the original post with accurate information. Getting the facts right matters more than making a dramatic case.


@cubace — The NDA-protected API is very interesting.

You mention an interface used by vendors like EUCON and Nuage that operates at a deeper level. This is actually a key part of the argument: if Steinberg already has a comprehensive internal API (which they must, since EUCON/Nuage controllers can do things the MIDI Remote API can’t), then the technical foundation exists. The question is whether a subset of it could be exposed to the developer community — even in a limited, read-heavy form initially.

Do you know if this API has been accessible to any third-party developers recently, or if the NDA program is essentially closed?


@CKB — On the demo video request.

Honest answer: the prototype currently runs on REAPER, not Cubase. I can show you a demo of the AI assistant analyzing frequency masking between tracks, suggesting EQ changes, and applying them through natural language — but it would be running in REAPER. Which, in a way, is exactly the point of this thread. It's still an MVP running in a CLI, though.

What I can show for Cubase is the architectural plan and the wall I hit — the specific API calls I needed that don’t exist. Would that be useful, or would you prefer to see the REAPER prototype in action?


@awesomeaudio — On PLE and the “developer integrates features” approach.

I’ve used PLE extensively — it’s powerful for batch operations on tracks and events within its domain. Combined with macros and key commands, it handles a lot of repetitive tasks well.

But PLE operates on a fundamentally different level than what AI integration requires. PLE is rule-based: “IF track name contains ‘drum’ AND event velocity > 100, THEN set velocity to 100.” That’s great for deterministic batch operations.

An AI mixing assistant needs to: read the spectral content of audio, compute masking between track pairs, interpret the results in context (“the guitar is masking the vocal at 2-4 kHz because of this specific EQ curve”), propose a solution, apply it, and re-analyze to check for cascading effects. This requires external process communication, audio rendering, and arbitrary parameter manipulation — things PLE can’t do.
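To make that loop concrete, here is a toy sketch of the analysis step, the part that must run outside the DAW on rendered audio. The file names and the 30 dB threshold are arbitrary assumptions; real masking models are far more involved.

import numpy as np
import soundfile as sf

def band_energies(path, n_bands=30):
    x, sr = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                         # mix down to mono
    spec = np.abs(np.fft.rfft(x)) ** 2             # long-term power spectrum
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    edges = np.geomspace(40, sr / 2, n_bands + 1)  # log-spaced band edges
    bands = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return edges, bands

edges, guitar = band_energies("guitar_stem.wav")
_, vocal = band_energies("vocal_stem.wav")

def loud(e):
    return 10 * np.log10(e / e.max() + 1e-12) > -30  # within 30 dB of the loudest band

for i in np.where(loud(guitar) & loud(vocal))[0]:
    print(f"possible overlap around {edges[i]:.0f}-{edges[i+1]:.0f} Hz")

The point is not this particular heuristic; it is that everything after "render the stems" is impossible to feed back into Cubase today, whereas in REAPER the same script can render, analyze, and then adjust an EQ in one process.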

That said — extending PLE with a Script action type is an interesting middle ground. If PLE could trigger external scripts (Python, for example) that receive the current selection context and return actions to execute, that could bridge some of the gap without a full API overhaul. Good suggestion.

On the broader point about Steinberg integrating features themselves: I understand the appeal of a curated approach. But the pace of AI development is faster than any single company’s release cycle. REAPER’s ecosystem has multiple community-built AI tools precisely because Cockos gave developers the API and stepped back. The community moves faster than any product team can — especially in a space evolving as rapidly as AI.


@David_Nuttall — Your hybrid workaround sounds painful but familiar.

Your description — virtual MIDI driver + SendKey + interpreted scripting language + HTTP/WebSocket service — is essentially the multi-layer architecture I documented in my research, minus the SendKey part (which is creative and horrifying in equal measure). The fact that multiple developers independently arrive at the same hacky stack to work around the same limitations is itself strong evidence that the limitations are real and impactful.

Your C# API pseudocode is exactly what a modern DAW API should look like. REAPER’s python-reapy provides essentially this:

import reapy
project = reapy.Project()
tracks = project.tracks
drums = [t for t in tracks if "drum" in t.name.lower()]
for track in drums:
    track.select()

This works over TCP from any external process. No MIDI bridges, no SendKey, no sandboxed JS.


@philmac — You articulated the bigger picture perfectly.

“The cost of developing software is trending towards zero. Every time I meet a producer they’re seeking ways to tailor the software to their own music making process.”

This is the strategic argument. It’s not just about AI mixing assistants — it’s about the entire ecosystem of tools that developers and power users will build when given the ability. Templates, workflow automations, batch processing pipelines, integration with notation software, sync with video editors, custom control surfaces, educational tools. Every one of these becomes trivially easier with a proper API.

Your experience — writing your own plugin, using AppleScript triggered from Python, pushing PLE and macros to their limits — mirrors exactly what I found in my research. Cubase power users are building elaborate workarounds for the same fundamental limitation. An API wouldn’t just enable AI — it would legitimize and simplify everything you’ve already been doing with duct tape.


Where This Leaves Us

Based on this discussion, I think the accurate picture is:

More is possible than I initially stated: @m.c demonstrated that Direct Access provides deeper parameter access and track iteration than I documented. I'll update the original post to reflect this accurately.

But the core gap remains:

  • No external process communication (the sandbox is the fundamental constraint)

  • No track/bus creation or deletion

  • No audio rendering via API

  • No MIDI event manipulation

  • Plugin insertion still relies on fragile workarounds even with Direct Access improvements

And there may be more under the hood: @cubace's mention of the EUCON/Nuage-level API suggests Steinberg already has the internal infrastructure. The question is access.

I’m going to continue building the REAPER prototype. If there’s interest, I’ll share progress here. And if Steinberg ever opens up a path — whether it’s an expanded API, an MCP server, or access to the deeper interface — I’ll be first in line to build the Cubase version.

Thanks everyone for the quality of this discussion. This is the kind of thread that actually moves things forward.

6 Likes

While I agree in principle, the problem is that the limiting factor becomes the developer in that case. In other words, you have 1 Steinberg that develops needed features. If you open “everything” up you have 1 Steinberg + X 3rd party developers, i.e. many more developers.

So in practice it would probably mean more new features over any given time period.

The way I look at it is that there are three main things users request with great regularity; a) fixing of longstanding bugs, b) improving some functions, and c) adding new functionality. For many users, especially professional ones, the very fact that we see repeated requests in all three categories over as much as a decade (!) is incredibly frustrating. At some point I think many users just sort of throw their hands up in the air and give up. What’s the point of making these requests when many not only are not addressed but when there is also zero communication by the company about what their plans are.

Now, if we stay quiet then I think what happens is that we simply wait for the competitors to “sell us” what is best. We stop participating and stop contributing to what could help us in our work, which in turn means Steinberg has less of an idea of what we actually want. That cannot be the best way forward for any company.

So allowing scripting would “free up” resources within Steinberg to deal with all the problems and optimizations they should be spending time on while some functionality can be added by 3rd parties instead.

Quite frankly I'm "retroactively shocked" now that I think about this a bit more. Reaper had scripting from the get-go, right? Or am I remembering that wrong? So many years have passed and this wasn't an obvious path to go down? I think if PT does it then Cubase and Nuendo need to do it, because Reaper already did it. Steinberg is now sandwiched between the cheaper alternative on one end and the professional industry standard on the other (in some markets/segments). I think this just has to happen. ASAP.

2 Likes

REAPER started adding scripting with ReaJS, which then evolved into the complete scripting API that exists today.

Yes. Snippet:

var insertViewerSlot1=page.mHostAccess.mTrackSelection.mMixerChannel.mInsertAndStripEffects.makeInsertEffectViewer("viewer1")
insertViewerSlot1.accessSlotAtIndex(0)
var daInsert1=page.mHostAccess.makeDirectAccess(insertViewerSlot1)

function getParamTitleAndDispValSlot0(currentMapping,paramIndex){
   var insertID=daInsert1.getBaseObjectID(currentMapping)
   var paramTag=daInsert1.getParameterTagByIndex(currentMapping,insertID,paramIndex)
   var paramTitle=daInsert1.getParameterTitle(currentMapping,insertID,paramTag,128)
   var paramDispVal=daInsert1.getParameterDisplayValue(currentMapping,insertID,paramTag)
   //log here
}

No, I had to log all of them, using a small utility that formats the output so it can later be copied into a spreadsheet, for example.

Let me know if you need info on recursively logging the objects tree.
For now, here's how the tags look for a track (root), just to give an idea:

| Tag | Index | Title | Value | DispVal |
|---|---|---|---|---|
| 4013 | 0 | Write Automation | 1 | On |
| 4014 | 1 | Read Automation | 1 | On |
| 1024 | 2 | Name | 1 | Retrologue 01 |
| 4040 | 3 | Edit Setting | 0 | Off |
| 4096 | 4 | Edit Inserts | 0 | Off |
| 4000 | 5 | Selected | 1 | On |
| 4047 | 6 | Remoted | 1 | On |
| 1025 | 7 | Volume | 0.7890865802764893 | 0.00 |
| 1027 | 8 | Mute | 0 | Off |
| 4055 | 9 | Channel Configuration (click to switch between mono and stereo) | 0 | Off |
| 1028 | 10 | Solo | 0 | Off |
| 1033 | 11 | Solo Defeat | 0 | Off |
| 4101 | 12 | Input | 0 | No Bus |
| 4102 | 13 | Output | 21 | Stereo Out |
| 4039 | 14 | Pre Fader Listen | 0 | Off |
| 4105 | 15 | Automation Meter | 0 | 0 |
| 4011 | 16 | Clip | 0 | Off |
| 4012 | 17 | Peak | 0 | -oo |
| 4103 | 18 | Meter Select | 0.3333333432674408 | 1 |
| 4104 | 19 | Meter Range | 0 | 0 |
| 4009 | 20 | Meter All | 0 | -oo |
| 4010 | 21 | Meter Max. | 0 | -oo |
| 4019 | 22 | Meter | 2 | Off |
| 4114 | 23 | Meter Count | 0.019999999552965164 | 2 |
| 4003 | 24 | Streams | 0.019999999552965164 | 2 |
| 4002 | 25 | Record Enable | 1 | On |
| 4048 | 26 | Monitor Mode | 0 | 0 |
| 4001 | 27 | Monitor | 0 | Off |
| 4106 | 28 | ioChanged | 0 | Off |
| 4109 | 29 | Linked Panner | 0 | Off |
| 4113 | 30 | freeze | 0 | Off |
| 4111 | 31 | listen | 0 | Off |
| 4112 | 32 | listenPos | 1 | On |
| 4115 | 33 | reset | 0 | Off |
| 4116 | 34 | click | 0 | Off |
| 4117 | 35 | clickVolume | 0.7890865802764893 | 0.0 |
| 4118 | 36 | clickPan | 0.5 | C |
| 4119 | 37 | On | 1 | On |
| 4120 | 38 | channelStripsPreInserts | 0 | Off |
| 4121 | 39 | postInsertsPreFader | 0 | Off |

Will try to post a snippet on this as soon as possible.

1 Like

Never really got into Reaper. However, I did compare scripts from a well-known dev for Reaper with the ones I've made for the same controllers. I didn't see anything missing from mine; in fact there have been some users reporting the opposite. That of course doesn't mean that ReaScript isn't more complete, though I don't really have deep knowledge of its parts. It's just that having a "complete" package is not the same as the functions actually used. I think that Steinberg took this road. At the same time, I do see additions from version to version; it's by no means a finished project (some users even think it's abandoned).

Where would we be if Steinberg hadn’t opened up the ability to create VSTs? Surely the dev team at Steinberg, as good as they are, would not have created and released all of the now industry standard VSTs used these days.

While I have zero interest in anything AI related, some people do. And no matter how many times they get told AI is a dead end street they are going to keep going in that direction. Steinberg can embrace that, or stick their collective heads in the sand.

Anybody remember IBM trying to wall off things from PC clone manufacturers back in the 80s/90s? Eventually they got left out of the discussion and no longer make PCs. Short term makes sense but long term it eventually ends the same way for everybody.

2 Likes

This isn’t about controllers though, I think.

Oh, I don't know. The OP asked me some questions about API handlers; I didn't really read everything here.