What AI do you want from Steinberg?

Self-explanatory.
AI is coming, and for all the bad music that will be made, the artists who use it as a tool instead of a shortcut will be at the forefront of sounds we never imagined.

Bitwig has Browser AI integration.

FL Studio has AI mastering and stem separation.

RipX is an AI-only DAW.

What AI would you like to see Steinberg implement that could help you achieve whatever you're working towards? Something that would be an overall net positive for creation and artistry, as opposed to just AI for AI’s sake…
Very keen to hear
:point_right: :point_left:

2 Likes

Why not start with what you want from Steinberg in regard to AI?

2 Likes

If you’re asking :slight_smile:

- Sound matching via imported audio (detect frequency, rhythm, MIDI and automation, everything, just from a drag and drop)
- Browser AI integration for all elements within Cubase (user samples analysed through metadata)
- Full stem separation of every element
- A channel strip that measures the audio or MIDI signal and creates chain recommendations based on various elements, such as style, genre, etc.
- A flagship VST (something akin to Neutone), but with that Steinberg flavour
- Something akin to MIDI inserts, using generative AI to combine automation/modulation
- An inbuilt prompter


Something like RipX that allows communication with hosted AI sites, e.g. So.vits (the new M4L).

A feature allowing users to collaborate simultaneously, as is being introduced in BeatConnect.

This is long, long term, though.
It would be great to see, once all the ideas are put together, which one is the most requested…

This is already possible with SpectraLayers (and fully integrated in Cubase since 12).

2 Likes

Yes, don’t integrate SL functionality into Cubase directly.

Taking a cue from SL, however, some kind of processing offload to the GPU would be nice.

/Magnus

I’m still waiting for Cubase to go downstairs and make my coffee for me…

@Arne_Scheffler can we get working on that? I’ve been waiting since VST24!!!

1 Like

Sorry, we only implemented going upstairs to make the coffee. Please move your studio into the basement.

10 Likes

Glad you’re asking!

Two feature suggestions:

(1) Similarity detection for the Media Bay: when selecting, say, a kick drum sample, show me everything else in the Media Bay that’s similar to that selected kick. There are tools out there that show two-dimensional “similarity clouds”, i.e. that cluster sounds by similarity, so something like that, but cleverly and deeply integrated into the Media Bay, would be great (see the sketch after this list). It would make finding just the right sound so much easier, because you could start with something in the ballpark and then iterate to refine, instead of randomly picking through presets. Make sure to include sounds from VST instruments, too (you can just trigger them to get the WAV audio for your algorithms).

(2) A tool like holly.plus that allows me to train a neural network on a collection of sounds/samples, and then allows me to trigger it with other sounds, and it’ll try to match its output (much like voice transfer technology).
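For what the “similarity cloud” idea in (1) could look like under the hood, here is a minimal sketch, not Steinberg’s implementation: fingerprint each sample with mean MFCCs and query the nearest neighbours of a chosen kick. It assumes librosa and scikit-learn are available; the folder and file names are made up.

```python
# Minimal sketch: find samples similar to a chosen one by audio fingerprint.
# Assumes librosa and scikit-learn; all file paths are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.neighbors import NearestNeighbors

def embed(path, sr=22050):
    """Summarise a sample as the mean MFCC vector (a crude timbre fingerprint)."""
    y, sr = librosa.load(path, sr=sr, mono=True, duration=2.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Fingerprint the whole (hypothetical) sample library once.
paths = sorted(glob.glob("samples/**/*.wav", recursive=True))
features = np.stack([embed(p) for p in paths])

# Index the library, then ask for the 5 samples closest to a chosen kick.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(features)
query = embed("samples/kicks/kick_01.wav")  # hypothetical query sample
_, nearest = index.kneighbors(query.reshape(1, -1))
for i in nearest[0]:
    print(paths[i])
```

A two-dimensional “similarity cloud” would just be these same feature vectors projected down to 2D with something like PCA or t-SNE.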

1 Like

None…

2 Likes

The first would be transformative. I’ve really only seen Bitwig do this well: every sample I have is automatically evaluated via metadata, and the browser can suggest and organise based on this. Honestly, the browser is impressive.

I guess what made me ask is that, among producers in my age bracket, Cubase is known as the “geriatric” program. The old guard.
Cubase 13 seems to be doing a good job of starting to change this perception (among the people I know, at least). But to retain them, as well as the next generation of producers, there needs to be a marrying of the old and new, and real investment in what the next generation of music might sound like…

*It was mentioned that SpectraLayers can already do the stem separation FL can? If someone could let me know: does SpectraLayers match that feature like for like?

… as opposed to “kindergarten” programs that spit out a finished piece with one mouse click and no creative effort?

Similar, just much better, and with GPU acceleration.
SL works as an ARA 2 plugin as well, allowing you to drag the individual separations onto tracks in Cubase.

I knew someone would get upset, lol. No, not like kindergarten software. Yes, there’s going to be a lot of bad, generic music, but there will also be artists who innovate in ways we currently can’t comprehend. This software, like every other, must understand there is no stopping what is coming over the next ten-ish years. You’re either on board or… history has enough examples. You can’t not innovate while the competition does.
Saying this also makes me wonder what ARA integration for AI voice modelling would look like… that would be crazy :dizzy_face:

Thanks for the SpectraLayers info!
Is that the version that comes bundled with Cubase, or only the full version?
If it’s only in the full version, then I would stand by my statement.

What AI do you want from Steinberg? As opposed to just AI for AI’s sake…

I think your approach to this topic is naïve. You are asking for AI for AI’s sake. Otherwise, there’s no reason to randomly request “AI” as a technology solution without first considering what problem needs to be solved. It’s an arbitrary request.

It’s like asking Steinberg to come up with a product that uses iron and wood. Why?

If Steinberg decides to make some furniture, then sure; otherwise, it’s the wrong way to frame a solution to a problem.

Additionally, many companies are overusing the term “AI”. I work in the field of generative/algorithmic production, and I suspect a lot of products now labelled as AI are just using the regular generative processes we’ve been doing for years.

There could be some potentially interesting features that happen to use AI, but it seems really off to force AI as a technology solution from the start, unless Steinberg just wants to use the term for marketing purposes, which everyone is doing now.

3 Likes

Very good response. Appreciate it

Also, Steinberg already has stem separation in their product SpectraLayers, which can be integrated into Cubase via ARA. It uses AI, but again, that’s an arbitrary marker of “usefulness”.

Is this the bundled or full version?

I don’t actually know if it’s bundled with anything. They likely want people to buy SpectraLayers, however, which may mean they won’t completely build it into Cubase.

I 100% agree with not integrating for the sake of it.
When I think of the number of “AI” VSTs out there, so far the only one that has met the benchmark and is actively used in my workflow is Neutone. That’s my standard for an FX plug-in so far.
It has massive capabilities for scoring, from my play-arounds.
I definitely don’t have the answers, nor am I the person to give them.

Riffusion.com uses text-based generation to create vocals with lyrics, or just music, based on the text.
Will some people make entire bad songs with this? I’m sure of it.
But the way I use it is to make otherworldly abstract samples and then mess with those.

There really is some interesting stuff going on.
You still have to do a lot of digging to get to it, though.

As you said, it’s currently a buzzword. It’s close, but not there yet. I feel like AI is currently in its iPhone 1 phase.

If any technology needs to “arbitrarily” implement support for AI, it’s Max/MSP, which would then allow you to build AI-based tools.

Personally, I think people have barely scratched the surface of algorithmic/generative music processes: Markov chains, serialism, stochastic music, probability.

I’m personally heartbroken that people would rather skip these amazing subjects just to do something easier with AI.
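To show how little machinery some of these processes need, here is a minimal sketch of a first-order Markov chain melody generator in plain Python; the pitch set and transition weights are invented for the example.

```python
# Minimal first-order Markov chain melody sketch.
# The MIDI note numbers and transition weights are invented for illustration.
import random

# P(next note | current note) as weighted choices over a small C-minor-ish set.
transitions = {
    60: [(63, 0.4), (62, 0.3), (67, 0.2), (60, 0.1)],
    62: [(60, 0.5), (63, 0.3), (65, 0.2)],
    63: [(62, 0.4), (65, 0.3), (60, 0.3)],
    65: [(63, 0.5), (67, 0.3), (60, 0.2)],
    67: [(65, 0.4), (63, 0.3), (60, 0.3)],
}

def generate(start=60, length=16):
    """Walk the chain, picking each next pitch from the weighted table."""
    note, melody = start, [start]
    for _ in range(length - 1):
        options, weights = zip(*transitions[note])
        note = random.choices(options, weights=weights, k=1)[0]
        melody.append(note)
    return melody

print(generate())  # e.g. [60, 63, 65, 67, 63, ...]
```

Swap in weights learned by counting note-to-note transitions in existing MIDI and you have a crude style imitator, no “AI” label required.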

Ooft, what a suggestion.
I’ve been playing around with Stochas… very interesting.

I was at Studios 301 in Sydney the other week. At the time, they had an international recording artist in session using Ariana Grande BVs for the release.
So it’s already being used on top records.