Hi all! What do you think about the near future of the music industry in terms of AI, creativity, and technical tools? Here are my thoughts:
Generative music powered by artificial intelligence, represented by services like Suno, Udio, Soundraw, AIVA, and Boomy, is no longer just a toy. These services create complete arrangements with vocals in mere seconds. However, the real tectonic shift in the industry won't happen when AI simply learns to generate audio, but when it allows flawless conversion of that audio back into MIDI data. And that moment is very close.
As soon as reliable tools emerge for extracting clean MIDI stems (individual tracks) from AI compositions, the musician’s workflow will change forever.
The New Workflow: “Musical Re-amping”
Imagine this: A composer generates a dozen arrangement variations in a specific style using an AI service. Instead of taking the raw and sonically imperfect audio file, they receive a full package of MIDI data: drum parts, basslines, harmonies, and even the vocal melody.
Even now, we already have tools for parts of this pipeline:
- Steinberg SpectraLayers 12 for separating the drums in a single audio file into kick, snare, hi-hat, and other stems
- NeuralNote VST for manually extracting MIDI from raw audio
- Drum-replacer VSTs for swapping out drum sounds
- AI services for spectral restoration and enhancement of audio
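Under the hood, every audio-to-MIDI tool has to solve the same core problem: detect a frequency in the signal and map it to a MIDI note number. Here is a minimal sketch of that idea, assuming a clean monophonic signal and using only NumPy; `detect_pitch` and `freq_to_midi` are illustrative names, not functions from any of the tools above, and real tools handle polyphony, onsets, and noise far more robustly:

```python
import numpy as np

def freq_to_midi(freq):
    # Standard MIDI mapping: A4 = 440 Hz = note 69, 12 semitones per octave
    return int(round(69 + 12 * np.log2(freq / 440.0)))

def detect_pitch(signal, sr):
    # Naive pitch detection: pick the strongest bin in the FFT magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    return freqs[np.argmax(spectrum)]

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
a4 = np.sin(2 * np.pi * 440 * t)       # one second of a pure A4 tone
note = freq_to_midi(detect_pitch(a4, sr))  # → 69 (A4)
```

This only works for a single sustained pitch; the hard part that products like NeuralNote tackle is doing this for overlapping notes and real-world mixes.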
From here, a process begins that can be compared to “re-amping” in the guitar world. This MIDI material is imported into Cubase or another familiar DAW. In this scenario, the AI acts not as the final producer, but as an incredibly fast co-writer or session musician who has sketched out a “demo.”
Music creation becomes unimaginably accessible. The barrier to entry—once defined by virtuosic skill on an instrument or deep knowledge of music theory—is lowered. What comes to the forefront is taste, production vision, and the skill of “finishing” a track.
The Challenge for DAWs: Synergy or Death
This is where an existential threat arises for classic DAWs. Giants like Steinberg Cubase, Ableton Live, or Pro Tools were built as “blank canvases”—professional environments for creating music from scratch.
But if the primary task shifts from "creation" to "editing and refinement" of AI-generated material, the old approach could lose out. New, "AI-native" DAWs will appear, built from the ground up around synergy with generative models.
To survive and stay relevant, traditional DAWs will have to urgently change their development vector.
In future updates, we must see:
- Deep Integration: not just "Import MIDI," but seamless connectivity with AI services directly from the interface.
- "Cleanup" Tools: functions to automatically correct "dirty" MIDI from AI.
- AI Assistants: built-in arrangement aids that suggest ways to develop an existing MIDI idea.
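The simplest form of the "cleanup" item above is quantization: snapping sloppy note timings onto a rhythmic grid. A minimal sketch, assuming MIDI ticks at 480 PPQ and a 16th-note grid of 120 ticks; the `notes` data and `quantize` helper are invented for illustration, not any DAW's actual API:

```python
def quantize(ticks, grid):
    # Snap a tick position to the nearest grid line
    return round(ticks / grid) * grid

# (tick, pitch) pairs with "dirty" AI-generated timings
notes = [(3, 60), (118, 64), (245, 67)]
grid = 120  # one 16th note at 480 pulses per quarter note

cleaned = [(quantize(t, grid), p) for t, p in notes]
# → [(0, 60), (120, 64), (240, 67)]
```

Real cleanup tools would go further: merging duplicate notes, filtering ghost notes below a velocity threshold, and preserving intentional swing rather than flattening everything to the grid.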
In the coming years, the winner won't be the DAW with the most plugins or the slickest interface, but the one that best combines human talent with the computational power of artificial intelligence. For Cubase and its competitors, the race has already begun.