Currently VariAudio can analyze an audio part and export MIDI pitch data.
The possibilities for expanding the analysis and data export are fantastic and deserve serious consideration. The audio analysis tools already exist in Cubase; what is needed is the programming to capture the results and convert them into values from 0 to 127.
The most straightforward example of how this might be used is MIDI Continuous Controller data. I’m no audio expert, but imagine if you could record a vocal into an audio track not to incorporate it into the finished product, but to generate MIDI CC data. You pick an easily measurable feature of the audio, like amplitude, then scale it to a number between 0 and 127. You then export that data, and BOOM! You have something that can be assigned to any MIDI Continuous Controller, like Volume, Breath Controller, or Expression. This would let you control any parameter of any VST instrument using CC data driven by the most natural, the most intuitive, the most expressive of instruments: the human voice. It is capable of far more than just pitch.
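Just to make the idea concrete, here is a minimal sketch of the amplitude-to-CC step in Python. The function name and the window size are my own assumptions for illustration, not anything Cubase exposes; it simply takes raw samples in the -1.0 to 1.0 range and emits one 7-bit CC value per analysis window based on the window's peak level.

```python
def amplitude_to_cc(samples, window=256):
    """Convert raw audio samples (-1.0..1.0) into 7-bit MIDI CC values
    (0..127), one value per analysis window of `window` samples.
    Hypothetical sketch; `window=256` is an arbitrary assumption."""
    cc_values = []
    for start in range(0, len(samples), window):
        frame = samples[start:start + window]
        peak = max(abs(s) for s in frame)          # crude envelope follower
        cc_values.append(min(127, round(peak * 127)))  # scale to 0..127
    return cc_values


# A silent window followed by a full-scale window:
print(amplitude_to_cc([0.0] * 256 + [1.0] * 256))  # → [0, 127]
```

A real implementation would use a smoothed RMS envelope rather than a raw per-window peak, but the scaling to 0–127 is the whole trick.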
You would have a revolutionary MIDI controller. Why no one has done this is beyond me; it simply makes too much sense. I can control my voice far better than I can control a breath controller, a MIDI fader, or a knob. Not only that, but there is immediate feedback. I am a VSL user. If I want to control the volume and expression of an orchestral instrument, I turn first to Velocity Crossfading using CC2. So instead of trying to record curves with some controller device, I simply sing the part I want to play, including the pitch, of course, but also the dynamics. There’s no reason I shouldn’t then be able to export it to a MIDI track that captures the nuances of what I sang.
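The export step could work roughly like this: take the per-window CC curve and turn it into timestamped CC2 events for a MIDI track, writing an event only when the value changes. Everything here is a hypothetical sketch; the sample rate, PPQ, and tempo defaults are assumptions, and controller 2 is chosen because that is what VSL's Velocity Crossfade listens to.

```python
def cc_curve_to_events(cc_values, window=256, sample_rate=44100,
                       ppq=480, bpm=120):
    """Convert a per-window CC curve into (tick, controller, value) events
    suitable for a MIDI track. Repeated values are skipped so only changes
    are written. All defaults are illustrative assumptions."""
    seconds_per_window = window / sample_rate
    ticks_per_second = ppq * bpm / 60          # 960 ticks/s at the defaults
    events, last = [], None
    for i, value in enumerate(cc_values):
        if value != last:                      # only emit on change
            tick = round(i * seconds_per_window * ticks_per_second)
            events.append((tick, 2, value))    # controller 2 = CC2 (Breath)
            last = value
    return events


print(cc_curve_to_events([10, 10, 20]))  # → [(0, 2, 10), (11, 2, 20)]
```

From there, a library such as mido could write the events into an actual .mid file, ready to drop onto a VSL instrument track.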
Other parameters could be captured and put to good use as well.
You could even sell it as a separate, revolutionary new plugin.