The real issue is the other way around. A Neural DSP amp sim is not an instrument (it is in Logic, but that doesn’t make a difference); it is an insert, and it needs MIDI to control the pedals. The same goes for any sim.
You have to make a separate MIDI track and send it to the insert on the audio track. Normally, though, you want the recording of the guitar to be clean, so you send the performance audio track direct to a group, which then sends to an FX track with the sim, and maybe to more than one FX track in parallel, and maybe to a chain of FX tracks, because CPU performance is surprisingly better when you put heavy inserts on their own FX tracks. You route those to the audio track you are recording on (I call that the “Tape”, because it gives an analogy that is easy to think about). So now you have MIDI and audio all going to the same place at the same time.
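To make that signal flow concrete, here is a rough sketch of the routing as a plain-Python model. This is not a Cubase API or script, just a picture of the fan-out; all the track names are mine.

```python
# A plain-Python model of the routing described above. Not a DAW API,
# just a way to make the parallel paths explicit. All names are made up.

ROUTING = {
    # the clean performance is recorded here and stays clean
    "Guitar Perf (audio)":  {"sends_to": ["Guitar Group"]},
    # the group fans out to one or more FX tracks, parallel or chained
    "Guitar Group (group)": {"sends_to": ["Amp Sim A (FX)", "Amp Sim B (FX)"]},
    # heavy inserts each get their own FX track for better CPU behaviour
    "Amp Sim A (FX)":       {"insert": "Neural DSP sim", "sends_to": ["Tape (audio)"]},
    "Amp Sim B (FX)":       {"insert": "Neural DSP sim", "sends_to": ["Tape (audio)"]},
    # the "Tape": the audio track the processed result is recorded onto
    "Tape (audio)":         {"sends_to": []},
    # meanwhile a separate MIDI track drives the pedals inside the inserts
    "Pedal CCs (MIDI)":     {"sends_to": ["Amp Sim A (FX)", "Amp Sim B (FX)"]},
}

def walk(track: str, depth: int = 0) -> None:
    """Print the fan-out from one track so the parallel paths are visible."""
    print("  " * depth + track)
    for dest in ROUTING.get(track, {"sends_to": []})["sends_to"]:
        walk(dest, depth + 1)

walk("Guitar Perf (audio)")   # the audio path
walk("Pedal CCs (MIDI)")      # the MIDI path, ending at the same inserts
```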
An instrument track gives you the ability to record MIDI on the same track as the instrument. But that doesn’t result in audio unless you send the audio from that track to a group, or bus, and then to the audio track. If you are interested in the audio file, not the MIDI file (so you can turn off the instrument once the audio is created), then you have to do it like that. This is how a lot of composers work, and how you get massive templates with everything you need in them: you move everything to audio as you go. Maybe record in place, maybe render in place; either way. Recording as you go is more efficient and simplifies the scripting (macro/PLE) in the template. Wouldn’t it be easier if this were all one thing?
The performance requires CC input (and usually notes for expressions, though you can use CCs with expression maps), and that input doesn’t always go to the instrument track. Sometimes it goes to an FX track that the instrument sends to. So again, you have audio and MIDI always taking the same path.
I almost always have MIDI and audio going to the same place at the same time. There are only ever cases where you don’t need the audio; there is never a case where you don’t need MIDI. Well, maybe there is for you, so I should have said there is never a case where -I- don’t need MIDI. I don’t mic drums; they are MIDI (except when they aren’t, but then they are still MIDI too). I don’t mic amps; they are virtual. I don’t mic anything but vocals, and then you also need MIDI. I guess there are also various acoustic sources, but generally those are for making sample instruments, and even then I need MIDI. In a full “in the box” setup, that is the way it works. You always need MIDI.
The traditional solution is Quick Controls, but with guitars you tend to need more than 8. Not for a single song, perhaps, but the setup is different for every one. You need to be able to keep it as simple as a pedalboard, and that means something like ten on/off switches and two variable controls, so Quick Controls are not enough. Likewise, it is good to have similar controls in similar places, so the effect you are controlling on a vox kit needs to mimic the one on the guitar; otherwise you are fumbling around trying to figure out which one is which all the time. You don’t want to be in the middle of a take and have to read, or have just switched from guitar to vox and do the wrong thing.
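To put a number on that, here is a sketch of what a pedalboard-style layout needs, with hypothetical CC assignments (the numbers and labels are mine, not any standard):

```python
# A sketch of a "pedalboard" control layout, with made-up CC numbers.
# The point is one fixed layout reused everywhere: the same physical
# control always maps to the analogous effect on every kit.

PEDALBOARD = {
    # ten on/off switches...
    20: "drive 1", 21: "drive 2", 22: "chorus",  23: "phaser", 24: "flanger",
    25: "tremolo", 26: "delay",   27: "reverb",  28: "comp",   29: "gate",
    # ...plus two variable controls
    11: "wah / filter (variable)",
    7:  "level (variable)",
}

# One layout, many targets: CC 26 always means "the delay", whether the
# track under the fingers is the guitar FX chain or the vox kit.
TARGETS = {"guitar": "Amp Sim A (FX)", "vox": "Vox FX (FX)"}
```

Twelve controls for one instrument already exceeds the eight Quick Controls, before you add a second kit.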
With composing, you want the same expression on the same notes (or CCs), and expression maps solve this nicely, but you still need all of that MIDI coming in over a different channel than the actual notes so they never clash. So again, you are routing MIDI. If you are playing flautando strings on the left hand and legato reeds on the right, then you need the routing to represent the difference, otherwise you get it wrong, and that can be frustrating. So you can’t get away from the routing concerns altogether. But if you could treat the non-note input differently from the notes, aftertouch, etc., that would help, and expression maps allow you to do this. Imagine if you could tell the track where the notes were coming from separately from where the expression was coming from; then you could set up each expression map the same way without worrying about which hardware setup was going to be used. You would tell the track instead. The map wouldn’t have to be built around, and locked to, a particular piece of hardware on a particular channel.
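Here is a minimal sketch of that split, done outside the DAW with the mido library: note-type messages go to the track on one channel, expression CCs on another, so the expression map never clashes with the notes. The port names are placeholders for whatever your setup exposes.

```python
# A minimal sketch (using mido) of splitting note input from expression
# input by channel, so the split lives in one place instead of being
# baked into every piece of hardware. Port names are placeholders.
import mido

NOTE_LIKE = {"note_on", "note_off", "aftertouch", "polytouch", "pitchwheel"}

with mido.open_input("Keyboard") as source, \
     mido.open_output("To Track") as track_out:
    for msg in source:
        if msg.type in NOTE_LIKE:
            track_out.send(msg.copy(channel=0))   # notes to the instrument
        elif msg.type == "control_change":
            track_out.send(msg.copy(channel=1))   # expression on its own channel
```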
What this means is it would be nice for every track to have expression, note, and audio input as separate settings, and to route them all together.
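Sketched as a hypothetical per-track config (this is the feature request, not anything that exists), it would boil down to something like this:

```python
# A hypothetical per-track input config: three independent sources that
# all land on the same track. Pure illustration of the request above.
from dataclasses import dataclass

@dataclass
class TrackInputs:
    note_source: str        # where notes/aftertouch come from
    expression_source: str  # where the CCs for the expression map come from
    audio_source: str       # where the audio comes from

guitar = TrackInputs(
    note_source="none",                       # no notes needed here
    expression_source="Pedalboard Controller", # pedals drive the sim
    audio_source="Guitar Group",               # the clean performance
)
```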