Hello there, music experts. I'm an app developer working with a couple of artists to build an iPad app with voice-activated artificial intelligence. I have all the app parts working: you can talk to the iPad and it talks back.
However, the context for this app is that the iPad will be housed in a sculpture (actually a seesaw), and the audio input and output will go through an external mic and speakers, all routed through a Steinberg UR22mkII. So audio picked up by the mic needs to come into my iPad app for processing, and audio generated within the app needs to be routed back out through the speakers.
In iOS development there's the concept of an 'AVAudioSession', which is the single main software object that controls how your app interfaces with the audio hardware, and it selects the appropriate hardware input on your behalf. Before activating the session, you set it to a category of 'Playback', 'Record' or 'PlayAndRecord' (there are other options, but they're not relevant here). 'Playback' and 'Record' work fine, and the app automatically picks up the USB output/input from the UR22mkII, but if I try 'PlayAndRecord' it just doesn't work: the session falls back to the iPad's built-in mic and speaker instead of the USB interface.
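For reference, here's roughly how I'm setting the session up. This is a minimal sketch using the real AVFoundation API names (`setCategory`, `setPreferredInput`, `setActive`); the explicit preferred-input step is something I'm experimenting with, not necessarily the fix:

```swift
import AVFoundation

func configureAudioSession() throws {
    // The shared session is the single object that mediates between the
    // app and the audio hardware.
    let session = AVAudioSession.sharedInstance()

    // .playAndRecord is the category that should allow simultaneous
    // input and output through the same session.
    try session.setCategory(.playAndRecord, mode: .default)

    // If iOS has routed input somewhere other than the USB interface,
    // explicitly ask for the USB input port (the UR22mkII shows up as
    // a .usbAudio port).
    if let usbInput = session.availableInputs?
        .first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usbInput)
    }

    // Activate the session once category and preferred input are set.
    try session.setActive(true)
}
```

With 'Playback' or 'Record' alone I don't need the `setPreferredInput` call at all; the USB route is chosen automatically.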
Has anyone had experience with this? Is there some configuration of my hardware (or of the session) that might solve this problem?