I have been wondering a bit about the Cubase Audio Warp process, which I do not find very intuitive. As far as I know, other DAWs (e.g. Pro Tools and Studio One) simply detect the transients in the audio material and then adjust them according to the quantize parameters (this may be a bit simplified), whereas in Cubase you have to detect hitpoints and convert them to warp tabs before you can warp your audio material according to your quantize parameters. (The Operation Manual, p. 110, says: “If you have already set up hitpoints, these will be taken. Otherwise, hitpoints are detected automatically.” So even if you “free warp” audio material without detecting hitpoints, I assume hitpoints are detected automatically.)
If I understand the manual correctly, the primary purpose of hitpoints is to create and work with audio slices, while the purpose of warp tabs is to time-stretch audio material. But since both hitpoints and warp tabs are generated from the same transients (when generated automatically), why do the authors/programmers of Cubase distinguish between hitpoints and warp tabs at all?
I think the concepts of hitpoints and warp tabs make the whole subject of audio warping more complex than necessary, just as the audio warp process (detect hitpoints, create warp tabs, warp) is more complex than it needs to be. What is the advantage of having both hitpoints and warp tabs compared to operating with transients only?
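To make clear what I mean by the simpler "transients only" model, here is a toy sketch (my own illustration, not how Cubase or any other DAW actually implements it): detected transient times are snapped to the quantize grid, and each audio segment between two transients gets a time-stretch ratio from that move.

```python
# Toy sketch of transient-based quantizing (not Cubase's actual algorithm).
# All times are in seconds.

def quantize_transients(transients, grid_step):
    """Snap each detected transient to the nearest grid line."""
    return [round(t / grid_step) * grid_step for t in transients]

def stretch_ratios(transients, quantized):
    """Time-stretch factor for each segment between consecutive transients."""
    ratios = []
    for i in range(len(transients) - 1):
        src = transients[i + 1] - transients[i]   # original segment length
        dst = quantized[i + 1] - quantized[i]     # target segment length
        ratios.append(dst / src)
    return ratios

grid = 0.125  # one sixteenth note at 120 BPM (0.5 s per beat / 4)
hits = [0.02, 0.26, 0.49, 0.77]  # hypothetical detected transients
snapped = quantize_transients(hits, grid)
print(snapped)                    # → [0.0, 0.25, 0.5, 0.75]
print(stretch_ratios(hits, snapped))
```

In this simplified picture there is only one kind of marker, which is exactly why I do not see what the hitpoint/warp-tab split adds.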
(Apart from these questions, I find the functionality very useful, although it took me quite some time to master it.)