Feature request: See Sample in MIDI Notes - helps with offset

Many of us are aware that most samples are recorded so that the actual note onset falls after the start of the sample file. This is called offset. It captures the nuances that occur before an instrument actually sounds the note - we notice, unconsciously, when it is missing, particularly in orchestral or acoustic samples.
Each developer uses different sample offset values, and even within a single instrument the offsets may change considerably, say between the offset of a Staccato and a Legato. Developers do not publish these values and it is up to the composer to ferret them out.
Setting an offset at the beginning of a track is not enough, as each note can have a differing value.

If, in the Key Editor, perhaps via a right click or hover, we could bring up a picture of that note's actual audio file, along with a slider, we could alter that specific note's MIDI start position (from an original zero). The composer could then adjust any note so the sound of the attack lands exactly on the beat rather than after it, which would allow all timing to be tightened.
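For anyone who wants to approximate this today by nudging note starts, here is a minimal Python sketch of the timing maths involved. The note list, the per-note attack offsets and the tempo/resolution values are all made-up assumptions for illustration; this is not a Cubase or Digital Performer API.

```python
# Sketch: move each MIDI note earlier by its sample's pre-attack offset
# so the audible attack, not the file start, lands on the written beat.
# All values here are illustrative assumptions, not published figures.

PPQ = 480          # MIDI ticks per quarter note (project resolution)
TEMPO_BPM = 120.0  # project tempo

def ms_to_ticks(ms: float) -> int:
    """Convert milliseconds to ticks at the current tempo."""
    return round(ms * PPQ * TEMPO_BPM / 60000.0)

# Hypothetical notes: start position in ticks, plus the time from the start
# of the sample file to the audible attack (the "offset" discussed above).
notes = [
    {"start": 0,   "pitch": 60, "attack_offset_ms": 15.0},   # short staccato
    {"start": 480, "pitch": 62, "attack_offset_ms": 140.0},  # slow legato
    {"start": 960, "pitch": 64, "attack_offset_ms": 60.0},
]

for note in notes:
    # Negative shift: the note-on is sent early so the attack hits the beat.
    note["start"] = max(0, note["start"] - ms_to_ticks(note["attack_offset_ms"]))
    print(note)
```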

Z

Apparently, Steinberg, Digital Performer is better at this, though I don't think it can handle it note for note within the same articulation:

Could you clarify? The image isn’t at all what you described in your OP.

It’s not the same - true. Yet it’s still better than Cubase. Look at the red circle.

In Cubase, if you load a keyswitch patch that has more than one articulation, there is no way to have one offset for each articulation. You can only have offsets per track. It appears here that Digital Performer can do this.
Staccatos, for example, might only need a short offset, whereas a slow legato needs a long one. Both are often found in one keyswitch patch. This stops composers from using keyswitches and pushes them towards single tracks per articulation in the Project window, which horribly clutters the project with duplicate tracks for one instrument. It can become a nightmare if your melody lines feature multiple articulations. I have argued before for offsets per articulation in Expression Maps, which have now gone more than ten years without an update.
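To make the "offsets per articulation" idea concrete, here is a small Python sketch of how an Expression-Map-style lookup could behave. The articulation names and offset values are invented for the example; they are not figures from any real library or from Steinberg.

```python
# Sketch: one offset per articulation inside a single keyswitch track,
# instead of one offset for the whole track. Values are invented examples.

PPQ = 480
TEMPO_BPM = 120.0

def ms_to_ticks(ms: float) -> int:
    return round(ms * PPQ * TEMPO_BPM / 60000.0)

# Hypothetical table of pre-attack offsets, one entry per articulation.
ARTICULATION_OFFSETS_MS = {
    "staccato": 15.0,   # short attack needs only a small offset
    "legato":   140.0,  # slow attack needs a much longer one
}

notes = [
    {"start": 0,   "pitch": 60, "articulation": "staccato"},
    {"start": 480, "pitch": 62, "articulation": "legato"},
]

for note in notes:
    offset_ms = ARTICULATION_OFFSETS_MS.get(note["articulation"], 0.0)
    note["start"] = max(0, note["start"] - ms_to_ticks(offset_ms))
    print(note)
```

Per-note access, as proposed above, would simply let any entry in that table be overridden for an individual note.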
My proposal is better than both Steinberg's and Digital Performer's approach - it's about access to the offset of each individual note of each articulation. It would still be handy to have offsets per articulation, but, as Sonokinetic recently stated to me:

“Because these are live performances by real musicians the optimal point to switch phrases can vary a little between the products and between phrases within a product. So I’m afraid there is no golden rule here or standard offset!”

In other words, samples of human performances are all individual, and therefore you need an individual offset for each note - not just for each collection of notes - even though setting all staccatos to a short offset would generally be an improvement.

Z