[SOLVED] Render in Place - inconsistent

A follow-up to the two (2) ongoing topics in the Cubase General Forum

  1. New Project started - 96k - 24-bit

  2. ASIO Guard - Normal

  3. True Pianos was inserted as the instrument

  4. Single MIDI note C3, Velocity 100, quantized to 1/4, Tempo 120, repeated over 12 bars
    4a. RIP - As One Event, Dry, Tail in Bars & Beats, Resolution 24-bit, Source Unchanged

  5. Track results as shown in the attached screenshot.

  6. After one RIP was done, a different instrument was used to replace the original instrument (True Pianos) - other instruments used include HALion Sonic SE2 (original from the Cubase 8 install), Kontakt 5 and Korg Legacy M1

  7. The “exported” track is a single stem - exported and then “Import into Project”, with all other audio engine settings as per the project.

  8. SIDE NOTE - other experimentation
    8a. Changed various ASIO Guard settings - results were also different.
    8b. Enabled/disabled ASIO compensation per instrument track - no difference in results.

System Information
Windows 7 (SP1) x64 - workstation offline
Yamaha USB Driver v1.9.5, 96k, 192 Samples
All other info in sig.

Hope this helps… cheers all!!!

EDIT: Ooopsss… left off the attached image


2nd EDIT: Pls find below an updated report for this issue (14 Apr 2016)

I cannot reproduce this at all on Win7 64-bit and Cubase Pro 8.5.15.
I was curious myself now and did some testing with RIP and Batch Export.
What I did:

I created an audio track and placed a kick drum sample on it. I cut it to fit the grid and then bounced it to a new audio file, so that the event reflects the whole audio file. I then copied the event 10 times and placed the copies on the grid. The project is at 96 kHz/24-bit. The ASIO buffer is set to 1024 samples the whole time, resulting in 11 ms in/out latency.

I then used 5 different instruments to trigger the sample from the original audio track, all at the same time as the original file.

This then looks like this:

I then Rendered in Place the original track and the 5 instrument tracks simultaneously, at 24-bit, with separate events and tracks for each. ASIO Guard is at Normal.

That looks like this:

The same with ASIO Guard turned off:

It looked the same with Batch Export.

I can confirm that issue.

I did several tests myself and was not able to get a 1:1 recording of any sort.

Thanks for posting this. Looks like the first RIP was quite accurate with True Piano, but the subsequent RIP with True Piano was not?

I could see several ways where RIP might not be to blame for differing rendering results:

Was it the same piano sample used in both cases? It’s possible different instruments could have different attacks and give results similar to yours (where there is variance in rendering results between them), even with RIP working properly. Maybe a sharp-attack instrument, like a clave or drumstick would be helpful to look at …

Or even the same instrument, given the same MIDI data, might not spit out the same audio, depending on the programming (for example, Jamstix may not spit out the same drum hits 2 or 3 times in a row).

Finally - a comparison of the same instrument RIP’d multiple times would be interesting to see. Even if the renders are “off” from the MIDI, if they also null against each other, it would suggest the problem isn’t with RIP, but rather that different instruments render the same MIDI data differently from each other.

Just some thoughts … it would be nice to get to the bottom of what caused such variation in your results!

Continued from the first post; this edit is dated 14-4-2016.

There may be some confusion as to how the original test was done. Hence the following is a totally brand-new test. Pls find below a hopefully clearer methodology for the process.

  1. Started a new Cubase project
  2. Created a single MIDI track - single note C3, Vel = 100, 1/8 length, on each 1/4 beat, repeated over 4 bars, Tempo 120
  3. Inserted five (5) instrument tracks - True Pianos, Kontakt 5 (Alicia’s Keys), Korg Legacy M1, Real Guitars RealLPC & Steinberg HALion Sonic SE (Yamaha S90ES Piano) [original install from Cubase]
  4. Copied the MIDI track to each individual instrument (5 copies in total, all starting at 0)
  5. Disabled the original MIDI track
  6. Proceeded with the RIP process for each instrument individually. (RIP settings - As One Event, Dry, Tail in Bars & Beats, Resolution 24-bit, Source Unchanged)
  7. All other settings as per the original experiment.
  8. ASIO Guard is on and set to NORMAL.

Pls find the attached screenshot for this particular experiment. Each rendered instrument appears to have a different start time.


hope this helps… Also many thanks to all who have contributed.

I did understand your test. The problem with your test is that every instrument you tested used different samples, so you cannot exclude the possibility that the differences come from the samples themselves. All the samples might have different envelopes, or simply a small pause at the start because they weren’t cut correctly.

The way I tested shows that there is no inconsistency across different instruments when using the exact same sample for all of them.
In fact, in every case the instruments were absolutely sample-accurate against the original track.

We are looking out for different things here.

I am well aware of ADSR in samples, and since I am using different instruments from different manufacturers, it should be obvious that this is the premise of the test: different manufacturers, different instruments, different samples and different ADSR.

What I am trying to establish are the different start times. Zoom in and you will see that the start times do not line up.

If you claim this is due to ADSR, let me explain: the attack starts with a fade-in to the sample’s designated level, and the rest plays as is. As said, zoom in and look at the Kontakt example in the attached picture… you will clearly see that the .WAV release has crossed into the start time of the MIDI note just above. The next attack is delayed because of that.

I don’t think we are looking for different things here.

You are trying to prove that RIP is inconsistent in timing when using different instruments with their respective samples. They all have different start points in your picture; that’s true. But does this prove it’s an issue with RIP? No, it doesn’t. If that is not what you are trying to show, then I really don’t understand your test.
In my test I used different instruments as well, including Kontakt 5. But instead of their own samples, I loaded my own sample, whose exact start point I know, into every instrument. Every instrument rendered the sample exactly in time when triggered at the same start point as the original sample.

So if you ask me, there is no inconsistency with RIP here.
If you say that RIP does not render correctly in time, then try it the way I did the test, to see whether there is a problem with RIP or just with the samples of the instruments you used.

The reason I said that our conclusions are different is that I am looking at start-time latency, not sample start time. Pls find below all the .WAV diagrams taken from Sound Forge. All the .WAV files were taken from the Audio folder as generated by Cubase.


Each .WAV diagram is zoomed to the best view within Sound Forge.

In each of the pictures, the start-time latencies are several samples away from 0, the closest being 8 samples for the HALion Sonic SE RIP. The rest are much further off, with Kontakt the furthest at 531 samples.

If the MIDI note that triggers these instruments starts at 0, technically all sample start times should start at 0 too. However, given that all VST and VSTi plug-ins have some form of latency, it is understandable that some delay is introduced. During RIP, the system should then manage and compensate for this latency to within ‘x’ samples, on the assumption that on this system the latency is ‘x’ samples.
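To put those sample offsets into time units, here is a minimal sketch (Python, not part of the original test; the instrument-to-offset mapping simply restates the figures from this thread):

```python
# Convert a render offset measured in samples into milliseconds,
# given the project sample rate (96 kHz in the original test).

def offset_ms(samples: int, sample_rate: int = 96_000) -> float:
    """Milliseconds corresponding to `samples` at `sample_rate`."""
    return samples * 1000.0 / sample_rate

# Offsets reported in this thread (samples away from 0):
offsets = {"HALion Sonic SE": 8, "Kontakt 5": 531}
for name, n in offsets.items():
    print(f"{name}: {n} samples = {offset_ms(n):.3f} ms")
# → 8 samples is about 0.083 ms; 531 samples is about 5.531 ms
```

So the HALion offset is well below anything audible, while the Kontakt offset is in the several-milliseconds range.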

The inconsistency here is that the start-time latency differs across all the RIP’d instruments. This is true whether Latency Compensation is enabled or not. Again, this does not refer to sample start time.

I’m not being difficult here, expecting or demanding 0 latency. The current workaround is to nudge the .WAV back to 0 by the measured number of samples, or by dragging it visually.
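The nudge-back-to-zero workaround could also be done offline. A rough sketch using Python’s standard `wave` module (the file names and the sample count are placeholders, and it assumes a plain PCM .WAV as Cubase renders):

```python
import wave

def nudge_wav(src: str, dst: str, n_samples: int) -> None:
    """Write a copy of `src` with the first `n_samples` frames removed,
    pulling the rendered audio back toward 0."""
    with wave.open(src, "rb") as r:
        params = r.getparams()
        r.readframes(n_samples)            # skip the latency offset
        rest = r.readframes(params.nframes)  # read whatever remains
    with wave.open(dst, "wb") as w:
        w.setparams(params)                # frame count is patched on close
        w.writeframes(rest)

# e.g. pull a render back by 531 samples (hypothetical file names):
# nudge_wav("kontakt_rip.wav", "kontakt_rip_nudged.wav", 531)
```

This is destructive to the first few samples, which is fine when they are pure latency silence.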

The MIDI file and all 5 RIP’d .WAV files are attached below. You may review these in other audio editors too.


hope this helps…

RIP works perfectly fine here; when rendering a VSTi it won’t be sample-accurate, as it is MIDI.
I have tested with the onboard VSTis, running Render in Place with different buffer sizes, and they nearly null.
If I had an issue with RIP, I would take a look at the ASIO driver; maybe test with a generic driver to exclude the possibility of a driver-related issue.
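“Nearly null” can also be checked outside the DAW. A small sketch of such a null test (Python stdlib only; it assumes two equally long, mono, 16-bit PCM renders, which simplifies the 24-bit project discussed above):

```python
import wave
import struct

def peak_difference(path_a: str, path_b: str) -> int:
    """Peak absolute per-sample difference between two mono 16-bit PCM WAVs.
    0 means the files null completely when one is phase-inverted."""
    def samples(path):
        with wave.open(path, "rb") as r:
            raw = r.readframes(r.getnframes())
        return struct.unpack(f"<{len(raw) // 2}h", raw)
    a, b = samples(path_a), samples(path_b)
    return max(abs(x - y) for x, y in zip(a, b))
```

A peak of 0 is a perfect null; a few LSBs of residue would still be inaudible, while a timing offset between the renders shows up as a large peak.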


The problem in your thinking here is that you assume that every sample starts exactly at its transient.

Let’s have a look at the Alicia’s Keys samples you used in Kontakt 5.

As you can see in the wave editor, there is a gap of about 25 ms from the starting point of the sample to the transient of the actual hit of the note. In all these samples there is about 25 ms of silence before the actual transient.

Maybe now you understand why your test environment is not telling the truth correctly. In order to reveal a problem with RIP, you have to use a reference sample whose starting point you know exactly. If you then use this file in Kontakt, trigger it at the exact same time as the original sample, and they don’t line up perfectly, then there is indeed a problem with RIP.
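The built-in silence described here can be measured rather than eyeballed. A minimal sketch (Python stdlib; it assumes a mono 16-bit PCM export of the sample, and the threshold value is an arbitrary choice):

```python
import wave
import struct

def leading_silence_ms(path: str, threshold: int = 32) -> float:
    """Milliseconds from the start of a mono 16-bit PCM WAV to the
    first sample whose magnitude exceeds `threshold`."""
    with wave.open(path, "rb") as r:
        rate = r.getframerate()
        raw = r.readframes(r.getnframes())
    data = struct.unpack(f"<{len(raw) // 2}h", raw)
    for i, value in enumerate(data):
        if abs(value) > threshold:
            return i * 1000.0 / rate
    return len(data) * 1000.0 / rate   # the whole file is silence
```

Run against such a piano sample, this should report something close to the ~25 ms gap visible in the wave editor, which would account for the rendered start-time differences without any fault in RIP.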

I recommended using a sharp-onset instrument like a clave or drumstick. I think running the test that way would help clarify how much of a problem there is, if any.

Hi Novikthewise, you put up a strong case, and I thank you for presenting the light at the end of the tunnel.

I admit those were my assumptions all along. You are quite right that I did not choose to use a custom reference sample to begin with. No doubt the results have been misinterpreted.

However, moving forward, I’m not sure where we should go from here. Some posters said that RIP has issues; others said there are none. There was a post that also said that “normal audio mixdown” has issues too…

https://www.steinberg.net/forums/viewtopic.php?f=230&t=75106 take a look though…

Honestly, I never suspected anything out of the ordinary using RIP. I have worked it deep into my workflow - simple MIDI to audio (dry), nothing more. I got suspicious as there were several recent posts about it, and one of them contained a confirmed bug. Hence I took it upon myself to look into it.

Perhaps it’s time to put this to rest until someone else has something more definitive to contribute. It may be more efficient to trust the system and carry on as usual while letting this pass. Many thanks to Peakae too - I know you have also contributed in the other discussions.

Maybe the MODs would like to have the final say and consider this issue solved? - user error…

many thanks all… Cheers!!!

One thing that can create an offset is the ASIO driver. There are still some drivers that use safety buffers which are not reported back.


Thank you all - I would leave the thread open and move it to Issues -> Miscellaneous. I followed it with interest; it might be useful for other users. If it can help someone resolve or understand an issue, that is added value!
@keysz: you can add [solved] to the title if you like.

About the thread https://www.steinberg.net/forums/viewtopic.php?f=230&t=75106 and in particular the image https://www.steinberg.net/forums/download/file.php?id=10504. The last sample in the upper event is ‘cut’, I do believe the audio engine ‘rounds up’ the render to the ‘full’ sample, being the ‘smallest unit’.
If an issue exists there, I think it has to do with the graphical representation of the event, rather than the event itself.
The difference here would be really small, though, as looping it 100 times would yield a total difference of less than 3 ms (considering the full sample’s length, at 48 kHz).
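That “less than 3 ms over 100 loops” figure follows directly from the sample math; a quick sketch (assuming, as a worst case, one full sample of round-up per rendered event, with the events looped back to back):

```python
# Worst-case drift when each rendered event is rounded up to the next
# full sample and the copies are looped back to back.

SAMPLE_RATE = 48_000   # Hz, as in the post above
ROUNDING = 1           # samples gained per event (assumed worst case)
LOOPS = 100

drift_ms = LOOPS * ROUNDING * 1000.0 / SAMPLE_RATE
print(f"total drift after {LOOPS} loops: {drift_ms:.3f} ms")  # ~2.083 ms
```

So even in this contrived worst case the accumulated drift stays around 2 ms, consistent with the estimate above.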

Thx for reply!

In your experience, is that enough to cause phasing?

Good question, in the vast majority of cases, I’d say no.

In cases where the rendered file is layered on another identical sound source, looped several times and panned to the same position, it might, and after some repetitions most likely will, create phasing. I don’t see a use case here, as one usually layers (at least slightly) different sounds to get a certain result. But with particular sounds it could yield an unwanted outcome. Layering a kick, for example.
But again, such a difference (the 3 ms I mentioned) would only occur when looping the same event 100 times, with one event ending where the next one begins (in the case of a kick, I usually don’t have events right next to each other, but that’s just how I work) - you could start to hear some artifacts after several repetitions in this case.

I didn’t have time to think about, let alone test, all of the use-cases and scenarios that open up here :neutral_face: