Feature Request - Console emulation within Cubase mixer :)

I wouldn’t even think it matters in those cases. If an 80-bit word gets truncated, it’s only going to affect the last several positions at most. When it’s converted back to a 32-bit word, that truncation will be rounded away and therefore “lost.” IOW, inaudible.

But I admit my understanding of how digital works is quite limited, so…

Well, it’s converted back to a 24-bit word, not a 32-bit word. The other 8 bits are multiplier bits (or is it 7 bits and a sign bit, I can’t remember), which basically means that sample values at low levels still have 24-bit resolution.
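To make that concrete: in IEEE 754 terms a 32-bit float is 1 sign bit, 8 exponent (“multiplier”) bits, and 23 stored mantissa bits, with an implicit leading 1 giving 24 bits of precision. A quick Python sketch (the helper is mine, just for illustration):

```python
import struct

def float32_fields(x):
    """Unpack an IEEE 754 single-precision float into its raw
    sign, exponent ('multiplier'), and mantissa fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign     = bits >> 31            # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits
    mantissa = bits & 0x7FFFFF       # 23 stored bits (+ implicit 1 = 24)
    return sign, exponent, mantissa

# A near-full-scale sample and one roughly 120 dB quieter carry the
# same number of significant bits; only the exponent differs.
for sample in (0.9999999, 0.0000009999999):
    s, e, m = float32_fields(sample)
    print(f"{sample:.7e}  sign={s}  exp={e}  mantissa={m:023b}")
```

Which is exactly why quiet signals keep their full resolution in a float path: the exponent scales, the mantissa stays at 24 bits.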

Check. But let’s say a particular word that has 24 bits of info is then added to another 24-bit word and then averaged, and it causes the last 2 bits to be rounded or truncated. Even random noise, I would think, introduces this degree of inaccuracy. I question that that degree of difference can be audible.

THD+N of course is used to measure analog devices, but as you asserted earlier there seems to be something of an “N” factor even with digital. That “N” can also be measured and, I assume, even demonstrated audibly by some sort of null test. My theory is that this difference won’t be heard; it wouldn’t even generate a waveform at all.
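For what it’s worth, that null test is easy to sketch in code. A minimal version, assuming both renders are already loaded as float arrays of equal length (the function names are mine, nothing built into Cubase):

```python
import numpy as np

def db(amplitude):
    """Linear amplitude to dBFS, guarding against a perfect null."""
    return 20 * np.log10(amplitude) if amplitude > 0 else float('-inf')

def null_test(original, processed):
    """Phase-invert one render against the other and measure the residue.
    A residual far below -144 dBFS (roughly the 24-bit floor) would back
    up the 'it won't be heard' theory."""
    residual = original - processed
    print(f"peak residual: {db(np.max(np.abs(residual))):.1f} dBFS")
    print(f"RMS residual:  {db(np.sqrt(np.mean(residual ** 2))):.1f} dBFS")
```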

How do you mean averaged? You cannot have an average when there’s only one value, i.e. the result of the addition.

and it causes the last 2 bits to be rounded or truncated. Even random noise, I would think, introduces this degree of inaccuracy. I question that that degree of difference can be audible.

Well, the two samples added would have to be 12 dB over 0 dBFS in order for them to spill over the 24-bit signal path, but as they are added in 80-bit registers there won’t be any clipping until things are returned to 24 bits.

Bear in mind that adding the above signals would give a result that has 26-bit resolution. 32-bit float means the multiplier bits don’t clip but you lose the bottom 2 bits, meaning you at least retain 24-bit resolution.
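You can watch exactly that happen in a couple of lines of Python. This is only a toy model of float summing, not a claim about what Cubase’s mixer does internally:

```python
import numpy as np

# Two just-under-full-scale samples whose mantissas use all 24 bits.
a = np.float32((2**24 - 1) / 2**24)
b = np.float32((2**24 - 2) / 2**24)

s = a + b                       # single-precision sum: no clipping, the
                                # exponent simply steps up by one...
exact = float(a) + float(b)     # ...but against a double-precision
                                # reference the bottom bit is gone.
print(f"float32 sum:    {s!r}")
print(f"float64 sum:    {exact!r}")
print(f"rounding error: {exact - float(s):.3e}")   # nonzero: ~6e-8
```

An 80-bit register (or a 64-bit double) has enough mantissa headroom that the same sum stays exact; the loss only appears when the result is squeezed back into a 24-bit mantissa.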

THD+N of course is used to measure analog devices, but as you asserted earlier there seems to be something of an “N” factor even with digital. That “N” can also be measured and, I assume, even demonstrated audibly by some sort of null test. My theory is that this difference won’t be heard; it wouldn’t even generate a waveform at all.

I would agree it wouldn’t be heard.

I don’t know what bearing this has on the discussion, but since someone mentioned EQ and compression…

When I recorded my heavy metal version of O Come Emmanuel, I used the Kuassa amp simulator to provide the distortion. What I did was record through an outboard pre-amp/compressor > Mackie 1604VLZ3 > M-Audio 2496 > Cubase. But since it’s hard to play any sort of metal without hearing the distortion, I turned on channel monitoring so that I could hear my playing as it would end up.

What I discovered was that the track had a lot of static on it to the point that Tom Zartler noticed it himself when he offered to help me EQ it.

Long story short: this happened on and off, but then it happened again recently. What I discovered was that I was using the stock Limiter on the master bus. When I turned that off, the original, undistorted track had no static on it. (So I left it off and lowered the master bus level for recording, obviously.)

The point is that perhaps there is some internal issue with Cubase’s digital processing such that even the lower-order bits are losing precision.

How do you mean ‘static’? To me, static is random distortion, i.e. crackly noise. Loss of bit precision, if heard, would result in signal-dependent distortion.

If so, I would put that down to being a bug?

I may be wrong about the steps to reproduce. But if I’m not, I’ll try to record an A/B track tomorrow, i.e. play the same line with the limiter on and off, with channel monitoring turned on in both instances. You’ll hear what I’m talking about.

Foolomon: Do you think this bit from the manual (page 74) supports your questioning of whether Cubase loses precision even in the lower-order bits:

“… switching between linear and musical time base results in a very small loss of precision (introduced by the mathematical operations used for scaling values in the two different formats). Therefore, you should avoid switching repeatedly between the two modes …”.

… or is that a different issue?

Thanks!

Interesting. I was not aware of this caveat in the manual. I had never used musical time base until just recently, when I had to confine a rubato piece to some semblance of a grid. After I switched the audio tracks to musical time base, they didn’t sound the same, and I could hear some “warbling” in the audio. That bothered me, since the audio is still the same duration as before; only the underlying tempo map had changed.

I wouldn’t know. I just thought that my observation was interesting given the current discussion.

Some compressors and limiters will sound distorted if the release time is set too short. This wouldn’t be a bug but by design, as some analogue ones do this as well, e.g. the 1176.

Well this was using the Light Peak Limiting preset. (I still use Cubase 4.52, for reference.)

Does this mean loss of precision in the timing of events, or loss of precision in the digital audio signal?

I am quite sure they are talking about event timing. It strongly suggests that internally the raw event time stamp is not kept as part of the event data structure.
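If that’s right, here’s a toy model of where the loss could come from: suppose a musical-mode position is stored in whole ticks, so converting a sample-accurate position to ticks and back quantizes away the sub-tick detail. Every number here (PPQ, tempo, sample rate) is an assumption for illustration; the manual doesn’t say how Cubase actually stores positions:

```python
PPQ, TEMPO_BPM, SAMPLE_RATE = 480, 120.0, 44100   # assumed values

def secs_to_ticks(seconds):
    """Store a position on the musical time base, quantized to whole ticks."""
    return round(seconds * (TEMPO_BPM / 60.0) * PPQ)

def ticks_to_secs(ticks):
    """Convert a tick position back to the linear time base."""
    return ticks / PPQ / (TEMPO_BPM / 60.0)

pos = 12345 / SAMPLE_RATE              # a sample-accurate event position
back = ticks_to_secs(secs_to_ticks(pos))
print(f"error after one switch: {(back - pos) * SAMPLE_RATE:+.2f} samples")
# In this toy the loss happens once; with the floating-point scaling the
# manual describes, repeated switching could keep nudging the position,
# hence the advice not to switch back and forth.
```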