How does Wavelab use available RAM?

Here we use a Win 11 PC with 128 GB RAM.
When loading several files of 1 GB up to 10 GB, WL does not seem to use the available RAM to the max. Why?

How does WL handle those files?

It would be nice to see WL use all the available RAM for its purpose.

If the files are 10 GB, what do you expect WaveLab to put into the rest of the memory?

Well, if we load 5 files of 10 GB each, I would expect the available RAM to decrease by 50 GB. This does not happen. Why?

The question is: does WL do all its calculations in RAM, or does it rely on some disk swapping?

WaveLab never loads a file completely in memory. That would make no sense.

Why that?

10 years ago I would have agreed, but not now in 2026.

A modern computer with 128 GB of memory or even more is nothing special anymore.
Having it all in memory would increase speed significantly.

How about giving the user the choice of how to work?

This is of course for optimized performance. Allocating a lot of RAM could in fact lower performance. But this is a programming question that does not belong here. This is all about virtual memory and caching. I will just give a simple analogy.

Think of it like reading a giant book: you don’t memorize every page before you start reading (that’s loading into RAM). Instead, you keep the book on the table and just look at the specific page your eyes are on. The computer ‘maps’ the file so it can grab exactly what it needs, exactly when it needs it, without filling up and slowing down your system.
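The "book on the table" analogy maps to memory-mapped file I/O. Here is a minimal Python sketch using the standard-library `mmap` module (the file name and sizes are made up for the demo); the OS pages in only the regions that are actually touched, rather than reading the whole file up front:

```python
import mmap
import os

# Create a sample file standing in for a large audio file (hypothetical name).
path = "example.dat"
with open(path, "wb") as f:
    f.write(b"\x00" * 1024 * 1024)  # 1 MiB of zeros

with open(path, "rb") as f:
    # Map the file instead of reading it all: the OS faults in
    # only the pages we actually access.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Touch a 4 KiB region in the middle; only that region is paged in.
        chunk = mm[512 * 1024 : 512 * 1024 + 4096]
        print(len(chunk))  # 4096

os.remove(path)
```

This is why loading files does not visibly consume RAM: the mapped pages live in the OS page cache and are reclaimed whenever memory is needed elsewhere.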


Thx for the brief explanation of why…

We came to this issue while converting several terabytes of WAV into FLAC.
It seems to be more efficient to have as much memory in use as possible when using 8 or more tasks in batch processing.
We had some files of over 30 GB, and those would fit easily in memory at once, instead of reading chunks again and again.
(Besides just converting, there were some other tasks like normalization etc. in the process queue)

This is the first point where loading a complete file into memory slows things down instead of speeding them up.

If the processing task has to wait until the full file is in memory, that takes a lot of time, even with SSDs and fast memory. Loading in pieces means the data processing can start as soon as the first piece is loaded. When that piece has been converted, it can be written back to disk.

In the meantime the task has loaded more pieces of the file and, again, continues processing immediately. So there are parallel tasks (CPU and I/O) working on the file, without one waiting for the other.
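The pipeline described above can be sketched in a few lines of Python: a reader thread feeds pieces into a bounded queue while the main loop processes them, so I/O and CPU work overlap. The chunk size and the `transform` step are placeholders, not WaveLab's actual implementation:

```python
import queue
import threading

CHUNK = 4  # tiny chunk for the demo; real audio chunks would be megabytes

def pipeline(data: bytes) -> bytes:
    """Process `data` piece by piece: a reader thread loads ahead while
    the main loop processes, so neither waits for the whole file."""
    q = queue.Queue(maxsize=2)  # bounded: the reader blocks if CPU falls behind

    def reader():
        for i in range(0, len(data), CHUNK):
            q.put(data[i:i + CHUNK])
        q.put(None)  # end-of-file marker

    def transform(piece: bytes) -> bytes:
        return piece.upper()  # hypothetical stand-in for real processing

    threading.Thread(target=reader, daemon=True).start()
    out = bytearray()
    while (piece := q.get()) is not None:
        out += transform(piece)  # runs while the reader loads the next piece
    return bytes(out)

print(pipeline(b"abcdefgh"))  # b'ABCDEFGH'
```

The bounded queue is the key design choice: it caps memory use at a couple of chunks regardless of file size.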

Because this leaves room for more files, and the conversion of a single file is a sequential task anyway, it is possible to load several files at once to take advantage of multiple CPU cores.
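One file per worker is easy to express with `concurrent.futures`. A minimal sketch, assuming a hypothetical `convert` step and made-up file names; for a CPU-bound codec you would swap in `ProcessPoolExecutor` so each file gets its own core:

```python
from concurrent.futures import ThreadPoolExecutor

def convert(name: str) -> str:
    # Stand-in for converting one file; real code would stream
    # the file in chunks, as described above.
    return name + ".flac"

# Hypothetical batch: one worker per file. Within each worker the file
# is still read piece by piece, so total RAM stays bounded.
files = [f"take{i}.wav" for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(convert, files))
print(results)
```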

As @PG1 wrote, there is more to this (CPU Cache, Memory Mapping, etc.) than just loading everything in memory and thinking it will speed it all up.

And do not compare this to what is known these days as in-memory databases; that is a completely different world.

Sorry, but this is just nonsense.

It takes only a few seconds to load 30 GB at once into memory.

The conversion of 8 tasks on 8 (or more) CPU cores takes place in parallel (or at least it should).

So, using 128 GB of memory, take 10 WAV files of 10 GB each = 100 GB, all in memory. That's what we do today, not 10 or 20 years ago.
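As a rough sanity check on the "some seconds" claim, here is a back-of-envelope estimate. The sequential-read rates below are typical ballpark figures I am assuming, not measurements of any particular machine:

```python
# Rough load-time estimates for a 30 GB file (assumed sequential read
# rates; actual numbers depend on the drive, bus, and file system).
size_gb = 30
for drive, gb_per_s in [("PCIe 4.0 NVMe SSD", 7.0),
                        ("SATA SSD", 0.55),
                        ("7200 rpm HDD", 0.2)]:
    print(f"{drive}: ~{size_gb / gb_per_s:.0f} s")
```

So "a few seconds" holds only for a fast NVMe drive; on SATA or spinning disks the full-file load alone takes the better part of a minute or more.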

Seems like some people still live in the late '90s or early 2000s.

I thought something like this would come back from you, and now you also start insulting.

You have not understood a single bit of what I tried to explain, just as you did not understand what parallel data processing is in the Cubase thread.

From now on you are on my mute list; it is wasted time to answer any of your postings.

Best proof that you do not know anything about operating-system technology.


Yes, that is nonsense, indeed!


Depends on the storage, right?