I have been curious about how Wavelab 9.5 and 10 use memory when rendering, so I ran a few diagnostic tests. I am hoping this post might answer the question “Does Wavelab 10 have a hard memory limit?” and lead to faster rendering times in future Wavelab versions.
Here are some statistics on the montage I’m working from today:
- 96 kHz / 64-bit float clips; 10 clips, approximately 4-6 plug-ins per clip, about 50 plug-ins total.
- Wavelab's size in memory is 11.4 GB according to my process monitor.
- My habit is to render a selected audio range by double-clicking in the clip and rendering from edge to edge.
- When I render a clip from the above setup, Wavelab spins its wheels for about 58 seconds before it begins to render. During this time I can watch memory use grow from 11.4 GB to 18 GB before the little progress bar appears with its estimate of processing time remaining. [If, instead of rendering a time selection, I select a clip and render the selected clip (union of selected clips), the same 58-second pre-load and memory growth occur.]
- If I render a time selection from a clip with no plug-ins, the same 58-second pre-render wait and memory loading happen, even though the actual render takes only milliseconds.
- If I copy a clip with plug-ins into an otherwise empty montage (while keeping the source montage open), memory use grows from 11.1 to 11.7 GB, and the pre-render interval is only about 5 seconds.
From what I see, Wavelab appears to process and load every clip of the montage into RAM before it renders out the selected time range. It would be more useful if Wavelab were aware of which clips are actually present in the selected range (or selected as a group) and pre-loaded and pre-processed only that audio. As the tests above show, my total render time is far shorter when I copy and paste clips into an empty EDL for rendering, and this has become my ordinary working method.
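As a sanity check on that inference, here is my own back-of-envelope arithmetic (not anything from Steinberg, and the stereo channel count is my assumption): at 96 kHz / 64-bit float, stereo audio costs about 1.5 MB per second, so the ~6.6 GB jump I observe is roughly 77 minutes of decoded audio, which looks much more like the whole montage than like one selected clip.

```python
# Back-of-envelope: how much decoded 96 kHz / 64-bit float audio fits in the
# observed memory jump? Channel count (stereo) is an assumption on my part.
SAMPLE_RATE = 96_000      # Hz
BYTES_PER_SAMPLE = 8      # 64-bit float
CHANNELS = 2              # assumed stereo clips

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
jump_bytes = (18 - 11.4) * 2**30          # memory growth seen before rendering
minutes = jump_bytes / bytes_per_second / 60
print(f"{bytes_per_second / 2**20:.2f} MiB per second of stereo audio")
print(f"6.6 GiB holds about {minutes:.0f} minutes of decoded audio")
```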
Once in a while, when working fast under time pressure, I will forget to copy/paste the clips and just hit render. Inevitably this happens on complex projects with many clips open. Such a project may have a static memory load of 20 GB, and when Wavelab pre-loads the audio into RAM, usage can climb to 30 GB before Wavelab crashes without an error message. My workstation has 128 GB of fast memory, which brings me to my first question: does Wavelab have a hard limit on the amount of RAM it can use/address?
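On the hard-limit question, here is a quick sanity check based on general Windows address-space facts rather than anything WaveLab-specific (I am assuming the build is 64-bit, which I believe recent Wavelab versions are): the classic 4 GB ceiling applies only to 32-bit processes, and a 64-bit process on Windows 10 can address far more than 128 GB, so a crash near 30 GB does not line up with either ceiling.

```python
# General Windows address-space ceilings (not WaveLab-specific numbers):
# a 32-bit process can address at most 4 GiB; 64-bit user-mode space on
# Windows 10 is 128 TiB. The ~30 GiB crash point matches neither limit.
GIB = 2**30
limit_32bit_gib = 2**32 / GIB         # 4 GiB: total 32-bit address space
limit_64bit_gib = 128 * 2**40 / GIB   # 128 TiB user space on 64-bit Windows 10
crash_point_gib = 30                  # observed above

print(crash_point_gib > limit_32bit_gib)   # reaching 30 GiB rules out a 32-bit build
print(crash_point_gib < limit_64bit_gib)   # and it is far below the 64-bit ceiling
```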
While there is an abort button during the render process itself, there is no way to abort during the pre-loading phase, which is when the high memory use shows up. All you can do is watch the spinning wheel, knowing that Wavelab will crash, and hope you have a fairly recent backup.mon. Could an abort button be added at the beginning of the render process?
While trying to diagnose this failure, I once saw a little green leaf icon appear next to the Wavelab process in the process monitor just before the crash. I suspect that even if Wavelab can access the memory, Windows 10 may have interpreted the high memory use as an uncontrolled background process and aborted it as a safety measure. If so, is there a way to tell Windows that high memory use is expected and allowed for a process?
This is all well and good so far; I have developed workarounds for the high memory use associated with the plug-ins I use. The hardest time I had with this behavior was on a 5.1 surround project for a 2-hour video program (Wavelab 9.5), where the files supplied to me ran the full program length. The only way I could get through rendering that project was to split and clone regions of the audio program into separate files and then paste the processed audio back together. The DAW I was working on at the time had only 32 GB of RAM, and my working montages were limited to about 11 minutes.
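For what it's worth, the same kind of arithmetic explains why that project was so painful on a 32 GB machine, assuming the session was also 96 kHz / 64-bit float (that format is my assumption; only the montage at the top of this post is confirmed to be 96k/64fp):

```python
# If the full-length 5.1 program were decoded into RAM as 96 kHz / 64-bit
# float (assumed format), the audio alone would nearly fill 32 GiB.
SAMPLE_RATE = 96_000
BYTES_PER_SAMPLE = 8   # 64-bit float
CHANNELS = 6           # 5.1
DURATION_S = 2 * 3600  # 2-hour program

total_gib = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * DURATION_S / 2**30
print(f"{total_gib:.1f} GiB")  # ≈ 30.9 GiB, before any plug-in buffers
```

That is before counting plug-in state and Wavelab's own overhead, so splitting the program into shorter files was about the only option on that machine.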
Any help or experience y’all can give me with dealing with this is appreciated.