Same here. The first is an SSD in my case, the others are regular HDDs.
… though things like the big orchestral sample libs will easily demand way more RAM than most users would currently have. And I imagine most current Cubase users wouldn’t even have enough RAM for completely loading the dozens of GB of a modern sampled piano.
IIRC, East West strongly recommends using an SSD for its new orchestral libs like Hollywood Strings.
For East West’s older orchestral sample lib “Symphonic Orchestra”, they used to recommend using multiple HDDs to avoid the bottleneck of a single drive - e.g. strings on one HDD, with brass, woodwind and percussion spread amongst two or three others - the idea being to avoid situations where more is demanded from one drive than it can handle, leading to audio glitches.
OTOH, with older/smaller sample sets, and in spite of what’s normally recommended, I’ve occasionally run several MIDI tracks using samples kept on the same physical drive (separate partition) as the OS, or on an external USB HDD (not usually recommended - FireWire or eSATA being better), with no glitches caused by the rate at which data can be fetched from the HDDs.
But I think my next DAW will have at least one SSD for samples. I hope they’ll be a lot cheaper when the time comes. (Or perhaps, by definition, the time won’t come till SSDs are a lot cheaper.)
Too true - and remember how Windows’ memory addressing works, i.e. unlike Linux, it’s a much more potentially unstable situation.
What does that mean? I’ve never had a problem with Windows “memory addressing”, even with my x86 setup and the infamous 3GB switch - it worked just fine for me.
With my ‘new’ x64 setup and 8 gigs of RAM, not a glitch, even though I’m pushing it with some large Omni multis and Trilian patches - not unstable IMHO.
I’d like to know what you mean as well.
No RTFM, but maybe study some computer science?
“Computer science”? I’m still struggling with my guitar playing and only give a hoot as to how ‘stuff’ works in my real, practical life .
I studied computer science for almost a decade at the same university Linus Torvalds did (and at the same time) and watched his process of developing Linux closely … but I still have no idea what you’re talking about.
I studied Computer Science for a year and a half before switching faculties in university (I’m not that good at math).
Explain what you mean!
How is it that Linux builds a memory pointer table better than Windows?
He may refer to:
- page replacement algorithm policy
- disk cache handling policy
or (what I think he’s talking about … but not sure because he refuses to clarify):
- Windows’ policy of dividing virtual memory addresses into “user” and “system” memory depending on the most significant bit of the address (inherited from VMS, which IMO is The Best Operating System Ever)
- Something else I have no idea about … and would really like to know.
I would also like to know whether there’s some crucial performance difference between Linux and Windows in cases #1 and #2. I know what the difference is in case #3, but I would call it minimal … or unimportant.