Once the samples are read into memory, where they are taken from matters little.
I put everything that’s Kontakt on a separate drive, because Kontakt does disk-streaming. Everything that doesn’t stream goes on the OS drive.
As far as I know, nothing I own streams that isn’t a Kontakt library. That includes Spectrasonics, Reason, and Toontrack, so these go on the OS drive. Unlike Kontakt, they are fully loaded into memory up front rather than read from disk as the project plays back.
I rarely use HALion, but from what I understand it works very much like Kontakt.
I’m working with 2GB of RAM, so I have to use small sample sets, but libraries can be deceiving in terms of how much RAM they really need.
Kontakt of course has DFD; unless there’s a setting I’m not aware of, I’ve never touched it. I did notice, though, that renders sometimes missed certain sounds that had not yet been triggered and so had not yet loaded.
I can’t say anything about HALion 4 because the demo was so thoroughly useless, and I say that with a heavy heart, as I normally dig everything Steinberg does.
To add: I do have some samples on another drive. With computer hardware prices coming down, I have relegated my old IDE drives specifically to this purpose and use SATA for the OS, the DAW, and other installed applications.
Keep in mind the continual improvement both in hard drive/controller performance and in the OS’s ability to access the drive more efficiently.
And there is a point, which cutting-edge computers may already have reached, where what you put where doesn’t really matter except in extreme circumstances.
And in those extreme circumstances, in addition to your four, five, six, ad nauseam drives, you would also have a separate controller for each drive. In the same vein, even if you’re only using the now-standard three-drive complement (OS / Audio / Samples), you would maximize your performance by putting your OS and Samples drives on the now-standard two ports of one controller, with your Audio and CD/DVD drives on the other controller.
The spinning-disk hard drive has traditionally been (and probably will continue to be) the “weak link” in computer performance. But there is a point where improvements in technology enable a product to meet a specified requirement, and as I noted above, in some instances we are already at that point. Past it, faster hardware provides only an infinitesimal gain; the extra performance simply isn’t needed.
Having more HDDs might speed up project load time a bit, but if playback is all you’re worried about, anything that doesn’t stream can go anywhere.
Stuff that streams during playback, especially where you need high polyphony, should have its own HDD.
EDIT: actually, I should clarify. You can put one app’s non-streaming and streaming content on the same HDD, because the non-streaming content is loaded up front and then not referenced during playback, while the streaming content is. (I know, “duh”, right?) What you really want to avoid is having your streaming samples on the OS drive, on a drive that another streaming sampler reads from during playback, or on your audio project drive; your audio project drive is just another type of streaming sampler. So keep all your BFD, HALion, and Kontakt content apart from each other. It can, however, live on the same drive as Toontrack, Reason, or Spectrasonics. You want to stop two streaming apps from fighting over the same hard drive at the same time.
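The rule above boils down to a simple constraint check. Here's a minimal sketch of it as a sanity-check script; the library names, drive letters, and the streaming/non-streaming split are illustrative assumptions, not a definitive list:

```python
# Rule of thumb sketched as code: no streaming sampler may share a physical
# drive with another streaming sampler, with the OS drive, or with the audio
# project drive. Non-streaming libraries can live anywhere.

STREAMING = {"BFD", "HALion", "Kontakt"}                  # read from disk during playback
NON_STREAMING = {"Toontrack", "Reason", "Spectrasonics"}  # fully loaded into RAM up front

def check_layout(layout, os_drive, audio_drive):
    """Return a list of conflicts found in a {library: drive} layout."""
    conflicts = []
    seen = {}  # drive -> streaming library already placed there
    for lib, drive in layout.items():
        if lib not in STREAMING:
            continue  # non-streaming content isn't touched during playback
        if drive in (os_drive, audio_drive):
            conflicts.append(f"{lib} streams from the {drive} (OS/audio) drive")
        if drive in seen:
            conflicts.append(f"{lib} and {seen[drive]} both stream from {drive}")
        seen.setdefault(drive, lib)
    return conflicts

layout = {"Kontakt": "D:", "BFD": "E:", "HALion": "E:", "Reason": "C:"}
print(check_layout(layout, os_drive="C:", audio_drive="F:"))
# → ['HALion and BFD both stream from E:']
```

Reason on C: is fine here precisely because it doesn’t stream; only the three streaming apps compete for disk bandwidth during playback.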
It depends on whether the controller chip is supporting all of the drives or only a couple of them.
Actually, you said both “now-standard three drives” and “now-standard two ports on one controller”? They are both now standard?
First allow me to clarify…
I was referring to mainstream desktops; I haven’t really dug into laptops much. In the last five or so years, all of the Dell and HP desktops I have looked at had two SATA controllers, each with two drive ports. Sorry I wasn’t more clear in my previous post.
And I know that some of the motherboards used for high-end “custom” builds sometimes include more than two controllers…
For me the question is: are you having any issues with the setup you’re using? If not, I wouldn’t worry about it.
It isn’t just so samples can stream. I have large orchestrated songs that used to take about half an hour or more to load. Sometimes samples would get skipped or load with errors and I’d have to go through and check patch by patch to make sure they were working. It could take 45 minutes to an hour just to get back to work on a track.
I moved the core orchestra samples I use to a 256GB SSD. I ran a test against a current project that was getting annoying to load. With the samples on a standard HD (WD Raptor 10k) it took ~17 minutes. With the samples moved to the SSD: under 2 minutes. I’ll take that any day of the week.
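If you want to reproduce that kind of comparison yourself, a rough way is to time a cold read of the library folder from each drive. This is only a sketch; the path is a placeholder, and on a real test you'd want to reboot (or otherwise flush the OS file cache) between runs so the second pass isn't served from RAM:

```python
# Walk a sample-library folder, read every file, and report elapsed time.
# Run once with the library on the HDD, once on the SSD, and compare.
import os
import time

def timed_read(root, chunk_size=1 << 20):
    """Read every file under `root`; return (seconds, total_bytes)."""
    start = time.perf_counter()
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb") as f:
                # Read in 1 MB chunks so huge sample files don't fill RAM.
                while chunk := f.read(chunk_size):
                    total += len(chunk)
    return time.perf_counter() - start, total

# Placeholder path; point it at your own library location:
# seconds, nbytes = timed_read(r"D:\Samples\Orchestra")
# print(f"{nbytes / 1e6:.0f} MB in {seconds:.1f} s")
```

This measures raw sequential read of the files, which is only part of a sampler’s load time (patch parsing and decompression add more), but it isolates the drive’s contribution.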
On the other subject: the number of instances of sampler plugins I can easily manage without having to tweak streaming settings has also improved dramatically.
It may be my imagination, because I haven’t done a real evaluation, but it seems I no longer get renders that are screwed up by sample errors. That said, I haven’t been set up this way long enough, or done enough exports, to make that claim categorically.
On motherboards that don’t have SATA, I normally put one hard drive per controller port as master, with a CD-ROM as slave, since that enables data access in almost all situations.
There are some older MBs where that configuration is terrible for an audio disk. On some controllers, a CD-ROM as slave will do a preemptive interrupt check every few milliseconds even when no activity is going to the CD-ROM. This is especially troublesome with some of the cheaper CD-ROM drives, which would issue the interrupt even if you turned off the auto-start function and the other configuration optimizations available at the time.
Hehe, well, Freeze actually doesn’t do anything that would help. You have to freeze/unfreeze a significant number of multi-articulation configurations all the time, so basically you’re trading a marginally faster load time for constantly fucking with the tracks. I think Freeze is silly; bouncing is much more functional for this process.
Eventually a sample has to be read into RAM, and how the OS maps virtual memory addresses to physical ones will determine the overall cohesiveness of the system, particularly if you are running a network.
DFD and Freeze are, and will always be, suited only to resource-constrained environments; the latter, if properly designed, may however be useful in the “conversion” of MIDI to audio.
My understanding of the purpose of Freeze is, as padawan states, to free up resources, but that doesn’t necessarily mean that it’s just for systems that are short on horsepower. My system is fairly hefty, and I use Freeze all the time. It renders quickly, and can be turned off instantly if one needs to make changes.
I can dig it
There is an option to unload a VSTi from the rack, but it serves only to introduce further delay into the creative process, which I can most certainly do without.
I have gotten around this in part through my choice of hardware synthesis, namely a Roland Sonic Cell. I did prefer the Motif concept, but Yamaha being Yamaha, they arbitrarily decide when to stop supporting something, even though in this case Steinberg is the developer.
I never use Freeze personally, and I never bounce MIDI unless it’s a final render. As far as memory is concerned, I’m doing it with 2GB of 667MHz DDR2, so my sample quality is poor. That’s why I was so looking forward to HAL4 with its awesome sample conversion engine, but because the demo shows basically nothing of what can actually be achieved musically, I have resigned myself to lo-fi but accurate sample data in the form of HAL3 OEM and the Terratec X-table.
Instruments that have their MIDI tracks set to ALL will receive all data from every MIDI port unless the VSTi has a setting to filter key ranges or controller data, for example. But being a poor musician, I can only play one sound at a time even using two hands; one hand lets me write either the bass or a melody section, but I normally don’t use them together unless a bass part is to be extracted.
Anything I write using a synthesizer is always converted back to a piano sound (hence my annoyance at Steinberg for not demonstrating anything worthwhile in this area beyond the sheer programming brilliance of the VST instrument itself), just as chords are converted to guitar samples. So synthesis rarely comes into anything I do, as I can always use orchestral sounds for additional melodies or song sections.
For this exact reason I now use computer-based samples, so as to obtain separation in the audio for volumes (rarely pans, as those are often baked into the samples themselves). Of course MIDI automation can help, but the key for me is being able to switch between different sound sources as various perspectives are needed, without having to reprogram.
If your synth has multiple outputs, it is possible to at least perform tracking tasks, as with many of the later hardware samplers. I avoid this, using the Motif/Fantom concept as needed, but eventually it all returns to VSTi, since software instruments have an inherent flexibility that lets me get on with the job of songwriting and composition.
I did agonize over the digital output on the Motif, but still opted for the Fantom generator because it has an external editor rather than being fully plugin-based; I didn’t want to be tweaking hardware.
I’d be most interested to see what you do with VST System Link, as that is exactly the type of system I would like to pursue at a future point. It would mean I could combine older operating systems with newer technologies and not be so concerned about keeping up with the latest software, although I will always opt to have at least one up-to-date system, DAW software included.
The great thing about the Steinberg ethos is how, with the last C5 update, the patch was made after C6 had already arrived. Any issues not accounted for in the previous product cycle could thus be tested against and fixed in a final cumulative patch, with maximum backwards compatibility and scope for forward-compatible systems.