Hard drives and partitioning

I was just curious to see the common method everyone is using for storing the operating system, Cubase, virtual instruments, plug-ins and recorded media. I am adding a hard drive to my system and was wondering the best way to utilize it. I was thinking Windows and Cubase on one drive… everything else on another. Or maybe adding another for plug-ins or virtual instruments only… just curious and would love some feedback. Thanks

That’s the way I do it, OS & programs on the main drive, recording files on the other.

I also have another internal HD for backups (plus others elsewhere; I like to have at least 3 backups for safety).

You could partition the main drive so the OS & programs are on the first partition & samples etc. on the second, but I prefer to have separate drives.

Paul,

Using separate partitions on the same HDD for programs/data that will be accessed in the same sessions forces the heads to move more than necessary, increasing access latency. SSDs do not suffer from this, as there are no heads to move.

However, the more separate drives that the info can be spread across, the more can be read/written simultaneously.

The basic rules are:

  1. Data used in the same session → separate drives.
  2. Data not used in the same session → can be in one partition, or in a separate one on the same drive.


teebau,

Basically:

  1. OS & programs.

  2. Projects.

  3. (and any further drives) Samples.


If you are using a dual boot system, where one boot partition is for general purpose use and the other for DAW use, drive 2 can also be used for documents, etc., even on a separate partition.

Hi Patanjali,

I know that; I was just using that example as an option if extra drives weren’t available :slight_smile:

I use a multiboot system with 3 OS’s on the main HD, using hidden partitions, and 4 internal drives.

I have done it that way for yrs, and personally would not do it any other way :slight_smile:

I’m installing a new, faster hard drive in my system. Is there a way to set it up so I can boot from the new drive just to run Cubase and my plug-ins with all of the necessary tweaks, keep it from accessing IE and updates, and have the other drive boot as a regular PC for internet use, etc.? If so, how do I set it up to do this?

I prefer to have all partitions visible. It makes it easier to:

  • make comparisons if having a problem with a program’s setup on one partition.

  • do virus checking on the DAW boot partition’s files without actually installing the checker on there.

  • back up the whole computer in one go.


Many a time I have downloaded program updates to my general boot partition’s Download folder (because it is much faster to the SSD than to my NAS over an EoP connection), but then forgot to copy them to the NAS (where I keep copies of ALL program installation files) prior to booting into my recording partition. With visible partitions, it is easy to access those downloads from the other partition.

The only issue is making sure which way around I am doing things when copying between the two, or when deleting!


If one also does backups of all installation files, I suggest storing a file with a program’s key code in its name with them. The best way is to open Notepad, immediately Save As… with the program’s name, version and code in the filename, then close Notepad without typing anything in the file. The file occupies one directory entry but no disk space.
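If you prefer to script that rather than do it by hand in Notepad, here is a minimal sketch of the same idea; the folder path and the program/version/key values are made-up placeholders, not real codes:

```python
from pathlib import Path

# Hypothetical backup location and licence details - substitute your own.
backup_dir = Path(r"D:\Installers\Cubase")
program, version, key = "Cubase", "5.5.3", "XXXX-XXXX-XXXX-XXXX"

# Create a zero-byte file whose name carries the key code,
# e.g. "Cubase 5.5.3 XXXX-XXXX-XXXX-XXXX.txt". It takes a directory
# entry but no actual disk space, just like the Notepad trick above.
backup_dir.mkdir(parents=True, exist_ok=True)
(backup_dir / f"{program} {version} {key}.txt").touch()
```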

My system is set up as follows…

7 internal hard drives (C drive and 3 RAID 0 arrays), 2 external drives.

C drive is a 60 gig SSD (OCZ Vertex).
Silent, low power requirements, VERY FAST read rates, rather average write speeds (writes are generally slower on SSD drives anyway… the newest OCZ drives claim very high read/write speeds, around 250 MB/s, but from what I’m reading they have some quality control issues right now). Generally speaking, you’ll see read rates of 200+ MB/s and write speeds of around 70 MB/s. SSDs are not indestructible, however. The storage locations in them eventually start to fail after lots of write operations. Most have controllers built into them that try to spread the write operations around to reduce the average number of writes to any given storage location. In short: exceptionally fast reads, average writes, silent, not indestructible, but good ones seem to be quite reliable. Very expensive per gig of storage.

D drive = DVD/RAM

E and F drives
Each of these is made up of a pair of Western Digital 600 gig Caviar Blue drives combined into a RAID 0 array (in other words, two 600 gig drives coupled together to look like a single 1.2 terabyte drive). I use these RAID 0 arrays for streaming sample libraries (NI Kontakt 3/4 and BFD2 libraries on one array, EWQL and Omnisphere libraries on the second). The original rationale was that RAID 0 arrays would give me blazing fast access to streaming samples. This is a pretty demanding aspect of DAW operation. Think about it: the system needs to be able to access hundreds of small chunks of audio, spread out across the surface of the disk, and has to provide them to the system darn near instantly. In hindsight, I probably did overkill here. While the RAID arrays provide read/write speeds of 200+ MB/s, I have some concerns about failure recovery. Though I have complete backups, recovery would require replacing two hard drives instead of one, going into the system’s RAID controller and configuring the new drives, etc. At some point, I’ll probably pull these and replace them with single drives. Point being: put large streaming sample libraries on their own disks.

G drive
Project drive. This is another RAID 0 array, but it’s made up of a pair of 300 gig Velociraptor drives. Again, in hindsight, probably overkill with the RAID 0 arrays (blistering fast… read/write speeds of 250+ MB/s, and similar concerns about failure recovery), but I had the drives already and I was shooting for max performance, so…

External drives
I have several external hard drive cases: an Ultra case that I got from Tiger Direct ( http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=3804512&CatId=2780 ), which works very well, and an IcyDock enclosure. The IcyDock is great because it’s screwless… flip a locking tab on the front and the case opens up and ejects the hard drive. Slip a second drive in, close and latch the case, and it’s set.

The external Ultra case holds a 2 TB drive that’s used for system backups, the IcyDock is used for client session drives.

I have full backups of the C, E and F drives, and structure the G (project) drive such that I have folders for my own personal stuff and for any client work I’m doing. I make regular copies of these folders to the backup drive and also create RAR archive versions of important client project folders that I burn to DVD for long-term safety backups.

All the best,

Karl

Hello Karl,

I found copying between Vertex SSDs to be very fast - certainly faster than to or from the Raptors I used to use. Even copying 10+ GB to the SSD of my lowly Atom-based Sony Vaio P Series laptop from my i7’s Vertex was all in excess of 100 MB/s, which is about wire speed for a Gigabit connection. Are you sure you weren’t copying to your Vertex FROM a HDD?

By the way, I have formatted my drives with 64 kB allocation units (clusters) for several years. Even when copying several GBs of mixed-size files between the Raptors, I had 30% better transfer times than with the default 4 kB. Also, HDDs have lousy 4 kB write speeds compared to modern SSDs.
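If you want to check what a particular volume was actually formatted with, a quick way on Windows is the GetDiskFreeSpace API; a minimal sketch (the C: drive letter is just an example):

```python
import ctypes

# Query the allocation unit (cluster) size of an NTFS volume on Windows.
sectors_per_cluster = ctypes.c_ulong()
bytes_per_sector = ctypes.c_ulong()
free_clusters = ctypes.c_ulong()
total_clusters = ctypes.c_ulong()

ctypes.windll.kernel32.GetDiskFreeSpaceW(
    "C:\\",                            # example volume - point it at your sample drive
    ctypes.byref(sectors_per_cluster),
    ctypes.byref(bytes_per_sector),
    ctypes.byref(free_clusters),
    ctypes.byref(total_clusters),
)

cluster_bytes = sectors_per_cluster.value * bytes_per_sector.value
print(f"{cluster_bytes // 1024} kB per cluster")   # 64 kB if formatted as above
```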

I still doubt that RAID0 adequately handles interleaved accesses to lots of samples.

Given the low prices of HDDs, RAID 1 would probably work better for samples, because (assuming the controller/software can intelligently handle multiple seeks) there would be more heads available, giving a higher aggregate read rate, and the nearest available head would always be used, minimising seek times. Since no writing is involved, the slower write times are irrelevant. The only problem is the extra heat, vibration and noise.

The OCZ Vertex drives are advertised with read/write speeds of better than 270 MB/s. Oddly, when I run either HDTune or ATTO against my OCZ Vertex, I don’t get any kind of spectacular write speeds. Read speeds are very fast but don’t hit 270+ MB/s.

Why this is, I don’t know, and quite honestly, I’ve never cared enough to pursue it. My machine has proven to be absolutely rock solid reliable… often running 10 hours a day, loading/unloading/massive editing of multiple projects, and I can’t remember the last time it hiccupped. Secondly, I’ve yet to have a project that caused it to break a sweat.

I’m afraid I’ll have to differ with you here. RAID 1 is a mirrored array: one drive is written as an exact mirror of the other. It provides redundancy, but I’m not at all sure it gives a data throughput advantage. RAID 0 combines 2 drives, splits the data across both and achieves a substantial performance increase at the price of a lower mean time between failures (the array’s failure rate is almost twice that of an individual drive).
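To put a rough number on that reliability trade-off (a back-of-envelope sketch only; the 5% annual failure probability below is an assumed illustrative figure, not a measurement of any particular drive):

```python
# Chance that a 2-drive array loses data within a year, assuming the
# drives fail independently with the same annual failure probability.
p = 0.05  # assumed per-drive annual failure probability (illustrative)

raid0_fail = 1 - (1 - p) ** 2   # RAID 0: the array is lost if EITHER drive fails
raid1_fail = p ** 2             # RAID 1: data is lost only if BOTH drives fail

print(f"single drive: {p:.2%}")           # 5.00%
print(f"RAID 0 pair:  {raid0_fail:.2%}")  # 9.75% - nearly double the risk
print(f"RAID 1 pair:  {raid1_fail:.2%}")  # 0.25%
```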

While I’m not an expert on how a RAID 1 array performs reads, for a drive with a 10,000 rpm rotational speed, I believe it takes 6 ms for a platter to make a full rotation. If RAID 1 can use whichever drive of the pair happens to get the desired track/sector first, this is going to be a fairly random potential speed advantage over a striped RAID 0 pair, and I would imagine that both systems would likely have about the same average start of data transfer (i.e. how long it takes, on average, before a requested block of data is actually spooled off the drive). In this case, it would seem that if both methods can start spooling data off the drive in about the same time, then the advantage would fall to the RAID 0 array which, once it gets the desired track/sector under the heads, can spool it at far faster rates.
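For what it’s worth, the rotation figure is easy to check; a quick sketch of the standard arithmetic (nothing measured on these particular drives):

```python
def rotational_latency_ms(rpm):
    """Milliseconds for one full platter revolution, plus the average
    rotational latency (half a revolution) before a requested sector
    comes under the head."""
    full_rev = 60_000 / rpm
    return full_rev, full_rev / 2

for rpm in (10_000, 7_200):
    rev, avg = rotational_latency_ms(rpm)
    print(f"{rpm} rpm: {rev:.1f} ms per revolution, ~{avg:.1f} ms average rotational latency")
# 10,000 rpm -> 6.0 ms per revolution, ~3.0 ms average
#  7,200 rpm -> 8.3 ms per revolution, ~4.2 ms average
```

So the 6 ms figure is the full rotation; on average a requested sector arrives under the head in about 3 ms, on top of whatever head seek is needed.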

Stripe size plays a part in the performance, with, as I understand it, smaller stripe sizes being preferable for storing what’s expected to be relatively small chunks of data (i.e. individual database records or individual samples). As I recall, I striped my 2 sample streaming arrays at 64k and the project array with something larger, but it’s been quite a while now and I really don’t recall off the top of my head.

My real point was that, if I were to build the system again, I don’t think I’d bother with the RAID arrays. I don’t know that the increased drive performance, in fact, translates to real improvements in system performance… at least not to a degree that offsets the lower mean time between failures of two drives paired together.

Karl,

The xxD tests are done with virgin systems newly set up so that nothing will get in the way of the test. Running them on a real, working system is unlikely to give the xxDs a chance to reach their unfettered best. Even copying files between drives does not reach the raw drive speeds, because the OS is managing the transfer and speeds up and slows down the rate throughout the process so that the user (potentially) does not lose usability of the computer while the transfers are taking place.

This is where theory can confuse more than empirical data. I want to see real world sampler and DAW RAID data rather than theory or benchmark tests that measure raw speeds or scenarios completely different from the massive, parallel, real-time streaming scenario of DAWs.

Multiple streams means not sequentially moving out whole files, but doing a sequential read of one section of each of the audio files, then going back to do it all over again, repeatedly. The heads are never going to be in the same area (of a HDD) for very long, so there will be a lot more seek time (= no data read or written). With very heavy streaming, I suspect that the amount of interleaving will negate the benefits of RAID 0 by substantially reducing the number of sequential reads or writes. With such greater reliance on minimising seek times, I think that proper RAID 1 (which allows interleaved reads) will help a lot (for samples). HDDs are cheap enough to use for redundant data storage in RAID 1 if it gains a few ms less latency. RAID 1 would be slower for writes, of course.
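A crude way of seeing why seeks can swamp the raw transfer rate when lots of small sample chunks are interleaved (all the figures below are assumed, illustrative values, not measurements from any of the drives discussed here):

```python
def effective_rate_mb_s(chunk_kb, seek_ms, transfer_mb_s):
    """Average throughput when every chunk costs one seek plus one
    sequential transfer: chunk / (seek time + chunk / raw rate)."""
    chunk_mb = chunk_kb / 1024
    return chunk_mb / (seek_ms / 1000 + chunk_mb / transfer_mb_s)

# Assumed figures: 64 kB chunks, ~8 ms combined seek + rotation, 100 MB/s raw HDD rate.
print(f"{effective_rate_mb_s(64, 8, 100):.1f} MB/s")  # ~7.2 MB/s - seek dominated
# Doubling the raw rate (roughly what RAID 0 striping gives) barely helps:
print(f"{effective_rate_mb_s(64, 8, 200):.1f} MB/s")  # ~7.5 MB/s
# Halving the effective seek (the aim of interleaved RAID 1 reads) helps more:
print(f"{effective_rate_mb_s(64, 4, 100):.1f} MB/s")  # ~13.5 MB/s
```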

Having libraries non-RAIDed means that if you have a backup clone of the drives and a drive fails, it is simply a matter of swapping that one over. With a single disk failure in RAID 0, all drives need to be replaced at the same time. Can you just swap over a complete RAID 0 drive set transparently to the controller? And how do you make a clone RAID 0 set if one only has a single RAID capability?

If the DAW is your livelihood, then using SSDs with backup SSDs is worth it.

All I can tell you is that if I run HDTune or ATTO against a single hard drive (in this case a Western Digital 640 gig drive), I see read speeds stabilize at around 80-90 MB/s (on a drive with a large sample library loaded on it). Run the same thing against a RAID 0 array with the same sample library on it and the read speeds are pretty close to doubled.

I noted the downside risk of a RAID 0 array (higher probability of failure, difficulty in recovering/replacing/rebuilding, etc.) and noted that, if I were building the system again, I probably would not go this route.

At some point in the future I’ll get around to rebuilding it and will probably replace the RAID arrays with single drives. SSDs may or may not be the choice. They remain very expensive, and I would like to see what kind of real world performance improvement SATA 3 might provide. It’s possible that a spinning drive on SATA 3 may be a better price/performance choice.

Karl,

SSDs are hitting the SATA 2 limits, but HDDs are not. Therefore, SATA 3 will benefit SSDs more than HDDs.