Computer DAW Power, where are we?

I had reached my ideal machine, but then I bought some more software, and then I had to start watching how much I was running at once, etc., then I realized I needed a more powerful machine, and I had reached my ideal machine, but then I bought some more software, and then I had to start watching how much I was running at once, etc., then I realized I needed a more powerful machine, and… (return to top)

The cycle remains the same. Computers get faster; developers create software that’s more demanding and imagine their programme is the only one running on the machine. 64-bit seems a great answer, but then you have developers reluctant to change too much too fast, as it costs.

Link 80 of these and you’ll be fine, as they are Texas Instruments, highly recommended by soundcard users, don’t you know.

http://news.cnet.com/i/ne/p/2007/1970s-comp-21-550x382.jpg

I’m actually on my third; the first maybe doesn’t really count, but it ran Cubase LE for a while, and I was always struggling with the load. I still use the second as my “field” recorder – it’s a Core Duo laptop. It can easily record lots of tracks at once; I’ve done as many as 9, but I have room for 16 inputs if I ever need them. I was “inspired” to get a new workstation by Superior Drummer eating up a lot of processing power. I am amazed at the magic this computer performs. I can have several virtual instruments going at once along with effects, and play along on guitar with a buffer size of 64. So it should hold me for a while. If I understand correctly, the number of cores helps with overall load, but an individual process may still be limited by the speed of a single core.
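To illustrate what I mean, here’s a toy sketch in plain Python – nothing to do with how Cubase actually schedules plugins, and `process_track` and the loop counts are just made up. Independent tracks can spread across cores, but a serial chain of stages has to run one after another on one core:

```python
import multiprocessing as mp
import time

def process_track(track_id, num_plugins=50):
    """Toy stand-in for one track's plugin chain: each 'plugin'
    needs the previous one's output, so the chain is serial and
    can only go as fast as a single core allows."""
    sample = 0.0
    for plugin in range(num_plugins):
        for _ in range(200_000):          # fake DSP work
            sample = (sample * 1.000001 + plugin) % 1.0
    return track_id

if __name__ == "__main__":
    # One track's serial chain: bound by single-core speed.
    start = time.perf_counter()
    process_track(0)
    print(f"one serial chain: {time.perf_counter() - start:.2f}s")

    # Eight independent tracks: these CAN spread across cores,
    # so wall-clock time scales with core count, not track count.
    start = time.perf_counter()
    with mp.Pool() as pool:
        pool.map(process_track, range(8))
    print(f"8 tracks in parallel: {time.perf_counter() - start:.2f}s")
```

On a quad-core, the eight-track run finishes in roughly twice the single-chain time rather than eight times, while no number of extra cores makes the one serial chain any faster.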

Depending on how your own requirements are changing, upgrading your DAW now and then makes sense. But constantly upgrading hardware is not practical for most of us, as it gets expensive. With the ability to bounce to audio, it may not be necessary to run 40 VSTi at once whilst tracking 5 audio inputs at low latency… :laughing: but for some, it’s never enough (and then there are the capability junkies, wanting potential but never really needing it)… it all depends on what sort of work you are doing.

Me personally, I rarely max out the box, though I do come close now and then… so eventually, more processor will be on my shopping list. You can manage those high-load times pretty effectively, so I don’t see it as a stumbling block for the most part. (Currently using a Q6600, which is old news these days!) The 8 gigs of RAM give me some flexibility when I am in 64-bit Cubase; however, I often write in 32-bit, as I have not made the transition completely, and I still don’t max it often.
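For anyone wondering why the 32-bit/64-bit distinction matters for the RAM: the address-space arithmetic is the whole story. A quick throwaway illustration (just the maths, nothing Cubase-specific):

```python
# Pointer width caps how much memory one process can even address.
for bits in (32, 64):
    gib = 2**bits / 2**30
    print(f"{bits}-bit: {gib:,.0f} GiB of address space")
# 32-bit: 4 GiB  (and the OS reserves a chunk of that)
# 64-bit: 17,179,869,184 GiB (far more than any real machine)
```

So a 32-bit host can’t see more than ~4 GB no matter what’s installed, which is why the 8 gigs only really pay off in 64-bit Cubase.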

If I could afford more, I would probably get it; I might just be one of those junkies I mentioned above :blush: :wink: GAS and all that!
:sunglasses:

A perfect encapsulation of the dilemma, this :sunglasses:

That isn’t true. The cycle is driven by unavoidable advances in technology, performance, and capacity. As long as we have research and companies willing to take advantage of that research, the cycle won’t end. It really isn’t something Steinberg or any other technology/software company controls at all. They do benefit from it, however. As do we.

Fortunately my needs aren’t that great, and on top of that, since all my music is just for myself, I don’t have some producer or band member etc. coming up to me wishing I was doing this or that. My DAW is set up just how I like it, and it does everything I want and need. I gave up chasing the dragon many years ago.

Don’t let the technology get in the way of your enjoyment.

This is where your desire for something to be so and the reality of software/hardware development run headlong into each other. Computer scientists have been trying to create completely portable and extensible languages since the beginning of programming languages. Maybe one day some team will figure out a domain language that is abstracted enough from the hardware it’s running on, and dynamically extensible enough, to provide not only completely optimized operations on that tremendously abstracted hardware but also total openness to the operational paradigm changes that become available as the underlying technology advances. But I can assure you, that day isn’t today.

Clearly, this is not optimal for the consumer, because it means that every time someone comes up with a faster, prettier, smaller, more flexible piece of hardware, the software guys are going to implement a new version of whatever they are flogging on that new hardware.

The really interesting part of distributed computing (like the buzzword-of-the-day, cloud computing) is that the line between hardware and software is blurring. Embedded development (writing the software that makes a device work) used to be clear-cut work. As SDKs and APIs into hardware have become more accessible, and those abstracted languages I was talking about have gained more access to the innards, we are definitely moving into a weird space. You can now see some of the “release” problems that plague software (get it out the door, we’ll patch it later) starting to show up in hardware. 20 years ago, embedded software was a rock-solid process: you screw up the PROM, you cost the company zillions. Fun times.

Definitely not. We’ve spent decades trying to accomplish this goal. And the result? We’re still using programming language/hardware paradigms from the 1960s (languages derived from B and Smalltalk, and the same old von Neumann architecture with general-purpose registers). Go figure!