The modular computer concept - for today & tomorrow

Oh please! How about taking off your foil hat?

Now, you want a modular concept. PC architecture has been like that from the start. Modularity went so far that in the '90s you could upgrade to next-generation processors on an old motherboard. How popular was this approach? Well … you can find out by trying to find those systems nowadays.

Today's PC architectures are cheap precisely because their modularity is limited. It's all about tight integration, which makes them dirt cheap. Modularity only skyrockets the prices and also makes it impossible to optimize system performance: it's stupid to put the newest-generation processor on a system bus that can't take full advantage of the latest technology … etc. … etc.

But even with today's PC hardware you can recycle quite a lot from your previous-generation product (in most cases): power supplies, disks, display interfaces, audio interfaces, etc. What's not compatible are the processors, motherboards and RAM … but then, those are the components that would be the worst bottlenecks in an upgrade if you carried some of the earlier-generation ones over into your new system.

How long was ISA the bus standard? :wink: MCA (Micro Channel) sucked in terms of market share because of IBM's licensing terms, but ISA, in spite of the inferior technology, lasted for 10 years before EISA took over, then PCI, then PCI Express, and let's not forget the video-card-specific bus technologies: AGP, etc.

While I appreciate your perspective, Steve, you have to realize that it's not greed that has resulted in computers being built the way they are. Instead, computers (and by that I mean motherboards, buses, interrupt implementation, memory, disk interfaces, etc.) are designed the way they are to maximize performance while keeping the price point from becoming cost-prohibitive for the average consumer. When I bought my first IBM PC-compatible computer in the early '90s there was a general rule that "your dream machine will always cost $4,000-5,000." Now that number is $2,000-3,000, and that's for a dream machine. You can still get incredible performance for $1,000-2,000 easily if you build it yourself.

If you had a truly modular design (the way you are describing it, at least; I would argue that PCI Express is already a modularity enabler, as are EIDE/SATA, etc.), then performance would decrease substantially because of the design compromises that would have to be made to meet this goal of modularity, again as you describe it.

As soon as technology becomes more static, e.g. we quit figuring out how to squeeze more FLOPS out of a CPU without a corresponding increase in heat, etc., then I'm sure you'll be able to start upgrading your computer without having to buy other parts to go along with it.

Until that time, progress progresses. And you either learn to accept it or get left behind.

You're still missing my point, though. The point I was making was that there is already quite a bit of modularity in hardware designs. When was the last time you had to upgrade your motherboard because of a new bus spec? PCI Express has been out since 2003. ISA was the primary standard (if you ignore Micro Channel, it was the only standard) for a similar amount of time. DDR3 was released in 2007. SATA's predecessor, EIDE, first appeared in 1994! Etc. Given your stated shelf life of 4 years for a computer, all of these standards have exceeded that length of time.

Finally, I still stand by my statement (as a hardware design minor in college … I used to design VLSI chips and related stuff) that a more modular design, as you are describing it, will result in decreased maximum performance and, given overclocking and other performance tweaks, a substantially decreased maximum performance potential.

Hmm … I read what you are writing, but it just seems that everything you are asking for is already exactly how it is. You can plug and play CPUs of similar architectures, and adapters are made to bridge the rest. Heck, until UAD quit supporting the UAD-1, I was running the PCI cards in PCI Express slots using a PCI-to-PCI-Express adapter. I can buy rack-mount systems designed for almost any specific use and keep them for years. Until 2 years ago, I had an x86 (think pre-286) computer that I used as a fax server. I re-use computer cases because they are manufactured to industry-standard sizes and layouts. Etc. …

Systems are already completely modular. You can pay extra for extra modularity. You can pay less for tight integration. Or you can even pay MORE for tighter integration and less performance (hello, Apple :stuck_out_tongue:) … had to get that in.

Well, I’m not holding my breath for an end of obsolescence … planned or otherwise. :wink:

Nit-picking, I realize, but UAD was hardly the first manufacturer of solutions architected like theirs. General-purpose DSPs had published SDKs going as far back as the late '80s / early '90s, with applications written on top of them to do stuff like…

…be a 9600 baud modem. :laughing:

That's a true story, though. (IBM's RTP location produced the consumer-oriented modem that was DSP-based.)

The current trend along these lines is a programming language (OpenCL is one example) that allows developers to run applications on…[wait for it]…their graphics card.

This is because GPUs are frequently more powerful number crunchers than CPUs: CPUs are built for general-purpose computation, while GPUs focus on number crunching.
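For anyone curious what that looks like, here is a minimal sketch of a GPU kernel written in OpenCL's C-based kernel language. The vec_add example is purely made up for illustration, and the CPU-side host code that compiles and queues it is omitted:

```c
// Toy OpenCL kernel (OpenCL C, a dialect of C) that adds two vectors.
// Each GPU work-item handles exactly one element, so thousands of
// elements get crunched in parallel instead of in one loop on the CPU.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0);   /* index of this work-item */
    out[i] = a[i] + b[i];
}
```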

I see what you're saying, but I don't see how (for example) Vienna's master/slave architecture is related to modular hardware design. And even if it were, I'd counter that ReWire does the exact same thing, except that it is more general in its applicability than Vienna, because ReWire can be used to connect a slave computer running any number of VSTs.

But I digress.

If I'm reading you correctly, you're simply asking if there are other ways to eke out more…

…uh…

…more what, Steve? Shelf life?

Like yours, my computer is already 3 years old and will last me at least another 3-4 years before I buy another. I don't know how I can ask for more, to be honest, when I'm already far beyond what is currently considered the norm, much less when I finally retire the machine I have.

One drawback to the "concept" is the speed at which signals travel along a bit of wire. Current processing speeds actually make it important to have certain components close enough together that any propagation delays are minimised.
Another would be devising a future-proof communication protocol, hardware and software, between your modular components.
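To put some rough numbers on that first point, here's a back-of-the-envelope sketch; the 3 GHz clock and the two-thirds-of-light-speed signal velocity are just assumed ballpark values, not measurements of any real board:

```c
#include <stdio.h>

/* Rough illustration of the propagation-delay point above: a signal in
   a copper trace travels at roughly 2/3 the speed of light, so at a few
   GHz one clock period only buys you a few centimetres of wire. */
int main(void)
{
    const double c        = 3.0e8;          /* speed of light, m/s      */
    const double signal_v = 0.66 * c;       /* ~2/3 c in a PCB trace    */
    const double clock_hz = 3.0e9;          /* assumed 3 GHz clock      */

    double period_s   = 1.0 / clock_hz;     /* one clock period         */
    double distance_m = signal_v * period_s;/* distance per cycle       */

    printf("At %.0f GHz, one cycle lasts %.2f ns and a signal covers "
           "only about %.1f cm.\n",
           clock_hz / 1e9, period_s * 1e9, distance_m * 100.0);
    return 0;
}
```

That works out to roughly 6-7 cm per clock cycle, which is why the distance between components matters.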

This is exactly the type of stuff I am referring to.

Also, Steve, you should read this:

You’re basically describing a “hive mind” concept for computers now, which really is nothing more than a supercomputer that is designed for home use.

Again, you can already do this. I think it's the University of Colorado that has something like 10,000 Apple IIs paralleled together. But the OS that runs them is an ongoing source of doctoral theses.

Your understanding of how that will benefit you is completely flawed. In order for the ganged CPUs to do you any good, you have to have an operating system that knows how to provide parallel and serial processing across the available CPUs. That kind of process scheduling is immensely complex and fraught with blocking and race condition issues. Then you need someone to write software that knows how to utilize the specialty OS. In your example of 6 PCs hanging on the guy's wall, he is using them as a rendering farm. The video system is managing the farming. Those rendering farms are EXPENSIVE, even though the underlying computers are commodity-priced.
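To give a flavour of what "race condition issues" actually means, here's a toy C sketch (completely made up, nothing to do with any real OS scheduler): two threads bump the same counter with no lock, and updates quietly get lost. Now scale that up to an OS scheduling real work across a pile of CPUs.

```c
#include <pthread.h>
#include <stdio.h>

/* Toy illustration of a race condition: two threads increment one
   shared counter with no lock, so their read-modify-write cycles
   interleave and updates get lost. */

static volatile long counter = 0;   /* volatile only stops the compiler
                                       collapsing the loop; it does NOT
                                       make the increment atomic */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* load, add, store: three steps */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000; the actual total is usually lower and varies
       from run to run, which is exactly the scheduler's nightmare. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```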

As can be seen in this diagram http://en.wikipedia.org/wiki/File%3aWarptable.gif warp factor 10 would require an infinite amount of power to go infinitely "fast"; thus they never achieved warp factor 10, as it's an impossibility. :laughing: :laughing: :laughing: :mrgreen:

Oh, and the rendering farm is exactly the opposite of modular. It's a one-time buy.

Sorry to harp on this … my Master's is in Computer Science … so the OCD in me kicks in :slight_smile:

The computers are the cheap part. The video rendering system being driven by those computers can cost 100 to 10,000 times as much as the computers themselves.

And yes, I was assuming that a new OS would be required to control my fantasy computer array above, whether that's simply an easy interconnect method for multiple regular desktop computers or a custom cabinet like I described above housing multiple mobos. New OS versions come out on a regular basis; software and hardware manufacturers could make this a 'standard' at some point in the future. I believe that if this were a reality, almost everyone would make use of it, from gamers to Cubasers etc. … anyone who could use more power.

Again, there are universities whose Computer Science departments do nothing but evolve these concepts. The problem is that they are extremely difficult to implement. And they are universally single-purpose constructs.

But back to rendering farms (I'll have to read up on that term some more): isn't this just networking? … as in how DAW computers are used with, say, FXT, VEP or any of the others? Basically each computer doing its 'own' tasks … one machine used for a VSTi and another for audio, etc.?

You are comparing two separate scenarios …

  1. Rendering farm … no, they aren't simply networked together … there is a master video rendering machine that is "farming" out sub-portions of the total map to a machine to render (there's a rough sketch of the control flow after this list) … when the sub-machine finishes, it sends back its piece, the master stitches that into the total and farms out another section to the sub-machine. The scheduling, stitching and synchronization for that is hardware-intensive. It is ULTRA complex for video, where pixel-to-pixel mapping integrity between frames has to be kept. And in this scenario the results are discrete (same in, same out, every time). In audio the events are continuous … much more difficult to do continuous variability across a scheduling map.

  2. FX Teleport … You are farming a discrete value for a single output. In other words, an application (VSTi) on the other end is being remotely told to provide its discrete output, and the results are brought back to the host and synced up. This is the simplest form of farming. And as you've probably found out, it has severe bandwidth and synchronization issues. And that's with just 2 or 3 computers. It is a horrible mess with 10 … 50 … 100 computers. It is exponentially more difficult to add computers, not linearly more difficult.
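Here's the rough shape of that farming loop as a plain C sketch. It's deliberately simplified: the network, the juggling of many slaves at once, the retries and the synchronization (i.e. all the hard, expensive parts) are stubbed out, so read it as a picture of the control flow, not a working farm.

```c
#include <stdio.h>

#define TILES 8                         /* sub-sections of one frame */

/* Stand-in for a slave machine rendering one tile and sending it back.
   In a real farm this is a remote box reached over the backplane. */
static float render_tile(int tile)
{
    return (float)tile;                 /* pretend pixel data */
}

int main(void)
{
    float frame[TILES];

    /* The master farms out one tile at a time and stitches the result
       back into the full frame.  A real master tracks many outstanding
       tiles at once and has to cope with slaves failing mid-render. */
    for (int tile = 0; tile < TILES; tile++)
        frame[tile] = render_tile(tile);

    printf("stitched %d tiles into one frame (%.0f..%.0f)\n",
           TILES, frame[0], frame[TILES - 1]);
    return 0;
}
```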

A simpler example is RAID controllers …

Everyone assumes that data will be written faster if you split the job across multiple drives. Let's say you have a 3-drive setup. Surely it's faster to write half the data to each of two drives simultaneously and the parity to the 3rd than it is to write the entire stream to one disk sequentially? NOPE. Remember there is a controller in front that has to divide the stream at a "known" place, pass some to one drive and some to another, and the parity to a 3rd. Then you have the limit of the bus width sending the data to the controller and the bandwidth from the controller to the disks.

Now, what if you have 6 drives (split the data between 5 and write parity to the 6th)? Since you are only sending a 5th of the data to any one disk, that should be at least 5 times faster, right? NOPE … the controller now has to track 6 data streams to completion … what happens if stream 4 fails? Etc. …
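Just to illustrate the striping-plus-parity idea on its own (leaving out the controller, bus and scheduling overhead, which is where the real pain lives), here's a toy C sketch with made-up data: split a block across 5 "drives", XOR it into a parity "drive", then rebuild a lost stripe.

```c
#include <stdio.h>

#define DRIVES 5                 /* data drives; a 6th would hold parity */
#define STRIPE 4                 /* bytes per stripe, tiny for the demo  */

int main(void)
{
    /* Pretend the stream has already been split into 5 stripes. */
    char data[DRIVES][STRIPE] = { "AAAA", "BBBB", "CCCC", "DDDD", "EEEE" };
    char parity[STRIPE] = { 0 };

    /* The controller computes parity as it splits the stream. */
    for (int d = 0; d < DRIVES; d++)
        for (int i = 0; i < STRIPE; i++)
            parity[i] ^= data[d][i];

    /* Pretend drive 3 died: rebuild its stripe from survivors + parity. */
    char rebuilt[STRIPE];
    for (int i = 0; i < STRIPE; i++) {
        rebuilt[i] = parity[i];
        for (int d = 0; d < DRIVES; d++)
            if (d != 3)
                rebuilt[i] ^= data[d][i];
    }

    printf("rebuilt stripe 3: %.4s\n", rebuilt);   /* prints DDDD */
    return 0;
}
```

The XOR math is the easy part; tracking all those streams to completion in real time is where the controller earns its keep.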

Multi-processing is incredibly complex and a very interesting CS issue. It just doesn't work the way people think it does.

Let's not even talk about how incredibly limited the performance of that "wall art computer" would be compared to even an off-the-shelf system from Dell, simply because the Dell doesn't have all of that wiring going on.

Cray, which built the first supercomputers back in the '70s and '80s, used to hand-wire every one of their machines to ensure that the interconnects between components were the absolute shortest they could be. Why? Because even though saving half an inch of wire isn't much of a performance boost by itself, if you put enough sand in a backpack the damn backpack gets heavy.

So, again, nice concept, but it wouldn't be practical, especially for a performance-hungry crowd like Cubase enthusiasts.

You don't give up, do you, no matter how many times you're told that what you're suggesting is currently impractical.

You’d have better luck waiting for the holodeck to arrive. Then you can just make musicians appear and have them play the music for you while you record it on your Protools 23 HD|9 system.

This is a blade server farm … each handle is a motherboard that you can get in about a gazillion configurations. The farm can be connected to a shared TB backplane with massive bandwidth. You could run FX Teleport on 12 to 40 slave units. You can upgrade or re-purpose each blade and each blade component. However, you would need several new pocketbooks …

Here is what one can look like, although there are many, many, many different kinds/models/target industries, etc. …

Seriously, the things you are imagining in this thread are either already available or being heavily researched.

Actually this is the non-intuitive part … it does increase processing power, but there is a loss of performance per unit added. So it isn’t 1 server = 1x performance, 2 servers = 2x performance, 3 servers = 3x performance. The performance loss ratio really depends on what you are doing.
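The textbook back-of-the-envelope for that is Amdahl's law: if only a fraction of the job can actually be spread across the boxes and the rest is coordination/serial work, the returns flatten out fast. The 90% parallel figure below is just a number picked for illustration:

```c
#include <stdio.h>

int main(void)
{
    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
       fraction of the work that parallelizes and n is the machine
       count.  p = 0.9 here is purely an assumed example value. */
    const double p = 0.9;

    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d machines -> %.2fx\n", n, speedup);
    }
    return 0;
}
```

With those assumptions, 16 machines buy you roughly 6.4x, not 16x, and that's before any of the communication overhead we've been talking about.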

This "blade farm", as I understand it, requires, like you said before, a specialized and very expensive server operating system. But how exactly are these "blades" (which pretty much seem to be separate individual computers) connected together?

That’s the backplane I was referring to. Again, there are lots of backplane technologies. Some use fiber, some don’t. You pick the backplane that performs well for the task at hand (again, fit for purpose).

I'm assuming, since you mentioned FX Teleport, that it's Ethernet connections?
If one had the means for these multi-thousand-dollar, commercial, purpose-built server systems or supercomputers (not me), would they be appropriate for DAW use?

Well, "Ethernet" is really a set of protocols, not RJ-45 connections. And yes, there are Ethernet carrier backplanes, but it is usually not NIC cards and switches/hubs. There are network concentrator backplanes that work exactly how you picture networking in your head, but those are not nearly as efficient as dedicated comm backplanes.

(Now bear with me.) But then, this leads me to believe that one could (or would have to) use Win XP, since you specifically mentioned FXT, which works under XP only. So basically one could build, say, a blade system of 'kinda-sortas' which at least LOOKS like the picture above … that is, one big tower/rack with several motherboards w/ CPUs etc., positioned in a row like blades, but which is not unlike what DAW users are already doing … it's just housed all in one large tower rather than in separate home-type towers. With one large tower, the cabling would be much shorter.

This is the part I was trying to explain … Think of the image rendering farm picture you linked. The CPUs are irrelevant to the cost and performance of the system. It is the overall architecture supporting a process. So yes, those blades could all be running Windows + DAW and FX Teleport. You could stack RAM in them and use them for romplers. You could connect to large/fast SANs (Storage Area Networks) for sample streaming. None of that is the problem …

The problem is the process for communications between the systems. Those are always fit for purpose, not generalized. Generalized = crappy performance. So for FX Teleport, you would add a comm protocol backplane with massive IP bandwidth. That is a single-function backplane. It would be useless for, say, a database transaction farm serving large transactional systems.

EDIT: And "fit for purpose" is extremely modular :slight_smile: but it is also extremely expensive, because you don't get economy of scale. Lots of R&D and production costs for technology that will be obsolete even as it is being installed.

The answer to your question, unfortunately, is YES … or "it depends."

Think of it this way. When you have a lot of traffic, you have to have a traffic cop. But the cop has to know what kind of traffic it is. Sometimes the cop might actually have to look at the contents of the trucks and tell them they need to go to different destinations, or may have to line blue trucks up on the left and yellow trucks on the right … or all blue trucks must be behind yellow trucks if a green car came through first; otherwise blue trucks go first.

"Backplane" is just a fancy way of saying you have a set of roads with a traffic cop (or several). These traffic cops handle incredibly complex or high-volume problems.

What would it sound like if PC 1's audio played before PC 2's and PC 3's audio came in randomly? That traffic is moderately easy to handle with 2 or 3 computers. But what should happen if PC 2 has a hiccup? How do we keep the stream real-time? Do we pause all streams? Do we buffer and add latency to give PC 2 a chance to catch up? How does PC 2 know that it just fucked up, since it's just sending a stream of data?

Backplanes can be dumb as rocks, or provide incredibly complex management systems.



:wink:

Get a room module? :stuck_out_tongue: