The modular computer concept - for today & tomorrow

:laughing:

Steve, it's not that. But no matter how many times you're told that your idea, while a nice concept, is impractical, and given several reasons why, you don't seem to want to take no for an answer.

This may be an admirable quality if you're someone with a hardware design, Computer Engineering, Electrical Engineering, or some other relevant background (because then you'd have the wherewithal to prove the lot of us wrong). But if you don't, then I have to scratch my head and wonder if you couldn't be recording some awesome shredding pieces for us to admire instead of waxing philosophical like you do.

<3

I'm sure you're all familiar with Moore's Law. Moore's Law has proven to be very accurate, but it's currently running out of steam; we just can't make these transistors etc. any smaller. It's not a limitation of know-how, it's an actual, physical limit. However, two new developments will soon be taking computing into regions of computational speed and power only dreamed of previously: nanowires and graphene. Nanowires are basically transistors that are built vertically, so they make the most of limited space. Graphene, however, is something altogether amazing. It is a material that is visible to the naked eye, so it can be worked with quite easily, and yet it is exactly one atomic layer thick. Because it is only one atom thick, electrons move across it with much greater speed than they do over silicon, plus it generates almost zero heat. Just recently they have figured out how to make a transistor out of graphene; when this material becomes commonplace, it will revolutionize electronics even more than the transistor did (or so they say).

Yes, and if we absorb Moore's observation into a more philosophical phrase, "necessity is the mother of invention" (Plato), we will naturally see some major progress in the field of computing in the near future.

As previously stated, Steve's dream is really only impractical today. Tomorrow will yield a new picture in which it may be the de facto standard.

Our current state of computing modularity is pretty lame. I mean, measured against what we are currently capable of it's great, but it is limiting. As software evolves into patterns different from what has previously been pre-programmed, it will add to that modularity, which, as pointed out earlier, is very important if this is to progress into something better.

Unfortunately (as swamptone alluded to), with money driving planned obsolescence, I think that this mega-trigger (no pun intended) will have to be something unexpected rather than pre-planned.

My complaint about computing tech (personal computing, that is) is that the ergonomics are so quaintly Baroque: little tiny screws, motherboards that don't exactly fit, etc., etc. I'm told that server ergonomics are MUCH more user-friendly and "swappable".

Steve, I like your lack of stagnation. :wink: Without visionary people, there would be little progress.

Today, we are still talking in terms of the PC tower (figure of speech) and an Operating System as two separate things, and of a computer needing a motherboard, a CPU, some storage, and so on. We think in these small terms because it's how things currently work: instruction sets, microcode, buses, RAM, etc.

What if we separated these components along different logical lines? Maybe part of the motherboard largely becomes software instead. Maybe the storage device could be the actual file system, instead of the file system living inside the OS. Blah, blah, blah...

Anyway, I believe that we do not integrate hardware and software enough, and therefore a barrier of thinking exists. For example, instead of trying to delegate small, instruction-level code between cores, one could have entire code segments distributed across Steve's little cubes, making differences in speed between the modules less critical. This is already being done in various ways, e.g. as stated earlier, when small snippets are fed into graphics cards. This was even done by Atari in the early '90s (the Atari Jaguar). However, I am talking about a more autonomous software system in and of itself, delegating logical components, maybe even entire applications or parts thereof, or suites of categorized entities.
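
To make that idea a bit more concrete, here is a rough, purely hypothetical sketch in Python of what delegating whole code segments, rather than individual instructions, might look like: each function is a self-contained unit of work handed to a separate worker process. None of the names are real APIs for anything Steve has proposed; they are placeholders.

```python
# Hedged sketch only: not a real API for Steve's cubes, just ordinary Python
# showing the idea of handing out whole code segments (entire functions)
# rather than scheduling work at the instruction level. Function names are
# invented for illustration.
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_id):
    # Hypothetical self-contained "code segment" number one.
    return f"frame {frame_id} rendered"

def index_documents(doc_count):
    # Hypothetical self-contained "code segment" number two.
    return f"{doc_count} documents indexed"

if __name__ == "__main__":
    # Each submitted call is a complete unit of work, so it matters far less
    # whether the workers run at exactly the same speed.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(render_frame, 1),
                   pool.submit(render_frame, 2),
                   pool.submit(index_documents, 500)]
        for future in futures:
            print(future.result())
```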

I would say, Steve, that the future is on its way. :slight_smile:

This is the part that I keep objecting to. Computer Science people DON'T view it this way. Yours is purely a consumer viewpoint. And guess what, the consumer typically has no idea how the stuff works. You get really simplified, abstract explanations of how something works, and, as with a lot of things, people then assume they know how it actually works.

> What if we separated these components along different logical lines? Maybe part of the motherboard largely becomes software instead. Maybe the storage device could be the actual file system, instead of the file system living inside the OS. Blah, blah, blah...

We call this the HAL (hardware abstraction layer). A simplified version is the Boot Camp stuff that lets you run just about any OS on pretty much any hardware. Think of things like the Nintendo emulators and so forth. This is exactly what they are doing.
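
For anyone who has not run into the term, here is a minimal sketch of what a hardware abstraction layer looks like in spirit (all names invented, and this is not how Boot Camp or any real HAL is actually built): the "OS-level" function only ever sees the abstract interface, so the concrete device below it can be swapped freely.

```python
# Minimal sketch of the hardware-abstraction-layer idea. Every name here is
# invented; it only shows that code written against the abstract interface
# never needs to know which device sits underneath.
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """What the layer above is allowed to assume about storage."""

    @abstractmethod
    def read_block(self, n: int) -> bytes: ...

    @abstractmethod
    def write_block(self, n: int, data: bytes) -> None: ...

class RamDisk(BlockDevice):
    """One possible backend; an SSD driver or an emulated cartridge could
    implement the same interface and nothing above would change."""

    def __init__(self, blocks: int, block_size: int = 512):
        self._store = [bytes(block_size) for _ in range(blocks)]

    def read_block(self, n: int) -> bytes:
        return self._store[n]

    def write_block(self, n: int, data: bytes) -> None:
        self._store[n] = data

def copy_block(dev: BlockDevice, src: int, dst: int) -> None:
    # "OS-level" code: it only ever talks to the abstract interface.
    dev.write_block(dst, dev.read_block(src))

disk = RamDisk(blocks=8)
disk.write_block(0, b"hello")
copy_block(disk, src=0, dst=1)
print(disk.read_block(1))  # b'hello'
```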

> For example, instead of trying to delegate small, instruction-level code between cores, one could have entire code segments distributed across Steve's little cubes, making differences in speed between the modules less critical. This is already being done in various ways, e.g. as stated earlier, when small snippets are fed into graphics cards. This was even done by Atari in the early '90s (the Atari Jaguar). However, I am talking about a more autonomous software system in and of itself, delegating logical components, maybe even entire applications or parts thereof, or suites of categorized entities.

As you pointed out, this is already being done. However, the examples you provided are discrete work objects. These are relatively easy to manage in a distributed manner. Another example is Folding@home. They parcel out complex but discrete units of work to be done on millions of computers and then integrate the results on the other side.
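
A toy version of that parcel-out-and-integrate pattern might look like the following Python sketch; it is a Folding@home-style workflow in miniature, not their actual code, and work_unit is just a made-up stand-in for a chunk of simulation.

```python
# Toy version of the parcel-out-and-integrate pattern. Each work unit is
# independent; the integration happens only after the pieces come back.
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk):
    # Made-up stand-in for a chunk of simulation work.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(work_unit, chunks))
    print("integrated result:", sum(partial_results))
```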

Again, discrete work is EASY. Variant work, where the pieces are not independent of one another, is incredibly complicated. You should look up quantum computing sometime. Parallel processing theory is one of the hottest topics in CS. But, just like adding cars to a road doesn't necessarily increase the number of people the road can carry in a given period of time, adding processors to a job doesn't necessarily make it more efficient.
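
A tiny, assumed example of that distinction: the first loop below is discrete work whose iterations are independent, while the second is a recurrence in which each step needs the previous step's result, so simply adding processors buys you nothing unless the algorithm itself is redesigned.

```python
# Plain illustration of the discrete vs. dependent distinction; nothing here
# comes from the thread itself. The first loop's iterations are independent
# and could be handed to any number of processors. The second is a
# recurrence: step i needs step i-1, so extra processors do not help unless
# the algorithm itself is redesigned (parallel scan algorithms exist, but
# inventing them is exactly the hard CS work being described).
values = list(range(1, 11))

# Discrete work: each result depends only on its own input.
squares = [v * v for v in values]

# Dependent work: each step needs the previous step's output.
running_totals = [values[0]]
for v in values[1:]:
    running_totals.append(running_totals[-1] + v)

print(squares)          # [1, 4, 9, ...]
print(running_totals)   # [1, 3, 6, 10, ...]
```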

That is complete bullshit. If that were true, you would still be using an abacus. :smiley:

This is where knowing nothing about a subject gets you in trouble. That statement is false in Math, Computer Science, Statistics, Philosophy and Physics, just to name a few.

Two somethings don't necessarily do more than one something. Seriously.

This theme is a marketing red herring. We are going in circles. Software has to be written to take advantage of multiple cores. Most software is extremely inefficient with multiple cores, so running on 3 cores may give you only 130% performance. Again, the easy part has been done. Hardware has also been doing this for a long time. How do you think you got multiple cores in the first place? Blade servers, etc. Again, your imagination is just now catching up.
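
Amdahl's law is the textbook way to put a number on this: with a parallel fraction p and n processors, the best speedup is 1 / ((1 - p) + p / n). The snippet below is just that formula (no real software was measured); it shows that 130% on 3 cores corresponds to only about a third of the program running in parallel, and that even a 90%-parallel program tops out near 10x no matter how many cores you add.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) for a parallel fraction p on
# n processors. Illustrative arithmetic only; the 1.3x-on-3-cores figure is
# taken from the post above, everything else is assumed.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(s: float, n: int) -> float:
    # The same formula solved for p, given an observed speedup s on n cores.
    return (1.0 - 1.0 / s) / (1.0 - 1.0 / n)

print(round(parallel_fraction(1.3, 3), 2))  # ~0.35: only a third is parallel
for n in (2, 4, 8, 64, 1024):
    print(n, round(speedup(0.9, n), 2))     # even 90% parallel caps near 10x
```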

However, your imagination did not have to put in the work of understanding how these things work and why they work. So you are thinking in terms of behaviors that seem logical to you but really are not. The logical parts are already working. The illogical parts are the parts that just flat out don't work the way your brain wants them to. Not because some CS guy isn't creative enough, but because the process does not benefit from the change.