PC versus Mac...

I totally agree.

The problems come (on both platforms) when you start
attaching third-party stuff (hardware and software).

At that point you are ‘riding a buckin’ bronco’.
Now if I can only stay on for 8 seconds…

Seems for many folks, Macs are easier to ‘tame’.

{‘-’} the computer whisperer

Ehh, DPC latency anyone?

Aloha H,

Did some digging/research and can’t find any reports of this on Macs.

Seems to be a Dell/Win prob.


{‘-’}

I have a Mac mini and it’s doing great for me. I used to have Windows XP and had to reinstall Windows every 4 months. It was extremely troublesome. With my $1000 Mac mini I easily get 50 tracks with EQ and compression on every track at 24-bit. I have never had a problem with my Mac; it has run for 3 years now.

The only problem I had is with my East West Platinum orchestra. It won’t run right on my Mac mini. So I am thinking of going to PC with Windows 7, a six-core CPU and 16 GB of RAM. Upgrading to 64-bit would cost as much as the Mac mini did, so I am thinking of doing it to get my orchestra working.

I just wish Mac would decouple their hardware from their software. Maybe the day will come. The Mac Pro is just too expensive; it’s a marketing scam. Maybe I will wait and go to PC in a year. So I would go for Windows 7 with a six-core CPU and 16 GB of RAM at 64-bit. It’s the best deal.

…therefore putting software debates aside…the discussion would boil down to $$$ - for a $1000 which would buy a bigger DAW?.. :sunglasses:

You can barely get an Iphone for that price :sunglasses:

You won’t hear about DPC problems on Mac because it’s a Windows OS mechanism. I’m sure Mac OS has some system of its own to perform a similar task.

…if they throw in ear bud headphones at least my studio monitors are sorted… :laughing:

I really, really tried hard not to take the flame-bait :wink: here, but I could only make it 3 weeks since reading this. I do appreciate that you put “lesser” in quotes… I do not know of one single feature or aspect of the core Windows operating system that I would call greater than its Unix counterpart. (Note the lack of quotes!)

There’s a lot more to the history of OS X, Carbon, Darwin, XNU, Mach, microkernels, POSIX and BSD than just “Unix”. There is actually very little that has changed since the advent of Windows NT in the early 90s — which was originally inspired by the SAME Mach kernel that is the core of OS X. Oh, ya, they dropped all non-Intel architecture support. They introduced the “Palladium Project” as Vista, and backed out of that. They almost had a real file system, but it doesn’t even support symlinks (not truly, even in Windows 7). Oh, it does look pretty now, and it finally runs smoothly, thanks to Moore’s law. They still allow driver calls directly into the kernel with insufficient protection. Oh, and the driver model — two levels of interrupts for everything… OMG… I’m ranting. :blush:

Dang, I ate that flame bait, hook line and sinker… :laughing:

Hope I do not get banned for this, but a picture says more than words!

It’s worth noting that all the developers, both Steinberg and the virtual instrument companies, build their stuff on PC and then port it to Mac. What would you prefer? The original or a translation? :wink:

Commodore 64 FTW!

Why is your reluctance to move to an OS that looks a BIT different a reason to move to one that looks a LOT different?

Windows 7 x64 is a fantastic OS for a DAW. I was on the fence when XP was officially phased out, as I’d not gone to Vista, so I went to OSX Leopard for a year or so. When the beta of Win 7 x64 was released I downloaded it out of curiosity, and it was great. When the official version was released I built a new DAW and switched back to Windows from OSX.

I still have both at the studio here, but my main everyday working DAW is a Windows 7 x64 machine :wink:



MC

BTW, so you don’t keep your underwear in a bunch, if you re-read the post you will see that the “lesser” reference was in regard to Mac vs. Unix.

The reason Macs historically were better for real-time processing was because their Motorola processors were RISC processors, which is more expensive and less programmable but faster at whatever it is designed to do than CISC processors.

Ever since Apple abandoned its contract with Motorola and went with Intel, the only difference now between PC and Mac is the operating system. Both machines have the same bottlenecking problem of CISC but we now have much more powerful CPUs that just muscle through the issue.

Mac OS is a very good and stable operating system. Since OS X, it is based on Unix, which historically has been an OS of choice for mission critical applications, like telecom servers. Apple is far overpriced for its performance. Apple is a world of devices that talk well with one another but don’t want to talk to outsiders.

All stigma of Windows instability proverbially went out the window with Windows 7. Windows 7 now shares the same kernel as its server OS counterpart, so Microsoft now has half the code maintenance it once did and more than double the demand for stability, because the same kernel in Windows 7 is used in mission-critical servers around the world.

The way I see it:

  • If you want to use Pro Tools, use Mac.
  • If you want to shell out a bunch of money because you don’t want to learn how to work a computer, use Mac.
  • If you’re not afraid of basic concepts of technology and don’t do stupid things on the Internet that invite viruses, use Windows.
  • If you really know what you’re doing, have the time (aka family support) and patience, and want to be more virus-resistant than Mac, use Linux. (I’m posting this from my Ubuntu netbook :ugeek: ) Steinberg does not support Linux, but there are DAWs available. I even managed to get ReBirth RB-338 running on Linux.

  1. Motorola 68xxx is not a RISC processor family. The PowerPC processors, which were later (from 1994?) used on Macs, are RISC, though.
  2. RISC processors are not more expensive than their counterpart CISC processors. One of the main reasons for developing RISC processors was the ability to make simpler (i.e. cheaper) processors with superior performance compared to CISC ones.

What is this “bottlenecking problem”?

BTW, modern Intel x86 processors are basically RISC processors with additional hardware to decode CISC instructions into RISC ones (extremely simplified explanation).

I would never choose any Unix as a real “mission-critical” OS. Unixes are great OSs for their extremely simple programming interface (everything is a file) and have good enough stability for services needing high availability, but for real mission-critical systems, like airplanes’ FBW, nuclear power plant cooling control, etc. … no way!
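
The “everything is a file” point above can be illustrated with a small sketch: on a Unix-like system, the same file-descriptor read/write calls used for on-disk files also work on pipes, sockets, and devices. This is only a minimal illustration of the idea, using Python’s `os` module as a stand-in for the underlying system calls.

```python
import os

# "Everything is a file": the same descriptor-based read()/write() calls
# used for regular files also drive a pipe, with no special-case API.
r, w = os.pipe()          # kernel object, not an on-disk file
os.write(w, b"hello")     # same call you'd use on a file descriptor
os.close(w)
data = os.read(r, 5)      # same call again, on the read end
os.close(r)
print(data)  # b'hello'
```

The same pattern applies to sockets and character devices, which is what makes the Unix interface so uniform.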

Thank you, Jarno, right on the money.

I don’t know enough about CISC vs. RISC to get into it. My understanding is that the goal of RISC processors was to implement them in jobs that are very repetitive, like batch bank transactions where it’s the same instruction over and over again, which leads me to my next response below…

CISC processors, including the x86, must reprogram themselves for the next operation between every single execution, even if the software sends the same instruction over and over again. In RISC processors, it’s sort of like RPN calculators, where you set the instruction up front, and then push through all the numbers each using that same instruction that was set once at the top of the job. This reduces a certain kind of bottleneck.

An example: if you had to sum many numbers together, in RISC you set ‘sum’ and then push all the numbers through, but in CISC you set ‘sum’ between every operation. On regular calculators, you have to hit the plus sign between every number and then the equals button to push the operation through, so the CPU inside reprograms itself to ‘sum’ every time, because it’s designed to expect another instruction even if it will be the same as the previous one. Even if the hand-held calculator offers the shortcut of pressing the equals button between each number, under the hood the CPU still reprograms itself.

In a Reverse Polish Notation (RPN) calculator, you set the program type once, and then can push through as many operations of the same kind as you like without reprogramming between each cycle. This is, from my understanding, how a RISC processor is designed to work.

As you can imagine from this example, and assuming that my understanding is correct, the need to set an operation type of ‘sum’ between every single cycle would essentially be a type of bottleneck. It’s also easy to extrapolate that a design like RPN would not suit a desktop well, but it would work well in a number-crunching server.

So the point isn’t just a set of instructions and simplicity, it’s a philosophy of when to set the operation type.
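
The calculator analogy above can be sketched in code: set the operation once and stream all operands through it, versus restating the operation between every pair of operands. This is purely an illustration of the RPN-vs-regular-calculator analogy (with hypothetical helper names), not a model of how actual CPUs execute instructions.

```python
from functools import reduce
from operator import add

def rpn_style(op, operands):
    """Set 'op' once up front, then push every operand through it."""
    return reduce(op, operands)

def regular_style(op, operands):
    """Restate 'op' between every operand, like hitting '+' each time."""
    total = operands[0]
    for x in operands[1:]:
        total = op(total, x)  # the operation is re-applied every cycle
    return total

nums = [1, 2, 3, 4, 5]
print(rpn_style(add, nums))      # 15
print(regular_style(add, nums))  # 15 -- same result, different setup pattern
```

Both produce the same sum; the analogy in the post is only about where the operation gets “set up”, once at the top versus between every step.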

Planes and nuclear power plants’ cooling control are certainly more critical systems than the “mission critical” systems I hear mentioned in my career in healthcare IT. I understand the entire point of Bell Labs developing Unix was for better multi-user CPU-time sharing (a form of multi-tasking) than any other OS could offer.

So it sort of makes sense that Apple went with a *nix OS for their answer to the modern multi-tasking computer, the personal workstation. Microsoft failed for many years to fully arrive at the answer to multi-tasking, but now that Win7 shares the same kernel with Windows Server, I think we can all feel more assured that it’s a more stable system than ever before, because I believe Microsoft wouldn’t knowingly put a lower-quality kernel in its server OS than it could.

My conclusion still stands as I stated above.

The calculator comparison does not really apply to RISC in the way you stated. RISC vs. CISC has little to do with outside “purpose” per se, like “batching bank transactions” (assuming that by “batch bank transactions” you mean just that), since there is so much more involved in a bank transaction than a snippet of machine code on a single machine. (Though I understand it was merely an example of a loop/repetitive task.)

(I guess one could rather say that not just the “batch bank transactions” perform better, but the entire bank application as a whole. Same with the calculator.)

In short: a CISC instruction is essentially “microcode” running inside a CPU (which is why it is called a complex instruction) to complete its “task”. A RISC instruction is “straight” code (if you will), and combinations of such instructions can be tailored by a compiler to suit the “task” better than the generic CISC instruction can. The reason is that a snippet of RISC code performs the “task” and nothing more, whereas the CISC instruction potentially does “too much” (since its “microcode” has been preprogrammed to complete a “task” generically).

Simplistically, the word reduced in RISC essentially means that single instructions complete within one clock cycle, whereas a single CISC instruction could execute over several clock cycles (executing microcode).

A simple example: take multiplying two numbers (preloaded into two “registers”). In CISC, there would be one instruction doing this, and that’s what it does. The microcode that executes the multiplication does what it is programmed to do, and does it the same way always, regardless of how the application was compiled. Say this multiplication instruction takes 30 clock cycles to complete. The RISC multiplication, when compiled, can be specifically optimized based on various knowledge of the multiplication at hand, and could (for argument’s sake) be performed in 15 clock cycles, simply because it was customized for this one snippet of code.

The overall result is that the entire application runs faster (and more efficiently).
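
A toy cycle-count model of the multiplication example above, using the for-argument’s-sake figures from the post (30 cycles for the microcoded CISC instruction, 15 one-cycle instructions for the compiler-tailored RISC sequence). All names and numbers here are illustrative and do not reflect any real instruction set.

```python
# Toy model of the post's CISC-vs-RISC multiplication example.
# The 30- and 15-cycle figures are the for-argument's-sake numbers
# from the post above, not measurements of any real CPU.

def cisc_multiply_cost():
    # One complex instruction; its microcode always runs the same way,
    # regardless of how the application was compiled.
    microcode_cycles = 30
    return microcode_cycles

def risc_multiply_cost():
    # Fifteen simple instructions, each completing in one clock cycle,
    # chosen by the compiler specifically for this snippet of code.
    tailored_sequence = ["instr"] * 15
    return sum(1 for _ in tailored_sequence)

loop_count = 1000  # multiply a thousand pairs of numbers
print(cisc_multiply_cost() * loop_count)  # 30000
print(risc_multiply_cost() * loop_count)  # 15000
```

The point the model captures is only the one in the post: a fixed microcoded cost versus a sequence the compiler can shorten for the specific case.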

This was some time ago, and nowadays the arguments are more or less moot. CISC performs basically as well as RISC, advanced compilers utilize CISC better, etc., so the advantage is no longer what it was.

Lots of information is available online if deeper knowledge is desired.