How does 64 bit software development work?

Hi,
When I was in college, writing software for two different systems was basically the same: you just used a different compiler to translate your code (FORTRAN, BASIC, C++, or whatever) into the machine language for the OS of choice.

Does it not work basically the same nowadays? Why does it take so long for developers to get a 64 bit version of VST plug-ins? Is it just because they don’t want to splurge for the 64 bit version of the SDK? Or, is there something inherently different in the coding?

Years back, I wrote several small programs (really, just for fun) using Visual C++. I still have them. I can’t help but think that if I wanted them in 64 bit, I would go buy Visual C++ 64 bit version, open up the project file, make the tweaks necessary, and compile. I am in a completely different line of work now and haven’t written a program in years. Am I that far off?

Just wondering…
Sincerely,
J.L.

example
In a 32-bit OS a float and a pointer to a float are both 32 bits
In a 64-bit OS a float is (still) 32 bits but a pointer to a float is now 64 bits

The size of any data structure which includes pointers must change
And a new data structure size could break existing code in a way that might not have been considered during the initial design in the previous 32-bit world …
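
To make that concrete, here is a minimal sketch (a made-up SampleBuffer struct, not from any real SDK) of how the very same C++ source produces different struct sizes on the two targets:

    #include <cstdio>

    // Hypothetical struct: a pointer plus a length, as a plug-in might hold.
    struct SampleBuffer {
        float* data;   // 4 bytes when built 32-bit, 8 bytes when built 64-bit
        int    length; // 4 bytes on both
    };

    int main() {
        // Typically prints 8 on a 32-bit build and 16 on a 64-bit build
        // (the pointer grows, and alignment padding follows it).
        std::printf("sizeof(float)        = %zu\n", sizeof(float));
        std::printf("sizeof(float*)       = %zu\n", sizeof(float*));
        std::printf("sizeof(SampleBuffer) = %zu\n", sizeof(SampleBuffer));
        return 0;
    }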

… und so weiter

peace y’all
pj

In that case the initial design of the software is BAD! I can’t think of a situation where you would write code that depends on pointer size at the source level (unless you’re writing an OS kernel or device driver, of course).

Do you think that ASIO is not a kernel level driver framework?

Also, I can’t comment for sure, but I also imagine any device drivers (whether ring 0, 2 or 3) may have new constructs that they need to be aware of for 64-bit development.

Finally, I’m not sure a float is 32-bits on a 64-bit OS. I remember when an integer was 16-bits originally and was equivalent to a short integer (in C / C++) but when the 386 was released it changed to 32-bits and was equivalent to a long integer.

I’m sure there are other notable differences between the two operating systems.

ASIO? Yes. VST? No!

Well … what “float” or “double” or “fancyfloatingpointtype” means is defined by the compiler, not the OS (it may differ between versions of the same compiler on different OS platforms, but that’s a different story). But most modern (C and C-derived) compilers define “float” as the IEEE single-precision format and “double” as the IEEE double-precision format (or an architecture-specific equivalent).
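
Just to illustrate (a generic C++ snippet, not anything from a plug-in SDK): you can ask the compiler at compile time whether its float and double really are the IEEE formats:

    #include <limits>

    // These hold on virtually every mainstream desktop compiler today.
    static_assert(sizeof(float) == 4,  "float is not 32 bits here");
    static_assert(sizeof(double) == 8, "double is not 64 bits here");
    static_assert(std::numeric_limits<float>::is_iec559,
                  "float is not IEEE 754 single precision here");
    static_assert(std::numeric_limits<double>::is_iec559,
                  "double is not IEEE 754 double precision here");

    int main() { return 0; }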

Yeah, you have a good point. I forgot that float is based on the IEEE specification.

Aloha guys,
even tho’ I do not have a clue as to what you guys are talking about,
still I find this thread to be fascinating!

I have always had tremendous respect for those that can ‘speak’
another language and it seems that is what is going on here.
Right on guys!
I have always loved listening to the ‘singsong’ of another
language that I don’t understand.

Trying to learn to programme ‘puters at my age would be a total waste
of time/money and common sense but reading conversations like
this one provided by you guys, sure get my ‘old ass’ learning juices flowin’.

Tanx guys. Sending much Aloha.
{‘-’}

Curt,

What we’re talking about is how a computer thinks of numbers. Compare it to how much detail you remember about a particular experience. Your recollection is limited to a certain amount of detail.

Computers are the same in a sense, except that the programmer can specify how much “detail” should be remembered. We know it as precision, and it describes the amount of space a number takes up in memory. Remember that computers think in binary so…

  • A 16-bit (unsigned) integer goes from 0 to 2^16-1 (65,535)
  • A 32-bit (unsigned) integer goes from 0 to 2^32-1 (4,294,967,295)
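
For the curious, here is a tiny C++ illustration of those ranges (nothing audio-specific, just the machine confirming the arithmetic):

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        std::cout << "16-bit unsigned max: "
                  << std::numeric_limits<std::uint16_t>::max() << "\n"; // 65535
        std::cout << "32-bit unsigned max: "
                  << std::numeric_limits<std::uint32_t>::max() << "\n"; // 4294967295
        return 0;
    }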

Floating point numbers, though, are different in that there’s no easy way to specify where the decimal goes. So an organization called the Institute of Electrical and Electronics Engineers (IEEE) came up with a way to represent floating point numbers in a binary fashion. Essentially they are comprised of three parts: the sign; the mantissa; and the exponent.

  • The mantissa is the number itself.
  • The exponent is the power of 2 by which the mantissa is scaled.

The sign is 1 bit.
The exponent field is 8 bits, stored with a bias of 127, and covers roughly -126 to +127 for normal numbers.
The mantissa is 23 bits.

In total this is 32 bits (4 bytes). Specifically, this type of floating point (making the exponent explicit is what lets the decimal point “float,” hence the name) is called single precision and gives you only about 7 significant decimal digits. If you need more precision then you go to a double precision floating point (a “double” vs. a “float”), which basically has more bits for each component, is 8 bytes in size, and gives you roughly 15-16 significant digits.
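
Here is a small sketch (an arbitrary example value, purely for illustration) that pulls those three pieces out of a real float in C++:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        float value = -6.25f;   // -6.25 = -1.5625 * 2^2

        std::uint32_t bits;
        std::memcpy(&bits, &value, sizeof(bits));  // copy the raw 32 bits

        std::uint32_t sign     = bits >> 31;                                       // 1 bit
        std::int32_t  exponent = static_cast<std::int32_t>((bits >> 23) & 0xFF) - 127; // 8 bits, bias 127
        std::uint32_t mantissa = bits & 0x7FFFFF;                                  // 23 bits

        // Prints: sign=1 exponent=2 mantissa=0x480000
        std::printf("sign=%u exponent=%d mantissa=0x%06X\n", sign, exponent, mantissa);
        return 0;
    }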

Note, however, that because we think in decimal and computers think in binary there are round off errors that theoretically could occur. Old ALUs (Arithmetic Logic Units, which used to be separate chips) and modern CPUs (with the ALU built in) have corrective logic to avoid round off up to the degree of precision guaranteed by the underlying data format.
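
A tiny example of that round-off (plain arithmetic, nothing audio-specific): 0.1 has no exact binary representation, so adding it ten times doesn’t land exactly on 1.

    #include <cstdio>

    int main() {
        double sum = 0.0;
        for (int i = 0; i < 10; ++i)
            sum += 0.1;                 // ten "tenths"

        std::printf("%.17f\n", sum);    // prints 0.99999999999999989, not 1
        std::printf("%s\n", sum == 1.0 ? "equal to 1" : "not equal to 1");
        return 0;
    }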

Does this clarify?

Perhaps one of you knowledgeable gents can answer/explain this:

The UAD plugins are 32-bit and the company has said little about providing 64-bit versions… I’ve read that this is because the SHARC chips the UAD cards use are 32-bit in design. Does that sound like a “sound” explanation? :laughing:

I’m most definitely not a coder, but isn’t the UAD GUI the problem? The GUIs are 32bit and, I would have thought, run on the computer, while the audio processing that’s done by the SHARC chips would still be 32bit and should have no bearing on 64bit compatibility. They already have 64bit drivers but the plugs still only run as 32bit.

So wouldn’t UAD just need to update the front end part for full 64bit compatibility?

It’s probably way more complex than this, but then who really knows?

No no no…The UAD chip is undoubtedly a DSP, which is like a fancy schmancy calculator that is optimized for speed. If the DSP is 32-bit, it means that it operates on 32-bit integers only (and possibly 32-bit fixed point decimals, i.e. where the decimal is always inferred to be at a specific position…contrast this with the floating point decimal that I described where the special format of the number is required to allow the computer to understand how to interpret the data). Therefore, 64-bit integers would have to be calculated in a special fashion which is considerably slower than using a native “add these two numbers” instruction.
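
As a rough sketch of why that’s slower (plain C++, not anything resembling UAD’s actual code): a 64-bit add on hardware that only has 32-bit operations has to be stitched together from two adds plus carry handling.

    #include <cstdint>

    // A 64-bit value held as two 32-bit halves, the way 32-bit-only
    // hardware would have to handle it.
    struct U64Parts {
        std::uint32_t lo;
        std::uint32_t hi;
    };

    U64Parts add64(U64Parts a, U64Parts b) {
        U64Parts r;
        r.lo = a.lo + b.lo;                  // add the low halves (may wrap around)
        std::uint32_t carry = (r.lo < a.lo); // 1 if the low add overflowed
        r.hi = a.hi + b.hi + carry;          // add the high halves plus the carry
        return r;                            // several operations instead of one
    }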

This is why you haven’t seen a native 64-bit plug-in. You can’t just update the driver and be done with it.

Yeah but, couldn’t they write a 64bit compatible front end and still pass 32bit data to the DSP?

Definitely! Lots of 64-bit device drivers still pass 8-bit data to devices.

We have to make a distinction between:

  • Software architecture bit depth (the size of the data we are processing). This is what we are talking about when we argue whether a 64-bit mixing engine sounds better than a 32-bit one.
  • OS architecture bit depth. This is basically only about the memory address size (and the amount of memory) the OS is capable of handling.
  • Hardware architecture bit depth, which can be divided into two categories: data size and address size. Yes, these can differ, as seen for example in the ’70s and ’80s “8-bit” microprocessors, which had a 16-bit address size, and the original “16-bit” 8086, which had 20-bit addresses.

When writing normal software (or native VST plug-ins), the software designer doesn’t have to think about any of these. It’s the compiler’s and OS’s job to take care of them.
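
For example, a tiny hypothetical sketch (not the actual VST API): ordinary processing code like this contains nothing that cares whether the target is 32-bit or 64-bit; you just recompile it.

    // Hypothetical gain function, the kind of code that makes up most of a plug-in.
    void applyGain(float* samples, int numSamples, float gain) {
        for (int i = 0; i < numSamples; ++i)
            samples[i] *= gain;   // the samples stay 32-bit floats on either target
    }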

When it comes to UAD-style plug-ins things get a bit more complicated. You’ll have 2 layers:

  • The native part, which is just like any software. No need to even know whether your hardware/OS is 32- or 64- or even 19754-bit. Just recompile your source code and you’ll be rocking.
  • The device driver part, which communicates with the OS and the hardware. These may have two different data sizes, as is the case with 32-bit UAD DSP chips and the host computer’s 64-bit processor running a 32-bit or 64-bit OS. But even here only a very tiny bit of your code should depend on the OS bit depth (see the sketch below).
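
Here is what that tiny bit-depth-sensitive part might look like in spirit (hypothetical names, definitely not UAD’s code): data shared with the hardware/driver gets laid out with fixed-width types, so it looks the same whether the host build is 32-bit or 64-bit.

    #include <cstdint>

    // Hypothetical command block sent down to a DSP card.
    struct DspCommand {
        std::uint32_t opcode;       // always 4 bytes
        std::uint32_t bufferOffset; // an offset rather than a raw pointer,
                                    // because pointer size differs between builds
        std::uint32_t numSamples;   // always 4 bytes
    };

    static_assert(sizeof(DspCommand) == 12,
                  "shared layout must not depend on host bitness");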

In other words UAD are just dragging their feet over releasing 64 bit compatible (no bridging) plugins.

No they’re not dragging their feet. They have stated that they are in the process of converting the plugs to 64 bit on their Facebook page. They have a lot of plugs to convert which is why it’s taking a while.

Other than better compatibility with 64 bit hosts (the drivers are already 64 bit compatible), there obviously won’t be any other advantages; plugin counts are still limited by the card etc.

Here’s UA’s Facebook message re 64 bit:

Universal Audio Re: the Great 64-bit question: With over 50 plug-ins in the UAD catalog, 64-bit support is simply taking us a while to complete. We know it’s very important to our customers, so it’s a very high priority for UA. Rest assured, 64-bit compatibility is coming, but a solid date is yet to be announced. It’s worth noting that UAD plug-ins will work with any 64-bit DAW, as long as the DAW supports a bit bridge. (We know many of you guys know this, but just in case.) Generally, there’s always a balance between UAD system improvements and plug-in development. For v.6.0, we focused on getting seamless Pro Tools / RTAS integration; 64-bit is coming. We know this may not be what everyone wants to hear, but thank you for your patience. / UA

Mark

Hard to tell. They may, of course, have been writing bit-depth-dependent code in the past when they didn’t need to, which makes things complicated. And if they have lost the programmers who originally wrote that software (and those programmers didn’t write good documentation of the code), they may be in REALLY deep sh*t. The same goes if the people who wrote the original OS-specific device-driver code are now long gone.

But this is just speculation. Not a single outsider will ever hear the truth behind these kinds of things.

In an ideal world, if everything has been done “by the book” from the start, the transition from 32-bit to 64-bit is just a piece of cake. In reality … well … I’ve witnessed both cases: I converted a HUGE software package from 32 to 64 bits by entering just one command, “make all”, but I’ve also banged my head against a wall for weeks with a tiny piece of software when converting it from one minor OS variant to another … and finally re-wrote it from scratch.

I would guess that Pro-Tools/RTAS integration has taken precedence over full 64 bit compliance. Thanks for the info Mark.

I can see why but in my world it would be the other way round :mrgreen:

Using UAD plugs under a 64-bit regime and a bridger is NOT a solution; typically, only one instance can be launched, and the CPU load is much larger than using them in a 32-bit environment.

WE NEED 64-bit UAD! YESTERDAY!!!

Aloha f
and yes it does!
You are quite the teacher and I thank you for that info.
I was always good at math stuff in school.

This part really intrigued me:

Note, however, that because we think in decimal and computers think in binary there are round off errors that theoretically could occur.

Sounds like this is where the fun/action is.

As I said I’m a lil ‘long in the tooth’ to learn all this ‘geekdom’
but I think I will check into a book (audio book) or two on the subject.

Tanx again to all you guys.
It’s nice reading posts by some really sharp minds.
{‘-’}