Open SCOPE DSP platform released!

Today, the German company Sonic Core released their awesome SCOPE DSP platform (formerly developed by Creamware) as an open DSP platform.

This opens the door for third-party developers to write or port their software and plugins to this DSP audio platform.
Wouldn’t it be great if Cubase/Nuendo integrated the SCOPE technology using the OPEN SCOPE API?

For example:

_"Parseq - A SCOPE powered DAW.
Parseq is new music creation software with seamless SCOPE DSP hardware integration. It is one of the first applications utilizing the new OPEN SCOPE API.

Features
Just like in a common VST host you are now able to load the SCOPE synthesizers and FX directly into the tracks of the Parseq DAW, create and arrange MIDI sequences for the synthesizers and create effect automations.

All channel strips are calculated and mixed on the SCOPE DSP hardware, just like you are used to from the old SCOPE v5.1 software. Additionally you can add tempo synchronized audio tracks, resample the SCOPE output and export whole tracks as audio files. On top of that Parseq offers full VST support and latency correction to unite the power of SCOPE with all the possibilities from the VST world in a fully integrated, easy to use software application.

Concept
Parseq is designed to be as open as possible, with the core application developed by AudioBiscuit and the possibility for users and 3rd party developers to extend its functionality through a custom plug-in format (for example to create new integrated sequencers or channel strips for the mixer) and an extensive Lua scripting engine._


_"For SONIC CORE “sharing our talents and resources” means to open our technology. It is our seed to The New World of Music. It is revolutionary: if you ever had a vision of an excellent DSP hardware powered audio solution in mind,it can now be realized by using the OPEN SCOPE API and SCOPE DSP hardware.

The first two applications currently being created based on OPEN SCOPE are the SCOPE 6 studio environment by Sonic Core and the innovative Parseq by AudioBiscuit. Both applications are designed to be “open” standards, and can be extended by anybody who wishes to implement their ideas.

What SCOPE is about:

SCOPE is an excellent and powerful DSP Audio Platform consisting of hardware and software running on a computer. For years SCOPE has been well known for its excellent, analogue-like sound quality, its outstanding flexibility and variety. In the past, SCOPE was only able to load plug-ins that were created with the SCOPE development kit; SCOPE was a closed system. Now, with OPEN SCOPE, every imaginable DSP hardware based audio solution is possible.

SCOPE also means a huge, talented and loyal community of creative and professional musicians, producers and sound designers as well as developers around the globe. Without these friends, SCOPE would never have become what it is today. So, SCOPE actually means this: love and passion for music, for true people and for true sound. SCOPE has always been magic. It has always been a cult – through the people who are connected to it.



What is OPEN SCOPE in particular?

OPEN SCOPE is the most powerful DSP audio hardware open for 3rd party “visionaries”.

- develop any kind of software application that requires strong DSP hardware
- use any C++ library to create an application using SCOPE hardware
- the software is easily adaptable to Mac OS / iPad / Linux / etc.
- implement the SCOPE DSP hardware in your system, whatever it is planned to be: a guitar effect rack, a synthesizer, a digital mixer, an I/O interface or (as we did) an XITE.



OPEN SCOPE GUI API

The OPEN SCOPE GUI API (Graphical User Interface / Application Programming Interface) provides a way to create GUIs for OPEN SCOPE applications and plug-ins.

Features:

- no limits to what a new GUI can look like
- create modern, up-to-date interfaces
- add the features you are missing
- based on a cross-platform framework for OS X, Linux, iOS etc.
- any C++ library (e.g. JUCE, Qt, VSTGUI etc.) can be used
- completely detached from the previous SCOPE SDK
- no longer limited to controls provided by the previous SDK

JUCE

Using JUCE is one way to create GUIs for OPEN SCOPE applications or plug-ins. We provide a demo application using the awesome JUCE library from Raw Material Software (used in many VSTs and music applications), full Doxygen documentation and a quickstart guide to get you started. Using JUCE to create the GUI allows the use of an easy-to-handle WYSIWYG editor."_
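
Just to make that a bit more concrete, here is a minimal sketch (my own, not from Sonic Core) of what a JUCE-built control panel for an OPEN SCOPE plug-in could look like. The JUCE calls are real; ScopeDeviceEditor and sendParameterToScope are hypothetical placeholders, since the actual OPEN SCOPE API is not shown in the announcement.

```cpp
// Minimal sketch: a JUCE component acting as the GUI of a hypothetical
// OPEN SCOPE device. Only the JUCE side is real; the OPEN SCOPE bridge
// (sendParameterToScope) is a placeholder for whatever the API provides.
#include <JuceHeader.h>

class ScopeDeviceEditor : public juce::Component,
                          private juce::Slider::Listener
{
public:
    ScopeDeviceEditor()
    {
        cutoff.setSliderStyle (juce::Slider::RotaryVerticalDrag);
        cutoff.setRange (20.0, 20000.0);
        cutoff.addListener (this);
        addAndMakeVisible (cutoff);
        setSize (200, 200);
    }

    void resized() override
    {
        cutoff.setBounds (getLocalBounds().reduced (20));
    }

private:
    void sliderValueChanged (juce::Slider* s) override
    {
        if (s == &cutoff)
            sendParameterToScope (0, cutoff.getValue());   // push the value to the DSP side
    }

    // Hypothetical stand-in for the OPEN SCOPE call that would send a
    // parameter value down to the DSP hardware.
    void sendParameterToScope (int /*paramIndex*/, double /*value*/) {}

    juce::Slider cutoff;
};
```

The point is simply that the GUI layer is ordinary cross-platform C++; everything SCOPE-specific would be confined to calls like the placeholder above.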




Open SCOPE DSP platform:
http://sonic-core.net/joomla.soniccore/index.php?option=com_content&view=article&id=335




Would be great indeed! Congrats SonicCore! :slight_smile:

I would love to see some kind of integration, whether it comes from Steinberg or the Scope 6/Open Scope developers. Best would be to have Scope plugins as VSTs or VSTis while still having the amazing routing capabilities of Scope available, as well as the possibility to tweak the signal somewhat before it enters Cubase.

Cheers anyway Rolf

Hi,

I don’t want to be too negative, but I really think that the days of DSP hardware are over.
Native is the way to go, at least for me…

Regards,
Paul

I was just thinking the same thing. They thought of this a bit too late, now that we have CPUs that offer MUCH more computing power than any DSP hardware on the market. Even VEP won’t be necessary in the not-so-distant future. They should’ve done this when the UAD-1 was still around. Maybe then SCOPE would’ve been a successful platform, who knows?

Dead in the water

Hippo

I totally agree, Rolf. This would be like getting the best of both worlds: Scope SFP mode and Scope XTC mode… it’s my biggest SCOPE feature request. XTC mode never became a mature thing.
Gary Bogdanoff (Sonic Core) recently told me that they won’t develop the XTC mode any further in the immediate future (probably because of the Open SCOPE concept), so I hope Steinberg will take a step forward and provide the ability for SCOPE DSP plugins to be integrated in the VST host, like VSTs and VSTis already are…

I thought that you were right about modern CPUs and native processing… until a month ago, when I decided to give SCOPE a try again after many years of working with quality VSTs only.
It became absolutely clear to me: the difference in sound quality is HUGE
(otherwise I probably would not have posted this topic :wink: )

Why is this posted here instead of the Lounge? :unamused:

It may be that the sound quality of the SCOPE plugins is better than what’s out there (and that’s debatable considering how good VST equivalents have become in recent years). But we’re talking about processing power. The reasons why DSP hardware like SCOPE and others were created were to 1) offload processing from the CPU, 2) assure system stability via a closed environment (Pro Tools), 3) create a better copy protection scheme… and not in that particular order.

Like I said, modern CPUs are powerful enough to handle pretty much any task you throw at them. And, if you still need more, there’s VEP and other similar solutions. The other thing is the fact that there are companies out there that have created plugins that are just as good, if not better, than those from UAD, Duende, etc. Just look at Nebula, PSP Audio, Softube, and even Cytomic as examples. Because of these reasons most people prefer to use plugins that don’t impose hardware dongles on the user. There’s no need to offload processing power anymore, so the only thing left is sound quality. If SCOPE offers such superior sound quality compared to native offerings, then that is a different story (I highly doubt that, though).

Modern CPUs have MUCH LESS signal computing power than modern DSPs; they are built to process a completely different set of data. You have not seen any graphics cards built around generic CPUs since the 1980s because it makes no sense, and the same goes for DSPs: even budget soundcards have custom DSPs on board rather than relying on CPU power, even though that is available.

Why do you think Steinberg, Yamaha et al. put DSP chips in their audio interfaces and not microcontrollers?

Scope, for instance, has zero latency from input to output (while processing); all the latency it has is created in the DSP/host interface if you are using it with a DAW. How close to zero latency do you think your PC can manage from input to output while processing at the same time?
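
To put rough numbers on the host side of that comparison: a native DAW adds at least one input buffer and one output buffer of latency on top of any processing. This little sketch (illustrative buffer sizes, converter and driver overhead ignored) shows what that works out to.

```cpp
// Back-of-the-envelope buffer latency of a native host: one buffer in,
// one buffer out. Buffer sizes are typical examples, not measurements.
#include <cstdio>

int main()
{
    const double sampleRate = 48000.0;            // Hz
    const int bufferSizes[] = { 32, 64, 128, 256 };

    for (int frames : bufferSizes)
    {
        double roundTripMs = 2.0 * frames / sampleRate * 1000.0;
        std::printf("%3d samples -> ~%.2f ms round trip\n", frames, roundTripMs);
    }
    return 0;
}
```

Even at a fairly aggressive 64-sample buffer that is roughly 2.7 ms of round trip at 48 kHz before converters and driver overhead are counted, which is the gap the post above is pointing at.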

Comparing processing power on these is apples and oranges anyway. CPUs are good at moving bits around, so they are great for recording, sampling, time-slice/convolution reverbs, delay and other tasks that are primarily moving data with not too much processing on the spot; they are hampered on modern PCs to a degree by the architecture and the OS’s support for real-time operations, but close enough. DSP chips are useless at most of that stuff but excel at processing complex signals, so mixing, natural and electronic instrument emulations, synthesised reverbs, compression etc. (some tasks like EQ have problems on both modern CPUs and DSPs due to lack of data width).

In an audio situation it is more helpful to think of an audio-oriented DSP like the Analog Devices SHARC as having more bandwidth than a normal CPU, rather than looking at the raw processing power, since it is not really comparable (they are simply not performing the same tasks). DSPs are also designed to help you with issues like real-time response, something that is not high on the list for a generic CPU.

In the real world the difference is quite noticeable. I mix all my stuff through an 11-year-old Creamware card bought off eBay for 20 pounds, which sounds ridiculous given that I have two modern quad-core computers with mid/high-end DACs hooked up to them, but the fact is that the mixer on the Creamware has more bandwidth (i.e. sounds significantly better) than any of the native CPU ones I have tried; the difference in sound is noticeable even to amateurs. A similar situation applies to the synth models: the Minimoog plugin for the system is closer to my real Minimoog than the VSTs I have tried to replace it with. The VSTs sound kind of dead next to the DSP model, and seriously dead next to the real thing. (In fact I think those VST plug-in makers should be forced to change their ads to “reminiscent of” rather than “sounds like” or “emulated”; I downloaded an Oberheim OBX emulator demo recently that reminded me more of a Solina string machine than the OBX sitting next to my computer.)

The annoying thing is that the Creamware synth and mixer models are over 10 years old. If CPU power was the main thing limiting the VSTs in the past, they should sound a hell of a lot better than the old Creamware models by now, but things are sadly not as simple as that.

Comparisons as an “argument for” may be devalued depending on when one grew up with the technology. You or I might notice, but a twenty-something may not care that much. Though the jury is still out, for me, as to the importance of this development. I can see points both ways.
Does it make life simpler? Or does it introduce something that, while superior, involves further study and yet more operations to apply before product output?
Interesting.

Fantastic, show me mathematically correct EQ processing of a 192 kHz 24-bit signal across the spectrum…


(Hint: it can’t be done in anything close to real time on a 64-bit processor.) What you meant to say was probably “modern CPUs are powerful enough for me”, which is another sentence altogether.

Think of a simple VST 2.4-based one-oscillator synthesiser that differs from the usual only in that every parameter can be modulated by any other parameter input, function output and sound output. A modern processor, let’s say an AMD 6-core, would probably be able to run thousands of voices of this synth concurrently if it was used as a normal synth. However, if all modulation options were used at once on a 16 kHz signal it would probably struggle with just one voice, and definitely so if you were using high bit depths and sample rates and upgraded the unit to the VST 3.5 spec, which has higher resolution control signals.
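
As a toy illustration of why that gets expensive, here is a sketch of the per-sample cost of an “everything modulates everything” matrix. The structure and the numbers are purely illustrative (no particular VST, no real synth), but the quadratic growth is the point.

```cpp
// Toy "everything modulates everything" synth core: with N parameters the
// modulation matrix is N x N and has to be walked every single sample.
#include <array>
#include <cstddef>

constexpr std::size_t N = 32;                       // parameters / mod sources

struct ModMatrixSynth
{
    std::array<std::array<float, N>, N> amount {};  // amount[dst][src]
    std::array<float, N> value {};                  // current parameter values

    float renderSample()
    {
        std::array<float, N> modulated {};

        // N*N multiply-adds per sample: ~50 million per second per voice
        // at 48 kHz for N = 32, before the oscillator or filter do any work.
        for (std::size_t dst = 0; dst < N; ++dst)
        {
            float m = value[dst];
            for (std::size_t src = 0; src < N; ++src)
                m += amount[dst][src] * value[src];
            modulated[dst] = m;
        }

        value = modulated;      // feed the results back as next sample's sources
        return modulated[0];    // pretend parameter 0 is the audio output
    }
};

int main()
{
    ModMatrixSynth synth;
    synth.value[1] = 0.5f;      // seed one parameter
    synth.amount[0][1] = 0.25f; // let parameter 1 modulate parameter 0
    float out = 0.0f;
    for (int i = 0; i < 48000; ++i)   // one "second" at 48 kHz
        out = synth.renderSample();
    return out > 1.0f;          // keep the loop from being optimised away
}
```

Doubling the parameter count quadruples the per-sample work, and none of it can be deferred or buffered away if every value can feed back into every other one.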

A modern CPU simply does not have enough parallel processing power to handle multiple complex calculations like this stacked on top of each other. DSP farms, however, solve that simply by adding more DSPs and executing in parallel, which is not easy to do on a shared-memory system, where you also cannot just add more cores; the DSP chips are designed to automatically share tasks between them (the ones in the Scope system anyway; it is slightly more complex on a Motorola 56***).

In the analogue world this sort of synth would be trivial to make and all responses would be real-time. No one would make it, simply because it makes no sense to spend money on modulation options where we could add another oscillator, a better filter and so on, but it is just an example to get people thinking. At the same time this sort of VST instrument is neither far-fetched nor difficult to code. Apart from the fact that no one seems interested in doing anything except emulations of already existing instruments these days, there is the simple truth that to emulate or create complex response systems like feedback synthesisers and stream synthesisers, current CPUs are just not anywhere near powerful enough.

Even fairly simple software like the Composers Desktop Project has modules that are not able to run in real time. How can we have enough processing power on our computers if a real-time activity like music making cannot be performed in real time?

Absolutely true, but that is really another issue: perception vs. reality vs. expectations. The problem I have is the sort of assumption that I cannot hear the difference, that it does not matter to me, or that it is not measurable for that matter. In consumer audio that is understandable enough, since functions and convenience are as big a factor as anything or bigger, but in semi-pro and pro audio the audio “quality” is part of the functionality of the unit, subjective as that may be.

Are we ancient enough to have literally “grown up” with the Minimoog… (God, I feel old now)

For my part I thought I was answering a post in the Lounge. :blush:
Sorry… someone move the discussion

Reiknir,

Alright, fair enough. But have you seen what is being done with native plugins by those companies I mentioned earlier? For example, PSP recently released a Pultec-style EQ that, according to a group of people at the Gearslutz forums (and there are audio files available there if you’d like to take a listen for yourself), sounds better than the same EQ emulation made by UAD. Another example is ‘The Glue’ by Cytomic. This one is an SSL 4K buss compressor clone that sounds extremely close to the hardware and even has lower distortion than the same emulations sold by UAD or Waves. On the synth side, you have u-he with the recent release of Diva, which is an analog synth emulator. u-he also has ACE, which is another analog synth with very analog sounds. We’re at the point where native plugins are capable of giving similar results to the hardware they imitate, and that is all being done without a hardware dongle and with CPU-efficient coding.

Also, a modern CPU can handle projects with 100+ tracks, 100+ heavy EQs and compressors, and multiple synths at very low latencies. No, it’s not real time. But it is also not as bad as you make it sound. There’s really no need to process at 192 kHz either; that is just pure marketing BS. Lots of plugins offer oversampling as a processing option if need be. And even if they didn’t, you could record at 88.2 kHz or 96 kHz and that would give you more than enough “resolution”.

Now, don’t get me wrong. If there is a sound quality difference between DSP plugins and native plugins, then that’s all good. But I highly doubt it’d be that much different, considering the numerous hardware vs. software comparisons that can be found out there, and especially considering the price difference between the two platforms. Once you get to the point of diminishing returns, is it really worth the extra cash? YMMV.

Aloha,

Kool thread guys. Lots to consider.
{‘-’}

_Also, a modern CPU can handle projects with 100+ tracks, 100+ heavy EQs and compressors, and multiple synths at very low latencies. No, it’s not real time. But it is also not as bad as you make it sound. There’s really no need to process at 192 kHz either; that is just pure marketing BS. Lots of plugins offer oversampling as a processing option if need be. And even if they didn’t, you could record at 88.2 kHz or 96 kHz and that would give you more than enough “resolution”._

I have to agree mostly here. You would maybe use 192 kHz if you were developing something more “expensive” and needed that reference, but human ears would really not notice, although a machine’s “ears” may detect something useful for another purpose (designing a microphone, maybe?). But that is not an area where the average Cubase user, on this forum especially, will go very often, if at all.

I suspect there may be just a hint of “willy waggling” going on here. :mrgreen: (“Mine goes up to 192.” Hm?)
It’s an advance, but whether it’s an advantage I’m not sure.

Any news on this?

It’s in beta. The SQ development team is only two guys and they have been busy on the bread-and-butter side of the company (broadcast software). Q1 2013, possibly.