New Intel PC

Cool, thanks very much!

Thank you all!

This is all very interesting!

Not that it really matters, but given you’re using language like “never experienced anything like this” and “harnessing the raw power of this setup” I think other people here should be aware that the Ultra 9 285K only has 8 performance cores. I won’t say it’s a “poor” choice for a DAW system, but where performance matters is in real-time processing - so performance cores matter here. And not to cloud-up and rain on the “PURE GOLD” parade of Process Lasso, but if you didn’t have 16 efficiency cores getting in the way of performance-core processes, you may not need to worry about it in the first place. But sure, with an 8:16 p:e ratio, I can see that affinity features could be necessary, even if self-inflicted.

Great system, of course, but I was thinking that maybe a tad less hyperbole would let people focus on the details more. They may find that other options (like AMD Ryzen) end up being better solutions in the long run. Again, not taking away from the system, but just expanding the scope a bit.

Hi there Thor.
I am not a technical expert, meaning I do not actually know how much better the P cores are than the E cores in percentage terms.
But in my findings so far, even the E cores are more than able to pull serious weight in Cubase. I can see this via the monitoring in Task Manager. Cubase is really able to distribute the projects well across the 22 remaining cores.

I think maybe another reason I am finding this system so fluid, stable, responsive and downright awesomely powerful :smiley: is the fact that all the cores are real here - in other words, no hyperthreading.

Yet another reason might be that in my setup, via Process Lasso, I have “banned” Cubase from using P cores one and two so that Windows has these to itself, while assigning the remaining 6 P cores and 16 E cores to Cubase exclusively.
Keeping Cubase and Windows off each other’s cores seems to work really well here.
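
For anyone curious what that split looks like outside the Process Lasso GUI, here is a rough Python sketch using psutil. It is just the equivalent affinity call, not how Process Lasso works internally, and the core numbering is my assumption (P cores as logical CPUs 0-7, E cores as 8-23, given the 285K has no hyperthreading) - verify your own layout before reusing it.

```python
import psutil

# ASSUMPTION: P-cores are logical CPUs 0-7 and E-cores are 8-23 on a
# 285K (no hyperthreading); check your own core numbering first.
P_CORES = list(range(0, 8))
E_CORES = list(range(8, 24))

def pin_cubase(allowed_cpus):
    """Restrict every running Cubase process to the given logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"]
        if name and name.lower().startswith("cubase"):
            proc.cpu_affinity(allowed_cpus)

# Reserve the first two P-cores for Windows, give Cubase the rest.
pin_cubase(P_CORES[2:] + E_CORES)
```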

I am positive that the ability in Process Lasso to set CPU, I/O and memory priority according to what a program like Cubase actually needs (instead of letting Windows 11 guess at this) really helps too.
I TELL Windows how to behave with Cubase; I do not let Windows guess at this.
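
The CPU priority part of that is also scriptable. A minimal sketch, again assuming psutil on Windows - Process Lasso additionally handles I/O and memory priority, which this does not touch:

```python
import psutil

# Raise the CPU priority class for every running Cubase process.
# HIGH_PRIORITY_CLASS rather than REALTIME - realtime priority can
# starve the audio driver and Windows itself.
for proc in psutil.process_iter(["name"]):
    name = proc.info["name"]
    if name and name.lower().startswith("cubase"):
        proc.nice(psutil.HIGH_PRIORITY_CLASS)
```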

Another thing that I KNOW has helped a lot compared to my previous computers is that I now use a Thunderbolt 3 audio interface instead of a USB 2.0 interface.
I am also sure that the drivers PreSonus has written for the Quantum 2626 are very well programmed.

I’ve been using Cubase since 1998 and started building my own computers from the ground up in 2002 or so. What I have seen EVERY time I’ve started using a new computer has been a relatively modest increase in power when it comes to Cubase use. Pleasant enough and sorely needed, but not groundbreaking.

The computer I used for about five years before this new rig was based around an AMD Ryzen 7 3700X with 32 GB of RAM, SSD drives and a Slate Digital VRS8 audio interface connected via the optional PCI Express adapter.

A powerful machine for sure, but this new rig based around the Intel Core Ultra 9 285K CPU absolutely obliterates it in Cubase.
I am not talking a modest and pleasant upgrade here, but a massive one.

I know this is not because of the CPU alone, but rather the synergy between all these newer components - but surely the CPU plays a major part here.

In short, I’ve never been happier with a new music workstation than I am with this one.

All the best, Kim

Awesome! I couldn’t be happier for you! :slight_smile:

Yeah, I don’t know what “pull serious weight” means. And that’s really what my point was in adding to the thread. The previous posts sounded like you were indeed a technical expert and were not only making recommendations to others, but actually “guaranteeing” performance with Process Lasso, etc. But “pull serious weight” is just a subjective term, which, insofar as DAW performance and choice of hardware go, is rather meaningless. I don’t mean that in a critical way, just an empirical way. Unfortunately, “looking at Task Manager” tells you (in my opinion) next to nothing about optimizing real-time audio processing.

Yeah, that’s the problem, actually. In general, each track runs one thread while processing audio. It’s really “each serialized instance of real-time effects processing uses a single thread.” Each track’s latency compensation contributes to overall project compensation of course, and the buffer can’t go to the driver until the slowest of those threads finishes - thus your slower, less efficient “efficiency cores” are slowing everything down. It’s kind of the opposite of what you (well, “the DAW community”) want. Those other 16 e-cores have HALF the L1 cache of the p-cores. And the L2 cache is also halved, but it’s even shared among 4-core clusters. So even if you had the world’s fastest p-cores, it wouldn’t matter at all, as the project will wait for the slower e-cores to catch up.
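
To put a number on that “waiting for the slowest core” point, here is a back-of-envelope sketch in Python. The per-core timings are invented for illustration, and the 48 kHz sample rate is my assumption; only the 32-sample buffer comes from Kim’s setup.

```python
# Why the slowest core gates the whole project: every per-track chain
# must finish inside the same buffer deadline before the audio driver
# needs the next block.
SAMPLE_RATE = 48_000   # ASSUMPTION: 48 kHz session
BUFFER = 32            # samples, as in Kim's setup

deadline_ms = BUFFER / SAMPLE_RATE * 1000
print(f"deadline per buffer: {deadline_ms:.3f} ms")   # ~0.667 ms

# Hypothetical per-buffer times for the same plugin chain:
p_core_ms = 0.40   # ASSUMPTION: time on a P-core
e_core_ms = 0.65   # ASSUMPTION: same chain on an E-core

# The mix can't be delivered until the slowest chain is done, so the
# E-core figure is the one that sets your headroom.
print(f"headroom if a P-core runs it: {(1 - p_core_ms / deadline_ms) * 100:.0f}%")
print(f"headroom if an E-core runs it: {(1 - e_core_ms / deadline_ms) * 100:.0f}%")
```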

It would be interesting to see if just the 8 p-cores with all the e-cores blocked worked even better than having the 16 e-cores, but that would take actual empirical testing. Interesting thought-experiment though.

But right on, and glad you like the new rig!

If the 285K performs better in DAW project testing (Vince’s site) than other chips (the Core Ultra 7 265K, e.g.), would that imply that there’s more that needs to be considered than simply comparing the number of performance cores between chips?

Well, I am running much heavier/bigger projects at lower latencies now than I could with my old AMD rig, so I guess I must have done something right putting this new rig together :wink:

I would say that is rather “empirical testing”.

Oh, it doesn’t just “imply” it - any meaningful determination demands it. I don’t know “Vince’s site,” or what operational metrics they use to generate benchmarks. But if your takeaway from me saying “performance cores matter here” was “there’s nothing more one needs to do than compare p-cores to e-cores,” then I think you should re-read what I said in context. The entire overall use-case needs to be considered, empirical data reviewed, and results communicated in a meaningful way. Saying “System1 obliterated System2” is functionally worthless data.

The reason I chimed in was that data was being thrown out and classifications determined with ambiguous, subjective terms, even in the face of pretty clear deficiencies in a solution purporting to have been built with a particular goal. Actual detailed information helps people verify the data on their own and use it as part of a determination they make for themselves.

And you would be wrong, sir/ma’am :slight_smile: It’s nearly 100% subjective. But look, it’s OK. You’re happy with it, so rock on.

Well, I still have my old rig up and running too.
When I see that the same project makes the older AMD rig almost groan to a halt, while the new Intel rig just chews through it at 32 samples latency with no hiccups, that actually is an objective observation.
It’s not something I “feel”.

Yep, you’re halfway there - you’ve observed, now you have to measure. THEN it will be empirical. :slight_smile:

Please don’t feel like you need to “prove” something to me. I was just giving my opinion for the benefit of others who may take your strongly presented data as postulate. If you want to compare system performance using metrics of “groaning” vs “chewing,” that’s absolutely your prerogative. The fact that I wouldn’t have purchased that system to use as a DAW has nothing to do with why you did. I DO think that the data is a bit “after-the-fact” and has a hint of “justification” to it, but that’s a reasonable reaction in the absence of actual, empirical data.

I’ve contributed in a manner I hope was valuable to others, even if what I said isn’t what you wanted to hear. It’s all good, and I’m authentically happy for you and your new system.

Yeah, sorry, I was lazy and didn’t post the actual sites in my original post.

Thanks for the reference - indeed, they’re using the DAWBench test for simultaneous instances of the SGA plugin. And therein lies the double-edged sword of DAW “performance” testing. Arriving at a benchmark metric (like instance count before dropouts) presumes a stipulated, standardized control method. Instance count is probably as good as any for a benchmark, but along with the control comes the implicit understanding that it’s not real-world. It’s a benchmark. Does the 285K’s 607 instances vs the 9950’s 539 instances mean anything for production DAW projects? Not really, but potentially.
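
For what it’s worth, the raw margin in those cited instance counts works out as below - plain arithmetic on a synthetic metric, not real-world project headroom:

```python
# Relative DAWBench margin from the instance counts cited above.
ultra9_285k = 607   # SGA instances before dropouts
ryzen_9950 = 539

margin = (ultra9_285k / ryzen_9950 - 1) * 100
print(f"{margin:.1f}% more instances")   # ~12.6%
```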

I think the more relevant real-world consideration is that Kim already indicated that in order to get his 285K into “obliteration” mode, he had to use a third-party product to manually accommodate deficiencies in the CPU model. So I don’t think we need to look much further than that to see that while benchmarks can provide generalized performance data for specifically structured tests, it is in fact the details that really matter.

I mean, the fact that the benchmarks you cited don’t even discuss the differences between the p-core and e-core architectures within the 285K itself (or any of the chips) kind of shows how generalized it all is. They talk about how p-cores actually make the difference, but don’t even bother talking about why.

Anyway, my input into the discussion seems to have reached its limit - I just wanted to thank you for the follow-up citation and address your question :slight_smile:

Here are my results in general.

Tests are still ongoing, though.

Sorry for the delay in posting the detailed test results.
