Cool, thanks very much!
Thank you all!
This is all very interesting!
Not that it really matters, but given you're using language like "never experienced anything like this" and "harnessing the raw power of this setup", I think other people here should be aware that the Ultra 9 285K only has 8 performance cores. I won't say it's a "poor" choice for a DAW system, but where performance matters is in real-time processing - so performance cores matter here. And not to cloud up and rain on the "PURE GOLD" parade of Process Lasso, but if you didn't have 16 efficiency cores getting in the way of performance-core processes, you may not need to worry about it in the first place. But sure, with an 8:16 p:e ratio, I can see that affinity features could be necessary, even if self-inflicted.
Great system, of course, but I was thinking that maybe a tad less hyperbole would let people focus on the details more. They may find that other options (like AMD Ryzen) end up being better solutions in the long run. Again, not taking away from the system, just expanding the scope a bit.
Hi there Thor.
I am not a technical expert, meaning I do not actually know how much better the P cores are vs the E cores in percentage terms.
But in my findings thus far, even the E cores are more than able to pull serious weight in Cubase. I can see this via the monitoring in Task Manager. Cubase is really able to distribute the projects well across the 22 remaining cores.
I think another reason I am finding this system so fluid, stable, responsive and downright awesomely powerful is the fact that all the cores are real here; in other words, no hyperthreading.
Yet another reason might be that in my setup, via Process Lasso, I have "banned" Cubase from using P cores one and two, leaving those to Windows alone, while assigning the remaining 6 P cores and 16 E cores to Cubase exclusively.
Restricting Cubase and Windows from sharing the same cores seems to work really well here.
I am positive that the ability in Process Lasso to set CPU, I/O and memory priority according to what a program like Cubase actually needs (instead of letting Windows 11 guess at this) really helps too.
I TELL Windows how to behave with Cubase; I do not let Windows guess at this.
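For anyone who wants to experiment with the same idea outside Process Lasso, here is a minimal sketch using the psutil Python library. The process name and the core numbering are assumptions (on a 285K with no hyperthreading, Windows typically exposes the 8 P cores as logical CPUs 0-7 and the 16 E cores as 8-23; verify on your own machine), and Process Lasso's memory-priority setting has no psutil equivalent, so it is left out:

```python
# Minimal sketch of the affinity/priority setup described above, using psutil.
# Assumption: P-cores are logical CPUs 0-7 and E-cores are 8-23 (no SMT on
# the 285K). "Cubase.exe" is also an assumption; check the real process name.
import psutil

# Find the running Cubase process by name (hypothetical name).
cubase = next(p for p in psutil.process_iter(["name"])
              if p.info["name"] == "Cubase.exe")

# Reserve P-cores 0 and 1 for Windows; give Cubase P-cores 2-7 plus all E-cores.
cubase.cpu_affinity(list(range(2, 24)))

# Tell Windows how to prioritize Cubase instead of letting it guess
# (these constants are Windows-only in psutil; may need admin rights).
cubase.nice(psutil.HIGH_PRIORITY_CLASS)  # CPU priority class
cubase.ionice(psutil.IOPRIO_HIGH)        # I/O priority
```

Note that this only pins Cubase itself; Process Lasso can additionally keep other processes off those cores, which a one-shot script like this does not do.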
Another thing that I KNOW has helped a lot compared with my previous computers is that I now use a Thunderbolt 3 audio interface instead of a USB 2.0 interface.
I am also sure that the drivers PreSonus has written for the Quantum 2626 are very well programmed.
I've been using Cubase since 1998 and started building my own computers from the ground up in 2002 or so. What I have seen EVERY time I've started using a new computer has been a relatively modest increase in power when it comes to Cubase use. Pleasant enough, and sorely needed, but not groundbreaking.
The computer I used for about five years before this new rig was based around an AMD Ryzen 7 3700X with 32 gigs of RAM, SSD drives and a Slate Digital VRS8 audio interface connected via the optional PCI Express adapter.
A powerful machine for sure, but this new rig based around the Intel Core Ultra 9 285K CPU literally obliterates it in Cubase.
I am not talking about a modest and pleasant upgrade here, but a massive one.
I know this is not down to the CPU alone but rather the synergy between all these newer components, though surely the CPU plays a major part here.
In short, I've never been happier with a new music workstation than I am with this.
All the best, Kim
Awesome! I couldn't be happier for you!
Yeah, I don't know what "pull serious weight" means. And that's really what my point was in adding to the thread. The previous posts sounded like you were indeed a technical expert and were not only making recommendations to others, but actually "guaranteeing" performance with Process Lasso, etc. But "pull serious weight" is just a subjective term, which, insofar as DAW performance and choice of hardware go, is rather meaningless. I don't mean that in a critical way, just an empirical way. Unfortunately, "looking at Task Manager" tells you (in my opinion) next to nothing about optimizing real-time audio processing.
Yeah, that's the problem, actually. In general, each track runs one thread while processing audio. It's really "each serialized instance of real-time effects processing uses a single thread." Each track's latency compensation contributes to overall project compensation, of course, and thus your slower, less efficient "efficiency cores" are slowing everything down. It's kind of the opposite of what you (well, "the DAW community") want. Those other 16 e-cores have HALF the L1 cache of the p-cores. And the L2 cache is also half, but it's even shared among 4-core clusters. So even if you had the world's fastest p-cores, it wouldn't matter at all, as the project latency will wait for the slower e-cores to catch up.
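To make the "waiting on the slowest core" point concrete, here is a toy model in Python. All of the per-track timings are invented purely for illustration; the only real numbers are the buffer size and sample rate, which set the hard deadline every thread has to meet:

```python
# Toy model of real-time buffer processing: the engine cannot hand a buffer
# to the audio driver until EVERY track's thread has finished, so the
# slowest core sets the pace. All per-track timings are made up.
BUFFER_SAMPLES = 32
SAMPLE_RATE = 48_000
deadline_ms = BUFFER_SAMPLES / SAMPLE_RATE * 1000  # ~0.67 ms per buffer

p_core_tracks = [0.30] * 8   # hypothetical ms per buffer on fast P-cores
e_core_tracks = [0.55] * 16  # the same work, slower on E-cores

# The buffer is only done when the last thread finishes.
buffer_time = max(p_core_tracks + e_core_tracks)
print(f"deadline: {deadline_ms:.2f} ms, slowest thread: {buffer_time:.2f} ms")
# The P-cores idle at 0.30 ms while the project waits for the 0.55 ms
# E-core tracks - usable headroom is dictated by the E-cores.
```

Under these made-up numbers the project still meets the deadline, but the margin is set entirely by the E-cores, which is the point being argued above.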
It would be interesting to see whether just the 8 p-cores, with all 16 e-cores blocked, would work even better than including them, but that would take actual empirical testing. Interesting thought experiment, though.
But right on, and glad you like the new rig!
If the 285K performs better in DAW project testing (Vince …'s site) than other chips (the Core Ultra 7 265K, e.g.), would that imply that there's more that needs to be considered than simply comparing the number of performance cores between chips?
Well, I am running much heavier/bigger projects at lower latencies now than I could with my old AMD rig, so I guess I must have done something right putting this new rig together.
I would say that is rather "empirical testing".
Oh, it doesn't just "imply" it - any meaningful determination demands it. I don't know "Vince's site," or what operational metrics they use to generate benchmarks. But if your takeaway from me saying "performance cores matter here" was "there's nothing more one needs to do than compare p-cores to e-cores," then I think you should re-read what I said in context. The entire overall use-case needs to be considered, empirical data reviewed, and results communicated in a meaningful way. Saying "System1 obliterated System2" is functionally worthless data.
The reason I chimed in was that data was being thrown out and classifications determined with vague, subjective terms, even in the face of pretty clear deficiencies in a solution purporting to have been built with a particular goal. Actual detailed information helps people verify data on their own, and use that data as part of an overall determination they make on their own.
And you would be wrong, sir/ma'am. It's nearly 100% subjective. But look, it's OK. You're happy with it, so rock on.
Well, I still have my old rig up and running too.
When I see that the same project makes the older AMD rig almost groan to a halt, while the new Intel rig just chews through it at 32 samples latency with no hiccups, that actually is an objective observation.
It's not something I "feel".
Yep, you're halfway there - you've observed, now you have to measure. THEN it will be empirical.
Please don't feel like you need to "prove" something to me. I was just giving my opinion for the benefit of others who may take your strongly presented data as postulate. If you want to compare system performance using metrics of "groaning" vs "chewing," that's absolutely your prerogative. The fact that I wouldn't have purchased that system to run a DAW has nothing to do with why you did. I DO think that the data is a bit "after-the-fact" and has a hint of "justification" to it, but that's a reasonable reaction in the absence of actual, empirical data.
I've contributed in a manner I hope was valuable to others, even if what I said isn't what you wanted to hear. It's all good, and I'm authentically happy for you and your new system.
Yeah, sorry, I was lazy and didn't post the actual sites in my original post.
Thanks for the reference - indeed, they're using the DAWBench test for simultaneous instances of the SGA plugin. And therein lies the double-edged sword of DAW "performance" testing. Arriving at a benchmark metric (like instance count before drops) presumes a stipulated, standardized control method. Instance count is probably as good as any for a benchmark, but along with the control comes the implicit understanding that it's not real-world. It's a benchmark. Does the 285K's 607 instances vs the 9950's 539 instances mean anything for production DAW projects? Not really, but potentially.
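For anyone unfamiliar with the methodology, the instance-count idea boils down to something like the sketch below. This is not DAWBench's actual harness - the "plugin" here is a made-up arithmetic load, and real tests run multithreaded inside a DAW - but it shows what "instances before drops" is measuring:

```python
# Sketch of an instance-count benchmark: keep adding plugin instances until
# one buffer can no longer be processed within its real-time deadline.
# process_instance() is a stand-in load, not a real plugin.
import time

BUFFER_SAMPLES = 64
SAMPLE_RATE = 48_000
deadline_s = BUFFER_SAMPLES / SAMPLE_RATE  # time budget per buffer

def process_instance(buffer):
    # Fixed amount of arithmetic per instance, standing in for DSP work.
    acc = 0.0
    for x in buffer:
        acc = acc * 0.997 + x + 0.001
    return acc

buffer = [0.0] * BUFFER_SAMPLES
instances = 0
while True:
    start = time.perf_counter()
    for _ in range(instances + 1):
        process_instance(buffer)
    if time.perf_counter() - start > deadline_s:
        break  # this instance count would audibly drop out
    instances += 1

print(f"sustained {instances} instances within {deadline_s * 1000:.2f} ms")
```

The resulting number is perfectly repeatable for this one synthetic load, which is exactly why it works as a benchmark and why it says little about any specific real-world project.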
I think the more relevant real-world consideration is that Kim already indicated that in order to get his 285K into "obliteration" mode, he had to use a 3rd-party product to manually accommodate deficiencies in the CPU model. So I don't think we need to look much further beyond that to see that while benchmarks can serve to provide generalized performance data for specifically structured tests, it is in fact the details that really matter.
I mean, the fact that the benchmarks you cited don't even discuss the differences in p-core and e-core architectures just within the 285K (or any of them) kind of shows how generalized it is. They talk about how p-cores actually make the difference, but don't even bother talking about why.
Anyway, my input into the discussion seems to have reached its limit - I just wanted to thank you for the follow-up citation and address your question.
Here are my results in general
Tests are still ongoing, though.
Sorry for the delay in posting detailed test results.