This is not so much a Cubase 12 issue as an issue with my file server and networking, I believe.
Occasionally, when I render from and into an SMB share on my Synology NAS, some audio layer will sound choppy and glitchy. This happens mostly with plain audio layers that use an imported WAV file pulled from my SSD audio library (which subsequently gets copied into the /cubase/audio folder automatically, which is a good thing).
I think the problem is either with macOS (13.2) or with the settings of my Synology NAS. I have optimized them to work best with my Mac:
I’ve been using this setup for years. Now, since I am using macOS Ventura on a Mac Studio with a bonded LAN connection via two 1 GbE ports, I keep having these weird issues.
There is a known SMB problem in Ventura that is currently being worked on:
But as with almost every problem reported about Macs, it is always slightly different from what I am doing and noticing. For example, networking between the Macs on my network is no issue for me at all.
Some other issues are:
- random SMB disconnects (very rare)
- file permission issues
- file-not-found issues (which resolve after restarting Cubase)
If anyone here has experience with Synology devices as part of a studio setup, I would appreciate your insight. Thanks in advance!
I can’t say whether it is related to this problem, as I have no experience with Synology (nor Macs), but I would generally advise setting the minimum SMB protocol to at least SMBv2. SMBv1 is inherently insecure, slow, and unreliable.
I’m using a Synology RackStation together with Windows workstations, and that works flawlessly.
One thing to do, as @fese already mentioned, is to set the minimum SMB protocol to SMB2, or even directly to SMB3 if there are no old systems in the network.
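On the Synology this should be under Control Panel → File Services → SMB → Advanced Settings (minimum/maximum protocol). Under the hood it maps to the usual Samba options; roughly this (shown for reference only, DSM manages the file itself):

```
# what DSM writes into its smb.conf when you pick SMB2/SMB3
[global]
    server min protocol = SMB2
    server max protocol = SMB3
```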
Another point to check is the MTU. Why did you set it to 4000? Are you sure all devices in your network are using this MTU? Synology clearly recommends making sure it matches across all devices, and the default is 1500.
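You can verify whether a larger MTU actually survives end to end with a don’t-fragment ping; the payload has to be 28 bytes smaller than the MTU (the NAS address below is just a placeholder):

```
# test MTU 4000 from the Mac: 4000 - 20 (IP header) - 8 (ICMP header) = 3972
ping -D -s 3972 192.168.1.20

# for comparison, the standard MTU 1500 payload
ping -D -s 1472 192.168.1.20
```

If the big ping fails while the 1472 one works, something in the path is not using MTU 4000.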
Yeah, I set a higher MTU because I am dealing with a lot of big video and audio files that need to be streamed while working in a project. The MTU is for my cabled connection only, which connects directly to my Mac; no other devices. Everything else connects via Wi-Fi (which I use only for minor access from other devices, for obvious reasons).
I assume you have a switch somewhere in your network; this setting is the one that basically all switches support without issues. Other settings, like Dynamic Link Aggregation, require specific support from the switch.
The second one: Dynamic Link Aggregation. No switch, just a direct cable from the Mac to the NAS (a 10 m cable).
There is just this weird thing in Ventura right now: with Apple’s redesign of the System Settings panel, the Link Aggregation pane has completely vanished. You can still configure it via the Terminal, though.
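For reference, the bond can still be created and inspected with networksetup in the Terminal; the device names below are just examples, check yours with the first command:

```
# list hardware ports and their device names (en0, en5, ...)
networksetup -listallhardwareports

# create a bond named "bond0" from two Ethernet devices
sudo networksetup -createBond bond0 en0 en5

# verify the bond and the status of its members
networksetup -listBonds
networksetup -showBondStatus bond0
```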
So you have two network interfaces in the Synology NAS configured as Bonded Interface, but use just one network interface in the Mac Studio?
That has no effect at all, because your connection is still a single 1 Gbit network. In this configuration you should configure the Synology interface as a single port, because each port in the Synology hardware supports 1 Gbit and not more (unless you have a NAS with more powerful hardware).
To get the full 2 Gbit of the bonded interfaces, you need to connect both ports of the Synology to some other device; the Mac Studio has just one port (10 Gbit). You need a switch in between, to which the two cables from the Synology and the one from the Mac Studio are connected.
No, that is not the case. I use the 10GbE port on my Mac Studio in conjunction with a second 10GbE adapter connected via one of the Thunderbolt ports on the back. Two cables go directly to the NAS, which also has two Ethernet ports.
I use them exclusively for that. This worked well over the past year, until these glitches started some time after upgrading to Ventura.
This is the adapter I use on my Mac:
Setting up the aggregate was quite a hassle with the new Mac Studio. It had worked fine on my older Mac (a Hackintosh) using two different Ethernet cards.
It turned out that there were some hardware quirks related to Apple’s new M1 architecture. Users were reporting that they wouldn’t get fast enough transfer speeds and that dropouts would occur randomly.
This had to do with the M1 now handling all the Ethernet traffic, bypassing the Ethernet chips in the connected dongles as part of a “compatibility mode”.
The new D-Link adapter I have now comes with its own driver and does not go through the M1 chip. It is fully supported and had been running fine for months before I upgraded my OS to Ventura.
Well, I suppose you shouldn’t upgrade your OS so quickly; there is a reason why many producers have outdated Macs in their studios…
OK, so using the second adapter makes the difference, and Apple aggregate devices require LACP.
The problem is that you are mixing two different technologies, built-in Ethernet and Thunderbolt, and I would always try to avoid something like this. I would always go via a switch here; that avoids the complexity of the aggregate config in macOS. The single port has more than enough bandwidth at 10 Gbit.
That may be an alternative option, indeed! Thanks for pointing that out. I will see whether the SMB2 setting helped, and if I run into glitches again, I’ll check out some switches! Is there something you can suggest?
It depends on how much you want to spend. I use Cisco 350 switches, but they are managed. That means you need to get used to Cisco’s network operating system, IOS (yes, that means Internetwork Operating System, nothing from Apple).
But there are other vendors out there with powerful hardware. In case the switch doesn’t support LACP and you are using a single port on the Mac Studio, you can switch the bonded network on the Synology to Adaptive Load Balancing. That distributes the network traffic across the bonded network ports.
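Just to give an idea of the scale, an LACP group on an IOS-style switch is only a few lines; the port numbers here are made up and the exact syntax varies by model:

```
! bundle two ports into LACP port-channel 1 (example ports)
interface range GigabitEthernet1/0/1-2
 channel-group 1 mode active
!
interface Port-channel1
 description Synology bond
```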
The thing to know about LACP is that it doesn’t automatically double your network transfer rates. A single TCP connection is always sent across only one link and is thus limited to the speed of that link. Depending on how your LACP is configured on each side, it might well be that pretty much all packets go over only one cable. This is determined by the load-balancing algorithm, which is based on combinations of the source/destination MAC addresses, IP addresses, and ports. For an LACP between only two systems, you should choose a setting that includes the source and destination port for it to make any sense. And even then, depending on the protocol, it might be that the connections don’t use both links at all; for SMB, for example, you have to make sure that SMB multichannel is enabled (if available).
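I have no Mac to verify this on, but from what I’ve read you can check on the macOS side whether multichannel was actually negotiated, and toggle it in /etc/nsmb.conf:

```
# show capabilities of mounted SMB shares (look for a MULTICHANNEL flag)
smbutil statshares -a

# client-side multichannel options go into /etc/nsmb.conf:
#   [default]
#   mc_on=yes
#   mc_prefer_wired=yes
```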
It might just be that you are better off with Adaptive Load Balancing on the Synology side (I have to admit, though, that I don’t know how exactly that works).
Or, as @JuergenP wrote, keep it simple and just use one link. For most purposes that’s perfectly fine, and in reality it is actually not that easy to saturate a 10G link (unless you are using NVMe drives; even SSDs are more likely to be the bottleneck).
But whatever you do, test it. Use something like iperf to test network throughput, and monitor the traffic on both uplinks to see whether your configuration actually improves anything. Also monitor your typical use-case scenarios and check whether both links are used at all with the protocols you use.
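A minimal round of tests could look like this (the NAS address is a placeholder):

```
# on the NAS (if iperf3 is installed there): start a server
iperf3 -s

# on the Mac: a single stream, then four parallel streams
iperf3 -c 192.168.1.20
iperf3 -c 192.168.1.20 -P 4

# and the reverse direction (NAS -> Mac)
iperf3 -c 192.168.1.20 -R
```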
It doesn’t show me much, just that it is working. Link aggregation never “doubles” your link speed, of course, but I was curious, so I set up an iperf3 server to see what speeds I am actually getting.
Then I did an rsync and copied a 2 GB file from the Synology to my desktop:
OK, I thought you wrote that you used 10GbE, but it seems I misunderstood: you only used the 10GbE ports, but the connection is 1 Gbit because the NAS can’t do more?
iperf looks like what you’d expect; rsync is a little low. One of your devices is probably a bit underpowered for encryption, most likely the Synology.
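If you want to confirm that, take the ssh encryption out of the equation and compare against a transfer via the rsync daemon (it has to be enabled on the NAS; the share name is just an example):

```
# over ssh (encrypted), roughly what you did
rsync -avP user@nas:/volume1/share/bigfile.wav .

# via the rsync daemon (no ssh encryption)
rsync -avP rsync://nas/share/bigfile.wav .
```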
No, my NAS has only 2 × 1GbE ports. The reality is, Synology has only very few DS models that come with 10GbE out of the box; I’d love to see a new model from them where that is standard. I guess their argument is that no HDD will exceed the transfer speed of a 10GbE connection, but with faster SSDs this is slowly becoming irrelevant. Anyway, different topic, unnecessary ramble.
If you have problems with SMB, have you tried NFS? It won’t help with the fact that you’re still only using one of your links, but maybe it’s more reliable in your setup.
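NFS can be enabled on the Synology under Control Panel → File Services, and on the Mac a manual mount would look roughly like this (IP and paths are placeholders; as far as I know, Synology wants the resvport option by default):

```
# create a mount point and mount the NFS export
mkdir -p /Volumes/nas
sudo mount -t nfs -o resvport,vers=3 192.168.1.20:/volume1/cubase /Volumes/nas
```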
Yeah, I’ve read about this. I am still testing whether the other SMB setting changes anything. I haven’t had any glitchy issues, but there haven’t been a lot of renders since I changed that setting.