NotePerformer 4 Released

Here’s some background. I understand that many will be surprised by this, but knowing all the details, I have 100% confidence in the path we chose.

With NotePerformer 3, we had already reached peak maturity for what a lightweight library can do, at least in my hands, and we had made minor, incremental updates to those sounds over many years. If you look around, there are almost no modeled options for strings or percussion; the modeled options that do exist don’t usually approach state-of-the-art sound quality for the casual user, and they’re not necessarily lightweight either. Modeling high-quality strings and percussion is not within reach of current technology, as far as I know.

Deep-sampled libraries recorded in a natural space were our only viable alternative for dramatically increasing the quality beyond what we already had. The libraries we support with NPPE (NotePerformer Playback Engines) represent the state of the art in deep-sampled libraries (there are many more out there; this is a subset). They were recorded by experienced companies with the right skills and access to the right people and venues. We could never produce a deep-sampled library to match them on our first attempt. And even if we did, NotePerformer 4 would look much like it does now, except with a single, mandatory deep-sampled library to choose from, a requirement for a powerful system with lots of RAM, a mandatory upgrade fee, and built-in reverberation that would be very disturbing for users who don’t want the natural ambiance. With the current solution, everyone stays on the latest technology for free, and the lightweight technology runs side by side with the optional third-party options.

There’s also the problem of sound fatigue. If we created a new sound library, our users would quickly grow fatigued with that, too. It makes more sense to have a modular engine that can support many VST libraries, recorded in different venues or producing sound with various technologies, and that we can keep updated with state-of-the-art sample library technology.

Another angle is that NPPE uses our factory NotePerformer sounds as real-time guides within the engine itself. We spent a decade tweaking those sounds for balance and articulation; I have very high confidence in the balance of NotePerformer 3 and wouldn’t want to gamble with changing it at such a critical point of development, since that would throw off the entire ecosystem. Even if we changed those sounds, I’m not sure what we would change, and I don’t know that it would be for the better, but there would certainly be many opposing views, as has always been the case whenever something in NotePerformer changed.

The YouTube demos of the playback engines represent the out-of-the-box sound, using the Mix/Main perspective of the library and the built-in reverb (only a touch of it), because we don’t want to give the impression of a sound that requires customization beyond what an ordinary user would do. We have implemented multi-microphone support for applicable libraries, which is very powerful. There’s so much that can be done to customize the sound beyond our baseline renderings; the process is the same as working in a DAW, but much easier to do in NPPE.

There’s little change to the core software from the user’s perspective, but a lot has changed under the hood. Native Apple M1 support and VST3 support for Dorico were significant NP4 updates that we released early out of necessity. There were stability issues with OpenGL on Windows that only affected some users with specific drivers, so NP4 is equipped with a new Direct3D-based engine, which is a significant change. These are all very high-effort updates with little return for most users. Moreover, NotePerformer has almost no known bugs, so there’s no natural reason to have a long list of fixes.

The introduction of NPPE is also a significant shift, since it puts us in a natural position to support big band libraries and other styles that require a separate playback-rule engine. Nothing prevents us from supporting modeled instruments by other developers, although they present unique challenges that must be resolved.

It must also be said that what’s unique about our software was never the sound library but how we integrate it with notation. It makes sense for us to become an intermediate host that applies our technologies to sample libraries by other developers, as an asset to the ecosystem, rather than trying to dominate that space ourselves.