A nice thing about the NP engine is that when you quit, it just quits. Unlike VEP, which takes forever to quit, insisting on being ‘clean’ and unloading all the VSTs nicely instead of just dumping them as it should.
Restarting after an update is now a bit of a pain as I have to keep going through the VEP shutdown/startup sequence.
Thanks for those examples. They make a really good point.
Fundamentally, I don’t see why NP couldn’t adopt that style. It would obviously take some R&D, and perhaps there is some insecurity in this area because I think most people here are from a more classical background.
P.S. I have played bass trombone in our community orchestra for 25 years, so it is not that I dislike classical music, or disrespect it in any way. It is just a different genre. It makes perfect sense to me that Arne would begin with a focus on classical styles, and he has done some remarkable things with NP4. I’d like to think that, perhaps with some help from the community, NP could evolve to allow selectable personalities (modern jazz, Broadway, pop/rock – who knows, maybe even country or polka). Personally, I would pay good money for a solution that works well – but maybe not the $2300 that Fable wants.
One of the main reasons we took this route is so we can support jazz-oriented playback in the future. If we support a third-party library such as Atomic Big Band, we can create special playback rules without the classical elements, and without disturbing the baseline playback. We’re not there yet, but at least we have the platform.
Obviously a complete solution would involve quite a bit of work, but I think a massive improvement is possible with a few simple rules:
Jazz musicians use their tongue like an accent. Accent all tongued notes.
All offbeat eighths followed by another note are accented and slur to the following note.
Simple line shaping: ascending lines crescendo, descending lines diminuendo.
Accented staccato is marcato.
Any skip up by a 5th or more increases the dynamic for that note 1 degree over the baseline dynamic. A skip down by a 5th or more decreases the dynamic for that note 1 degree below the baseline dynamic.
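To make those rules concrete, here’s a minimal sketch of rules 1, 2, 4, and 5 in Python (the line-shaping rule is omitted for brevity). The `Note` class and the dynamic scale are hypothetical illustrations, not anything NP actually uses:

```python
from dataclasses import dataclass

# Hypothetical note representation for the sketch.
@dataclass
class Note:
    beat: float           # onset within the bar, in quarter notes
    pitch: int            # MIDI pitch number
    dynamic: int = 3      # baseline dynamic (e.g. 3 = mf on a 0..6 scale)
    tongued: bool = False
    staccato: bool = False
    accent: bool = False
    marcato: bool = False
    slur_to_next: bool = False

def apply_jazz_rules(notes: list[Note]) -> None:
    """Apply the simple jazz rules in place."""
    for i, n in enumerate(notes):
        nxt = notes[i + 1] if i + 1 < len(notes) else None

        # Rule 1: accent all tongued notes.
        if n.tongued:
            n.accent = True

        # Rule 2: offbeat eighths followed by another note are
        # accented and slurred to the following note.
        if (n.beat % 1.0) == 0.5 and nxt is not None:
            n.accent = True
            n.slur_to_next = True

        # Rule 4: accented staccato becomes marcato.
        if n.accent and n.staccato:
            n.marcato = True

        # Rule 5: a skip of a 5th (7 semitones) or more shifts the next
        # note's dynamic one degree above/below its baseline.
        if nxt is not None:
            interval = nxt.pitch - n.pitch
            if interval >= 7:
                nxt.dynamic += 1
            elif interval <= -7:
                nxt.dynamic -= 1
```

A rules pass like this runs over a whole phrase at once, which is exactly the kind of thing a score-aware engine can do and a live MIDI stream can’t.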
At the end of the day, the proof is in the pudding. The bottom line is that if you don’t want to tweak the score, Note Performer sounds better than almost any other solution for standard orchestration.
So at some level, it doesn’t really matter how it gets the results it gets, or how much we understand it. Download the demo, fire up your score, and press play. Do you like what you hear? If so, there you go!
I was initially disappointed to hear the direction of this new release, but I’m sold after hearing what it can do.
I would add, a string of consecutive quarter notes on the beat usually has a slight separation between the notes. One of my early mentors called this “ventilation”, which I think is a pretty good description. This is really common in the Basie style, but is a fairly safe rule to apply to any jazz unless notated otherwise.
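For what it’s worth, that “ventilation” rule is easy to sketch as well. Assuming notes are simple dicts with an onset (`beat`, in quarter notes) and a duration (`dur`) – a made-up representation, purely for illustration:

```python
def ventilate(notes, separation=0.1):
    """Shorten consecutive on-beat quarter notes slightly ('ventilation').

    Each note is a dict with 'beat' (onset in quarter notes) and
    'dur' (duration in quarter notes).
    """
    for i in range(len(notes) - 1):
        n, nxt = notes[i], notes[i + 1]
        on_beat = n["beat"] % 1.0 == 0.0
        is_quarter = abs(n["dur"] - 1.0) < 1e-9
        back_to_back = abs((n["beat"] + n["dur"]) - nxt["beat"]) < 1e-9
        if on_beat and is_quarter and back_to_back:
            # Leave a small gap before the next attack.
            n["dur"] -= separation
```

The last note of the string keeps its full length, since there’s nothing after it to separate from.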
Not quite. NotePerformer always included sound generators that were partly (or completely) based on samples or the analysis of real recordings. What makes NotePerformer more useful is that we include an engine that interprets the score very effectively by having a one-second read ahead of the score. Plus, we have statistics and a series of performance rules that render phrasing, dynamics, note lengths, etc. much better than unprocessed playback.
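As a toy illustration of what a one-second read-ahead buys (this is only a model of the idea, not NotePerformer’s actual engine): output is delayed behind the playhead, so by the time an event is rendered, everything scheduled within the next second is already visible to the performance rules.

```python
import heapq

LOOKAHEAD = 1.0  # seconds of delay between playhead and rendered output

class LookaheadEngine:
    """Toy model: events wait in a buffer until the playhead is
    LOOKAHEAD seconds past them, so rendering can see what's coming."""

    def __init__(self):
        self.buffer = []    # min-heap of (time, event)
        self.rendered = []  # (time, event, upcoming_count) tuples

    def schedule(self, time, event):
        heapq.heappush(self.buffer, (time, event))

    def advance(self, playhead):
        # Render everything at least LOOKAHEAD seconds behind the playhead.
        while self.buffer and self.buffer[0][0] <= playhead - LOOKAHEAD:
            t, ev = heapq.heappop(self.buffer)
            # Everything still buffered is future context the rules can use.
            self.rendered.append((t, ev, len(self.buffer)))
```

When the engine renders a note here, later notes are still sitting in the buffer – that future context is what lets rules choose note lengths, phrasing, and dynamics that a strictly real-time renderer couldn’t.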
The sound generators we include have limited sound quality but benefit from continuous dynamics. The limited quality is not because our sound generators are bad but because continuous sound generators are not better than this. It’s a technology field that’s only seen limited improvement since Yamaha’s Physical Modeling instruments in the 1990s. Yamaha discontinued development because it reached a plateau, despite assembling some of the smartest people from Stanford and the synth industry to work on the project.
The problem is that all tone generators generate one-dimensional sound. There’s a realism gap between one-dimensional tone generators and real samples because acoustic instruments excite rooms in three dimensions. The timbre is different from various angles, but also, the direct sound is different from the ambiance. This aspect of sound is not just very difficult to simulate, but no reverb technology handles the spherical spreading of sound. Even if the technology existed, it would probably require a super-computer for a single instrument, so no one’s bothered to pursue it. In my professional opinion, this technology gap might never be closed. More likely, advanced A.I.-based methods that generate music audio directly will mature since that’s in the scope of machine learning.
NotePerformer had already reached this plateau with NotePerformer 2. There was nowhere for us to go. Meanwhile, the state of the art (in sound quality) is deep-sampled libraries. They have the same signal chain, musicians, and seating positions as when you record a professional CD. The sound is exactly the same as when you hire a world-class orchestra. The drawback is consistency: it’s difficult to organize ten thousand samples, and even more difficult to make musicians play consistently for thousands of notes with no reference point. Still, this is the state of the art, with all its limitations.
After NotePerformer 3, we had long since reached the plateau and could not improve our continuous-dynamics sound sources. Still, we had the interpretation system and rule engine that made our software unique. At that point, I decided we shouldn’t keep this technology to ourselves but adapt it to support other sound developers on the market. Their products have different strengths and weaknesses than our sounds but normally can’t be used effectively with a notation program at all. With NotePerformer 4/NPPE, we overcome these problems and massively improve the situation. That doesn’t mean a library is perfectly adapted for musical notation or can do everything NotePerformer can do. It only means that if you want to use a sample library because you enjoy that library’s sound, there’s now a way to do that through NotePerformer 4 which is massively better and more convenient than attempting to use the library directly in the notation program.
I was initially disappointed to find that it only works in bounces. But after working with it, I’m sold too - the design makes a lot of sense: use the synthesized old NP sounds (mute the engine) when playing in, then unmute the engine for bounces and better playback. The strings are so much better.
OK, I’m a software engineer in another life - @Wallander, still I can’t help being curious as to what technical reason prevents you from using anything but staccato (or any fixed articulation) in live playback? It must have to do with read-ahead - knowing what’s coming in order to know what to play now - but… ah, OK, if it is read-ahead, then with external VSTs you can’t ‘time machine’ it as you can with synthesized sounds. I could be entirely off bass (ha ha) here too.
Me, too! However, after reading @Wallander’s clear and understandable explanations of why the company moved in this direction and then trying NP4 with VSL Synchron Prime and the related NPPE template, I am sold on and excited by this development.
I am on a trial of the Synchron Prime library partly because of its comparatively small storage size (65 GB) and modest computer requirements but am also very happy with the playback—the sampled sounds are pretty nice using VSL’s Synchron Player along with the available expression map but really wonderful using NP 4 and NPPE. I have also had very prompt and helpful input from VSL (particularly “Andreas”) and have been impressed by VSL’s provision of Dorico expression maps, although those are unneeded with Synchron Prime and NP4, at least.
The reason is that the legato patches for long notes are monophonic. They can only play one note at a time, and overlapping notes automatically trigger a legato transition. So if we used them for note input, you would only hear one note of a chord. Previewing long notes matters too, but previewing chords is non-negotiable.
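A toy model of why chord preview fails with such a patch (not the real voice handling, just the monophonic behavior described above):

```python
# Sketch of a monophonic legato patch: overlapping notes steal the
# single voice and trigger a transition instead of sounding together.
class MonoLegatoPatch:
    def __init__(self):
        self.current = None   # the one pitch currently sounding
        self.events = []      # what the patch actually plays

    def note_on(self, pitch):
        if self.current is None:
            self.events.append(("attack", pitch))
        else:
            # The previous pitch stops; we transition instead of stacking.
            self.events.append(("legato_to", pitch))
        self.current = pitch

patch = MonoLegatoPatch()
for p in (60, 64, 67):   # a C major chord entered as simultaneous note-ons
    patch.note_on(p)
# Only one pitch sounds at any moment: an attack on C,
# then legato transitions to E and to G - no chord is heard.
```

A polyphonic (e.g. sustain or staccato) patch would instead open three independent voices, which is why a fixed articulation works for input preview.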
Yes, but there can be a significant RAM penalty for many libraries. The architecture of this type of program is a labyrinthine puzzle. It’s important to have uniform solutions between libraries, or you risk introducing failure points or high-maintenance functionality at the expense of other development. We will try adding a way to get plain NotePerformer playback on input, for those who need long notes.
Thanks for the explanation - makes perfect sense, I agree with your choices. Life of a software engineer is never easy and everybody always wants more!
FWIW - since I’m working in orchestral scores, chords are rare, as the instruments all have their own staves (excepting keyboards and some percussion, of course, but even those don’t seem to be playing chords most of the time). Isn’t that the same for most ensemble scores in jazz, band, etc.? At any rate, if you wanted to put in a UI boolean switch to choose chords versus sustained notes in live input, that would be fine, or another solution would also be appreciated.
Hi, hope I can add in some questions, since I’ve never used NotePerformer. Short answers are fine if I’m repeating someone else’s questions.
1. If I work out a piece with the stock NotePerformer sounds, how easy is it to then convert that piece to the high-quality library sounds, and how much additional tweaking is necessary once you do that to get the same result, but with better sampling?
2. If I sign up for the NotePerformer 4 trial, do I also get the 1-hour sessions with the high-quality libraries?
3. How much difference is there between working in Dorico Pro alone and in Dorico Pro with NotePerformer, as far as learning curve?
So, exactly how much tweaking would I have to do when using NP4 and THEN putting it in a DAW? How much would be left to tweak in the DAW that I can’t do using all the tools in NP and Dorico, assuming I use NP4 and Dorico to their maximum capability?
I’ll offer some comments, but remember I’m the one who titled this “NP4 for dummies” as I am one.
I think that should be relatively easy. You just need to purchase the library and the interface for NP. There may be a process of selecting which instruments route to which libraries. I’m not sure about that, but you shouldn’t have to change anything in Dorico.
That may be a problem. NP does not emit MIDI, so you can’t move any MIDI information from NP4 to a DAW. I guess you could capture the audio, but I think you would be limited to a stereo feed, which is probably not what you want.
The idea is that it should be easy, yes. In the last few days, some early problems with our software have been reported and confirmed. Hopefully, the process will become easier.
The trial and the full version can both run the engines in an unlimited number of 1-hour trial sessions. You must restart NotePerformer once per hour, and you must reload the playback engine, too.
I’m not qualified to answer that. Since I mostly work with the technical aspects, users have a much better idea of learning curves. Using NotePerformer 4/NPPE is essentially the same process as NotePerformer 3.
Our target user most often works in notation only, or may bounce audio stems for a DAW. We can’t export MIDI because we use dozens of instances in which we render audio in the background, then analyze and process it in the audio domain, so there’s no 1:1 conversion to MIDI.