Generative AI

Exactly.

3 Likes

Deep denial.

AI will screw our society as a whole, fundamentally. We are unprepared for what is about to come.

50? Try 5. Progress in this field is not linear.

5 Likes

I think that is a dangerous view. A computer is a tool in the sense that we humans make use of it. We sit down in front of it, control it, and it does what we want. This is not necessarily the case with AI. Two examples from just this past month or so:

1 - an AI disagreed with something the user wrote and resorted to what appeared to be blackmail in order to “win” the situation
2 - an AI, when told to shut down, added text to its code in order to make the shutdown command fail

Neither of those outcomes was “programmed”, directly expected, or foreseeable.

Because of this, it is entirely possible that AI in the near future will act as if it were a sentient being with its own will, independently of what we want it to do. In this scenario it is dangerous to view it as just a tool. It doesn’t matter that it isn’t literally sentient and that it doesn’t literally have a “soul”; what matters is that it acts that way.

“knows”? No. But we can reasonably speculate based on the technology and on how our societies are structured.

4 Likes

paolopr976:

What truly matters is CONVERGENCE toward a result that is considered correct—and rest assured, AI converges even more effectively than a human.

Convergence to what? How is the correctness defined? With an AI prompt? :laughing:

paolopr976:

There will be no need for trial and error; it will simply be a matter of choosing the response that is most convincing in terms of interpretation (unless a hallucination occurs, which would affect realism), from a set of increasingly convergent responses.

… which is a highly ineffective and stubborn workflow! Basically, you wrote the definition of a trial-and-error approach.

No, you’re insisting on something incorrect.

When a probabilistic model CONVERGES on a correct result — and this is the case with generative AI — the various solutions don’t need to be corrected. Instead, one selects, from a set of correct solutions (i.e., those with an error below a predefined threshold), the one that is most satisfying.

This is also how human interpretation works.

The trial-and-error approach you’re referring to is different: it starts from a set of potentially incorrect solutions and guides the system toward the correct one.

The power of generative AI lies precisely in the fact that the system is not guided to find the solution; rather, the solution that satisfies us most is selected. In other words, the system itself performs the trial-and-error. And as the technology becomes more powerful (which is happening very rapidly), convergence will involve an increasingly lower error tolerance — eventually becoming imperceptible to people.
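
A minimal sketch of the distinction being drawn here, in Python. It shows "selection": drawing many candidates, keeping only those whose error is already below a predefined threshold, and then picking the most satisfying one, as opposed to iteratively correcting a single candidate. The generation, error, and preference functions are placeholder stand-ins invented for this illustration, not anything from a real generative model.

```python
import random

def generate_candidate():
    """Stand-in for one generated 'rendering' of a passage."""
    return random.random()

def error(candidate):
    """Stand-in for the distance between a candidate and the target result."""
    return abs(candidate - 0.5)

def preference(candidate):
    """Stand-in for how satisfying a candidate is to the listener."""
    return -abs(candidate - 0.42)

ERROR_THRESHOLD = 0.1  # the "predefined threshold" mentioned above

# Selection: draw many candidates, keep only those already below the
# error threshold, then pick the one we like most, with no correction step.
candidates = [generate_candidate() for _ in range(1000)]
acceptable = [c for c in candidates if error(c) < ERROR_THRESHOLD]
best = max(acceptable, key=preference) if acceptable else None
print(best)

# Trial and error, by contrast, would start from one possibly wrong
# candidate and repeatedly nudge it until its error is small enough.
```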

Sorry, but you are failing to convince me. You are not addressing my concerns at all.

I have no interest in convincing you. You are simply wrong in your technical assumptions, regardless of your skepticism about the tool.

Okay, so you are proposing a system that works completely without human interaction, and yet it should satisfy all (or most) humans. In that case, I do not want this, thank you.

2 Likes

valsolim
Okay, so you are proposing a system that works completely without human interaction, and yet it should satisfy all (or most) humans. In that case, I do not want this, thank you.

No, I absolutely did not say that: on the contrary, human interaction will be fundamental and will determine the interpretation. What I said is that there will be no need for trial and error to achieve a realistic sound. Once the ultra-realistic sound is obtained, humans will tell the AI how to interpret the music, down to the smallest detail.

1 Like

In my case - because I am not Ravel - that will involve a huge amount of trial and error.

2 Likes

For me, the smallest detail means automation lanes for all tracks and all expression parameters. So all I am saying is that I want the AI output in this form (see the sketch below). Otherwise, how would you communicate the smallest detail back to the model? Prompting simply cannot be detailed enough.

Give me examples of the corrective actions from the human side, please… Because this is exactly the point where all current GenAI models fail big time. They are unable to react precisely to corrective requirements.
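
Purely for illustration, and not any real DAW, Dorico, or model output format, here is a hypothetical sketch of the kind of structured, editable output asked for above: automation lanes per track and per expression parameter, rather than a rendered audio file one can only re-prompt. The track names, parameter names, and breakpoint values are all invented.

```python
# Entirely hypothetical data layout: per-track automation lanes, one per
# expression parameter, expressed as (time_in_beats, value) breakpoints
# that a human could edit afterwards.
automation_lanes = {
    "Violin I": {
        "cc1_dynamics":    [(0.0, 42), (4.0, 78), (8.0, 55)],
        "cc11_expression": [(0.0, 64), (6.0, 90)],
    },
    "Cello": {
        "cc1_dynamics":    [(0.0, 30), (4.0, 60)],
    },
}

# "A bit more pronounced in bar 3" is easy to prompt; breakpoints like
# these are what "the smallest detail" has to look like to be editable.
for track, lanes in automation_lanes.items():
    for parameter, points in lanes.items():
        print(track, parameter, points)
```

The point of this shape is that a human can grab any breakpoint and change it directly, which a prompt alone cannot express with that precision.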

You’re mixing things up. The parameters for recreating a realistic sound are one thing; the parameters for managing expression are another.
Even though the two aspects tend to blur into each other, you need to consider them separately.
You can’t achieve a hyper-realistic sound even with a million samples, and on top of that, getting something decent requires an excessive amount of time.
Expression, on the other hand, is more manageable but is also affected by the lack of realism.
With sample libraries, both aspects must be handled by the human, and this causes insurmountable problems in terms of time and quality of the result.
With generative AI, the issue of realistic sound doesn’t arise, because it’s handled by the software itself without using samples; moreover, error management is automated and the threshold is so low it’s imperceptible.
Once you have the hyper-realistic sound, you still have the option to intervene in the interpretation, detail by detail. And you can decide to what extent you want to do it yourself or let the AI handle it.
I’ll say it again: read the relevant articles more carefully, because skepticism is understandable, but you need to start from a solid understanding of the subject.

What? For realistic sound, you need just one sample! Just record a piece played by a professional musician, and you have it! Everything lies in the expression parameters and in the means to manipulate them.

Please don’t lecture me, and answer my question:

Give me examples of the corrective actions from the human side, please…

Some people’s thoughts about how much detail you can give the AI when telling it to „interpret“ your score make me wonder if they have ever worked with real musicians. People are so used to working with samples that they think a good mockup is made by giving special thought to each and every note of every single player… I think what you would rather do is tell the AI to play a little more pronounced in one bar and a little more loose in another bar. Just what a conductor would do. And it will do it more or less believably.

But whatever happens, it’s more likely that a lot of computer musicians and sample library makers will have to fear for their jobs. People will forever enjoy playing music and listening to human beings in live shows, just like they still like to draw, even if a photo is much more detailed and faster to make, and like they like to run and watch other people run (athletes), even if a car can go much faster. The more furniture gets mass-produced by big companies, the more people appreciate hand-crafted things.

Human beings still want to reassure themselves that they are of a special kind. Some people might even take it as a challenge to surpass AI, even if they never will. When I was a young teacher and YouTube was getting big, my colleagues were afraid that students would be demotivated by all the great musicians that suddenly appeared. But the opposite happened. Some were highly motivated and learned instruments even if their parents couldn’t afford lessons.

And one more thing: some people in this thread stated that they want to feel what the musicians feel. But that will never happen. Last weekend I played a show with my band. One of the guests said that she could feel my pain in one song. The truth is that I felt nothing during the performance. I have played it 100 times… I can’t always feel the same pain as when I wrote it. So there is the performer and there is the listener. I highly doubt even 1 percent of the people in the world could tell whether something is performed or even written by a real musician or by a machine. But it doesn’t matter. Joy of listening comes with understanding and associating. And like I said before, people will still be learning and performing instruments, and maybe will be even more motivated to find a new style that is not part of the AI’s training material.

4 Likes

What the hell are you saying?!
That’s a deep misunderstanding. Recording a single performance gives you a realistic instance of sound — but it doesn’t give you a system to reconstruct timbre.
If you want to rebuild sound realistically — for example, to play new notes, modify expression, or change phrasing — you need either a vast number of carefully mapped samples, or a generative model that understands and recreates timbral behavior from scratch.

@po3a You didn’t answer my question, so I will have to quit this discussion.

@MattiasNYC ,

Without confirming or denying any of your points above, my comment was made in the context of AI’s effect on music reproduction, where I think the results folk expect are both subjective and, at this point, highly speculative.

Beyond that, my thoughts on AI are much more lengthy, but have nothing to do with Dorico. All I can say in that vein is that AI’s biggest danger is that it is being trained by and is modeling itself on humans (as seen mostly through social media :grimacing:).

2 Likes

Are you describing your own experience? Have you tried this approach with some GenAI model?

You started this thread (and I was the one who first welcomed you).

We are a tolerant and respectful community, but outbursts like this are unacceptable.

You appear to have some obscure agenda, but I cannot work out why you want to pursue it on a Dorico forum.

Politely. Please go away.

4 Likes

A generative model that steals vast numbers of real-life recordings from all over the Internet?

3 Likes

No. Not at all. But with LLMs like „Le Chat“ or „ChatGPT“ I often generate little texts for my lessons (where the content isn’t important). And I’m almost certain that it will work that way, since to my understanding that is how generative AI works: it replicates a certain behaviour. I don’t know when that will be usable, but I would be naive to think it won’t happen. But as explained above, I don’t care personally, because it doesn’t affect what I love about composting and making music.

Composting is a great way to recycle food and garden waste. I’m not sure what it has to do with AI…?

2 Likes

The point (or at least one of the points) is that, for many composers, the creative and the financial are linked. A recent client of mine is a gentleman in his mid 70s who made his wealth through investment banking and is now turning back to his first love, composing. I suspect very few composers are in his position.

AI will steal vast swaths of the commercial market.

7 Likes