AI Prompt: "Can you write me a string quartet, "Eleanor Rigby"-style, to fit a song I've written?"

Yeah this is what’s happening. And it is because the AI trained on those works. Fair enough.

But how does that fundamentally differ from what we humans do? If you asked me to write something in the style of Eleanor Rigby, assuming I have the chops to pull it off, I’d only possess that ability because I’ve listened to and studied Rigby and lots of other musical works. Why isn’t this considered a similar level of theft? Maybe it’s related to the AI being much more efficient at devouring information than we are?


This point has been made several times earlier in the thread, but I’ll try again, with different words. :slight_smile:

The theft, or copyright infringement, does not happen in the result stage, no matter if the result is 100% human-made or 100% AI-made.

What is being referred to as “the biggest theft in music history” does not refer to users creating their own content with AI. The users are not the “thieves”. The theft in question refers to the earlier stage where the AI model itself is created, and how it is created.

For an AI to “train”, it has to scrape the internet for every piece of music available. It is a mass copying of likely every recording in music history, copyrighted or not. For something like Suno to even work, it has to physically copy, import and process the copyrighted recordings, because otherwise it has no data and cannot generate anything. So not only have they “pirated” all music in existence, they have also edited and modified it – and completely without consent – to work to their benefit. This is the mass theft I’m talking about, and it is on a scale that is completely overwhelming.

The difference: Humans are not physically copying copyrighted music, to be able to be inspired by it. AI, however, is totally dependent on that. Which again means that something like Suno only exists because of massive copyright infringement.

As you say, humans can make similar sounding music to existing music, and we have always done so. This is a natural part of making music, we are always inspired by everything we have ever heard. It is only natural. Often it is also the point, like in homages, pastiche, parodies, etc. All this is unproblematic, and not touched by the existence of AI.

For a “soundalike” to be illegal it has to be downright plagiarism, that is, purposely copying a main melody note-for-note from another work and claiming it as your own.

(Excuse me for starting to sound like a broken record. :slight_smile: I appreciate you all in that we can have a friendly discussion here on the Steinberg forums, a stark contrast to the dark corners of Facebook… :zany_face:)


I heard this argument before; it doesn’t wash, sorry.

Learning your craft, sometimes painfully, and paying homage to people you love listening to, is 3 billion light years away from scamming, thieving companies enabling lazy people who want to call themselves “musicians” to create trash by typing words into a computer…

Copying someone else’s musical work outright, or plagiarizing it, is copyright infringement.


Yes, I get this and it is at the core of my question. I totally agree that the problem is in the training phase and not the end result. But my point is that the way I learned how to write music, my personal training so to speak, is not all that dissimilar from how the AI learned. Yet we look at them in very different ways. My question is what differentiates them? I’m not saying we should treat them the same, I think they are quite different too. But I can’t really explain why they are different.

I’d get your hearing checked then, because I’m not arguing anything. I’m asking a question. We have two different things which also have some significant similarities. Yet one of those things we (rightly) regard as morally corrupt while the other is perfectly normal. Why is that, what makes one OK and the other not?

For what it’s worth, I think having an AI write music for me makes as much sense as getting a robot to eat my dessert. That’s the fun part, why would I give it up?

But if I could get it to clean my studio…

I see what you mean on some level. But humans and AI are fundamentally different kinds of learners. As you know, we don’t even fully understand how human learning works at a neurological level. By contrast, we do understand AI: it is a human-designed system…

AI learning is fundamentally statistical…probabilistic modeling. Training on datasets means detecting patterns and correlations in vast amounts of existing data and generating new material based on probabilities. Human learning is not crunching unbelievably big datasets…it involves meaning-making, goals, memory, curiosity, constraint, and the ability to create concepts that are not reducible to prior examples. While the inputs (exposure to music) may look superficially similar, the process and nature of learning are not…
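To make “detecting patterns and correlations” a bit more concrete, here is a toy sketch of my own (not any real model’s code, and many orders of magnitude simpler than an actual neural network): it counts which note tends to follow which in a training sequence and normalizes those counts into probabilities. That is pattern statistics in its most stripped-down form; the point is only that nothing in it knows or cares about music theory.

```python
from collections import Counter, defaultdict

def transition_probs(notes):
    """Count note-to-note transitions and normalize them into probabilities."""
    counts = defaultdict(Counter)
    for a, b in zip(notes, notes[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# A toy "dataset": the code sees only which symbol followed which.
melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
probs = transition_probs(melody)
# "E" was followed by "G" twice and "C" once, so P(G|E)=2/3, P(C|E)=1/3
print(probs["E"])
```

Scale that counting idea up to billions of parameters and millions of recordings, and you have the flavor of what “learning patterns from data” means, without any of the meaning-making described above.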


Is it, though? I think this is more our own personification of how an AI works, because how it actually works is inconceivable to us. So we personify it by calling it “training”, just to try to grasp what is going on. I think in reality it has very little to do with human training or learning.

I’m struggling with why this is still a question, sorry. One method requires that you commit to industry-shattering massive copyright infringement, while the other doesn’t.

I think they are not the same thing at all; we just perceive them that way, to try to grasp how AI works, in an imagined setting of how it would work if AI were somehow human.


Can you please, in detail, explain to me how AI is trained to learn music?

Please cite your sources.

I am looking for a definitive, concrete answer, with links that establish, beyond a shadow of a doubt, the mechanics behind the learning technique of a machine.

What learning technique enables such programs to create, in just a few seconds, what takes a human hours or even days?

I am looking for the complete mechanism under the bonnet of companies such as Suno.

With thanks

Little side note: Thanks @alexis for your honesty to share your thoughts/doubts on using AI as a tool and bringing up the question how to declare it as such. :+1:


That’s a tall order! I don’t have time to give sources as I would in an academic paper, but you can easily check this yourself. Anyhow, I know basically how it works. AI music models like Suno are trained on super massive datasets of music and audio so they can learn patterns, like rhythms, harmonies, and styles, without being explicitly told any music theory. During training, the network adjusts its internal parameters so it can predict musical elements from data, and when you give it a prompt it samples from what it has learned to produce new music based on *probabilities* rather than fixed rules. This is why it can create a song in seconds: it statistically recombines patterns it has seen in training rather than “composing” the way a human does. So it *seems* like it’s doing something amazing, when in fact the only amazing thing is the unfathomable amount of computing power used in the process, and how it terrorizes the environment.
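As a rough illustration of the “samples from what it has learned” step, here is a toy first-order Markov chain in Python. This is my own sketch and nothing like Suno’s actual architecture: it “trains” by counting note transitions, then generates a new sequence by drawing each next note in proportion to how often it followed the current one in the training data.

```python
import random
from collections import Counter, defaultdict

def train(notes):
    """Learn transition counts from a sequence of note names."""
    model = defaultdict(Counter)
    for a, b in zip(notes, notes[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length, seed=0):
    """Sample a new sequence: each next note is drawn with probability
    proportional to how often it followed the current note in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break  # dead end: this note was never followed by anything
        choices, weights = zip(*nxt.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return out

model = train(["C", "E", "G", "E", "C", "E", "G", "C"])
# A "new" melody that only recombines transitions seen in training
print(generate(model, "C", 8))
```

Every note the generator emits is justified purely by frequency statistics from the input, which is why the output can sound plausible without the program having any notion of what a melody is.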

Why on earth would you think I could do that?

It makes sense that we’d anthropomorphize the technology, because it seems like it is almost magic under the hood. How does observing a bunch of items turn into the capability to make knock-offs? It somehow infers the process used for creation from only the end results.

I think I should weigh in. I’m fully against and angered by AI music.

First of all, what AI can do is copy the patterns that it detects in the music it is fed. At this point, most of the music was generated by humans. Not all of it was good music, but that’s a matter of taste. And most humans can’t create something original or emotionally awe-inspiring. Most are also absorbing and copying. So most of it sucks. But not all of it.

Those who succeed at creating truly original music should not be run over by AI content. If human music is gradually replaced by AI music, then that will be the end of music. AI will only learn from the music it has previously created itself. The death of original human musical expression. AI does not have any capacity for originality.

Are we really ok with over-running the human creative impulse? I realize that composing music may well be a dead art, but I hope there are some humans left willing to put their original mark on their output. Nothing in AI will do that.

I think that music is a form of communication that we’re not very good at explaining at a conscious level. But I hope all reading this would say that music triggers an emotional reaction that none of us can adequately explain.

AI can only simulate this level of communication.

I am not interested in music generated based on patterns found in the current output of humans, with no actual emotions involved.

So I reject all AI music. We make the only true music.


A couple of guesses, but I thought so anyway.

Be with the times, man, accept the inevitable, don’t be a dinosaur, the machine learns and so do humans, what’s the difference, adapt or die, man, bla bla bla…

On a lighter note, Bandcamp has banned AI-created music. Anecdotally, a film director who used AI to make his first (and I use the term loosely) “trailer” called me because the AI music sounded terrible to him.

There is pushback to this nonsense.

Maybe the future is that some people will be hired by AI companies to create/produce music in order to train the AI.
I wonder if orchestra musicians feel like this when they are hired for a recording session for some Kontakt library.

Great future

This?

I wonder if this might all end up being a false alarm because the music AI creates really isn’t up to snuff and it costs too much. Right now generative AI costs are heavily subsidized, but that can’t last forever. I suspect the novelty of AI-generated music causes us to overrate its quality and not pick up on aspects that will become annoying over time, the way AI-generated imagery became dominated by hyper-realism. Maybe there really isn’t a large enough market for expensive meh music to make it a viable business.

Remember back when Facebook changed their name to Meta because we were all going to soon be living in the Metaverse? Recently they laid off everyone working on the project.