A provoking thought I keep having.
Since “AI” (I’d rather call it just an aggregator/scraper, but yeah) just takes input from everything humans have done on the internet and spits out an amalgamation, how exactly can we say it’s any different from an average musician who has influences from hundreds if not thousands of bands?
There are many, many songs out there that you can tell are inspired by others. How can you differentiate that from prompting the slop generator to “make a song similar to x artist and throw in some drum parts similar to y artist”? I definitely know the music I write has a sound that can be traced to a lot of the groups I listen to. Even if I just sit down and start writing, I’ll naturally sometimes come up with something that sounds similar to music I’d been really into that week, for example. Of course I notice this and then work to change it up if possible, and many times others don’t even hear the same influence I did in the end!
The only difference I can think of is this: if you took a human baby, put them on an island with no music and a caretaker, and gave them a piano, they would create something. Of course, a computer could never do this.
Peaceful discussion if we can. 🙂


There are a couple of ways to approach the argument: we can talk about the art LLMs can produce (and whether it should be called art), and we can also talk about the long-term ramifications.
The arguments about what LLMs can produce are the weaker ones. Art is subjective, and trying to quantify things like “originality” and “soul” is difficult. Plus, as you suggest, there are plenty of successful artists who are arguably untalented. Ultimately, LLMs can produce something that some people want, at least somewhat. That being said, I would argue that a drum machine on its own is soulless, and I think Prince would agree. It’s the other pieces that make it something more.
The stronger argument is the other one: the long-term ramifications. With everything that has come before (synths, sampling, etc.), art has always cost someone something. If nothing else, it takes time and effort for a person to create something, and there’s some measure of skill involved (EDM, for example, takes skill in composition rather than performance).
LLMs, by contrast, can produce “art” at negligible (immediate) cost. This is genuinely new, and it’s undercutting an already slim market. The likely long-term effect is to thin that market further, to the point where “artist” becomes untenable as a career.
What makes that different from other areas where technology has replaced human effort? The big difference is that LLM art depends on human artists creating art. The more prominent LLM art becomes, the less human art is created, and the worse LLM art becomes. It’s like a snake eating its own tail, or a factory that uses its own foundation as raw material: a self-destructing system.
Another argument worth considering is motivation: the people who are gung-ho about LLM art are typically so because it means they don’t have to pay humans to do the same thing. That’s less of a problem in other industries, but given that art is often a form of emotional expression (as opposed to something like a manufacturing job), there’s a stronger argument that maybe art should be left to humans.
I think someone summed it up nicely with a post along the lines of, “I want AI to do the mundane tasks so I can spend time making art, not the other way around.”
Totally agree.