The ownership problem of AI art

Who owns AI art? Is it the person who typed in the prompt? What about the developer of the AI program? Does the artist whose work trained the AI program get any stake? What if the artist’s art was used to train the AI program without their permission?

These are just some of the questions that will shape the future of AI art. Right now, many of these questions remain unanswered or unasked, so we are essentially in the Wild West of AI image generation. That could change relatively soon though. 

It’s easy to miss if you’re not paying attention to art spaces, but there is already a mini war going on between artists and the tech companies promoting the technological revolution of AI art.

The outcomes of this war will shape the future of AI art. Will we remain in this Wild West era a while longer or will AI image generators be forced to change their policies due to artists’ concerns?

To understand this war and its potential outcomes, we need to take a closer look at the questions we’ve already posed and dive deeper into the issues with how AI art programs actually make their art. That’s exactly what we’re going to do in this article. 

How is AI art made?

All artificial intelligence models work by being trained on existing information and refined through developer fine-tuning. Text generation models like OpenAI’s GPT-3, for example, have over one hundred billion parameters and were trained on hundreds of billions of words of text.

Images are obviously more complex than text, but the process is very similar. AI image generators are taught with millions of captioned images that are fed into a machine learning model. 
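As a rough illustration of that caption-image pairing idea, here is a toy Python sketch. Everything in it is invented for the example; real image generators (diffusion models and the like) are vastly more complex, but the core loop is the same: the model repeatedly sees (image, caption) pairs and strengthens associations between words and visual features.

```python
# Toy "dataset": each entry pairs pixel data with its caption.
dataset = [
    ([0.1, 0.9, 0.4], "a sunflower painting"),
    ([0.8, 0.2, 0.3], "a red barn at dusk"),
]

# Toy "model": a table linking caption words to pixel statistics.
model = {}

def train(dataset):
    for pixels, caption in dataset:
        for word in caption.split():
            # Nudge the stored association toward this image's pixels,
            # mimicking how training reinforces word-image links.
            prev = model.get(word, [0.0] * len(pixels))
            model[word] = [0.9 * p + 0.1 * x for p, x in zip(prev, pixels)]

train(dataset)
print(sorted(model))
# → ['a', 'at', 'barn', 'dusk', 'painting', 'red', 'sunflower']
```

The key point for the plagiarism debate is visible even in this toy: everything the "model" knows comes directly from the images it was fed. There is no other source.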

Once trained, these models can either create images from scratch when given a prompt or take an existing image and convert it into art. OpenAI’s DALL-E is an example of the former. The popular app Lensa AI is an example of the latter.

Both kinds of AI image models take from artists’ work to create their images, which is why so many artists have problems with AI.  

What are the problems with how AI art is made?

Now that we’ve discussed how AI art programs work, we can fully understand why so many artists have a problem with them. 

Despite AI being a revolutionary new technology, the issue artists have with it is one of the oldest problems in the art industry: plagiarism.

As we already discussed, AI image models are trained using existing images and their captions. Sometimes these images are pulled from commercial databases, like Getty Images. More often, though, they are simply scraped from the web. Many of the biggest AI programs use the same dataset: LAION-5B, a collection of over 5 billion captioned images gathered from across the internet by the nonprofit LAION.

Because having an online presence is an essential part of being a modern artist, images from artists’ websites, Instagram accounts, and other social media platforms are regularly included in the databases used to train AI image generation models. This is almost always done without the artist’s knowledge or consent.

This isn’t the first time technology has been used to plagiarize artists’ work online recently. During the NFT boom of 1-2 years ago, many artists had their work stolen and turned into NFTs. Thankfully for the artists affected, they could easily recognize when their art was stolen. Unfortunately, it’s not so simple when it comes to AI art.

Even if an AI image generator creates work strikingly similar to an artist’s own, the artist can never be 100% sure whether their art was used in an AI training database.

In some extreme contexts, AI art plagiarism is clear to see. For example, you can use an artist’s name in your prompt to get art in the style of that artist. This art is made using that artist’s work within the AI model’s database. 

In most circumstances, though, the extent to which AI art models plagiarize is more complicated. At a bare minimum, AI art developers are using copyrighted material without the author’s consent to teach their models. On its face, it’s unclear whether this is legally a problem, since it’s not illegal to learn from or be inspired by copyrighted materials.

I could be inspired by the art of Vincent van Gogh and go on to have a lengthy and successful career producing work similar to van Gogh’s. However, I wouldn’t be able to repurpose van Gogh’s work and pass it off as my own without giving the owners of his works’ copyright some credit. 

The difference between repurposing and being inspired by is human creativity. I could see a painting and be inspired to create something else. AI art generators can’t be inspired. The only way they can create art is by repurposing art they’ve seen elsewhere.

This is the problem with AI art. Programs repurpose artists’ work without consent and without giving the artists proper recognition or compensation. This is without a doubt plagiarism, and it is something the AI art world still needs to reckon with.

What’s next for AI art’s copyright issues?

Artists themselves are sounding the alarm and trying to raise awareness about the issues of AI art, but they are unfortunately fighting an uphill battle against the tech giants behind this recent AI revolution. 

Very few, if any, of the major AI companies have shown willingness to change their practices. These developers maintain that their programs are not guilty of plagiarism.

While we think it is plagiarism in the common sense of the word, the courts will be the ones who decide whether AI programs actually are legally liable for copyright infringement.

A group of artists has brought a class action suit against Midjourney, Stability AI (the maker of Stable Diffusion), and DeviantArt (the maker of DreamUp). These companies are behind some of the biggest AI image generators on the market, so any action against them is sure to send waves throughout the entire AI industry.

These artists are also being joined by some corporate interests in their fight against big AI. Getty Images is suing Stability AI, alleging that Stable Diffusion was trained on images from Getty Images’ database.

Both these cases represent a new frontier of intellectual property law. Instead of claiming that an individual artwork is infringing on the copyright of another artist’s work, these artists are claiming the entire process of AI art generation infringes on their copyright. 

We can’t predict how judges might decide these cases considering that copyright law is notoriously complicated. We can be sure that the art world and the future of AI will be changed forever by whatever the judges decide, so this is a story we should all be paying attention to as it develops.

6 thoughts on “The ownership problem of AI art”

  1. This is such a weird argument from my perspective. This is like asking if the paint company deserves credit for a painting. What about the paint brush manufacturer? What about the car manufacturer featured in the painting?

    The thing about art is that it’s always inspired by another artist or object, and AI is no different.

    • Indeed, and the argument that AI can only repurpose and can’t be inspired depends greatly on understanding how inspiration works in the brain. To my understanding, we don’t know for sure how inspiration works. In fact, deep neural networks get their design from our best understanding of how neurons work in the brain. This means it is very likely that they are inspired the same way we are.

      I am curious though how a law graduate understands these issues, because technologists will not be writing the legislation that comes out of this.

    • I think it’s a little more complicated than you’re making out, but I completely understand your perspective.

      I think a better comparison than the brush/paint company is when a piece of art is used in a collage. When you take someone else’s work and use it in a collage, you are using their work, but it’s not always plagiarism in the sense that the original artist should be compensated for their work being in your collage. The fair use doctrine is what allows you to use copyrighted materials in your collage. Therefore, when these things go to court, a legal analysis under that doctrine decides whether a specific collage violates the original artist’s copyright. The cases against Midjourney, Stable Diffusion, and DreamUp will probably be decided using the same doctrine, since the models rely on other artists’ work.

      Hopefully that clears up my argument a little bit about why I think this is a problem with AI art. I probably should’ve been more clear in the article about whether I think AI art is illegally stealing artists work (i.e., infringing on their copyright). I can’t say that for sure right now. This is what the courts will decide. I personally can’t wait to see the arguments.

  2. This is similar to “sampling” in the music world. When rap artists began using recognizable snippets from other popular songs I thought we would see a slew of lawsuits. There were some and in the end, it was determined that someone could use part of a song but not all. AI looks to be in similar circumstances.

    • I think this is a great comparison and probably one visual artists are hoping the courts agree with. There actually was a slew of lawsuits related to sampling, especially in the 1990s and early 2000s. There are some today too, but much of the precedent has already been set, so there are common industry practices now.

      Basically, samples need to be cleared with whoever owns the rights to the song. When a song is sampled, the original artist gets a writing/production credit so they can get a cut of the money made on the new song that sampled their work. If a sample is not cleared, an artist can be sued and potentially take a huge financial hit.

      Obviously an artist can always sue another if they think the other artist plagiarized their song and infringed on their copyright. If successful, the result of these suits is often the original artist getting a lump sum payment along with writing credits so they can make money through royalties in the future.

      This would be a very attractive option for visual artists whose work was used to train AI models, but it would be harder to enforce, so I’m not sure if the courts would go for this model.

  3. But modern art is very different from traditional art. Modern art has a motive behind it; it’s not just a question of producing something that’s physically like it, because that won’t encapsulate the motive or the raison d’être. Consider Guy Portelli’s handprints. Yes, with AI you could produce a similar painting with slightly different handprints, but even in this simple case it wouldn’t have the backstory. They wouldn’t be the handprints of the famous people involved, so it wouldn’t work in the same way. Traditional painting is probably easier to do in the style of the artist, but it couldn’t be passed off as original because the pigments, oils, canvas etc. would need to be reproduced as well. And these would need to be aged.
