Artificial intelligence art generators train themselves on art pulled straight from the internet… but what happens when most of the art out there is now made by AI?

By Alex Hughes

Published: Monday, 26 June 2023 at 12:00 am


In the past year, AI art has gone from research papers, to niche fad, to internet-dominating tools producing millions of images a day. However, to reach this point, these models had to be trained.

This training involves a hugely comprehensive trawl of the internet, scanning billions of images along with their corresponding descriptive text.
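As a rough illustration of that idea, the sketch below (written for this article, with entirely hypothetical names and URLs, and not modelled on any particular generator's pipeline) shows the shape of the data these systems learn from: pairs of an image and the text found alongside it.

```python
# A conceptual sketch of the kind of data image generators learn from:
# (image, caption) pairs scraped from the web. All names and URLs here are
# hypothetical placeholders, not a real training pipeline.

from dataclasses import dataclass

@dataclass
class ScrapedExample:
    image_url: str  # where the image was found
    caption: str    # the descriptive text that accompanied it

# A real scrape collects billions of pairs; three stand in for them here.
dataset = [
    ScrapedExample("https://example.com/cat.jpg",
                   "a tabby cat asleep on a windowsill"),
    ScrapedExample("https://example.com/bridge.jpg",
                   "a suspension bridge at sunset, film photograph"),
    ScrapedExample("https://example.com/portrait.jpg",
                   "studio portrait of a woman in golden hour light"),
]

def train_step(model, example: ScrapedExample) -> None:
    """One conceptual update: given the caption, nudge the model to become
    slightly better at reproducing the paired image."""
    model.learn_pair(text=example.caption, image=example.image_url)

# In training, a loop like this runs over billions of examples, many times:
# for example in dataset:
#     train_step(model, example)
```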

Not only did that raise major ethical questions around copyright, it also poses one question for the future: what happens when the internet becomes flooded with images made by artificial intelligence?

As these models continue to train by scouring the internet, they will inevitably end up training on images they themselves created. Does that cause some sort of self-perpetuating loop of weirder and weirder images, or will nothing actually change?

The loss of creativity

"Midjourney-woman"
Midjourney generation of a woman in golden hour

“AI will eventually start training on its own work – it’s expected to happen. That will essentially lead to stagnation in creativity. They train on what is already on the internet, so it will copy what is popular out there,” says Ahmed Elgammal, a professor of computer science at Rutgers University.

“If you get into the cycle of feeding it what is on the internet, which right now is mostly AI, that will lead to a stagnation where it is looking at the same thing, the same art style, over and over again.”

While it is easy to picture a future where AI starts pushing out morphed art, driven by repeated training on its own images, it is far more likely that this will simply further promote what it is already creating.

“Basically, it will converge on anything that is popular. More of what is popular will get you stuck to certain art styles that are popular on the internet and it will become biased to that. Whatever is dominant right now, that is what the models will learn to push even more.”

This leads to two key problems. First, the art that AI makes will stick to a limited set of styles, so generated images will look very similar and lack the imagination most people hope to get out of these generators.

The second problem is that if AI art begins to be recycled back into itself, it will ingest whatever biases are common in its own creations. This could mean everything from unrealistic body types and morphed hands to an over-reliance on certain genders appearing in certain roles.
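A toy simulation (a deliberately simplified, one-dimensional stand-in written for this article, not a real image model) shows the first of those problems in miniature: when a model is repeatedly refit to samples of its own output, the spread of what it produces tends to shrink.

```python
# Toy illustration of a model retrained on its own output. A single number
# stands in for an "art style"; the "model" just learns the mean and spread
# of whatever data it sees. Not a real image generator.

import random
import statistics

random.seed(0)

# Generation 0: "human-made" data with a wide spread of styles.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(1, 21):
    # "Train": fit the model to the current data.
    mean = statistics.fmean(data)
    spread = statistics.pstdev(data)

    # The next scrape of the web is now dominated by this model's output,
    # so the next round of training data is sampled from the model itself.
    data = [random.gauss(mean, spread) for _ in range(50)]

    if generation % 5 == 0:
        print(f"after {generation} generations, spread of styles = {spread:.3f}")

# Each refit loses a little of the original diversity and none of it comes
# back, so over many generations the output tends to converge on a narrow
# band of "popular" styles.
```

The same dynamic applies to biases: whatever the model over-produces becomes a larger share of its next training set.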

“If there are biases in the images being produced, people searching for specific beauty types and aesthetics, these are going to end up being recycled back into it,” says Elgammal.

“The beauty of human creativity is being able to like or dislike anything, it doesn’t just have to be what is the most popular option right now. If these models get stuck in a feedback loop, they take that sense of individuality away.”

Ever since OpenAI released Dall-E 2, the generator that kicked off this recent movement, the teams behind these projects have been brutally honest about the biases and stereotypes their models reproduce.

Having originally trained on existing content, Dall-E 2 tended to offer up repeated themes in its images. Prompts for builders mostly returned men, while prompts for flight attendants mostly returned women.

The same happens with race and with religious ceremonies, where results focus heavily on Western versions of these events.

As most AI image generators are trained in the same way, they have all picked up these biases, which, when fed back into the system, would only promote them further.

What’s the fix? 

"Midjourney-man"
Midjourney generation of a man playing poker

Of course, there is no way to know whether this is already happening, or whether it ever will. But Elgammal sees it as a likely future as AI models continue to train themselves on the wealth of free images available to them.

So what’s the fix? If these art generators are headed for a self-perpetuating training cycle, what’s the alternative? The most obvious option is to move away from the current training formula.

“I think we need to rethink how this is done. Instead of training these models on huge pools of data, people should be taking their own artwork, or work they have permission to use, and training their own models based on these images,” says Elgammal.

“By doing this, you are getting something that is completely unique to you, that is something that you simply cannot achieve with these massive models.”

Of course, this only really applies to artists looking to get the most unique results out of their generations. For the average user, a large, fully trained model will be the option worth going for.

As time goes on, these models could find new, more distinctive ways to train, developing new art styles and even an understanding of how the objects in an image sit in three-dimensional space.

However, there is equally a risk of stagnation as AI art falls into the trap of going in circles, learning biases and developing a focus on a singular style of art.

About our expert, Ahmed Elgammal

Ahmed Elgammal is a professor of computer science at Rutgers University. He is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers and has published over 180 peer-reviewed papers, book chapters, and books in the fields of computer vision, machine learning, and artificial intelligence.
