
The rise of AI-generated content, and why it may not be so good

When I say the word AI, what is the first thing that comes to your mind? Is it Cortana from Halo or the Replicants from Blade Runner? The phrase "robots will take your jobs" is usually assumed to describe a distant future, but for many people, it could be much closer to reality than we think. And spoiler alert: it might not be a humanoid robot that takes your place. Recently, there’s been a massive influx of AI-generated content and websites. We’ve all seen the humorous pictures from DALL-E mini (now called Craiyon), which takes prompts from the user and creates pictures from them, often with wild and hilarious results. These results frequently sit in the uncanny valley, as the model behind DALL-E mini isn’t quite good enough to make the pictures realistic, leaving distorted body parts and images that look slightly off. But DALL-E mini’s older (and original) brother, DALL-E, produces far more realistic pictures with its more advanced model.


With the hype around DALL-E, a surge of AI content generators has sprung up, the most notable being those that can create paintings by replicating real art styles. In September, an AI-generated painting won the digital art category at the Colorado State Fair, prompting people to ask whether AI art is real art and how it will impact artists’ careers. There’s a fair argument to be made that the person who typed in the prompts should be considered an artist: they spent time finding the best prompts and chose the result, which shows creativity. After all, art is a way for someone to express their creativity and imagination, so fine-tuning prompts to get the picture you want could be considered art. However, there’s also the argument that the process of making art is just as important, and a machine-generated picture doesn’t have the soul or the creative backstory that a true painting does. Which side you take is a matter of opinion, but the real likelihood that AI art could hurt artists’ careers is not. Art enthusiasts may still prefer a human touch, but the Colorado State Fair result shows that AI art is often indistinguishable from human-made art. For simple drawings, website backgrounds and other basic commissions, companies may buy AI art over human art because of cost, time and flexibility. When the US Bureau of Labor Statistics already reports the mean annual wage of fine artists at $69,000, a reduction or loss in corporate commissions could seriously impact their livelihoods (US Bureau of Labor Statistics). Like it or not, artists are an integral part of human culture, and we cannot afford to lose them.

This issue doesn’t even stop at paintings. A website called Jasper can generate AI-written text from prompts (of varying quality). I could see the possible benefits for small, personal work like emails, but it again raises the question of what people should value in their content. If you look at a painting or read a story and only appreciate the surface-level content, AI-generated content may be fine for you. But I feel that the ideas and concepts raised by a work of art only really mean something when the person behind it relates to those themes and experiences them every day. Some people argue that AI art is inevitable and that creatives should use these tools to enhance their work. But if we slowly accustom the next generations of artists to rely on machines for their inspiration, one day the machines may be the only ones left creating.

Another issue with AI-generated content that we've all heard about is deepfakes. For those who don't know, deepfakes are videos or images in which a person's face or body is digitally replaced with someone else's. This can be used to spread misinformation about politicians or celebrities. Deepfakes become a severe issue when influential people are deepfaked into controversial situations, ruining their careers or even causing political crises. Just imagine if the President of the US were deepfaked tomorrow into calling for war against Russia, causing huge political problems and potentially even leading to the next world war. Of course, this is an extreme example, and the technology isn't yet advanced enough to make such a video indistinguishable from reality, but one day it will be.

Of course, there are some positive use cases for deepfaking technology. Star Wars notably used similar techniques when it digitally recreated Peter Cushing (who died in 1994) and a young Carrie Fisher for Rogue One, and when it recreated a young Mark Hamill's face for The Mandalorian. According to Kietzmann et al., the technology can also be used for filters, trying on clothes before you buy them, making stunt doubles more realistic and improving translations of media into other languages. While deepfakes can be useful, I don't know if the positives are worth the serious misinformation issues they bring. Today, 80% of American adults have consumed fake news (Statista). A possible solution may be to educate people about misinformation. An MIT study found that being more digitally literate correlated with being better able to tell what is misinformation (Sirlin et al.). However, digital literacy did not correlate with the likelihood of spreading misinformation, a problem deepfakes can exacerbate, since more people are likely to watch a deepfaked video than to read a factually incorrect article. Sharing can be somewhat reduced by prompting users to fact-check information before passing it on, according to MIT (Sirlin et al.). A silver lining to all this is that deepfakes have become quite well-known worldwide and are starting to be addressed further by governments, so hopefully they can be regulated properly without destroying other AI content generators in the process.

Writing this was a bit of a dilemma for me. On one hand, I’m always willing to support technological innovation despite its drawbacks, and I think the misinformation problems of AI content can be overcome. On the other hand, as cool as AI-generated content is, I can’t help but feel that it destroys the purpose of art. To me, it’s like a person from a wealthy background rapping about escaping poverty. The lyrics and song might sound great on the surface, but underneath they’re empty and meaningless. AI paintings may be able to replicate the look of a human painting, but they can’t copy the emotions behind it: the blood, sweat and tears put into it, the message the artist wanted to convey. A painting that merely looks cool in the moment is forgettable; anyone can make something visually appealing. But a painting with a message or a purpose, one that strikes the audience, leaves an impact. And if your work is forgotten, left to fade away when the next trend starts, what’s the point of making it in the first place?



Sources:

https://www.wired.com/story/dalle-ai-meme-machine/

https://impakter.com/art-made-by-ai-wins-fine-arts-competition/

https://www.bls.gov/oes/current/oes271013.htm

https://www.researchgate.net/profile/Jan-Kietzmann/publication/338144721_Deepfakes_Trick_or_treat/links/5e6aaf64a6fdccf321d91d43/Deepfakes-Trick-or-treat.pdf

https://misinforeview.hks.harvard.edu/article/digital-literacy-is-associated-with-more-discerning-accuracy-judgments-but-not-sharing-intentions/
