The 2025 Academy Award nominations have been controversial, to say the least. From Challengers being snubbed to Emilia Pérez receiving a staggering 13 nominations, this season’s nominees have been met with outrage for a number of reasons. Arguably the biggest concern stems from Emilia Pérez and The Brutalist, whose respective production companies have admitted that generative AI was used in the making of the films. In an era where AI has become more normalised in everyday life, especially in the entertainment industry (much to the dismay of artists), this raises a moral dilemma: should we be consuming content that its creators could not be bothered to make themselves?
Of course, there is a major distinction to be made: not all AI is bad. In some fields, AI has genuinely improved lives. It has helped in early breast cancer detection screenings, in media restoration, and in the development of algorithms that enhance our online experience. The issue arises when generative AI is used to create something out of nothing. Although popularised by the likes of ChatGPT, Grok, and Meta AI, the generative AI industry has expanded and is now integrated into almost every app and program we use today. Major companies such as Coca-Cola have used AI-generated imagery to replace advertisements that would typically be animated by human artists, and some predict that people will soon be making feature-length films solely with generative AI.
A major problem with this new development is the set of ethical concerns surrounding AI-generated content. Because the technology is so new, it remains largely unregulated, with little legislation governing its use. This means that companies are effectively free to do as they please, which has led to numerous issues related to plagiarism. Although these AI programs seem to generate content out of nothing, deep dives into their training data have revealed that they are, in fact, trained on other people’s intellectual property without their knowledge or consent. The same act that the rest of the world is warned against because it carries dire consequences? These companies are not only getting away with it, they are being praised for it.
Another big concern is the livelihoods being sacrificed. Across the entertainment industry, people are being laid off and replaced with generative AI programs. Animators are losing their jobs, and actors are having their faces and voices stolen, appearing in productions they never consented to. This caused such alarm that in 2023, members of the SAG-AFTRA union went on strike, demanding better protections and clearer contract language regarding generative AI. Studios in the United States had attempted to write clauses into actors’ contracts granting the studios rights to actors’ likenesses, including the right to replicate them with generative AI rather than pay the actors themselves. The resulting strike and contract negotiations cost studios large sums of money and forced them to compensate actors fairly according to how their AI likenesses would be used.
However, the trouble did not end there. While the strike was ongoing, Meta and affiliated companies posted listings aimed at struggling actors who could no longer work: sit for two hours while your voice, likeness, and personality are scanned. The resulting scans have not yet been released in any production, but actors in similar positions have found that scans they were pressured to submit ended up in content they never consented to appear in. Despite the strike, the carefully negotiated contracts, and the hard-won protections against this blatant theft of intellectual property, people were once again going uncompensated for uses of their likeness they had never signed up for.
These deepfakes only scratch the surface of something more sinister. While most companies have simply been using second-rate programs to generate an AI likeness of Tom Hanks for their second-rate game without notifying or compensating him, some have taken a darker turn. The last few years have seen a surge in AI-generated pornography, with photographs of unsuspecting victims turned into graphic scenes. Some of those victims are underage, making an already serious crime even more disturbing. While some see generative AI as a fun little tool for making a picture of a cat sitting on a mushroom, the broader implications of its power are devastating. For as long as it goes unregulated, not just in the entertainment industry but in the wider world too, generative AI is dangerous, with the potential to destroy both a life and a livelihood.
For the record, this article was not written with ChatGPT.