
Europol highlights how deepfake AI will affect law enforcement and what officials can do about it.

Photo Credit: BoliviaInteligente via Unsplash
Just a few short years ago, we were laughing at absurd, bizarre-looking, obviously fake AI videos and asking ourselves how people in the Facebook comment section could possibly think they were real. Now we scroll past AI-generated content without even noticing it isn’t a real person. Europol, the European Union Agency for Law Enforcement Cooperation, has published a report about how this could affect law enforcement proceedings … and how criminals could use it to their advantage.
Europol’s report, titled Facing reality? Law enforcement and the challenge of deepfakes, focuses specifically on the deepfake issue. Deepfakes are AI-generated likenesses of humans, which have improved so much in recent years that they are often indistinguishable from videos of real people. Europol warns of the impact of deepfakes on crime and law enforcement, noting that they could be used to manipulate documents, spread misinformation, and alter or generate video footage in its entirety, and could even be offered as a service, with criminals selling deepfake tools and technologies to others.
The report outlines specific crimes that deepfakes could be used to carry out:
- Harassing or humiliating individuals online
- Perpetrating extortion and fraud
- Facilitating document fraud
- Falsifying online identities and fooling ‘know your customer’ mechanisms
- Non-consensual pornography
- Online child sexual exploitation
- Falsifying or manipulating electronic evidence for criminal justice investigations
- Disrupting financial markets
- Distributing disinformation and manipulating public opinion
- Supporting the narratives of extremist or terrorist groups
- Stoking social unrest and political polarisation
Deepfakes will also complicate law enforcement proceedings, as new measures will need to be implemented to account for the fact that footage and documents can now be doctored. Where audio-visual evidence was previously trusted as an authentic representation of events, more safeguards will have to be put in place to prevent the manipulation of materials, and cross-checking footage will become even more vital.
As for deepfake detection, the report does have a positive note: “Law enforcement has always had to deal with fake evidence and therefore is in a good position to adapt to the presence of deepfakes,” it asserts. The report outlines a few methods that officials can use to combat deepfake evidence, such as manual detection, automated detection, and preventative measures that can be taken to ensure the evidence is not manipulated in the first place.
An All-Powerful Tool … in Everyone’s Hands
On top of this, AI is only improving, and far more quickly than many expected. With just a few short, specific prompts, anyone with Internet access can generate text of any length, entire songs, videos, and photos, all within seconds, or minutes at most. The report outlines the policies of different social media apps that aim to regulate the inflow of AI-generated content:
- Meta (which owns Facebook and Instagram) aims to remove deepfakes, or otherwise edited media, where “manipulation isn’t apparent and could mislead”
- TikTok bans “Digital Forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events”
- Reddit “does not allow content that impersonates individuals or entities in a misleading or deceptive manner.”
- YouTube has an existing ban on manipulated media under the spam, deceptive practices and scams policies of its community guidelines.
However, these policies seem to have barely made a dent in the massive pool of AI content on any of these social media sites. Music streaming giant Spotify already hosts a huge library of AI-generated content, much of which users may not even notice was not made by a person. YouTube and TikTok receive a barrage of AI content daily, and much of the kids’ content on those apps is AI-generated. As many YouTubers have lamented, why would someone put in the work and time to make a video when they could generate one in seconds and earn far more from it?
Constantly Updating, Constantly Everywhere
AI is not only prevalent on social media. Recently, Google announced that its Gemini AI will now be able to summarise and create prompts based on PDFs – joining the other apps Gemini is already integrated into, like Gmail, Sheets, and Docs. iPhones can now generate videos made from clips in your library, or find and summarise information from your email. There are even AI chatbots that will effortlessly take on the role of your favourite character, or a famous celebrity, as you chat with them.
The Bottom Line
While the report outlines a clear path for law enforcement officials to navigate the deepfake issue, creatives like musicians and YouTubers may face a rockier road when it comes to competing against the massive amount of AI-generated content. And while we may not be able to stop people from using AI for nefarious purposes, we can all do our part to ensure that AI remains a useful tool – not a replacement – for human-made art.