CogX - Will 2030 be real? A discussion on Deepfakes

We’ve all been warned about deepfakes, but how real, and how imminent, is the threat?


With deepfakes cropping up all over the internet, many people have been perplexed over their use. Are they some harmless fun on TikTok, or do they pose a threat to society?

This video of Tom Cruise first appeared on TikTok, but went viral after people worldwide were shocked to discover that it’s not Tom Cruise at all, but a deepfake. Some people still struggle to believe it isn’t real because of how authentic the video appears.

The video, however, invites more profound questions. Could somebody’s face be misused for crime or identity theft? 

On CogX’s ‘Will 2030 be real?’ panel, Henry Ajder, Nina Schick and Chris Ume, creator of the Tom Cruise deepfake, explored exactly these questions.

Henry Ajder, a leading deepfake researcher, expressed concern that deepfakes may be used for espionage: “I think absolutely the malicious use of synthetic media is gonna become a very real and viable cyber threat.”

He added, “Technology is advancing quickly, and one of the features of that is the democratisation.”

With misuse, Ajder warns of the potential for political disinformation, as citizens may be unable to discern what is authentic. He also warns that ‘cheap fakes’, created by everyday people, will pose a “very real cyber threat [...] and will inevitably be weaponised in a cyber context.”

How much of an imminent threat are deepfakes?

Chris Ume, the creator of the Tom Cruise deepfake, explained that it is hard to achieve that level of realism with deepfakes, and that doing so requires a good actor. Additionally, deepfakes can currently only be applied to faces, so a young face on an old body would give it away.

Ume warns we’re “5-7 years away from regular people being able to make them.” 

Panellist Nina Schick, author and specialist in technology, geopolitics and society, added,

“We should prepare, we should get ready."

Schick explained that the internet is a “rapidly changing information ecosystem,” citing a prediction that 90% of online video content will be synthetically generated by 2030.

Is there a solution?

Schick suggested that there are two technical solutions to deepfakes:

1. Build an AI (artificial intelligence) to detect deepfakes or synthetic media

This design would use a detection model, as deepfakes would be undetectable to the human eye, and there would be far too much content for humans to sift through. Facebook has been working on a similar solution.
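To make the idea of a detection model concrete, here is a toy sketch in Python. Real systems use deep neural networks trained on video frames; this illustration instead uses two made-up numeric “artefact scores” (a hypothetical stand-in for signals like blending-boundary or frequency anomalies) purely to show the shape of the approach: label examples as real or synthetic, fit a classifier, then score new content.

```python
# Toy sketch of a deepfake "detection model": a binary classifier
# trained to separate real from synthetic media. The features here
# are invented artefact scores, not real forensic signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: synthetic media (label 1) scores
# higher on the artefact features than real media (label 0).
real = rng.normal(loc=0.2, scale=0.1, size=(100, 2))
fake = rng.normal(loc=0.8, scale=0.1, size=(100, 2))
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)

detector = LogisticRegression().fit(X, y)

# Score a new clip's (hypothetical) artefact features:
# 1 means "flagged as synthetic", 0 means "looks real".
print(detector.predict([[0.85, 0.75]])[0])
print(detector.predict([[0.15, 0.20]])[0])
```

The concern raised on the panel maps directly onto this sketch: a generator that learns to produce artefact scores indistinguishable from the real cluster would defeat the classifier, which is why no single detector fits all cases.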

One concern is that as soon as detection technology is developed and built, generators can learn to beat it. “There will never be one size that fits all”.

An additional concern would be that the synthetic material becomes so sophisticated that no detector is able to pick it up.

Academic researchers remain undecided, and often split, on this idea.

2. Content Authenticity 

Content authenticity is already being explored by the tech industry, most visibly through the Content Authenticity Initiative, which sets out three steps.

  1. Detection of deliberately deceptive media using algorithmic identification and human-centred verification. A concern, however, is that synthetic content will become faster and better, resulting in these detection techniques struggling to keep pace.
  2. Education. Creators must understand ways to use these high-tech creative tools responsibly, and skills must be learned and promoted through media literacy campaigns and formal education. People must become equipped with the tools and knowledge to discern synthetic media and misinformation.
  3. Content attribution. A tracker built into the content can expose indicators of authenticity, so that consumers know who has altered content and exactly what has been changed. This ability to provide content attribution for creators, publishers and consumers is essential to engender trust online. However, the legal aspect is contested, and researchers believe it could have societal impacts on privacy.
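The content attribution step above can be illustrated with a toy sketch. Real systems (such as the C2PA standard behind the Content Authenticity Initiative) use public-key signatures and rich edit histories; this minimal version, with a hypothetical creator key, simply binds a signed fingerprint to a piece of media so that any later alteration is detectable.

```python
# Toy sketch of content attribution: attach a cryptographic tag to
# media at creation time, then verify it later. Any change to the
# bytes breaks verification, exposing that the content was altered.
import hashlib
import hmac

CREATOR_KEY = b"creator-secret-key"  # hypothetical signing key

def attach_provenance(content: bytes) -> bytes:
    """Return a tag binding the content to its creator."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).digest()

def verify_provenance(content: bytes, tag: bytes) -> bool:
    """True only if the content is byte-for-byte unchanged."""
    expected = hmac.new(CREATOR_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"frame data of the original video"
tag = attach_provenance(original)

print(verify_provenance(original, tag))                # True
print(verify_provenance(b"doctored frame data", tag))  # False
```

Note what this sketch cannot do: it only proves whether content changed, not who changed it or what was edited — the richer edit-history tracking the panel describes is a much harder problem.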

How can deepfakes be ethical?

Chris Ume urges that ethical standards and regulations be adopted among creators, such as gaining permission from the person whose face is being imitated. Creators also need to take responsibility by asking self-reflective questions like:

Am I harming somebody or their reputation?

Am I making somebody say something they would never say?

What do you think of deepfakes? Let us know down below in the comments!

Author

Elle Farrell-Kingsley


Elle is a passionate advocate for youth policy, AI ethics, and interdisciplinary approaches. Recognised for reporting and researching emerging technologies and their impact, Elle has earned accolades such as the 100 Brilliant Women in AI Ethics™ 2024, the TechWomen100 Award, and the Lord Blunkett Award at the University of Law. Her achievements have led to a funded place on the Sustainable Finance programme at the Smith School of Enterprise and the Environment, University of Oxford, a Lord Blunkett scholarship covering her Legal Technology, AI and Cyberlaw studies, and a prestigious John Schofield Fellowship with a mentor from BBC World News, enhancing her skills in broadcast media. Her work spans impactful journalism, content curation for AI search engines, and advocating for informed policies in the UK Parliament.

With a humanities and social sciences background, she offers a unique perspective that encourages readers to explore the intersection of arts, technology, policy, and society.
