As we move further into the Computer Age, fake news, digital deceit and the widespread use of social media are having a profound impact on every element of society, from swaying elections and distorting established science to encouraging racial bias and exploiting women.
Once a topic discussed only in computer research labs, deepfakes were catapulted into mainstream media in 2017 after various online communities began swapping the faces of high-profile personalities onto actors in pornographic films.
Deepfakes are becoming increasingly convincing, eroding the line between what’s real and what’s fake.
What is a deepfake?
Deepfaking is the act of using artificial intelligence and machine learning to produce or alter video, image or audio content, drawing on original footage of a person to create a convincing version of something that never occurred.
“You need a piece of machine learning to digest all of these video sequences, with the machine eventually learning who the person is, how they are represented, how they move and evolve in the video,” says Dr. Richard Nock, Data61’s machine learning group leader.
“So if you ask the machine to make a new sequence of this person, the machine is going to be able to automatically generate a new one.”
It’s the same story for images, paintings and sounds, aside from a few variations in the technology used to analyse and learn each medium.
“But the piece of technology is almost always the same, which is where the name ‘deep fake’ comes from,” says Dr. Nock. “It’s usually deep learning, a subset of machine learning, that you’re using to ask the machine to forge a new reality.”
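To make that concrete, here is a minimal sketch (in PyTorch, with illustrative layer sizes and names of our own choosing, not taken from any particular deepfake tool) of the shared-encoder, per-person-decoder autoencoder idea behind classic face-swap deepfakes:

```python
# Minimal sketch of the classic deepfake face-swap setup: one shared
# encoder learns a common face representation, and one decoder per
# identity learns to reconstruct that person's face. All layer sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                          # latent face code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training reconstructs each person's own face through the shared encoder:
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)
# The "swap" happens at inference: encode person A, decode with B's decoder.
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_b(encoder(face_a))   # A's pose and expression, B's face
```

The design choice doing the work is the shared encoder: because it must represent both faces in a single latent space, decoding person A’s pose and expression with person B’s decoder produces B’s face performing A’s movements.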
Creating a deepfake
Deepfakes have been described as a contributing factor in the Infocalypse, a term for the age of cybercriminals, digital misinformation, clickbait and data misuse. Creating one requires a large amount of original digital content of the target - an increasingly easy requirement given the trend of posting videos and photos to social media.
Despite this, Dr. Nock argues that we shouldn’t expect the average internet user to start altering videos to the point that they’re unrecognisable from the original.
“No, if the person does not have enough videos, pictures, audio and money, it’s unlikely. It takes a small budget to create deepfakes. You need a computer, you need to train the computer, which needs to be reasonably powerful - more powerful if you want to do video sequences.
“So for an individual in general, no, but if an individual is knowledgeable about machine learning, has money, and lots of background information, then it’s going to be easier.”
Creating a convincing deepfake is an unlikely feat for the general computer user. However, an individual with advanced knowledge of machine learning, the software needed to digitally alter content, and access to a victim’s publicly available photos, videos and audio could do so.
As face-morphing apps with built-in, automated AI and machine learning become more advanced, though, deepfake creation could become attainable to the general population.
A free download is all it takes for a Snapchat user to appear as someone else, with the app’s gender-swap filter and baby lens completely altering the user’s appearance.
While most people use these features in good faith, there have been numerous instances of catfishing (fabricating an online identity to lure others into exploitative emotional or romantic relationships) via online dating apps, with some users treating the experience as a social experiment and others as a ploy to extract sensitive information.
A US publication reported last month that a 20-year-old college student named Ethan used the popular gender-swap filter to pose as a 16-year-old girl named ‘Esther’, eventually reporting a 40-year-old man to local authorities after the man tried to solicit ‘Esther’ for a face-to-face meeting.
While this example could enhance law enforcement’s ability to uncover potential criminal acts, it also heralds a future where inexpensive or free face-morphing technology could be used by someone with limited machine learning knowledge, armed only with a victim’s public social media accounts, to exploit that person’s digital appearance.
Who is at risk?
Politicians, celebrities and those in the public spotlight are the most obvious victims of deepfakes. However, the habit of posting videos and selfies to public internet platforms places everyone at risk.
The creation of explicit images is one example of how deepfakes are being used to harass individuals online, with one AI-powered app generating images of what women might look like, according to its algorithm, unclothed.
AI-assisted propaganda is predicted to impact the result of the 2020 US presidential election, according to The Wall Street Journal and The Guardian, with the rapid spread of altered content across social media, like the ‘drunk’ Nancy Pelosi deepfake, already shaping public opinion. One version of the altered Pelosi video, posted by the conservative Facebook page Politics WatchDog, was viewed more than two million times, shared over 45,000 times and accumulated 23,000 comments calling the Democratic Party politician ‘drunk’ and ‘a babbling mess’.
According to Dr. Nock, an alternative effect of election deepfakery could be an online exodus, with a segment of the population placing their trust in the opinions of a closed circle of friends, whether it be physical or an online forum, such as Reddit.
“Once you’ve passed that breaking point and no longer trust an information source, most people would start retreating, refraining from accessing public media content because it cannot be trusted anymore, and eventually relying on their friends, which can be limiting if people are exposed more to opinions than to facts.”
“People don’t trust the public information that they have, but place more trust in the private information they get from social media.”
What is being done to prevent deepfakes?
There are at least two ways to prevent deepfakes, according to Dr. Nock:
- Invent a mechanism of authenticity, such as a blockchain record or a publisher’s stamp, to confirm that information comes from a trusted source and that a video depicts something that actually happened.
- Train machine learning models to detect deepfakes created by other machines.
Either mechanism would need to be widely adopted across information sources in order to be successful.
“Blockchain could work - if carefully crafted - but a watermark component would probably not,” explains Dr. Nock. “Changing the format of an original document would eventually alter the watermark, while the document would obviously stay original; this would not happen with the blockchain.”
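As a rough, self-contained sketch of the authenticity idea (our illustration only, not Data61’s design; it uses the Python cryptography library and stand-in content), a publisher could sign a hash of the original footage at release time, and anyone could later verify both the hash and the signature:

```python
# Sketch of a content-authenticity record: the publisher signs the
# SHA-256 fingerprint of the original footage; a verifier recomputes
# the fingerprint of any copy and checks the signature. A real
# ledger-based scheme would anchor such records on a blockchain
# rather than trusting a single key holder.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(data: bytes) -> bytes:
    """SHA-256 digest of the content's raw bytes."""
    return hashlib.sha256(data).digest()

# Publisher side: sign the fingerprint of the original at release time.
original = b"...raw bytes of the original video..."  # stand-in content
signing_key = ed25519.Ed25519PrivateKey.generate()
signature = signing_key.sign(fingerprint(original))
public_key = signing_key.public_key()

# Verifier side: check the publisher's signature over a received copy.
def is_authentic(candidate: bytes) -> bool:
    try:
        public_key.verify(signature, fingerprint(candidate))
        return True
    except InvalidSignature:
        return False

assert is_authentic(original)
assert not is_authentic(original + b" tampered")
```

Note that even this check fails if a legitimate copy is merely re-encoded, since the underlying bytes change; that is essentially the format problem Dr. Nock raises for watermarks, and it is why a carefully crafted ledger-based scheme would need to record provenance across such transformations rather than a single file hash.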
Machine learning is already being used to detect deepfakes, with researchers from UC Berkeley and the University of Southern California training models to recognise each individual’s distinctive head and face movements. These subtle personal quirks are not currently modelled by deepfake algorithms, and the technique identifies fakes with 92% accuracy.
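As a loose illustration of the idea (a simplified stand-in, not the Berkeley/USC pipeline; the landmark data and feature choice here are assumptions), one could summarise how a person’s facial landmarks move together over a clip and train an off-the-shelf classifier on those summaries:

```python
# Loose illustration of movement-based deepfake detection: summarise a
# clip's facial-landmark trajectories as pairwise correlation features,
# then train a standard classifier to separate real clips from fakes.
import numpy as np
from sklearn.svm import SVC

def movement_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (frames, points) array of tracked positions for one clip.
    Returns the upper triangle of the correlation matrix between tracks,
    capturing how this person's facial points move together over time."""
    corr = np.corrcoef(landmarks.T)         # (points, points)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Stand-in data: in practice these would come from a facial-landmark
# tracker run over known-real and suspect video clips.
rng = np.random.default_rng(0)
real_clips = [rng.normal(size=(90, 10)).cumsum(axis=0) for _ in range(20)]
fake_clips = [rng.normal(size=(90, 10)) for _ in range(20)]

X = np.array([movement_features(c) for c in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 0=real, 1=fake

clf = SVC(kernel="rbf").fit(X, y)
print("suspect clip is fake:", bool(clf.predict(X[-1:])[0]))
```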
While this research is comforting, bad actors will inevitably continue to reinvent and adapt AI-generated fakes.
Machine learning is a powerful technology, and one that’s becoming more sophisticated over time. Deepfakes aside, it is also bringing enormous benefits to areas like privacy, healthcare and transport, including self-driving cars.
At CSIRO’s Data61, we act as a network, partnering with government, industry and universities to advance AI technologies across society and industry, in areas such as adversarial machine learning, cybersecurity and data protection, and rich data-driven insights.
Central to the wider deployment of these advanced technologies is trust, and much of our work is motivated by maximising their trustworthiness through publicly accessible case studies, pilot programs and research.