Deepfake – what it is and how to tell it from the original

25. 02. 2025 · 18 min read

We are entering a new digital age where we can no longer trust everything we see and hear. What was once considered irrefutable proof that something really happened, captured as a photograph, video or sound recording, is increasingly becoming a potential tool for manipulation. With the advent of deepfake technology, the line between reality and illusion is blurring, as in the film The Matrix.



With the computer equipment and software tools available, anyone can alter any image, sound or even video to make it look completely realistic, even if it is a fake. Fake news, manipulated political speeches or defamatory videos that damage the reputation of others make it difficult to distinguish truth from deception.

Once used to create stunning film effects, these technologies are becoming a dangerous tool in the hands of fraudsters. The creators of fake videos can easily create realistic situations that influence public opinion, spread misinformation online and disrupt fair political competition. Conversely, consumers of this fake media make themselves vulnerable to deception, manipulation and lies that can have serious consequences for their lives.

Trust in digital media is increasingly fragile. We are witnessing a plethora of new false information spreading through virtual space with astonishing speed. They look all too real. Can these digital scams be detected and can we protect ourselves from deepfake threats? Read on to find out.

Deepfake technology – example 1

Definition of deepfake

Deepfake is an advanced form of digital manipulation that uses artificial intelligence (AI) and machine learning (ML) to create or edit audiovisual content. The quality of a deepfake is judged by how faithful and realistic it appears to us, and how difficult it is to distinguish from genuine content. Data scientists play an important role in this process, analyzing vast amounts of visual and audio data, training AI models, and optimizing algorithms to make deepfake outputs look as convincing as possible.

The term deepfake itself is a combination of the words deep and fake. Deep comes from deep learning (a subset of machine learning that uses neural networks to process information in layers), and fake refers to fabricated content.
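To make the "layers" idea concrete, here is a minimal sketch of how a neural network passes information through successive layers. It is purely illustrative: the weights are random and untrained, and real deepfake models use far larger, trained networks.

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three weight matrices = three layers that process information in turn
layers = [rng.standard_normal((8, 16)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((16, 2))]

def forward(x, layers):
    # Each layer transforms the previous layer's output, so features
    # become progressively more abstract from layer to layer
    for w in layers[:-1]:
        x = relu(x @ w)
    return x @ layers[-1]

x = rng.standard_normal((1, 8))  # a toy input "observation"
out = forward(x, layers)
print(out.shape)  # (1, 2)
```

The "deep" in deep learning refers simply to stacking many such layers, which lets the model learn increasingly abstract features of faces, voices and movements.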

The beginning of deepfakes

Synthetic media technology has its roots in computer graphics and artificial intelligence research, which began to develop in the 1990s. During this period, technologies such as Computer Generated Imagery (CGI) were developed to enable realistic animations and simulations of human faces. Although these technologies were primarily intended for the film industry, they laid the groundwork for more advanced image and video manipulation systems.


The foundations of deepfake technology were laid with the development of neural networks. In 2014, machine learning researcher Ian Goodfellow introduced the concept of generative adversarial networks (GANs). These networks work on the principle of two competing algorithms.

Generator algorithm

The primary role of the generator is to create initial fake digital content such as audio, photo or video. The goal of the generator is to mimic the target’s appearance, voice or behaviour as closely as possible.

Discriminator algorithm

The discriminator then analyzes the content produced by the generator to determine the extent to which it appears authentic or fake.

Repetitive feedback between the generator and discriminator creates a continuous process of incremental improvement. This technology has become the basis for creating realistic deepfake videos and images.
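The adversarial loop described above can be sketched in miniature. The toy example below is a hedged illustration, not a production GAN: the "data" is one-dimensional, the generator is a linear function, the discriminator is a logistic classifier, and the gradients are derived by hand. What it shows is the alternating update pattern: the discriminator learns to tell real from fake, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(42)
real_mean, real_std = 3.0, 1.0      # the "real data" distribution

# Generator: turns random noise z into a sample, g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(real_mean, real_std)  # one real sample
    z = rng.standard_normal()
    fake = a * z + b                        # the generator's attempt

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        grad = p - label                    # d(log-loss)/d(logit)
        w -= lr * grad * x
        c -= lr * grad

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator
    p = sigmoid(w * fake + c)
    grad_logit = p - 1.0                    # the generator wants label 1
    a -= lr * grad_logit * w * z            # chain rule through fake = a*z + b
    b -= lr * grad_logit * w

# b typically drifts toward the real mean as the two models push back
# and forth; the exact value depends on the random seed
print(f"learned offset b = {b:.2f}")
```

Real GANs replace the linear generator and logistic discriminator with deep neural networks, but the tug-of-war between the two updates is the same.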

Deepfake technology – example 2

Origin of the term deepfake

The name itself first appeared on the Reddit platform in late 2017. A user posting under the nickname ‘deepfakes’ began sharing edited videos of celebrities with pornographic content, using machine learning algorithms to swap celebrities’ faces onto those of porn actresses. Although these were amateur experiments, the fake videos immediately sparked widespread public interest, but also concern that the technology could turn innocent entertainment into unethical or illegal activity.

Rapid development of deepfake technology

From 2018 onwards, deepfake technology started to improve dramatically, mainly due to the availability of powerful hardware such as GPUs and high-core-count CPUs like AMD's Ryzen Threadripper, and to the growth of cloud computing. Cases of deepfake videos being used to spread misinformation, influence public opinion, or create erotic content without the consent of those involved have become more common.

Deepfake – exponential growth of fake content

However, in recent years we have seen an exponential increase in the quality and availability of deepfake technology. Last year alone, deepfake content increased by 1,700% compared to the previous year. The quality of fake material has improved to the point where over 75% of people now have difficulty distinguishing between real and fake content. While initially over 95% of deepfake material was adult content, today up to 80% of all deepfakes are related to cryptocurrencies.

Elon Musk is the most common choice of scammers to promote various dubious investments. People have lost billions of dollars in this way. The largest number of fake ads encouraging people to invest in fictitious assets can be found on Facebook, but the X network (formerly Twitter), YouTube or Instagram are also used for this purpose.

Deepfake technology – example 3

The deepfake creation process

The process starts with collecting a large amount of data, mainly photos and videos of the person we are trying to recreate. The more material we collect, the more realistic and convincing the end result will be. We feed the data into an AI system that uses deep machine learning techniques to analyse and determine the characteristic features of facial expressions and movements of facial parts such as eyes, ears, mouth, eyebrows, etc.

Then a second set of data comes into play: the image and sound of the footage onto which the fake will be mapped. Most often this is our own image and voice, which serve as the driving performance onto which the imitated person's likeness is transferred. Artificial intelligence uses complex algorithms to merge the two sets of data and create a realistic-looking result.

Of course, this process is not straightforward, but rather iterative and requires constant adjustments. To create a convincing deepfake, we need to work on the synchronization of lips, facial expressions and overall lighting. We also need to play around with the audio, especially if we are creating a video where the person is supposed to be speaking.
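Early open-source face-swap tools implemented this merging step with an autoencoder: one shared encoder learns the pose and expression features common to both people, and each person gets their own decoder. The sketch below illustrates only that data flow; the matrices are random and untrained stand-ins, whereas real systems use trained convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)
D, LATENT = 64, 8  # flattened-face size and latent size (toy values)

# One shared encoder plus one decoder per identity; all matrices here
# are random and untrained, standing in for trained convolutional nets
encoder   = rng.standard_normal((D, LATENT)) * 0.1
decoder_a = rng.standard_normal((LATENT, D)) * 0.1  # reconstructs person A
decoder_b = rng.standard_normal((LATENT, D)) * 0.1  # reconstructs person B

def swap(face_a):
    # Training fits (encoder, decoder_a) on A's faces and
    # (encoder, decoder_b) on B's faces; at inference time, routing
    # A's face through B's decoder renders A's expression with B's looks
    latent = face_a @ encoder  # pose/expression features
    return latent @ decoder_b  # decoded with identity B

face = rng.standard_normal(D)  # stand-in for a flattened face image
out = swap(face)
print(out.shape)  # (64,)
```

Because the encoder is shared, the latent features it extracts are identity-agnostic, which is exactly what makes the decoder swap possible.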

Deepfake tools are now available that can create a fairly convincing digital impersonation from a single photo, or imitate a voice from a few minutes of audio recording. Deloitte reports that deepfake software can be purchased on the dark web for as little as $20. On the other hand, better, more professional solutions cost several thousand dollars. Of course, the result is directly proportional to the amount of work and time involved.

Deepfake vs. original – AI-generated faces

With advances in artificial intelligence, we are seeing the emergence of technologies capable of creating synthetic faces so realistic that it is almost impossible for humans to distinguish them from real ones. A study by Sophia J. Nightingale and Hany Farid, published in 2022 in the Proceedings of the National Academy of Sciences (PNAS), looked at this very question.

In the study, the authors found that faces generated using advanced algorithms, specifically StyleGAN2, are almost indistinguishable from real faces for ordinary observers. In experiments where participants were asked to judge whether a face was real or synthetic, they achieved an accuracy of only about 48.2% to 59%, which is close to chance level. Even training and specific instructions for identifying synthetic faces did not significantly improve accuracy. Synthetic faces are now so convincing that there is no effective way to recognize them with the naked eye.

Success rate of correct classification of real (R) and synthetic (S) faces
SOURCE: pnas.org/doi/epub/10.1073/pnas.2120481119

Credibility of synthetic faces

As well as being indistinguishable from real faces, the study found that synthetic faces were, on average, rated as more trustworthy. Participants in the experiment were asked to rate the trustworthiness of the faces on a scale of 1 to 7 (with 1 being the least trustworthy face and 7 being the most trustworthy face), with the synthetic faces scoring an average of 4.82, compared to 4.48 for the real faces. This difference, although small, was statistically significant. The reason for this phenomenon may be that the synthetic faces often show a subtle smile, which also has a positive effect on their ratings.

Synthetic faces (S) look more credible than real faces (R)
SOURCE: pnas.org/doi/epub/10.1073/pnas.2120481119

These findings have serious implications for the digital world. The availability of this technology opens up opportunities for abuse – from the creation of false identities and fraud to the anonymous dissemination of misinformation. In a situation where any photo or video can be faked, the authenticity of digital content can be questioned from the outset.

Although the authors of the study suggest, for example, embedding watermarks in generated images, this does not solve the problem, as AI algorithms can often remove them. A more robust approach is cryptographic verification: the creator signs the content with a private key, and anyone can use the corresponding public key to verify that the signature is valid and the content has not been altered.
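The signing scheme can be illustrated with a toy example. The sketch below uses deliberately small RSA numbers purely for illustration; real systems use 2048-bit RSA keys or elliptic-curve signatures such as Ed25519, via a vetted cryptography library.

```python
import hashlib

# Toy RSA key pair (small primes for illustration only; never build
# real signatures by hand like this)
p, q = 1000003, 1000033
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret)

def sign(content: bytes) -> int:
    # Hash the content, then transform the digest with the private key
    digest = int.from_bytes(hashlib.sha256(content).digest(), "big") % n
    return pow(digest, d, n)

def verify(content: bytes, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature
    digest = int.from_bytes(hashlib.sha256(content).digest(), "big") % n
    return pow(signature, e, n) == digest

video = b"original video bytes"
sig = sign(video)
print(verify(video, sig))              # valid: content is unaltered
print(verify(b"tampered video", sig))  # tampering breaks verification
```

The key property is that only the holder of the private key can produce a valid signature, while anyone can check one, which is what makes such schemes suitable for authenticating published media.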

Deepfake trends 2024

Let’s take a look at the year 2024. The Deepfake Trends 2024 study reveals that deepfake messages have thrived this year, thanks largely to freely available generative AI tools. Deepfake lies have spread across a wide range of sectors, impacting businesses of all sizes and raising the issue of effective identity verification.

Important findings

  • Deepfake scams are on the rise. Up to half of businesses worldwide have experienced a fraud attempt using audio or video deepfakes.
  • Average financial losses are as high as $450,000, with large companies often reporting losses in excess of $1 million.
  • In terms of risk perception, up to 66% of executives consider deepfake a serious threat, with identity theft (42%) and phishing attacks being the most common concerns.

Global overview and trends by sector

Deepfakes are having the greatest impact in countries such as the United Arab Emirates and Singapore, where more than 50% of organizations have experienced one in the last year. The sectors most affected include IT, crypto firms and financial services, but healthcare and aviation companies also report significant risks.

Business safeguards

  • Biometric verification and multi-factor authentication (MFA): more than 84% of organizations have deployed advanced deepfake detection technologies, with biometrics such as fingerprints and liveness detection playing the most prominent role.
  • Advanced AI algorithms: nearly half of companies are using machine learning to improve the accuracy of deepfake detection.

People already lose billions of dollars a year by handing over their money to fraudsters. The Centre for Financial Services at consulting firm Deloitte predicts that generative artificial intelligence could cause $40 billion in fraud losses in the US by 2027, up from $12.3 billion in 2023, an increase of 32% per year.

In the future, it will be interesting to see how companies manage to adapt to increasingly sophisticated fraud.

Deepfake technology – demonstration 4

The positive side of deepfake technology

Although deepfake technologies are often associated with fraud and threats, their potential goes far beyond crime. Let’s take a look at how the application of deepfake can greatly enrich various industries and improve the quality of our daily lives.

Innovative entertainment and arts

Deepfakes are transforming the film and television industry. They make it possible to create realistic special effects, bring historical characters to life or even replicate actors for scenes that would otherwise be impossible to film. They also offer artists new ways to express their own creativity, for example by transforming still images into vivid portraits.

Personalisation and communication

In marketing and advertising, deepfake technology can be used to create personalised campaigns that better reach target audiences. For example, creating videos in which familiar faces address individual customers by name can increase engagement and build stronger brand relationships.

Educational tools

Deepfake technologies also have applications in education. Schools and universities can use realistic simulations to teach history, where historical figures “come to life” and tell their stories, or to train professionals such as doctors or pilots by simulating real-life situations.

Protection of cultural heritage

Deepfakes can be used to reconstruct damaged or lost cultural artefacts. They can help to digitally restore sculptures, paintings or other historical monuments and make them accessible to a wider audience. They can also serve as a tool for preserving memories, for example by creating realistic models of people for family archives.

AI research and development

Deepfake technologies are also helping to improve artificial intelligence in the areas of fraud detection and privacy. Research into deepfake fraud detection is providing new ways to improve security in the digital world.

How to spot a deepfake

A few years ago, I could have told you how to spot a deepfake. Nowadays it is almost impossible, no matter how much trouble you take hunting for telltale details; even AI systems designed to find deepfake patterns in content struggle with it. This applies to deepfake material in which the person isn't making fast movements. Generating a realistic gymnast mid-routine, for example, is still beyond deepfake technology, which is why gymnastics has been called the modern Turing test for deepfakes.

Deepfake technology – demonstration 5

How can you protect yourself from misinformation?

Cybercriminals and scammers know that many people, especially the older ones among us, can't tell the difference between a deepfake and the real thing, and don't check whether the content they are presented with is authentic. The only protection nature has given us is critical thinking. Before we believe anything we see, hear or read, we need to stop and ask ourselves a few questions. You've probably heard them before.

  • Who? Who is presenting this information and what is the source?
  • What? What is said or shown? Is the information shocking? Does it sound too good to be true?
  • Where? Where does the information come from? Is it possible to find out where it was first published?
  • When? When was this information recorded? Can it be verified?
  • Why? Why is this information being presented? Could there be an ulterior motive?
  • How? How do I know it’s real?

Quiz

Quizzes like this show that it's getting harder and harder to tell the difference between deepfakes and real content.

Deepfake – risk or opportunity?

Deepfake technology is a fascinating demonstration of how far human creativity and technology can go hand in hand. It allows us not only to create realistic images and stories, but also to confront the very nature of what we take to be reality. It forces us to re-evaluate our trust – in what we see, hear and read.

But there is an opportunity in this illusion. It teaches us to look deeper, to ask critical questions and not to give in to first impressions. With critical thinking, fact-checking and a cautious approach, we can not only master this technology, but also use it for the benefit of society.

Deepfake is an extraordinary phenomenon, like fire, which can be dangerous but also extremely useful if we know how to control it. It opens the door to new forms of creativity, learning and innovation. The challenge is not only to protect ourselves from its risks, but also to find ways to integrate it into our lives so that it serves us, not harms us.

The future belongs to those who can tame technology and turn it into a tool for good. Critical thinking is our greatest weapon. With it, we can overcome illusion and create a digital world where truth is stronger than lies. Deepfake is both a challenge and an opportunity – it is up to us to deal with the illusion.

FAQ

What is deepfake?

Deepfake is a technology that uses artificial intelligence to create realistic fake video or audio recordings. These recordings can show people saying or doing things they never actually did.

How does deepfake work?

Deepfake uses machine learning algorithms, in particular techniques such as generative adversarial networks (GANs). These networks learn from large amounts of data to reproduce faces and voices with high accuracy.

What are the potential uses of deepfake technology?

Deepfakes can be used for a variety of purposes, from entertainment (e.g. in films and video games) to education and advertising. But it can also be misused to spread misinformation or create fake news.

Is it possible to spot a deepfake?

Detecting deepfake technology is a challenge, but tools and techniques are being developed to identify fake videos. These tools look for irregularities in motion, lighting and sound.
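One family of cues that detection research has explored is frequency-domain artifacts left behind by some GAN generators. The sketch below is a deliberately naive illustration of that idea, not a working detector: it measures how much of an image's spectral energy sits outside the low-frequency centre, which for some generated images differs from natural photographs.

```python
import numpy as np

def high_freq_ratio(image):
    # Fraction of spectral energy outside the low-frequency centre.
    # Some GAN generators leave characteristic high-frequency artifacts,
    # so an unusual ratio can serve as one weak signal among many.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()
    return float(1.0 - low / spectrum.sum())

rng = np.random.default_rng(1)
smooth = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth "photo"
noisy = rng.standard_normal((64, 64))                       # artifact-heavy
print(high_freq_ratio(smooth) < high_freq_ratio(noisy))     # True
```

Production detectors combine many such signals, typically learned by neural networks rather than hand-coded, and even they struggle against the latest generators.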

What are the legal and ethical issues associated with deepfakes?

Deepfakes raise many ethical and legal issues, including privacy, copyright and the potential for misuse to spread false information. Many countries have already begun to pass legislation to regulate this technology.

Can deepfakes compromise security?

Yes, deepfakes can be used for fraud, blackmail or political manipulation. There have been cases where deepfake videos have been used to discredit public figures or spread fake news.

How can I protect myself from a deepfake?

It is important to be vigilant when consuming online content. Check sources of information, follow official channels and be sceptical of videos that look suspicious or are circulated without context.

Jozef Wagner

I have been programming in Java for more than 10 years. I currently work at msg life Slovakia as a senior Java programmer, helping customers implement their requirements in the Life Factory insurance software. In my free time I like to relax in the forest or play a good computer game.
