Real Talk about Deepfakes
- Heather Pennel

I was talking to a friend of mine recently who told me that whenever a member of his cybersecurity team calls him, it has to be cameras on with a full live background, no filters. At first, I thought he was describing a corporate “cameras on” policy, but he soon explained that due to the rise of nation-state attacks, it was imperative that he be able to see and confirm that he was talking to his team and not a threat actor using deepfake technology. Maybe you have had this experience in your own business. If not, I unfortunately suspect it won’t be long before this threat expands to your neck of the woods. 2026 is forecasted to be “the year of the deepfakes,” and they are expected to become mainstream in scams.
We live in interesting times, and it’s fascinating to witness the rapid changes that AI is bringing to the table. The weaponization of generative AI has come to the forefront, and deepfakes have become a critical issue. We rely heavily on our senses to discern what is true, and that reliance is now being tested: what our ears hear and our eyes see can no longer be taken at face value, because the technology is evolving at astonishing speed.
What began as an experimental deepfake threat has quickly become a normalized and highly convincing tactic in modern cyberattacks. Projections suggest that the total number of shared deepfakes could grow from around 500,000 in 2023 to an estimated 8 million by the end of 2025, with the volume of deepfake videos alone projected to grow 900% annually. The threat is also evolving from static, previously generated content to real-time, interactive, voice-enabled synthetic content that is much harder to detect. And the technology is shifting from mainly reputational damage to direct, large-scale financial fraud.
What can we expect in 2026?
Can you tell a deepfake from authentic footage? I’ll be honest: I have been fooled. Many online news outlets offer fun “quizzes” to test your ability to spot the latest advances in the technology, and I have humbly failed. It’s suggested that humans detect deepfake videos with less than 25% accuracy, and 2026 is the projected year when it becomes nearly impossible for humans to distinguish synthetic media from the real thing. We are also expected to take the brunt of the financial losses due to deepfakes this year; by 2027, losses are projected to reach $40 billion. One consequence is that deepfakes have the potential to outsmart your HR department: synthetic candidates will be capable of passing interviews in real time, allowing organizations to unknowingly hire threat actors. We are at risk of inviting the enemy right through our front doors and paying them to do it!
The most significant shift in the last six months has been the transition from pre-rendered video clips to real-time “live” deepfakes. Scammers are now using platforms like Zoom and YouTube to present AI avatars, allowing them to impersonate colleagues, loved ones, or other trusted individuals in live, high-pressure conversations. It’s one thing to catch a phishing email from “your boss” asking you to send an Amazon gift card to a specific contact. How much more likely are we to fall for a live Zoom call that reproduces not only their voice but their real-time likeness, mannerisms, and behavior, pressuring us to rush through an urgent financial request? We will always have to remain vigilant and keep alternative ways of verifying and communicating with our trusted inner circle. Interoffice procedures and employee education will have to adapt and change as this risk becomes a credible threat.
Last but not least, website and identity cloning are on the rise. AI tools now allow the rapid creation of perfectly cloned websites that are difficult for fraud teams to take down permanently, which is fueling mass phishing and synthetic identity theft. Deepfakes are coming for our voices, our videos, and our websites. Sadly, every technological form of our presence is at risk of being faked.
How can we defend ourselves?
The Zero-Trust approach is always a winner, and in this case technology alone will no longer be enough. For many of us these are things we have always done, but we need to apply them in new ways. I remember being in elementary school and learning to have a safe word with my family in case someone other than my parents tried to pick us up from school. The same principle applies now: corporations and families can establish pre-agreed “safe words” or phrases that are never written down digitally, which has remained one of the most effective defenses. Organizations will also need to rely on out-of-band verification, using secondary channels to confirm sensitive and financial requests; verifying a request through a completely different medium, such as a direct call to a personal number or a message on an internal encrypted chat, is essential. Finally, implement a 15-30 minute “cool down” period for high-risk actions. Deepfake attacks rely on high-pressure urgency to prevent you from thinking critically, and a mandatory waiting period takes that weapon away. A minimal sketch of what such a workflow could look like follows below.
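To make this concrete, here is a minimal Python sketch of such a gate as it might appear in an internal approvals tool. The names, the 20-minute window, and the shape of the “secondary channel” step are all illustrative assumptions on my part, not a reference to any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative value, chosen from the 15-30 minute range suggested above.
COOL_DOWN = timedelta(minutes=20)

@dataclass
class HighRiskRequest:
    requester: str
    description: str
    received_at: datetime = field(default_factory=datetime.utcnow)
    verified_out_of_band: bool = False  # set only after a second-channel check

def confirm_via_secondary_channel(request: HighRiskRequest) -> None:
    """Placeholder for the human step: call the requester's known personal
    number or ping them on an internal encrypted chat, then record it here."""
    request.verified_out_of_band = True

def may_execute(request: HighRiskRequest, now: datetime | None = None) -> bool:
    """The request proceeds only if the cool-down has elapsed AND a second
    channel has confirmed it. Urgency alone never bypasses either check."""
    now = now or datetime.utcnow()
    cooled_down = now - request.received_at >= COOL_DOWN
    return cooled_down and request.verified_out_of_band
```

Even a toy gate like this captures the key point: the attacker’s main weapon is urgency, and a mandatory waiting period plus a second channel removes urgency from the equation entirely.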
Security firms are evolving toward “identity governance” for AI, treating autonomous agents as powerful identities that require granular permission controls and continuous monitoring.
Spending on deepfake detection technology is projected to grow by 40% in 2026. Forrester gives the example: “media firms are deploying deepfake detection for content authentication, while help desk teams use it to defend against social engineering attacks, such as those seen in the Scattered Spider campaign. Financial services are leveraging detection tools to prevent fraud, and HR teams are integrating them into interview processes to combat synthetic identity scams, including those linked to North Korean IT worker schemes.” This makes it clear that organizations need to prioritize evaluating deepfake detection providers now and update their highest-risk processes to stay ahead of this constantly adapting and advancing threat.
We are also starting to see more regulatory and legal responses as AI snowballs forward, and some will take effect in 2026. Content provenance is the ability to verify where digital content comes from, how it was created, and whether it has been altered as it moves across platforms over time. The C2PA standard, pioneered by Adobe and Microsoft, embeds a “digital nutrition label” into files to prove where they came from and whether they were edited; think of it as a digital passport (a toy sketch of the idea follows this paragraph). Several U.S. states have enacted strict disclosure laws for AI in healthcare and elections. New York now requires conspicuous disclosure for “synthetic performers” in advertisements as of June 9, 2026. The TAKE IT DOWN Act, effective May 19, 2026, criminalizes the knowing publication of nonconsensual “digital forgeries” or intimate visual depictions. Regulation of AI has two sides and differing opinions; however, we can expect to see more of it to protect citizens, businesses, and governments as the threats continue to evolve.
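For intuition, here is a minimal Python sketch of the general idea behind signed provenance labels. To be clear, this is not the C2PA format itself: real C2PA manifests are signed structures backed by certificate chains, while this toy uses a simple shared-secret HMAC purely to show how provenance claims get cryptographically bound to the exact bytes of a piece of content:

```python
import hashlib
import hmac
import json

# Hypothetical key held by the content creator; real provenance systems use
# public-key certificates so anyone can verify without sharing a secret.
SIGNING_KEY = b"publisher-secret"

def label_content(content: bytes, claims: dict) -> dict:
    """Bind provenance claims (who made it, what tools, what edits) to the content."""
    payload = hashlib.sha256(content).hexdigest() + json.dumps(claims, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_label(content: bytes, label: dict) -> bool:
    """Any change to the content bytes or the claims invalidates the signature."""
    payload = hashlib.sha256(content).hexdigest() + json.dumps(label["claims"], sort_keys=True)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["signature"])
```

The takeaway is the “nutrition label” property: if even one pixel of the video or one word of the claims changes, verification fails, which is what makes provenance a useful counterweight to deepfakes.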
The irony of artificial intelligence is that while we reap its benefits, letting it handle the busy work so we have more time to relax, its risks also mean we have to be more vigilant, better informed, and willing to put in more work to defeat those who are using it against us. Deepfakes are here, and next year’s article on this technology will most likely look very different from this one, since AI evolves at a pace we have only seen in science fiction. This is the time to gather our resources, put procedures and protections in place, and, as I like to say these days: don’t be scared, be prepared!
Sources:
Forrester. Predictions 2026: Trust & Privacy: How GenAI, Deepfakes, and Privacy Tech Will Affect Trust Globally. https://www.forrester.com/blogs/predictions-2026-trust-privacy-how-genai-deepfakes-and-privacy-tech-will-affect-trust-globally/
Coalition for Content Provenance and Authenticity (C2PA). https://c2pa.org/
Scientific American. How Digital Forensics Could Prove What’s Real in the Age of Deepfakes. https://www.scientificamerican.com/article/how-digital-forensics-could-prove-whats-real-in-the-age-of-deepfakes/
