How adversarial AI is creating shallow trust in a deepfake world

87% of Americans hold businesses accountable for digital privacy, yet only 34% trust them to use AI effectively to protect against fraud: a significant trust gap. Despite 51% of enterprises deploying AI for cybersecurity and fraud management, just 43% of customers globally believe companies are getting it right. Companies urgently need to bridge that gap and ensure their AI-driven security measures inspire confidence. Deepfakes are widening it.

The growing trust gap

The growing trust gap permeates everything from customers’ buying relationships with businesses they’ve trusted for years to elections being held in seven of the world’s ten largest countries. Telesign’s 2024 Trust Index provides new insights into the widening gap between customers and the companies they buy from and, on a broader scale, into national elections.

Deepfakes depleting trust in brands, elections

Deepfakes and misinformation are driving a wedge of distrust between companies, the customers they serve, and citizens participating in elections this year.

“Once fooled by a deepfake, you may no longer believe what you see online. And when people begin to doubt everything when they can’t tell fiction from fact, democracy itself is threatened,” says Andy Parsons, Adobe’s Senior Director of the Content Authenticity Initiative.


Widespread distribution of deepfakes across social media platforms, amplified by networks of automated fake accounts, makes it even more challenging to differentiate fake content from real. This technique has become commonplace globally. One example is from September 2020, when analytics firm Graphika and Facebook blocked a Chinese network of accounts supporting “Operation Naval Gazing,” which posted content on geopolitical issues, including US-Chinese relations in the context of the South China Sea conflict.

Nation-states invest heavily in misinformation campaigns to sway the elections of nations they’re in conflict with, often with the goal of destabilizing democracy or creating social unrest. The 2024 Annual Threat Assessment of the U.S. Intelligence Community report states, “Russia is using AI to create deepfakes and is developing the capability to fool experts. Individuals in war zones and unstable political environments may serve as some of the highest-value targets for such deepfake malign influence.”

Attackers are relentless in weaponizing AI and building arsenals of deepfake technologies that rely on the rapid gains being made in generative adversarial networks (GANs). Their tradecraft is having an immediate impact on voters globally.

72% of global voters fear that AI-generated content, including deepfake video and voice cloning, is undermining elections today, according to Telesign’s Index. 81% of Americans are specifically concerned about the impact deepfakes and related GAN-generated content will have on elections. Americans are also among the most aware of AI-generated political ads or messages: 45% report seeing one in the last year, and 17% in the last week.

Trust in AI and machine learning

One promising sign from Telesign’s Index is that despite fears of adversarial AI-based attacks using deepfakes and voice cloning to derail elections, a majority (71%) of Americans would trust election outcomes more if AI and machine learning (ML) were used to prevent cyberattacks and fraud.

How GANs deliver increasingly realistic content

GANs are the technical engines powering deepfakes’ growing prevalence. Everyone from rogue attackers experimenting with the technology to sophisticated nation-states, including Russia, is doubling down on GANs to create video and voice clones that appear authentic.

The greater the authenticity of deepfake content, the greater its impact on customer and voter trust. Because they are so challenging to detect, GAN-generated fakes are extensively used in phishing attacks, identity theft and social engineering schemes. The New York Times offers a quiz to see whether readers can tell which of ten images are real and which are AI-generated, further underscoring how rapidly GANs are improving deepfakes.

A GAN pairs two competing neural networks: a generator and a discriminator. The generator continually creates synthetic data, including images, videos and audio, while the discriminator evaluates how real the created content looks.

The goal is for the generator to continually increase the quality and realism of the image or data to deceive the discriminator. The sophisticated nature of GANs enables the creation of deepfakes that are nearly indistinguishable from authentic content, significantly undermining trust. These AI-generated fakes can be used to spread misinformation rapidly through social media and fake accounts, eroding trust in brands and democratic processes alike.

Figure source: CEPS Task Force Report, May 2021.
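To make the generator/discriminator loop concrete, here is a minimal GAN training-step sketch in PyTorch. It is illustrative only: the toy two-dimensional data, network sizes and hyperparameters are assumptions made for this sketch, not details of any system described above.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic toy data
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 2, 32

generator = nn.Sequential(          # noise -> synthetic sample
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, DATA_DIM),
)
discriminator = nn.Sequential(      # sample -> realness logit
    nn.Linear(DATA_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM) + 3.0       # stand-in "real" data
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each side’s improvement pressures the other, which is why GAN output quality, and with it deepfake realism, keeps climbing.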

Protecting trust in a deepfake world

“The emergence of AI over the past year has brought the importance of trust in the digital world to the forefront,” says Christophe Van de Weyer, CEO of Telesign. “As AI continues to advance and become more accessible, it is crucial that we prioritize fraud protection solutions powered by AI to protect the integrity of personal and institutional data—AI is the best defense against AI-enabled fraud attacks. At Telesign, we are committed to leveraging AI and ML technologies to combat digital fraud, ensuring a more secure and trustworthy digital environment for all.”

Harnessing intelligence from more than 2,200 digital identity signals, Telesign’s AI models empower companies to transact with their customers and grow trust, fulfilling the growth potential today’s diverse digital economies represent. Telesign helps its customers prevent the transmission of more than 30 million fraudulent messages each month and protects more than one billion accounts from takeovers every year. The company’s Verify API uses AI and ML to add contextual intelligence and consolidate omnichannel verification into a single API, streamlining transactions and reducing fraud risks.
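For illustration only, an omnichannel verification call from a backend service might look like the sketch below. The endpoint, field names, response shape and credentials are hypothetical placeholders invented for this example, not Telesign’s documented API; consult the vendor’s reference for the real interface.

```python
# Hypothetical single-API verification call. Every URL, field, and
# credential here is a placeholder for illustration, not a real API.
import requests

def send_verification(phone_number: str, channel: str = "sms") -> str:
    """Ask the (hypothetical) service to send a one-time code."""
    resp = requests.post(
        "https://api.example-verify.invalid/v1/verifications",  # placeholder URL
        json={
            "phone_number": phone_number,  # E.164 format, e.g. "+15551234567"
            "channel": channel,            # e.g. "sms", "voice", "email"
        },
        auth=("CUSTOMER_ID", "API_KEY"),   # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["verification_id"]  # assumed response field

# A second call would then check the code the user typed, e.g.:
# POST /v1/verifications/{verification_id}/check with {"code": "123456"}
```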

Telesign’s Index shows there is valid cause for concern when it comes to getting cyber hygiene right. The study found that 99% of successful digital intrusions start with accounts that have multifactor authentication (MFA) turned off. CISA provides a useful fact sheet on MFA that explains why it’s important and how it works.

A well-executed MFA plan requires the user to present a combination of at least two factors: something they know, something they have or something they are (a biometric). One of the primary reasons so many Snowflake customers have been breached is that MFA is not enabled by default. Microsoft will start enforcing MFA on Azure in July, and GitHub began requiring users to enable MFA in March 2023.
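Most authenticator apps implement the “something they have” factor with time-based one-time passwords (TOTP, RFC 6238). Below is a minimal sketch using only Python’s standard library; the base32 secret is a made-up example value, not a real credential.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)

secret = "JBSWY3DPEHPK3PXP"                           # example secret, not real
print(totp(secret), verify(secret, totp(secret)))     # e.g. "492039 True"
```

Production implementations also accept codes from adjacent time steps to tolerate clock drift; the sketch omits that for brevity.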

Identity-based breaches quickly deplete customer trust. The lack of a solid identity and access management (IAM) hygiene plan nearly always leads to orphaned, dormant accounts that often stay active for years, and attackers constantly sharpen their tradecraft to find new ways to identify and exploit them.
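A minimal sketch of one such hygiene check appears below: flagging accounts whose last sign-in is older than a cutoff. The account records and 90-day threshold are illustrative assumptions; a real system would pull this data from a directory service or identity provider.

```python
# Illustrative dormant-account sweep; records and cutoff are made up.
from datetime import datetime, timedelta, timezone

DORMANT_AFTER = timedelta(days=90)   # assumed policy threshold

accounts = [  # hypothetical export of (user, last sign-in) pairs
    {"user": "a.rivera", "last_sign_in": datetime(2024, 5, 30, tzinfo=timezone.utc)},
    {"user": "old.contractor", "last_sign_in": datetime(2023, 1, 12, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
dormant = [a["user"] for a in accounts
           if now - a["last_sign_in"] > DORMANT_AFTER]
print("Flag for review or deactivation:", dormant)
```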

Recent research by Ivanti found that 45% of enterprises believe former employees and contractors may still have active access to their company systems and files. “Enterprises and large organizations often fail to account for the huge ecosystem of apps, platforms, and third-party services that grant access well past an employee’s or contractor’s termination,” Dr. Srinivas Mukkamala, Chief Product Officer at Ivanti, told VentureBeat in an interview earlier this year. “There is a shockingly large number of security professionals — and even leadership-level executives — who still have access to former employers’ systems and data.”

Conclusion – Preserving trust in a deepfake world

Telesign’s Trust Index quantifies the current trust gaps and their likely direction. One of the Index’s most pragmatic findings is just how important it is to get IAM and MFA right. Another is how much customers rely on CISOs and CIOs to make the right decisions regarding AI/ML to protect their customers’ identities and data.

As neural networks continue to improve, increasing GANs’ accuracy, speed and ability to create deceptive content, doubling down on security becomes core to any CISO’s roadmap. Nearly all breach attempts start with a compromised identity. Shutting that down, whether or not the attack starts with deepfake content, is a goal within reach for any business.


