Pindrop Reports 1,300% Surge in Deepfake Fraud

Pindrop's latest report reveals a dramatic increase in deepfake fraud, with synthetic voice attacks rising significantly across various sectors.

Pindrop has released its 2025 Voice Intelligence & Security Report, highlighting a sharp rise in AI-powered fraud and deepfake attacks. According to the company's press release, deepfake fraud attempts increased by more than 1,300% in 2024, with synthetic voice attacks surging across the retail, banking, and insurance sectors.

The report details that fraud attempts in U.S. contact centers now occur every 46 seconds, with synthetic voice attacks rising 107% in retail, 149% in banking, and 475% in insurance. Pindrop's analysis of more than 1.2 billion customer calls in 2024 also showed a 173% increase in synthetic voice calls from Q1 to Q4.

Fraudsters are employing advanced tactics such as spoofing-as-a-service platforms and AI-enhanced phishing, leveraging breached personally identifiable information (PII) to bypass traditional defenses. The report also notes a growing trend of deepfake job candidates using AI-generated voices and video to deceive recruiters during remote interviews.

Looking ahead, Pindrop forecasts a continued rise in AI-driven threats, with deepfaked calls projected to increase by 155% in 2025. The company emphasizes the need for evolving security solutions to keep pace with the rapid advancements in fraud tactics.

