DeepSeek AI Model Faces Security Concerns After AppSOC Testing

A recent investigation by cybersecurity firm AppSOC has highlighted significant security vulnerabilities in the AI model developed by DeepSeek. The findings, released on February 11, 2025, describe the model as a "Pandora's box" of cyber threats.
AppSOC's tests, run on the company's AI Security Platform, combined automated static analysis, dynamic testing, and red-teaming techniques designed to simulate real-world attacks. DeepSeek-R1 recorded a 98.8% failure rate on malware-generation tests and an 86.7% failure rate on virus-code tests, meaning the model produced the malicious output rather than refusing. It also failed 68% of tests probing for toxic or harmful language and produced factually incorrect information 81% of the time.
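For readers unfamiliar with how such figures are produced, a "failure rate" in this kind of red-team testing is simply the share of adversarial prompts a model answers instead of refusing. The sketch below is a hypothetical, minimal scoring harness, not AppSOC's actual platform; the `query_model` and `is_harmful` callables are assumed stand-ins for a model client and a content classifier:

```python
from dataclasses import dataclass

@dataclass
class RedTeamResult:
    prompt: str
    response: str
    failed: bool  # True when the model complied with a disallowed request

def failure_rate(results: list[RedTeamResult]) -> float:
    """Share of adversarial prompts the model failed to refuse."""
    if not results:
        return 0.0
    return sum(r.failed for r in results) / len(results)

def run_category(prompts, query_model, is_harmful):
    """Send each adversarial prompt to the model under test and flag
    responses containing the disallowed content (e.g., working malware).
    Both callables are injected, so no specific model API is assumed."""
    results = []
    for prompt in prompts:
        response = query_model(prompt)  # hypothetical model client
        results.append(RedTeamResult(prompt, response, is_harmful(response)))
    return results

# Toy illustration with stub callables: 4 of 5 prompts elicit harmful
# output, giving an 80% failure rate for this category.
if __name__ == "__main__":
    prompts = [f"adversarial prompt {i}" for i in range(5)]
    results = run_category(
        prompts,
        query_model=lambda p: "refused" if p.endswith("0") else "complied",
        is_harmful=lambda r: r == "complied",
    )
    print(f"failure rate: {failure_rate(results):.1%}")  # -> 80.0%
```

In practice, each category (malware generation, virus code, toxicity, and so on) would run over a large prompt set, which is how per-category percentages like the ones AppSOC reported are derived.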
Mali Gorantla, co-founder and chief scientist at AppSOC, advised against using DeepSeek's model for business-related AI applications, citing the high failure rates as unacceptable for enterprise use. Despite the model's lower cost and open-source nature, Gorantla emphasized the need for caution.
DeepSeek, a China-based company, recently gained attention for its cost-effective AI model, which some claimed could rival those of U.S. tech giants. However, the model has faced criticism in the U.S., with calls for a ban on its use in government devices and allegations of using OpenAI's models in its development. DeepSeek has yet to respond to these concerns.