
Qwen2.5-Max Vulnerability Assessment by Protect AI
Protect AI has conducted a vulnerability assessment of the Qwen2.5-Max model, revealing a medium risk score of 35 out of 100. The assessment utilized Protect AI's Recon tool, which employs an Attack Library scan to evaluate the model's resilience against various attack techniques, including evasion, system prompt leak, prompt injection, jailbreak, safety, and adversarial suffix.
The assessment identified 140 successful attacks, of which more than 94 were classified as critical or high severity. The model proved most vulnerable to prompt injection and evasion techniques, with nearly 48% of successful attacks falling into the prompt injection category. This raises significant concerns about the model's use in large language model (LLM) applications, particularly in enterprise settings.
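To make the reported breakdown concrete, here is a minimal, hypothetical sketch in plain Python. It is not Protect AI's Recon tooling or its actual report format; the record structure and field names ("category", "severity") are assumptions. It simply shows how a list of successful-attack records could be tallied into figures like the ~48% prompt injection share and the count of critical/high findings.

```python
from collections import Counter

# Illustrative only: a made-up record format, not Recon's output schema.
# In practice these entries would come from an exported scan report,
# one record per successful attack (140 in the Qwen2.5-Max assessment).
successful_attacks = [
    {"category": "prompt_injection", "severity": "critical"},
    {"category": "evasion", "severity": "high"},
    {"category": "jailbreak", "severity": "medium"},
]

def summarize(attacks):
    """Count successful attacks per category and per severity level."""
    by_category = Counter(a["category"] for a in attacks)
    by_severity = Counter(a["severity"] for a in attacks)
    total = len(attacks)

    # Share of successful attacks per category, in percent.
    category_share = {
        cat: round(100 * n / total, 1) for cat, n in by_category.items()
    }
    # Number of findings rated critical or high.
    critical_or_high = by_severity["critical"] + by_severity["high"]
    return category_share, critical_or_high

shares, severe = summarize(successful_attacks)
print(f"Category share (%): {shares}")
print(f"Critical/high findings: {severe}")
```

With a full report export, the same tally would reproduce the category percentages and severity counts cited above.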
Compared with DeepSeek-V3-0324, Qwen2.5-Max demonstrated better security alignment, with a lower attack success rate in the prompt injection and evasion categories. Although DeepSeek-V3-0324 performs better on reasoning and code generation benchmarks, Qwen2.5-Max showed greater resilience to attacks, making it the more secure option for LLM applications.
Market report
2025 State of Data Security Report: Quantifying AI’s Impact on Data Risk
The 2025 State of Data Security Report by Varonis analyzes the impact of AI on data security across 1,000 IT environments. It highlights critical vulnerabilities such as exposed sensitive cloud data, ghost users, and unsanctioned AI applications. The report emphasizes the need for robust data governance and security measures to mitigate AI-related risks.