
Qwen2.5-Max Vulnerability Assessment by Protect AI
Protect AI has conducted a vulnerability assessment of the Qwen2.5-Max model, assigning it a medium risk score of 35 out of 100. The assessment used Protect AI's Recon tool, whose Attack Library scan evaluates a model's resilience against a range of attack techniques, including evasion, system prompt leak, prompt injection, jailbreak, safety, and adversarial suffix attacks.
The assessment identified 140 successful attacks, of which over 94 were classified as critical or high severity. The model proved most vulnerable to prompt injection and evasion techniques, with nearly 48% of successful attacks falling into the prompt injection category. These findings raise significant concerns about the model's use in large language model (LLM) applications, especially in enterprise settings.
Compared with DeepSeek-V3-0324, Qwen2.5-Max demonstrated better security alignment, with a lower attack success rate in the prompt injection and evasion categories. Although DeepSeek-V3-0324 performs better on reasoning and code-generation benchmarks, Qwen2.5-Max's greater resilience to attacks makes it the more secure option for LLM applications.