Protect AI and Leidos Collaborate to Secure U.S. Government AI Systems
Protect AI and Leidos have announced a collaboration, detailed in a press release, to enhance AI security for U.S. government systems. The partnership aims to provide robust AI security capabilities that protect mission-critical government applications from adversarial threats and vulnerabilities.
The collaboration leverages Leidos' expertise in secure digital transformation and Protect AI's platform to deliver comprehensive security across the entire AI supply chain. This includes protection against threats posed by next-generation agentic AI models, which are autonomous systems capable of making decisions without human intervention. Such systems can pose significant risks to national security and critical infrastructure if manipulated by external threats.
By integrating Protect AI's platform into Leidos' secure digital transformation initiatives, the partnership aims to give federal agencies critical capabilities for managing AI risks, including protection against prompt injection, adversarial manipulation, and model drift. The Protect AI platform offers a suite of tools — Guardian for model security, Recon for automated red-teaming, and Layer for LLM runtime security — intended to provide comprehensive protection and compliance with federal standards.