US and Allies Issue AI Security Guidance

The US, along with Australia, New Zealand, and the UK, has released joint guidance to enhance AI security, focusing on protecting training data and infrastructure.

Published on Thursday, the joint guidance document emphasizes the importance of protecting training data from tampering and limiting access to AI infrastructure.

The document addresses various aspects of AI security, including data protection throughout the AI lifecycle, supply chain considerations, and strategies to mitigate potential attacks on large datasets. It highlights the need for digital signatures to authenticate data modifications, trusted infrastructure to prevent unauthorized access, and ongoing risk assessments to identify emerging threats.
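To make the digital-signature recommendation concrete, here is a minimal sketch of signing a dataset file and verifying it before use. It assumes the Python `cryptography` package with Ed25519 keys; the file name, key handling, and overall workflow are illustrative assumptions, not procedures taken from the guidance itself.

```python
# Hypothetical sketch: sign a training-data file and verify it before use.
# Assumes the `cryptography` package; paths and key handling are illustrative only.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_dataset(private_key: Ed25519PrivateKey, dataset_path: Path) -> bytes:
    """Produce a detached signature over the raw bytes of a dataset file."""
    return private_key.sign(dataset_path.read_bytes())


def verify_dataset(public_key: Ed25519PublicKey, dataset_path: Path, signature: bytes) -> bool:
    """Return True if the dataset bytes match the signature, False if they were modified."""
    try:
        public_key.verify(signature, dataset_path.read_bytes())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()          # in practice, keys would live in a managed KMS/HSM
    data_file = Path("train_split.parquet")     # illustrative file name
    data_file.write_bytes(b"example records")   # stand-in for real training data

    sig = sign_dataset(key, data_file)
    print("verified:", verify_dataset(key.public_key(), data_file, sig))
```

In a real pipeline, the verification step would run wherever the data is consumed, so any modification after signing is caught before training begins.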

The guidance also recommends using cryptographic hashes to ensure data integrity and anomaly detection algorithms to filter out malicious data points before training. These measures aim to prevent data quality issues, such as statistical bias and data drift, from compromising AI model safety and reliability. The collaboration reflects growing concerns among Western nations about the vulnerabilities in AI systems that could impact critical infrastructure.
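The sketch below illustrates those two ideas, a hash-based integrity check and a simple outlier filter applied before training. The streaming SHA-256 check uses only the standard library plus NumPy; the z-score rule and its threshold are a deliberately crude stand-in for an anomaly detector and are assumptions of this example, not techniques prescribed by the guidance.

```python
# Hypothetical sketch: hash-based integrity check plus a basic z-score outlier filter.
# Threshold, feature layout, and file names are illustrative assumptions.
import hashlib
from pathlib import Path

import numpy as np


def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def check_integrity(path: Path, expected_hex: str) -> bool:
    """Compare the recomputed hash with the value recorded when the data was published."""
    return sha256_of_file(path) == expected_hex


def filter_outliers(features: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Drop rows far from the per-feature mean; a crude stand-in for anomaly detection."""
    z = np.abs((features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12))
    return features[(z < z_threshold).all(axis=1)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(1000, 4))
    poisoned = np.vstack([clean, np.full((5, 4), 50.0)])  # a few implausible rows
    print("rows kept:", filter_outliers(poisoned).shape[0])
```

Production systems would typically replace the z-score rule with a detector suited to the data, but the control flow stays the same: verify integrity first, then screen records before they reach the training set.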

