Safetensors Joins PyTorch Foundation to Strengthen AI Model Security

April 08, 2026
The PyTorch Foundation has added Safetensors, developed by Hugging Face, as its newest project to improve secure AI model distribution and prevent arbitrary code execution risks.

In a press release, the PyTorch Foundation announced that it has welcomed Safetensors as its newest project under the Linux Foundation. Developed by Hugging Face, Safetensors enhances the security of AI model distribution by preventing arbitrary code execution and improving performance across multi-GPU and multi-node deployments.

Safetensors is a widely used tensor serialization format within the open source machine learning ecosystem. Its integration into the Foundation's portfolio addresses security risks associated with model sharing and execution. The format acts as a table of contents for AI model data, ensuring that models can be exchanged safely without enabling untrusted code execution.
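To illustrate the "table of contents" idea, here is a minimal sketch of the published safetensors file layout using only the Python standard library. This is illustrative code, not the official `safetensors` library: the real format is an 8-byte little-endian header size, followed by a JSON header mapping tensor names to dtype, shape, and byte offsets, followed by raw tensor bytes. Because the header is plain JSON, loading it never executes untrusted code, unlike pickle-based checkpoints.

```python
import json
import struct

def write_safetensors(path, tensors):
    """Write a minimal safetensors-style file.

    tensors: dict mapping name -> (dtype_str, shape, raw_bytes).
    Layout: [u64 little-endian header size][JSON header][raw data].
    """
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            "data_offsets": [offset, offset + len(raw)],
        }
        blobs.append(raw)
        offset += len(raw)
    hjson = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))  # 8-byte header size
        f.write(hjson)                          # the "table of contents"
        for raw in blobs:
            f.write(raw)                        # raw tensor bytes

def read_header(path):
    """Read only the table of contents: pure JSON parsing, no code execution."""
    with open(path, "rb") as f:
        (hsize,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(hsize))

# Example: one 2x2 float32 tensor of zeros (16 raw bytes).
write_safetensors("demo.safetensors",
                  {"weight": ("F32", [2, 2], bytes(16))})
print(read_header("demo.safetensors")["weight"]["shape"])  # [2, 2]
```

Inspecting the header this way is enough to know every tensor's name, dtype, shape, and location before reading a single byte of weight data, which is what makes lazy and partial loading across devices practical.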

The addition of Safetensors expands the PyTorch Foundation’s suite of open source projects, which includes DeepSpeed, Helion, PyTorch, Ray, and vLLM. The Foundation stated that this move supports the development of secure, high-performance AI systems and aligns with its goal of providing trusted infrastructure for open source AI development.
