Anthropic Introduces Data Sharing Option for AI Training
Anthropic has announced updates to its consumer terms, requiring users of its Claude Free, Pro, and Max plans to decide by September 28, 2025, whether they want their data used for AI model training. In a company announcement, Anthropic detailed that users who opt in will have their data retained for five years, a significant change from the previous 30-day retention policy.
The new policy does not affect business customers using Claude Gov, Claude for Work, Claude for Education, or API access. Users can adjust their data sharing preferences at any time, and new users will make their choice during the signup process. Existing users will encounter a pop-up notification to make their decision.
Anthropic emphasizes that data from participating users will help improve model safety and capabilities, such as coding and reasoning. The company assures users that their data will not be sold to third parties and will be protected through various privacy measures.