Anthropic Restricts AI Services to Chinese-Owned Entities
Anthropic has announced a significant policy change, barring Chinese-owned entities from using its artificial intelligence services. The move, reported by the Financial Times, is part of a broader effort to restrict access to its technology from authoritarian regions such as Russia, Iran, and North Korea.
The San Francisco-based company, known for its Claude chatbot, said the policy applies to entities more than 50 percent owned by companies based in unsupported regions. The rule is intended to prevent such firms from accessing Anthropic's AI services through subsidiaries in other countries.
An Anthropic executive said the policy reflects the company's commitment to ensuring that AI capabilities advance democratic interests. The change is expected to cost the company hundreds of millions of dollars in revenue, but Anthropic considers it necessary to address the potential misuse of AI technology by authoritarian regimes.