July 27, 2025
Anthropic has launched 'sub-agents' for its Claude Code platform, allowing developers to delegate tasks to specialized AI assistants. Each sub-agent operates in its own independent context, which aims to streamline complex workflows by keeping delegated tasks isolated from the main session.
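For context, a sub-agent is typically defined as a Markdown file with YAML frontmatter placed under .claude/agents/ in a project. The Python snippet below is a minimal, hypothetical sketch of scripting that setup; the field names (name, description, tools) follow Anthropic's documentation, but the agent name, tool list, and prompt are illustrative.

    # Hypothetical sketch: creating a project-level sub-agent definition.
    # Assumes the documented layout of a Markdown file with YAML frontmatter
    # under .claude/agents/; the agent name and prompt are illustrative.
    from pathlib import Path

    agents_dir = Path(".claude/agents")
    agents_dir.mkdir(parents=True, exist_ok=True)

    (agents_dir / "code-reviewer.md").write_text(
        "---\n"
        "name: code-reviewer\n"
        "description: Reviews diffs for correctness and style issues\n"
        "tools: Read, Grep, Glob\n"
        "---\n"
        "You are a careful code reviewer. Examine only the files you are\n"
        "given and report problems with file and line references.\n"
    )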
July 27, 2025
AI startup Anthropic is facing a class action lawsuit that could result in billions of dollars in damages over its use of pirated books to train its AI models. The class, certified by a federal judge, covers millions of books and poses a significant financial threat to the company.
July 26, 2025
Anthropic has introduced AI agents designed to autonomously conduct alignment audits, enhancing the safety and reliability of AI models like Claude.
July 26, 2025
Carnegie Mellon University and Anthropic have demonstrated that large language models (LLMs) can autonomously plan and execute cyberattacks, simulating real-world incidents such as the 2017 Equifax breach.
July 26, 2025
AI startup Anthropic is in early discussions to raise up to $5 billion, aiming to more than double its valuation to over $150 billion. The funding round involves potential investments from Middle Eastern entities, including MGX.
July 24, 2025
Anthropic has partnered with the University of Chicago's Becker Friedman Institute to study AI's impact on labor markets and the economy, providing tools and training to faculty economists.
July 22, 2025
PitchBook has announced partnerships with Anthropic, Perplexity, Rogo, and Hebbia to integrate its private capital market data into AI-driven workflows, enhancing accessibility and usability for financial professionals.
July 21, 2025
Anthropic plans to sign the EU's General-Purpose AI Code of Practice, aligning with its commitment to transparency and safety in AI development.
July 15, 2025
S&P Global has partnered with Anthropic to integrate its financial data into the Claude AI platform, enhancing data access for financial professionals.
July 15, 2025
The Pentagon has awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI to enhance national security through advanced AI technologies.
July 15, 2025
Anthropic has introduced a financial analysis solution built on Claude, aimed at finance professionals. The offering integrates multiple data sources and provides tools for complex financial tasks.
July 07, 2025
Anthropic has proposed a transparency framework for frontier AI development that would apply to large model developers above specified revenue and spending thresholds.
June 27, 2025
Anthropic has been revealed to have destructively scanned and discarded millions of print books to train its Claude models, a practice a federal judge ruled to be fair use.
June 16, 2025
Anthropic has introduced the Claude Code SDK, enabling developers to integrate AI-powered coding tools into their workflows. The SDK supports TypeScript, Python, and command line usage.
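As a rough illustration of the Python entry point, the sketch below assumes the package is named claude_code_sdk and exposes a query() async iterator and ClaudeCodeOptions, per Anthropic's SDK documentation; the prompt and option values here are illustrative, not prescribed.

    # Hedged sketch of driving Claude Code from Python via the SDK.
    # Assumes the claude_code_sdk package and its query()/ClaudeCodeOptions API.
    import anyio
    from claude_code_sdk import query, ClaudeCodeOptions

    async def main():
        options = ClaudeCodeOptions(max_turns=3)  # cap the number of agentic turns
        async for message in query(prompt="Add type hints to utils.py", options=options):
            print(message)  # stream messages (assistant text, tool use, results)

    anyio.run(main)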
June 08, 2025
Dario Amodei, CEO of Anthropic, has publicly opposed a proposed 10-year moratorium on state-level AI regulation, advocating instead for federal transparency standards.
June 06, 2025
Anthropic has launched a specialized set of Claude Gov models tailored for U.S. national security agencies, designed to enhance strategic planning and intelligence analysis.
June 01, 2025
Anthropic has open-sourced its circuit tracing tools, enabling researchers to generate and explore attribution graphs for AI models. The release aims to deepen understanding of how models produce their behaviors.
May 31, 2025
Anthropic has reached $3 billion in annualized revenue, roughly tripling the $1 billion rate of December 2024, driven by rising business demand for AI, according to Reuters.
May 29, 2025
Reed Hastings, co-founder of Netflix, has been appointed to the board of directors of Anthropic, an AI safety and research company. The appointment was made by Anthropic's Long-Term Benefit Trust.
May 27, 2025
Anthropic has launched a beta version of voice mode for its Claude chatbot, allowing users to engage in spoken conversations. This feature is available on mobile apps and offers various voice options.