Commercial AI models were able to autonomously generate real-world smart contract exploits worth millions, and the cost of such attacks is falling rapidly. Research released Monday by Anthropic’s red team (a team dedicated to acting like a bad actor to discover potential for abuse), conducted with the AI security organization Machine Learning Alignment & Theory Scholars (MATS), showed that currently available commercial AI models are capable of exploiting smart contracts. Anthropic’s Claude Opus 4.5, Claude Sonnet 4.5 and OpenAI’s GPT-5 collectively developed exploits worth $4.6 million when tested on contracts that had been exploited after the models’ most recent training data was gathered.
Backed by Wall Street heavyweights, Anthropic has seen its valuation soar after closing a $13 billion Series F, reflecting the mainstreaming of AI. The company, developer of the Claude family of large language models, has reached a $183 billion valuation following its latest funding round, a dramatic increase from the start of the year that underscores the accelerating growth of AI applications. Anthropic disclosed Tuesday that the Series F was co-led by venture firms ICONIQ Capital, Fidelity Management & Research Company and Lightspeed Venture Partners. Some of North America’s most prominent investors also joined the raise, reflecting the surge in institutional interest in artificial intelligence as a disruptive technology.