News

A new feature in Claude Opus 4 and 4.1 lets the models end conversations with users in cases of "persistently harmful or abusive ...
Anthropic's Claude AI now prohibits chats about nuclear and chemical weapons, reflecting the company's commitment to safety ...
OpenAI rival Anthropic says Claude has been updated with a rare new feature that allows the AI model to end conversations ...
While Meta's recently exposed AI policy explicitly permitted troubling sexual, violent, and racist content, Anthropic adopted ...
According to Anthropic, the vast majority of Claude users will never experience their AI suddenly walking out mid-chat. The ...
Software engineers are finding that OpenAI's new GPT-5 model is helping them think through coding problems, but isn't much ...
Claude AI adds privacy-first memory, extended reasoning, and education tools, challenging ChatGPT in enterprise and developer ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
Google and Anthropic are racing to add memory and massive context windows to their AIs right as new research shows that ...
How Hugh Williams, a former Google and eBay engineering VP, used Anthropic's Claude Code tool to rapidly build a complex ...
Anthropic launches learning modes for Claude AI that guide users through step-by-step reasoning instead of providing direct answers, intensifying competition with OpenAI and Google in the booming AI ...
Anthropic is boosting the capacity of its Claude Sonnet 4 AI model, giving enterprise API customers the ability to send up to ...