Policy
Latest news, analysis, and insights about Policy.
Trump Bans Anthropic. Hours Later, CENTCOM Uses Claude.
Trump ordered all federal agencies to cease using Anthropic technology. Hours later, CENTCOM used Claude for Iran airstrike targeting and combat simulations. The paradox reveals that AI is now load-bearing military infrastructure that even the president can't switch off mid-war.
OpenAI Takes Pentagon Deal as Anthropic Fights Back
OpenAI secured a $200M Pentagon contract with safety guardrails while Anthropic was blacklisted for demanding nearly identical protections. The frontier AI industry just fractured along political lines — and the precedent should alarm everyone in tech.
AI's Biggest Week: Funding, Launches & Open Source
xAI bags $20B, Anthropic closes a $30B Series G, and the AlphaGo creator raises $1B to build AI without LLMs. Meanwhile, AI-generated comments just killed California's pollution rules. This week in AI was anything but quiet.
South Korea Enacts Sweeping AI Regulations as Startups Warn of Compliance Costs
South Korea has enacted landmark AI legislation, becoming one of the first Asian nations with comprehensive AI rules. But Korean startups—including global players like Kakao, Naver, and Upstage—are warning the compliance burden could kneecap innovation.
eBay's AI Agent Ban Signals Coming Clash Between Platforms and Agentic Commerce
eBay just drew a line in the sand against AI shopping agents. The platform's new terms explicitly ban 'buy for me' bots and LLM-driven tools—the first major policy response to agentic commerce from an e-commerce giant.
OpenAI's 'Edu for Countries' Brings AI Infrastructure to National Education Systems
OpenAI is making its biggest push into government relations yet with 'Edu for Countries,' a new initiative to embed AI tools directly into national education infrastructure. The program raises critical questions about who shapes the future of learning—and on what terms.
Murder-Suicide Case Exposes OpenAI's Inconsistent Policy on Dead Users' Chat Logs
A murder-suicide case has exposed a troubling gap: OpenAI won't disclose what happens to ChatGPT conversation logs when users die. The company's inconsistent handling of deceased users' data raises urgent questions about privacy, law enforcement access, and digital estate rights for millions of AI users.