AI Weekly Issue #477: Jensen Huang says we’ve achieved AGI. The benchmarks say 0.37%.

💡 Insights
AI is superhuman at exams but can't figure out a simple game. ARC-AGI-3 gave frontier models interactive environments with no stated rules and no stated goals: the model had to work them out on its own. Humans solved 100% of the tasks; the best AI scored 0.37%. Current architectures can pattern-match anything in their training data but cannot adapt to novelty. That gap defines what AI can and cannot replace in your work today.
The AI value chain just inverted. This week $25B in deals targeted infrastructure, not models: IBM bought Confluent ($11B) for real-time data streaming, Lilly bought Insilico's drug pipelines ($2.75B), and Physical Intelligence raised $1B for robot control systems. Building a better LLM is table stakes; owning the data flow between the model and the real world is where the defensible value now sits.
If you set safety boundaries, courts will protect them. A federal judge ruled that the Pentagon cannot blacklist Anthropic for refusing to support autonomous weapons use, the first time an AI company's ethical red lines have been upheld as constitutionally protected speech. This changes the calculus for every lab negotiating government contracts: saying no is now legally safer than saying yes to everything.