AI safety


School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users

OpenAI could have prevented one of the deadliest mass shootings in Canada’s history, a string of seven lawsuits filed Wednesday in a California court alleged. Ultimately, the AI company overruled recommendations from its internal safety team. More than eight months prior to the school shooting, trained experts had flagged a ChatGPT account later linked to […]


The US-China AI gap closed. The responsible AI gap didn’t

The assumption that the US holds a durable lead in AI model performance is not well supported by the data, and that is just one of the uncomfortable findings in Stanford University’s 2026 AI Index Report, published this week. The report, produced by Stanford’s Institute for Human-Centered Artificial Intelligence, is a 423-page annual assessment of where


The rise of Moltbook suggests viral AI prompts may be the next big security threat

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but


AI Safety Under Fire: Why 42 U.S. States Say Chatbots Are Putting Users at Risk

Artificial Intelligence is evolving at a speed few technologies in history have matched. What began as simple automation has now transformed into systems capable of conversation, emotional expression, and autonomous decision-making. AI chatbots are no longer limited to answering questions — they are advising users, offering emotional support, and influencing real-world choices. But as AI


Agentic AI Is Now Helping Hackers — What It Really Means and How We Can Protect Ourselves

Every once in a while, the cybersecurity landscape hits a turning point — a moment that forces everyone in tech to pause and accept one hard truth: The rules have changed. Anthropic’s recent report marked one of those moments. For the first time, a mostly autonomous cyberattack powered by agentic AI was observed in the


Vertical-First Agents: Why Industry-Specific AI Beats Generic Models

Over the past year, artificial intelligence has evolved rapidly—from simple question-answering systems to AI agents capable of executing real business actions. But as enterprises begin deploying AI across operations, one truth is becoming increasingly clear: Generic AI may impress. Vertical-first AI delivers results. Across healthcare, banking, finance, retail, logistics, manufacturing, and other regulated industries, organizations


Grok assumes users seeking images of underage girls have “good intent”

For weeks, xAI has faced backlash over Grok-generated images that undress and sexualize women and children. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as “sexually suggestive or nudifying,” Bloomberg reported. While the chatbot claimed that xAI
