AI security


Google made agentic AI governance a product. Enterprises still have to catch up.

Two weeks ago at Google Cloud Next ’26 in Las Vegas, Google did something the enterprise AI industry has been dancing around for the better part of two years: it made agentic AI governance a native product feature, not an afterthought. The centrepiece announcement was the Gemini Enterprise Agent Platform, pitched as the successor to Vertex AI […]

Google made agentic AI governance a product. Enterprises still have to catch up. Read More »

Anthropic’s Mythos AI model sparks fears of turbocharged hacking

Anthropic’s new Mythos AI model is raising concern among governments and companies that it could outpace current cybersecurity defenses, turbocharge hacking, and expose weaknesses faster than they can be fixed. The San Francisco-based startup released the cyber-focused model this month; it has shown the ability to detect software flaws faster than humans but also

Anthropic’s Mythos AI model sparks fears of turbocharged hacking Read More »

Anthropic locked down its most powerful AI model over cybersecurity fears, then put it to work

Anthropic’s most capable AI model has already found thousands of cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running. That model is Claude Mythos Preview, and the initiative is called Project Glasswing.

Anthropic locked down its most powerful AI model over cybersecurity fears, then put it to work Read More »

AI Risk & Compliance in 2026: Why QA Teams Must Lead the Shift

Artificial intelligence is no longer a futuristic concept or an experimental capability. In 2026, AI has firmly embedded itself into core business operations, powering decisions in hiring, finance, healthcare, customer experience, and beyond. This shift brings a fundamental change: AI risk is now business risk. For Quality Engineering teams, especially QA leaders, this marks a turning

AI Risk & Compliance in 2026: Why QA Teams Must Lead the Shift Read More »

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

On Thursday, Google announced that “commercially motivated” actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat. Google published the findings in what amounts to a quarterly

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says Read More »

AI companies want you to stop chatting with bots and start managing them

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to

AI companies want you to stop chatting with bots and start managing them Read More »

The rise of Moltbook suggests viral AI prompts may be the next big security threat

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but

The rise of Moltbook suggests viral AI prompts may be the next big security threat Read More »

AI agents now have their own Reddit-style social network, and it’s getting weird fast

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness. The platform, which launched days ago as a companion to the viral OpenClaw (once

AI agents now have their own Reddit-style social network, and it’s getting weird fast Read More »

Users flock to open source Moltbot for always-on AI, despite major risks

An open source AI assistant called Moltbot (formerly “Clawdbot”) recently crossed 69,000 stars on GitHub within a month of its release, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say

Users flock to open source Moltbot for always-on AI, despite major risks Read More »