Security

Auto Added by WPeMatico

AI Expo 2026 Day 2: Moving experimental pilots to AI production

The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more […]

AI Expo 2026 Day 2: Moving experimental pilots to AI production Read More »

Microsoft unveils method to detect sleeper agent backdoors

Researchers from Microsoft have unveiled a scanning method to identify poisoned models without knowing the trigger or intended outcome. Organisations integrating open-weight large language models (LLMs) face a specific supply chain vulnerability where distinct memory leaks and internal attention patterns expose hidden threats known as “sleeper agents”. These poisoned models contain backdoors that lie dormant

Microsoft unveils method to detect sleeper agent backdoors Read More »

The rise of Moltbook suggests viral AI prompts may be the next big security threat

On November 2, 1988, graduate student Robert Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited security flaws in Unix systems that administrators knew existed but

The rise of Moltbook suggests viral AI prompts may be the next big security threat Read More »

How to Build Multi-Layered LLM Safety Filters to Defend Against Adaptive, Paraphrased, and Adversarial Prompt Attacks

In this tutorial, we build a robust, multi-layered safety filter designed to defend large language models against adaptive and paraphrased attacks. We combine semantic similarity analysis, rule-based pattern detection, LLM-driven intent classification, and anomaly detection to create a defense system that relies on no single point of failure. Also, we demonstrate how practical, production-style safety

How to Build Multi-Layered LLM Safety Filters to Defend Against Adaptive, Paraphrased, and Adversarial Prompt Attacks Read More »

Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

Plus: AI agent OpenClaw gives cybersecurity experts the willies, China executes 11 scam compound bosses, a $40 million crypto theft has an unexpected alleged culprit, and more.

Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims Read More »

Web portal leaves kids’ chats with AI toy open to anyone with Gmail account

Earlier this month, Joseph Thacker’s neighbor mentioned to him that she’d preordered a couple of stuffed dinosaur toys for her children. She’d chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher,

Web portal leaves kids’ chats with AI toy open to anyone with Gmail account Read More »

Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health”

The project developer for one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in the submission of low-quality reports, much of it AI-generated slop. “We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder

Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health” Read More »