Security

Anthropic: Claude faces ‘industrial-scale’ AI model distillation

Anthropic has detailed three “industrial-scale” AI model distillation campaigns by overseas labs designed to extract capabilities from Claude. These competitors generated over 16 million exchanges using approximately 24,000 deceptive accounts. Their goal was to acquire proprietary logic to improve their competing platforms. The extraction technique, known as distillation, involves training a weaker system on the […]
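Distillation, in the sense described above, amounts to training a student model to mimic a stronger model's output distributions. The sketch below is only an illustration of the underlying loss, not Anthropic's or the attackers' actual pipeline; `distillation_loss` is a hypothetical helper, and real distillation would apply it across millions of query/response pairs.

```python
import math

def distillation_loss(teacher_probs, student_probs):
    """KL divergence KL(teacher || student) over one output distribution.

    The student is penalised for diverging from the teacher's soft
    predictions; minimising this across many exchanges transfers the
    teacher's behaviour to the student -- the core of distillation.
    """
    return sum(
        t * math.log(t / s)
        for t, s in zip(teacher_probs, student_probs)
        if t > 0
    )

# A student that matches the teacher exactly incurs zero loss:
teacher = [0.7, 0.2, 0.1]
print(distillation_loss(teacher, teacher))  # 0.0

# A mismatched student incurs a positive loss, driving training:
student = [0.4, 0.4, 0.2]
print(round(distillation_loss(teacher, student), 4))  # 0.1838
```

This is why large volumes of query/response exchanges, like the 16 million described above, are valuable to a competitor: each one supplies a teacher distribution for the student to fit.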

Red Hat unifies AI and tactical edge deployment for UK MOD

The UK Ministry of Defence (MOD) has selected Red Hat to architect a unified AI and hybrid cloud backbone across its entire estate. Announced today, the agreement is designed to break down data silos and accelerate the deployment of AI models from the data centre to the tactical edge. For CIOs, it’s part of a […]

ICE and CBP’s Face-Recognition App Can’t Actually Verify Who People Are

ICE has used Mobile Fortify to identify immigrants and citizens alike over 100,000 times, by one estimate. The app wasn’t built to work that way, and it was only approved after DHS abandoned its own privacy rules.

AI Expo 2026 Day 2: Moving experimental pilots to AI production

The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more […]

Microsoft unveils method to detect sleeper agent backdoors

Researchers from Microsoft have unveiled a scanning method to identify poisoned models without knowing the trigger or intended outcome. Organisations integrating open-weight large language models (LLMs) face a specific supply chain vulnerability where distinct memory leaks and internal attention patterns expose hidden threats known as “sleeper agents”. These poisoned models contain backdoors that lie dormant […]
