Hello and welcome to Eye on AI…In this edition: A new Anthropic study reveals that even the biggest AI models can be ‘poisoned’ with just a few hundred documents…OpenAI’s deal with Broadcom…. Sora 2 and the AI slop issue … and corporate America spends big on AI.

Hi, Beatrice Nolan here. I’m filling in for Jeremy, who is on assignment this week. A recent study from Anthropic, in collaboration with the UK AI Security Institute and the Alan Turing Institute, caught my eye earlier this week. The study focused on the “poisoning” of AI models, and it undermined some conventional wisdom within the AI sector.

The research found that the introduction of just 250 bad documents, a tiny proportion when compared to the billions of texts a model learns from, can secretly produce a “backdoor”

See Full Page