Human content moderators still outperform AI when it comes to recognizing policy-violating material, but they also cost significantly more.

Marketers looking to ensure that their ads do not surface in a toxic slurry face a dilemma – spend more money or see more Hitler.

Researchers affiliated with AI brand protection biz Zefr did the math, detailed in a preprint paper titled "AI vs. Human Moderators: A Comparative Evaluation of Multimodal LLMs in Content Moderation for Brand Safety."

The paper, accepted at the upcoming Computer Vision in Advertising and Marketing (CVAM) workshop at the 2025 International Conference on Computer Vision, presents an analysis of the cost and effectiveness of multimodal large language models (MLLMs) for brand safety tasks.

The researchers' calculations show that multimodal models can handle brand safety moderation at significantly lower cost than human reviewers, though with reduced accuracy at spotting policy-violating material.