Anthropic is scrambling to assert its political neutrality as the Trump administration intensifies its campaign against so-called “woke AI,” placing itself at the center of an increasingly ideological fight over how large language models should talk about politics.
In a detailed post Thursday, Anthropic unveiled a sweeping effort to train its Claude chatbot to behave with what it calls “political even-handedness,” a framework meant to ensure the model treats competing viewpoints “with equal depth, engagement, and quality of analysis.”
The company also released a new automated method for measuring political bias and published results suggesting its latest model, Claude Sonnet 4.5, outperforms or matches competitors on neutrality.
The announcement comes in the midst of unusually st