AMD Senior AI Director Stella Laurenzo just dropped hard data on Anthropic’s Claude Code regression. After reviewing 6,852 real coding sessions from January to March 2026, she concluded the model has noticeably declined since February on complex engineering tasks. Key drops: thinking depth fell sharply, the model now reads far less of your codebase before making changes, skips files on roughly one-third of edits, defaults to simpler fixes more often, and shows more frequent bail-outs and permission-seeking behavior. Power users have felt this for weeks: what used to feel thoughtful and deep now frequently lands shallow and lazy. Anthropic attributes much of the change to “adaptive thinking” and thinking redaction introduced for better latency, claiming internal reasoning remains intact, but Laurenzo’s telemetry suggests the practical impact is real and measurable. This is the classic centralized AI trap: rapid user growth leads to infrastructure pressure and quiet quality trade-offs.