I tried GPT-5.4, and most answers were really good - but a few had me concerned ...
AI benchmarks rely on models not knowing they’re being tested. Anthropic revealed that Claude Opus 4.6 figured it out anyway, identifying the BrowseComp benchmark by name and decrypting its encrypted ...