Discussion (5 Comments)
First, the Bradley-Terry tournament conflates how important a finding is with how novel it is when it surfaces. If Detail flags a problem and the three diff-reviewing bots also score it at a similar level, it earns no extra attention in the real system: the engineer was going to see that problem in the bots' review anyway. It would be worth measuring what Detail alone finds versus what all the bots together find, and tracking that as a completely separate metric.
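A minimal sketch of that separate metric, assuming each reviewer's findings can first be normalized into comparable keys (every finding string below is invented for illustration):

```python
# Sketch: measure what Detail alone surfaces vs. the bot ensemble.
# Assumes findings are already deduplicated into comparable keys
# (e.g. "file:issue-label"); that normalization is the hard part
# and is hand-waved here.

detail_findings = {"auth.py:missing-csrf-check", "db.py:n-plus-one-query"}
bot_findings = [
    {"auth.py:missing-csrf-check", "api.py:unvalidated-input"},
    {"auth.py:missing-csrf-check"},
    {"db.py:unbounded-retry"},
]

ensemble = set().union(*bot_findings)

unique_to_detail = detail_findings - ensemble  # marginal value in production
overlap = detail_findings & ensemble           # engineer saw these anyway

print(f"unique to Detail: {unique_to_detail}")
print(f"already covered by bots: {overlap}")
print(f"marginal-find rate: {len(unique_to_detail) / len(detail_findings):.0%}")
```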
Second, when the same model family does the judging, there is self-preference bias. When I have Claude review a diff and Codex review the very same diff as I develop, Codex finds issues Claude is inclined to overlook. And Sonnet 4.6's ratings of code that a Sonnet-driven system produced carry some of that same bias, even after the system summarizes the code.
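One common mitigation, sketched here with a hypothetical `pairwise_judge` wrapper (not any real API) and placeholder model names: have judges from several model families vote on each comparison, in both presentation orders, so self-preference and position bias partially cancel.

```python
from statistics import mean

JUDGE_MODELS = ["claude-family", "codex-family", "third-family"]  # placeholders

def pairwise_judge(model: str, diff: str, review_a: str, review_b: str) -> int:
    """Hypothetical wrapper: returns 1 if review_a wins, else 0."""
    raise NotImplementedError("call your judging model here")

def debiased_comparison(diff: str, review_a: str, review_b: str) -> float:
    """Average votes across judge families and both presentation orders."""
    votes = []
    for model in JUDGE_MODELS:
        votes.append(pairwise_judge(model, diff, review_a, review_b))
        # Swap the order and invert the vote so position bias cancels.
        votes.append(1 - pairwise_judge(model, diff, review_b, review_a))
    return mean(votes)  # > 0.5 means review_a wins on balance
```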
We started with a DIY code review skill because it's natural to want to customize to our codebase and infra before trying solutions that add layers which may get in our way here. We have a one-page skill that does separate passes on security, spec conformance, proper DRY & architectural abstractions, etc., plus adversarial result-quality passes to prune & prioritize. Others do similar.
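The rough shape of such a skill, with `ask_model` standing in for whatever agent call you use; all prompts and pass names here are illustrative, not the commenter's actual skill:

```python
# Sketch: focused review passes, then an adversarial pass that
# prunes and prioritizes the combined findings.

PASSES = {
    "security": "Review this diff for security issues only.",
    "spec": "Check this diff for conformance with the linked spec.",
    "architecture": "Flag DRY violations and leaky abstractions.",
}

PRUNE_PROMPT = (
    "You are an adversarial reviewer of reviews. Discard nitpicks and "
    "likely false positives, then rank the remaining findings by severity."
)

def ask_model(prompt: str, context: str) -> str:
    raise NotImplementedError("call your coding agent / LLM here")

def review(diff: str) -> str:
    findings = [ask_model(prompt, diff) for prompt in PASSES.values()]
    # Adversarial result-quality pass over everything the passes produced.
    return ask_model(PRUNE_PROMPT, "\n\n".join(findings))
```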
Once quality is fixed, I'd expect the comparisons to be less about hits/misses and more about token efficiency. That's a tricky one because developer-local review tokens are heavily subsidized right now.
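If the axis does shift to token efficiency, the natural unit is something like cost per confirmed finding; a trivial sketch with invented numbers (real subsidized pricing is exactly what muddies this today):

```python
def cost_per_hit(tokens_used: int, price_per_mtok: float, confirmed: int) -> float:
    """Dollars spent per confirmed (true-positive) finding."""
    return tokens_used / 1e6 * price_per_mtok / confirmed

# e.g. an agent burning 400k tokens to surface 2 real issues at $3/Mtok
print(f"${cost_per_hit(400_000, 3.00, 2):.2f} per confirmed finding")
```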
Lmk if there are any qs I can answer about Detail or the post.