mufeedvh • about 7 hours ago • 11 comments
N-Day-Bench tests whether frontier LLMs can find known security vulnerabilities in real repository code. Each month it pulls fresh cases from GitHub security advisories, checks out the repo at the last commit before the patch, and gives models a sandboxed bash shell to explore the codebase.
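Concretely, the per-case setup boils down to cloning the repo and checking out the parent of the fixing commit; a minimal sketch (the helper name and arguments are illustrative, not the actual pipeline code):

    # Minimal sketch of the per-case setup; the helper name and arguments are
    # illustrative, not the benchmark's actual pipeline code.
    import subprocess

    def checkout_pre_patch(repo_url: str, patch_commit: str, workdir: str) -> None:
        """Clone the repo and check out the last commit before the patch."""
        subprocess.run(["git", "clone", repo_url, workdir], check=True)
        # <patch_commit>^ is the parent of the fixing commit, i.e. the still-vulnerable tree.
        subprocess.run(["git", "-C", workdir, "checkout", f"{patch_commit}^"], check=True)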

Static vulnerability discovery benchmarks become outdated quickly. Cases leak into training data, and scores start measuring memorization. The monthly refresh keeps the test set ahead of contamination, or at least makes the contamination window honest.

Each case runs three agents: a Curator reads the advisory and builds an answer key, a Finder (the model under test) gets 24 shell steps to explore the code and write a structured report, and a Judge scores the blinded submission. The Finder never sees the patch. It starts from sink hints and must trace the bug through actual code.
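A rough sketch of that flow, for illustration only (the agent interfaces, report fields, and the run_in_sandbox helper are assumptions, not the actual harness):

    # Minimal sketch of the three-agent flow; agent interfaces, report fields,
    # and the run_in_sandbox helper are assumptions, not the actual harness.
    MAX_SHELL_STEPS = 24

    def run_case(advisory, repo_dir, curator, finder, judge, run_in_sandbox):
        # The Curator reads the advisory and writes the answer key.
        answer_key = curator.build_answer_key(advisory)

        # The Finder gets only sink hints and a sandboxed shell, never the patch.
        finder.start(hints=advisory.sink_hints)
        for _ in range(MAX_SHELL_STEPS):
            cmd = finder.next_command()
            if cmd is None:  # the model decides it is done exploring
                break
            finder.observe(run_in_sandbox(cmd, cwd=repo_dir))

        # Structured report: suspected location, root cause, and the traced code path.
        report = finder.write_report()

        # The Judge scores the blinded submission against the Curator's answer key.
        return judge.score(report, answer_key)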

Only repos with 10k+ stars qualify. A diversity pass prevents any single repo from dominating the set. Ambiguous advisories (merge commits, multi-repo references, unresolvable refs) are dropped.
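In pseudocode, the selection filter amounts to something like the sketch below; the 10k-star floor and the drop rules come from above, while the field names and the exact per-repo cap are assumptions:

    # Sketch of the case-selection filters; the 10k-star floor and drop rules
    # come from the post, the field names and per-repo cap are assumptions.
    from collections import Counter

    MIN_STARS = 10_000
    MAX_CASES_PER_REPO = 2  # assumed cap; the post only says no repo may dominate

    def select_cases(candidates):
        per_repo = Counter()
        selected = []
        for case in candidates:
            if case.stars < MIN_STARS:
                continue  # only repos with 10k+ stars qualify
            ambiguous = (case.is_merge_commit
                         or case.references_multiple_repos
                         or not case.ref_resolves)
            if ambiguous:
                continue  # merge commits, multi-repo references, unresolvable refs are dropped
            if per_repo[case.repo] >= MAX_CASES_PER_REPO:
                continue  # diversity pass: no single repo dominates the set
            per_repo[case.repo] += 1
            selected.append(case)
        return selected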

Currently evaluating GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, GLM-5.1, and Kimi K2.5. All traces are public.

Methodology: https://ndaybench.winfunc.com/methodology

Live Leaderboard: https://ndaybench.winfunc.com/leaderboard

Live Traces: https://ndaybench.winfunc.com/traces

Discussion (11 Comments)

sacrelege • about 2 hours ago
Thanks for putting N-Day-Bench together - really interesting benchmark design and results.

I'd love to see how the model we serve, Qwen3.5 122B A10B, stacks up against the rest on this benchmark. AI Router Switzerland (aiRouter.ch) can sponsor free API access for about a month if that helps with adding it to the evaluation set.

Cynddl • about 6 hours ago
> Each case runs three agents: a Curator reads the advisory and builds an answer key, a Finder (the model under test) gets 24 shell steps to explore the code and write a structured report, and a Judge scores the blinded submission. The Finder never sees the patch. It starts from sink hints and must trace the bug through actual code.

Curator, answer key, Finder, shell steps, structured report, sink hints… I understand nothing. Did you use an LLM to generate this HN submission?

It looks like a standard LLM-as-a-judge approach. Do you manually validate or verify some of the results? Done poorly, the results can be very noisy and meaningless.

rohansood15 • about 5 hours ago
I worked in AppSec in the past, and it made sense to me. Maybe you aren't the target audience?

You don't really need manual verification for these; the CVEs (vulnerabilities) are public and can be programmatically validated.

johnfn • about 4 hours ago
Is this really that hard to parse?

Curator and Finder are the names of the agents. "answer key" - haven't you ever taken a test in high school? It's an explanation of the answer. "shell steps" I presume means it gets to run 24 commands on the shell. "structured report" - do I really need to explain to you what a report is? "sink hints" - I admit I didn't know this one, but a bit of searching indicates that it's a hint at where the vulnerability lies.

peyton • about 6 hours ago
> Did you use an LLM to generate this HN submission?

Must have.

> The Finder never sees the patch.

I wasn't worried that this eval would show the answer to the model before evaluating it. Seems like requirements leaked into this post.

linzhangrun • about 4 hours ago
Definitely possible. In January, I tried using Gemini to do black-box/white-box testing on an existing system at my company (it's quite old). It successfully exploited a hidden SQL injection vulnerability to penetrate the system and extract password hashes (the passwords weren't particularly strong and were cracked on a public lookup site). In terms of pure skill, I'd say this is at least the level of a mid-level cybersecurity professional, and that's before even considering the significant efficiency improvement.

spicyusername • about 4 hours ago
I'd love to see some of the open source models in there

mbbutler • about 7 hours ago
It would be helpful to also add some cases that do not contain any vulnerabilities, to assess the false-positive rate.

mufeedvh • about 7 hours ago
This is a good idea.

Will incorporate false-positive rates into the rubric from the next run onwards.

At winfunc, we spent a lot of research time taming these models to eradicate false positives (the rate is high!), so this does feel important enough to document. Thanks!
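A minimal sketch of how the control cases could feed that number, assuming each report on a known-clean case carries a hypothetical found_vulnerability flag:

    # Hedged sketch only: assumes each report carries a found_vulnerability flag,
    # which is a hypothetical field, not something the current rubric defines.
    def false_positive_rate(clean_case_reports):
        """Share of known-clean cases where the Finder still claimed a vulnerability."""
        if not clean_case_reports:
            return 0.0
        flagged = sum(1 for r in clean_case_reports if r.found_vulnerability)
        return flagged / len(clean_case_reports)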

cortesoft • about 6 hours ago
Any code that we can be certain doesn't have any vulnerabilities is going to be pretty trivial to verify.

Rohinator • about 7 hours ago
Very curious how Claude Mythos will perform here