
Discussion (8 Comments)

WhiteDawn · 38 minutes ago
First you need to get through the safety net. I've had many productive gpt5.4 sessions hit an "ethicality" roadblock and pollute the context with multiple rounds of trying to convince it to continue.
mertcikla · about 1 hour ago
Why does this read like an OpenAI ad?
nsingh2 · about 2 hours ago
These plots are terrible. Why is categorical data connected across categories with lines? Why not just use bar plots?

Like in the "Web Vulns in OSS" plot, white box data for Opus 4.7 is not available, but the absurd linear interpolation across categories implies it should be near 60.
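The complaint above is that a line chart invents values between unrelated categories, so a missing data point reads as an interpolated one. A minimal matplotlib sketch of the fix: plot categorical data as bars and drop the missing category entirely, so the gap stays visible. The category names and scores here are made up for illustration, except "Web Vulns in OSS", which the comment mentions.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical detection rates per category; NaN marks a category with
# no measurement (like the missing white-box Opus 4.7 point).
categories = ["SQLi", "XSS", "SSRF", "Web Vulns in OSS"]
scores = [72.0, 65.0, 58.0, np.nan]

# Keep only categories that actually have data, so the chart shows an
# honest gap instead of a line interpolating a fictitious value.
present = [(c, s) for c, s in zip(categories, scores) if not np.isnan(s)]

fig, ax = plt.subplots()
ax.bar([c for c, _ in present], [s for _, s in present])
ax.set_ylabel("Detection rate (%)")
ax.set_title("Categorical data as bars: missing stays visibly missing")
```

With bars there is nothing to interpolate: an absent category simply has no bar, rather than a line segment implying a value near 60.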

scottyah · about 2 hours ago
It's just an ad thinly disguised as useful data.
wmf · about 1 hour ago
I think the x-axis is meant to be time, but they screwed it up.
strange_quark · about 2 hours ago
Wasn't it already confirmed that small open-weight models were able to detect most of the same headline vulns as mythos? How is this any different?
stanfordkid · about 1 hour ago
No, they can detect errors when pointed at them, but they produce a lot of false positives, making them functionally useless on a large unknown codebase. They also can't build and run an exploit after identifying the flaw. Mythos can (purportedly) find vulnerabilities and actually validate them by building and running exploits. That is what makes it functional and usable for hacking.
nardons · about 1 hour ago
Do you have a source for this? Not doubting it, but I would like to have something concrete the next time the Mythos horse manure is cited.