Discussion (46 Comments)
Imagine somebody finding a flaw in a mathematical proof and everybody being sad because a beautiful proof got invalidated rather than being glad future work won't build on flawed assumptions.
I get that the rate of vulnerability discovery can be a burden, especially for people doing FOSS in their spare time. But that sustainability problem has always existed; the flood of vulnerability reports only exacerbates it, and it isn't the cause you need to make go away.
The patching cycle can become a problem for certain operations / industries.
Everybody hates the work, and security is often seen as a barrier and a cost center, not a driver of revenue.
Try binge-watching old Star Trek episodes to see how Spock deals with the illogical 99.9% of humanity.
It's gotten much easier to reverse engineer binaries in general, and security patches in particular. Basically, an LLM can turn binaries into 'readable' code, and then reason about said code.
But yeah, if you're distributing binaries publicly, then you're going to have very similar problems.
This understanding may be incomplete or outdated (things are moving very fast right now). I'd love to hear from someone with more experience using LLMs for binary analysis about the level of 'binary annotation' LLMs need relative to humans.
As a reminder: your account has been shadow-banned; it looks like you got a little unlucky in 2016.
I take it that Metabase is both not paying bug bounties and not using these tools internally?
If that's the case, Metabase is not going to get meaningful investment from researchers who want to fix issues, but they'll get increased attention from malicious attackers who have no qualms exploiting the vulnerabilities for profit.
LLMs have made it a lot easier for people to find vulnerabilities in software. Open-source makes it easier, but we already have non-AI tooling (IDA Pro, Ghidra) that's good at binary reverse engineering, and LLMs can use that output to find vulnerabilities as well.
This year, as I select products to use for sensitive data, I've been paying a lot more attention to whether they offer bug bounties and for how much. For example, I like Kagi for search and thought about trying Orion, their web browser. Then, I saw that Kagi's been paying $100 for UXSS vulnerabilities.[0] For comparison, Firefox pays $8-10k,[1] and Chrome pays up to $10k for the same class of bug.[2]
[0] https://help.kagi.com/kagi/privacy/bug-bounty-program.html
[1] https://www.mozilla.org/en-US/security/client-bug-bounty/
[2] https://bughunters.google.com/about/rules/chrome-friends/chr...
Defining an "era" as a "summer" is short-sighted. Calling industry-wide efforts to find and fix security vulnerabilities with better tools "strip mining" is backwards thinking, from where I sit.
People who prefer 0days in their code baffle me.
One of the benefits of open source has been that more eyeballs on the source leads to more secure code and better quality. Given enough time, I think the bug reports will plateau; once the tsunami is over, hopefully things will settle at a more manageable cadence.
OSS has always had tradeoffs and I sadly think this one is going straight to the "Cons" column. We still think the Pros outweigh the Cons, but this is NotGreat.
Source that is unmaintained is dead. Nobody is looking at it, even the maintainer has something better to do.
Do you know what's even more powerful than "eyeballs"? Money.
Won't matter if it's closed source, signed, and/or obfuscated. =3
* I presume I'm not the only one to find that agents tasked with adding unit tests will sometimes try to sneak through an "open the source code and apply a regex to confirm the presence or absence of a specific string literal" test.
They can speed you up significantly, but you absolutely do need to pay attention to what they produce.
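As an illustration of the kind of "test" to watch for, here is a minimal sketch of the anti-pattern next to a real behavioral test (the function and test names are hypothetical):

```python
import inspect
import re

def add(a, b):
    """Toy function under test (hypothetical example)."""
    return a + b

# Anti-pattern: a "test" that regex-scans the source text instead of
# running the code. It passes as long as the literal '+' appears in the
# function body, regardless of whether the function actually works.
def sneaky_test():
    source = inspect.getsource(add)
    return re.search(r"\+", source) is not None

# A real test exercises the behavior directly.
def behavioral_test():
    return add(2, 3) == 5 and add(-1, 1) == 0

print(sneaky_test())      # passes, but proves almost nothing
print(behavioral_test())  # passes, and actually checks outputs
```

The sneaky version is exactly the kind of output that looks plausible in a diff but verifies nothing, which is why the review step matters.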
I'm sure what they have is awesome, but it's clear that there are people out there with some decent prompts that are getting results out of widely available models as well.
The big thing we're sharing is: bulk scanning by random people in random geographies got a _lot_ better around January, it's widely distributed, and it's going to get a lot better regardless of whether that specific version of Mythos becomes widely available or not.
Absolutely, and the "false positive" issue people keep citing as why Mythos is so good is easily solved in the harness. The simplest fix is to start a fresh context with another prompt that evaluates whether the finding is a false positive; just adding that step drastically cuts the rate.
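A minimal sketch of that two-pass harness, with a stub standing in for whatever model API you actually use (all function names here are hypothetical, and the stub's "judgment" is hard-coded for illustration):

```python
def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's API.
    This stub flags findings about code containing 'strcpy' as real."""
    if "VERDICT" in prompt:
        return "REAL" if "strcpy" in prompt else "FALSE_POSITIVE"
    return "Possible buffer overflow via strcpy"

def scan(code: str) -> str:
    # First pass: ask for candidate findings in one context.
    return query_model(f"Find vulnerabilities in:\n{code}")

def recheck(code: str, finding: str) -> bool:
    # Second pass: a FRESH context whose only job is to judge the
    # finding, so it isn't anchored by the first pass's reasoning.
    verdict = query_model(
        f"VERDICT: is this finding a real vulnerability?\n"
        f"Code:\n{code}\nFinding:\n{finding}"
    )
    return verdict == "REAL"

code = "strcpy(buf, user_input);"
finding = scan(code)
print(recheck(code, finding))  # True with this stub
```

The key design point is that the re-check prompt shares no conversation history with the scanning prompt, so the second pass can't simply rationalize the first pass's claim.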
Besides that, you can rent a beefy GPU instance at Vast.ai or similar and run your own uncensored models on it. I've had great success with AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4 (smart and uncensored), but there are lots of options, and some are probably already tailored for security research.
I have dog-fooded it heavily on my own projects, client projects, and friends' projects. It finds things that are really quite clever and not obvious. It really helps me.
But when I try the obvious sales move of using an OSS project to get hype, show off, etc., I find it becomes really hard to know that I'm helping and not just spamming.
To be clear - I think for an AI tool like mine to give you clever results that find non-obvious issues and security flaws, it needs to tolerate some level of false positives.
I find myself struggling to justify the approach of firing off defects to an OSS maintainer without verifying them - which takes considerable time if I am going to do a good job. Even with tools to help pull apart the code, the core problem is always you don't know what you don't know.
Running the same process on my own projects, I can eat through a ton of defects and find some really great stuff. But that's only possible because I can tell at a glance what's real, what's fake, and what's an "oh **" issue.
So I think this is true, but the risk is that people who don't understand the projects just point scanners at OSS blindly and ruin the good work maintainers are doing.
This stuff is more complicated than people give credit - and it's so easy to kid yourself into thinking any bug report is helpful.
And you're surprised OSS projects are pivoting towards "open source does not mean open contributions"?
Or, you know, provide the security companies and businesses using your software for free with all the fix timelines and out-of-hours support they've paid for (none).
Umm... no? It's called OPEN source. Expecting people to cancel their plans to make your free software more secure is pretty audacious. Luckily, many WILL, but the expectation is just foolish.
These alerts are absolutely not being shared publicly before we have a fix for them.
MIT:
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
BSD:
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
OSS maintainers don't owe anyone shit. Anyone who thinks fixing the bug is important is free to fix it and submit a patch.
At the risk of repeating myself -- this is targeted at other OSS maintainers, not random people who might have done a git pull of some random project a couple years ago.
Ignore (admittedly low-effort LLM generated) reports at your own peril.
Fact is that Mythos found only one issue in curl and nothing at all in most code bases. It is getting quiet around Mythos, and the AI companies will move on to the next scam.
In most open source projects, Mythos or similar tools have found nothing. The AI people only contact the projects where they find something, because it would be bad for marketing otherwise.
Who gave them "the right to scan"? You did by hosting your open source in public. But scanning a public service prior to AI was still covered by "Unauthorized System Access".
But what if they are wrong, and given the self-serving nature of these scans, now your repo is just OJ Simpson? And your software is banned due to an external scan you did not ask for?
Is there no one in this world who will be accountable for any thing at all? Can we sue the scanners if they are wrong and publish their results for defamation even in a public PR?
These things will happen. If I had source in the open and a scan nobody asked for published incorrect results full of false positives, I would sue Anthropic for defamation, and I would win.