
Discussion (278 Comments)
Since security exploits can now be found by spending tokens, open source is MORE valuable because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
* a la https://news.ycombinator.com/item?id=26998308
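The "spend more tokens than attackers" framing above can be sketched as a toy break-even calculation. All numbers and the linear find-rate assumption are illustrative, not taken from the thread:

```python
# Toy model of the token-economics framing: defense is a spending race.
# The linear relationship and all figures below are made-up illustrations.

def breakeven_defense_budget(attacker_budget_usd: float,
                             attacker_find_rate: float,
                             defender_find_rate: float) -> float:
    """Minimum defender spend needed to find exploits at least as fast
    as an attacker, assuming exploit discovery scales linearly with spend."""
    return attacker_budget_usd * (attacker_find_rate / defender_find_rate)

# If defenders find exploits twice as efficiently (source access,
# internal docs), they can match a $20k attacker budget with $10k.
print(breakeven_defense_budget(20_000, 1.0, 2.0))  # -> 10000.0
```

Under this sketch, open source shifts the equation by letting many users pool the defender budget for a shared library, which is the parent comment's point.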
The real answer is they are likely having a hard time converting people to paid plans
That's a very weak moat unless you have something else like the friction of network dependence similar to a social network.
Do it then
Generally speaking it is very very difficult to have a license redefine legal terms. Either this theseus copy is legally a derivative work or it isn't, and text of a license is going to do at most very very little to change that.
Are you willing to bear the burden of litigation?
Copyright can only deny the right to make copies.
If someone spends years using your software and they have learned a mental model of how your software works, they can build an exact replica and there is nothing you can do about that since there is no copy you can sue over. Said user is also allowed to use AI tools to aid in the process.
What you want is an EULA, which is a contract users explicitly have to agree with. A license file only grants access or the right to copy, it doesn't affect usage of your software.
"AI slop is rapidly destroying the WWW; most content is becoming lower and lower quality, and it's getting hard to tell whether it's true or hallucinated. Pre-AI web content now reads like the gold standard for correctness, and browsing the Internet Archive is much better. This will only push content behind paywalls. A lot of open-source projects will go closed source, not only because of the extra work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and relicensed as proprietary."
Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.
After the unsuccessful FUD era, Microsoft now embraces Linux, running it alongside Windows via WSL to counter macOS's Unix-like popularity, and because of Linux and open source dominance in the cloud OS space.
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business related reasons, but now found an excuse they can sell to the public.
(I might be very wrong here)
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
Project model capabilities out a few years. Even if you only assume linear improvement at some point your risk-adjusted outcome lines cross each other and this becomes the preferred way of authoring code - code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
That can't be right, can it? Given stable software, the relative attack surface keeps shrinking. Mythos does not produce exploits. Should be defenders advantage, token wise, no?
Defenders have to find all the holes in all their systems, while attackers just need to find one hole in one system.
AI in general will, don't worry. "Move fast and break things" makes more exploits than "move steadily and fix things" does.
There are ways to use LLM service providers that leave no tokens unused, by just billing per token. Unsurprisingly, this quickly becomes much more expensive than subscriptions.
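The metered-vs-subscription point above can be made concrete with back-of-the-envelope arithmetic. Both prices below are hypothetical placeholders, not any vendor's real rates:

```python
# Rough comparison of pay-per-token billing vs a flat subscription.
# PRICE_PER_MTOK and SUBSCRIPTION_MONTHLY are assumed for illustration.

PRICE_PER_MTOK = 15.00         # $ per million tokens (assumed)
SUBSCRIPTION_MONTHLY = 200.00  # $ flat monthly rate (assumed)

def metered_cost(tokens_per_day: int, days: int = 30) -> float:
    """Monthly cost when every token is billed individually."""
    return tokens_per_day * days * PRICE_PER_MTOK / 1_000_000

# A heavy user burning 5M tokens/day pays far more metered than flat:
heavy = metered_cost(5_000_000)
print(heavy, heavy > SUBSCRIPTION_MONTHLY)  # -> 2250.0 True
```

The gap is why flat-rate plans effectively subsidize heavy users, and why providers cap or throttle them.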
This is not true.
The problem, rather, is that managers at many companies don't allow their programmers to apply their knowledge about security; the programmers are expected to churn out new features instead.
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
This is true up to a point, unless the requirement or contract itself has a loophole the attacker can exploit without limit. But I don't think that is the case here.
Say someone found a loophole in sort() that can cause a denial of service. The cause would be the implementation itself, not the contract of sorting. People plus AI will figure it out and fix it eventually.
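The contract-vs-implementation distinction above can be shown with a tiny sketch: a deliberately naive sort satisfies the sorting contract perfectly, yet its quadratic cost is exactly the kind of implementation detail a DoS exploits (the example function is mine, not from the thread):

```python
# The sorting *contract* (output is ordered) says nothing about cost.
# A naive O(n^2) sort meets the contract but is a DoS vector on large
# adversarial inputs, while the stdlib sort stays O(n log n).

def selection_sort(xs):
    """Contract-correct but quadratic: fine for tiny lists, dangerous
    if an attacker controls the input size."""
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

data = [5, 3, 1, 4, 2]
assert selection_sort(data) == sorted(data) == [1, 2, 3, 4, 5]
```

Fixing such a bug means swapping the implementation; the contract, and every caller, stays unchanged.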
LLMs will find your issues faster, but not necessarily more accurately than a domain expert. But experts cost money, and their effort takes longer to apply.
Are LLMs going to reduce everyone's wages because they are cheap labour?
(I just hope they can learn to verify the exploits are valid before sharing them!)
I might like to live there.
For projects with NO WARRANTY, the risk is minimal, so yes there are upsides.
For a commercial project like cal.com, where a breach means massive liability, they don’t have the resources to risk breaches in the short term for potentially better software in the long term.
I'd give them more credit if they used the AI-slop unmaintainability argument.
Our scheduling tool, Thunderbird Appointment, will always be open source.
Repo here: https://github.com/thunderbird/appointment
Come talk to us and build with us. We'll help you replace Cal.com
Sounds like a great tool though. How much is the hosted version?
1. https://stage.appointment.day
As a datapoint: FF + Chrome with lots of stuff open uses 2.6GB on my machine. With XFCE and a GB of other apps, it’s using about 4GB. 15 year old machine. Perf is fine.
2. Gives email address.
3. Is told to join the waitlist.
4. Blocks email address given at 2.
Hardly a terrific experience.
do we need an appointment :)
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
https://en.wikipedia.org/wiki/Linus%27s_law
No, attackers are also rational economic actors. They don't randomly attack software just for the aesthetic beauty of the process. They attack for bounties, for fame, for national interest, etc. Whatever the reason, it's not random, and thus they DO have a budget, in both time and money. They attack THIS project rather than another because it's interesting to them. If it's not, they might move to another project, but they certainly won't spend infinite time, precisely because they don't have infinite resources. IMHO it's much more interesting to consider the realistic arms race than theoretical scenarios that never take place.
It has also become a trend that LLM-assisted users generate more low-quality issues, dubious security reports, and noisy PRs, to the point where keeping the whole stack open source no longer feels worth it. Even if the real reason is monetization rather than security, I can still understand the decision.
I suspect we will see more of this from commercial products built around a FOSS core. The other failure mode is that maintainers stop treating security disclosures as something special and just handle them like ordinary bugs, as with libxml2. In that sense, Chromium moving toward a Rust-based XML library is also an interesting development.
This game will end horribly.
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
Did they ever promise to keep their codebase FOSS forever, in a way that differs from what they're already doing over at cal.diy? If not, I don't see why it would be reasonable to expect them to spend a huge amount of money re-scanning on every single commit/deploy in order to keep their non-"DIY" product open source.
But you might need thousands of sessions to uncover some vulnerabilities, and you don’t want to stop shipping changes because the security checks are taking hours to run
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
This! I love OSS but this argument seems to get overlooked in most of the comments here.
I feel like with AI, self-hosting software reliably is becoming easier so the incentives to pay for a hosted service of an OSS project are going down.
Wanna sack a load of staff? - AI
Wanna cut your consumer products division? - AI
Wanna take away the source? - AI
It has always been odd to me they didn’t have this functionality years ago. It’s been requested for a long long time
So do that and fix your bugs. This post makes no sense.
I'm not sure I agree with Drew Breunig, however. The number of bugs isn't infinite. Once we have models that are capable enough and scan the source code with them at regular intervals, the likelihood of remaining bugs that can be exploited goes way down.
I would rather say that the core product is not strong and differentiated enough to resist this new age of coding, and it's an attempt to protect revenues.
So not really.
I think they went closed source as there are too many decent clones based off their code and they realized it's eating up their niche.
Not to mention, I presume the core bits of Cal.com's source code are already in place and aren't going to change significantly?
Like, this feels like a business decision and not a security decision
No you certainly didn't, otherwise you shouldn't have come up with such a meaningless excuse!
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Give him $100 to obtain that capability.
Give each open source project maintainer $100.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's about the general dynamic of this crisis. The real answer is for the foundational model providers to put up this money. But instead, at least one of them seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
You can keep the untested branch closed if you want to go with “cathedral” model, even.
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
To what end? You can just look at the code. It's right there. You don't need to "hack" anything.
If you want to "hack on it", you're welcome to do so.
Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
Since such "clean room" implementations ostensibly do not see the source, it's arguably irrelevant whether those sources are open or not. Such implementations will happen regardless of whether the sources being reimplemented are open or closed.
We did consider arguments in both directions (e.g. it's easier to recreate the code, agents can better understand how it works), but I honestly think the security argument favors open source: OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell, I am in the open source camp, though.
[1] https://github.com/xataio/pgroll
They should provide free continued git commit security analysis for open source projects. That would increase the quality of open source projects and would inspire more projects to go open source, which is also a win for the AI companies.
Scan everyone's code, for free. Make all code as secure as an llm can make it as a baseline.
One must assume this was a direction they wanted to move towards and this is the justification they thought would be most palatable.
(Enter name of large software vendor here) has long since proven that security through obscurity is not a real thing.
IMHO, open source will continue to exist and be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable; in fact, it simply helps competitors. So why do it?
The only open source that will remain will be the real open source projects that are true to the ethos.
Otherwise, copying code and improving it with AI or with humans is the same, as long as the product improves.
I doubt that many semi-automatic AI copies can really improve a product more than the original team, for really valid products.
AI will be a filter of bad quality.
Attribution isn't required by many permissive open source licenses. Dependencies with those licenses often end up inside closed-source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope by going closed source.
How has this changed?
https://git.sr.ht/~bsprague/schedyou
What's worse is you're choosing to keep it buggy behind closed doors so no one can see the bugs. That's 100% the wrong approach.
It seems like an easy decision, not a difficult one.
Proposition 2: The most popular shared libraries are going to be quickly torn apart by LLM security tools to find vulnerabilities
Proposition 3: After a brief period of mass vulnerability discovery, the overall quality of shared libraries will dramatically increase.
Conclusion: After the initial wave of vulnerabilities has passed, the main threat to open source code bases is in their own comparatively small amount of code.
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
"But if everyone can read the source code, they'll be able to find vulnerabilities more easily!"
No. Security by obscurity has proven wrong.
Open Source Isn't Dead - https://news.ycombinator.com/item?id=47780712
Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089
For example using something like Next.js means a very large chunk of important obscurity is thrown out the window. The same for any publicly available server/client isomorphic framework.
I thought this was grandiose and projecting their own weakness onto others, an extremely unappealing marketing position that may get clicks in the short term but will undermine trust beyond that.
That's right. Nothing.
And given that they will not rewrite the whole codebase in the next few days it means that security vulnerabilities are still there to be discovered by someone willing to pay the AI tax.
Maybe you are referring to the whole Github thing.
* Someone lols at code. Answer: ignore them.
* Someone sees your vulns. Answer: someone is already trying to hack you anyway.
https://news.ycombinator.com/item?id=47780712
That said, I agree with another commenter that this seems like more of a business decision than a security one.
I always say to just stop with the virtue signaling led sales technique.
I despise the "we are like the market leader of our niche, but open source" angle. In my opinion, developers, as buyers and as a community, do not care about open source anymore these days. There is no long-term value in it. The moment a product gets traction, the open source element becomes a constant mild headache: an open source product means the company has no intellectual property over the core of the product, which makes it hard to raise money or sell the company. And whenever a product gets traction, they will take any excuse to make it closed source again. With an open source product they are just coasting on brand. Regardless of your personal opinion, this has been largely true for most for-profit businesses.
Open source is largely nothing more than a branding concept for a company backed by investors.
And a religion that was invented by those who wanted to have all the world's code for free to train AI to code.
Hi {audience},
It is with a heavy heart that I have to announce that {thing we were going to do anyway} is necessary due to AI. AI has changed the industry and we are powerless to do anything other than {unpopular decision we were going to do regardless}.
This post's argument seems circular to me.
That is not true.
https://en.wikipedia.org/wiki/Security_through_obscurity
Security through obscurity doesn't work in isolation. It doesn't work as the only solution. It is discouraged because it can be mistaken for a panacea.
But it also doesn't hurt in many instances. Holding back your source code can be a strategic advantage. It does mean that adversaries can't directly read it (nor can your friends or allies!)
Having a proprietary protocol or file format, this is also "security through obscurity" and it may slow down or hinder an attacker. Obscurity may be part of a "defense in depth" strategy that includes robust and valid methods as well.
But it is harmful to baldly claim that "it doesn't work".
AI can clone something like cal.com with or without source code access, so in trying to pointlessly defend against AI they are just ruining the trust they built with their customers, which is the one thing AI can never create out of thin air.
We exclusively run our companies with FOSS software we can audit or change at any time because we work in security research so every tool we choose is -our- responsibility.
They ruined their one and only market differentiator.
We will now be swapping to self hosting ASAP and canceling our subscriptions.
Really disappointing.
Meanwhile at Distrust and Caution we will continue to open source every line of code we write, because our goal is building trust with our customers and users.
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
That said, I think it's important to try to see where things stand from multiple angles rather than bucket them from your filter bubble alone. Fear sells, and we need to stop buying into it.
Charge for api access, take a cut of the extensions economy.
How do I do that? I'm open source.
AI also goes a long way towards erasing the distinction between source code and executable code. The disassembly skill of a good LLM is nothing short of jaw-dropping.
So going closed-source may be safer for SaaS, but closing the source won't save a codebase from being exploited if the binaries are still accessible to the public. In that sense, instead of dooming SaaS as many people have suggested AI will do, it may instead be a boon.
At your cost.
Every time you push. (or if not that, at least every time there is a new version that you call a release)
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
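The "at your cost, every time you push" objection above comes down to simple arithmetic. A rough sketch, with every figure assumed purely for illustration:

```python
# Back-of-the-envelope cost of re-scanning on every push and every
# dependency update. All figures below are assumptions, not real data.

SCAN_COST_USD = 50.0       # assumed cost of one full LLM security scan
PUSHES_PER_WEEK = 40       # assumed activity of a mid-sized project
DEP_UPDATES_PER_WEEK = 10  # assumed rate when versions are not pinned

weekly = (PUSHES_PER_WEEK + DEP_UPDATES_PER_WEEK) * SCAN_COST_USD
print(f"${weekly:.0f}/week, ${weekly * 52:,.0f}/year")
# -> $2500/week, $130,000/year
```

Even with generous assumptions, the annual figure lands well beyond what most volunteer-run projects can justify, which is the parent comment's point. Pinning dependencies or scanning only tagged releases shrinks the multiplier but also the coverage.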
I believe that N companies using an open source project and contributing back would make this burden smaller than it is for one company using the same project closed-source.
Great move.
Open-source supporters don't have a sustainable answer to the fact that AI models can find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug reports left hanging for days.
Unfortunately, this is where things are going, and open-source supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests, which is another good idea.
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
See: https://www.youtube.com/watch?v=2CieKDg-JrA
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46)
Then good, that overengineered, intentionally-crippled crap should go away.