
Discussion (40 Comments) · Read Original on HackerNews
I think it's possible the amount of new software that will be written for an audience of 1-10 will be greater in 2026 than in any previous year, and then the same again for many years to come. I also think a lot of this software will be essentially 'hidden' - people just writing this stuff for themselves because the cost to say things to an agent is very low compared with the cost of actually planning out a software design and so forth.
Interoperability will probably be important in the next few years, and I wonder if this is solvable at the agent/LLM level (standing instructions like 'typically, use SQLite, use plaintext, use open standards' or whatever). I also think observability and ops will be pretty important: many people who want personal software don't care for the maintenance and upkeep.
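Such standing instructions could live in a project-wide agent config file. A hypothetical sketch (the file name and wording are illustrative, not any particular tool's actual format):

```
# AGENTS.md (hypothetical standing instructions for a coding agent)
- Prefer SQLite for storage; avoid bespoke binary formats.
- Prefer plaintext (CSV, JSON, Markdown) for data interchange.
- Prefer open standards and widely packaged libraries.
- Keep each tool a single file where practical, so it stays auditable.
```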
My wm, shell, terminal, editor, file manager, pop-up menu (dmenu-like) are all pure ruby (including font rendering and X11 bindings). These all started before I started using Claude to improve them, so they're still mostly hand-written, but that is changing.
They're messy, and they have bugs and "misfeatures" that work for me but would likely be painful for others.
Like OP, I don't really recommend anyone else use my code, at least not directly, and that is extremely liberating.
Overall, the projects cover the largest surface of what I use beyond the kernel, a browser, and Xorg (I'm so, so tempted, but I think an LLM will need to get a lot further before I could fit that into my schedule).
It doesn't need to be polished because it's mostly for me. It's okay for them to have bugs as long as they work better for me than the alternatives.
I strongly believe more people should do this. It's both a great learning experience, and it gives you a system that has exactly the features you actually want and use.
And it's only going to get easier to do this.
Would it be possible to share the jsonl files too, like how Mario Zechner shared his chats with the AI, while working on his Pi coding agent?
https://x.com/badlogicgames/status/2041151967695634619?s=46
[1]: https://fortune.com/2026/04/28/nvidia-executive-cost-of-ai-i...
[2]: https://www.briefs.co/news/uber-torches-entire-2026-ai-budge...
As a hobby, normal rates don't apply; I mention it only so the equivalent cost isn't misleading.
Most software is done after the first or second version, and the developers just keep working on it to justify their jobs, adding features no one needs that just get in the way or make the program worse. It'll be nice when the software I have does exactly what I need and doesn't change until I tell it to change for something I need.
The only feature macOS has shipped in the past 10 years that I actually like is AirDrop. Everything else is a PITA, or, as I've found out from upgrading, just bug-ridden slop that doesn't work well anymore.
A word of warning: a reliable lock tool for X11 is difficult. You should look at XSecureLock, which uses a multiprocess approach to avoid leaving the desktop unprotected in case of a crash. It also implements a number of countermeasures to ensure the desktop stays locked and the locker stays in front of the display. It's small too, so easy to audit (but written in C).
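The core of that multiprocess approach is a supervisor pattern: a parent process watches the locker child and respawns it if it dies. A minimal sketch in Ruby (an illustration of the pattern, not XSecureLock's actual code; the crash is simulated):

```ruby
# Supervisor pattern: if the locker child crashes, respawn it
# immediately so the desktop is never left unlocked.
def run_locker
  # A real locker would grab the keyboard/pointer and draw the lock
  # screen; here we just simulate a crash.
  exit!(1)
end

restarts = 0
while restarts < 3                  # cap retries for this demo
  pid = fork { run_locker }
  _, status = Process.wait2(pid)
  break if status.success?          # clean exit means the user unlocked
  restarts += 1                     # crash: respawn the locker
end
puts "respawned #{restarts} times"
```

XSecureLock goes further than this sketch: the privileged pieces (authentication, input grabbing) are split into separate small processes so a crash in any one of them fails closed rather than open.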
On this software itself: I’d like to know how this feels to use. It’s so very lightweight. Does it feel categorically different to what we are used to?
One of the things I miss about the 1980s home computers is that they booted into a usable command line in a handful of seconds, from a few KB in ROM. Imagine what today’s HW could do if we’d retained that level of efficiency.
There are big benefits to using a language that has good static analysis with LLMs.
Still a cool project, thanks for sharing.
I have wondered about having LLMs output machine code directly and skipping the compiler/assembler altogether. Then you'd just commit your spec/prompt and run it through the LLM to get your binary.
Rust can do that. You can run a hyper-stripped-down Rust made specifically for embedded devices, because those devices don't have room for a runtime.
I struggle to understand why, though.
0: https://github.com/isene/chasm
Also, reading it is probably not the intended use. It’s probably: “Hey Claude, give me a TLDR of this”
But the incessant "AI was used here, thus it is garbage" is long past time to enter the grave.