Discussion (104 Comments)
As the maintainer of ghidra-delinker-extension, whenever I get a non-trivial PR (like adding an object file format or ISA analyzer) I'm happy that it happens. It also means that I get to install a toolchain, maybe learn how to use it (MSVC...), figure out all of the nonsense and undocumented bullshit in it (COFF...), write a byte-perfect roundtrip parser/serializer plus tests inside binary-file-toolkit if necessary, prepare golden Ghidra databases, write the unit tests for them, make sure that the delinked stuff actually works when relinked, have it pass my quality standards plus the linter, and have a clean Git history.
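The byte-perfect roundtrip requirement above boils down to a single property: serializing what you parsed must reproduce the input exactly. A minimal Python sketch, using a hypothetical two-byte-length-prefixed record format as a stand-in (the real COFF logic is far hairier):

```python
import struct

# Toy stand-in format: each record is a 2-byte big-endian length
# prefix followed by that many payload bytes.
def parse(data: bytes) -> list[bytes]:
    records, offset = [], 0
    while offset < len(data):
        (length,) = struct.unpack_from(">H", data, offset)
        offset += 2
        records.append(data[offset:offset + length])
        offset += length
    return records

def serialize(records: list[bytes]) -> bytes:
    return b"".join(struct.pack(">H", len(r)) + r for r in records)

def check_roundtrip(data: bytes) -> None:
    # Byte-perfect: re-serializing the parsed structure must
    # reproduce the original input, byte for byte.
    assert serialize(parse(data)) == data
```

In practice you'd run this property over a corpus of real object files, so any field the parser silently drops or normalizes shows up immediately as a failing byte diff.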
I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me.
Conversely, at work I implemented support for PKCS#7 certificate chains inside of Mbed-TLS and diligently submitted PRs upstream. They were correct, commented, documented, and tested; everything was spotless, by the implicit admission of one of the developers. It's still open today (with merge conflicts, naturally), and there are like five open PRs for the exact same feature.
When I see this, I'm not going to insist, I'll move on to my next Jira task.
I know when I run into bugs in a project I depend on, I'll usually run it down and fix it myself, because I need it fixed. Writing up the bug along with the PR and sending it back to the maintainer feels like common courtesy. And if it gets merged in, I don't need to fork/apply patches when I update. Win-win, I'd say.
But if maintainers don't want to take PRs, that's cool, too. I can appreciate that it's sometimes easier to just do it yourself.
But I feel like it was always true that patches from the internet at large were mostly more trouble than they were worth. The reason people accept them is not for the sake of the patch itself but because that is how you get new contributors who eventually become useful.
Over the past month or so I implemented a project from scratch that would've taken me many months without an LLM.
I iterated at my own pace, I know how things are built, it's a foundation I can build on.
I've had a lot more trouble reviewing similarly sized PRs (some implementing the same feature) on other projects I maintain. I made a huge effort to properly review and accept a smaller one because the contributor went the extra mile, and made every possible effort to make things easier on us. I rejected outright - and noisily - all the low effort PRs. I voted to accept one that I couldn't in good faith say I thoroughly reviewed, because it's from another maintainer that I trust will be around to pick up the pieces if anything breaks.
So, yeah. If I don't know and trust you already, please don't send me your LLM generated PR. I'd much rather start with a spec, a bug, a failing test that we agree should fail, and (if needed) generate the code myself.
Delaying what?
If the maintainer authors every PR they don’t have to waste time talking with other people about their PR.
Also, at the point they actively don’t want collaboration, why do open source at all?
Strange times, these.
Collaboration is a common pattern in larger projects but is uncommon in general
That's a unicorn.
If I'm lucky, I get a "It doesn't work." After several back-and-forths, I might get "It isn't displaying the image."
I am still in the middle of one of these, right now. Since the user is in Australia, and we're using email, it is a slow process. There's something weird with his phone. That doesn't mean that I can't/won't fix it (I want to, even if it isn't my fault. I can usually do a workaround). It's just really, really difficult to get that kind of information from a nontechnical user, who is a bit "scattered," anyway.
0: I like BurntSushi's Rust projects since they're well architected and fast by default, which makes them super easy to edit and write in.
How do you like Helix as a starting point? Currently, I'm having Claude write a little personal text editor with CodeEditTextView as a starting point and now that I saw your comment I suddenly realized I mostly like using a modal editor and only didn't do it here because I'm moving from a webpage (where Vimium style stuff never appealed to me). Good hint that. I wonder if neovim's server mode will be helpful to me.
Code formatting is easy to solve: you write linting tests, and if they fail the PR is rejected. Code structure is a bit trickier. You can enforce things like cyclomatic complexity, but module layout is harder.
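A rough sketch of what such a gate can check mechanically. This uses Python's ast module to approximate McCabe-style cyclomatic complexity by counting branch points; the node list and the limit are illustrative choices, not a standard:

```python
import ast

# Branch points counted toward a rough McCabe-style score.
# This is an approximation, not a full McCabe implementation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + the number of branch points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def lint_gate(source: str, limit: int = 10) -> bool:
    """Reject (return False) when the complexity budget is exceeded."""
    return cyclomatic_complexity(source) <= limit
```

Real linters (e.g. flake8 with `--max-complexity`, which wraps mccabe) do this per function rather than per file, and that per-function check is what you'd wire into CI to auto-reject PRs.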
I guess my point being that it's become pretty easy to convert back and forth between code and specs these days, so it's all kind of the same to me. The PR at least has the benefit of offering one possible concrete implementation that can be evaluated for pros and cons and may also expose unforeseen gotchas.
Of course it is the maintainer's right to decide how they want to receive and respond to community feedback, though.
Sometimes I'm not a fan of the change in its entirety and want to do something different but along the same lines. It would be faster for me to point the agent at the PR and tell it "Implement these changes but with these alterations..." and iterate with it myself. I find the back and forth in pull requests to be overly tiresome.
Thank your contributor; then use the PR - and the time you'd have spent reviewing it - to guide a reimplementation.
Submitters use LLMs to generate the code and reviewers use LLMs to review it.
This is just like my favorite: "We can use LLMs to write the code and write the tests."
The JavaScript ecosystem is a good demonstration of a platform encumbered with layers that can only ever perform the abilities provided by the underlying platform, while adding additional interfaces that, while easier for some to use, frequently provide a lot of functionality a program might not need.
Adding features as a superset of a specification allows compatibility between users of the base specification; failure to interoperate would require violating the base spec, and at that point they are just making a different thing.
Bugs are still bugs, whether a human or AI made them, or fixed them. Let's just address those as we find them.
Somehow it's not really happening.
Repo, for those interested: https://github.com/jaggederest/pronghorn/
I find that the core issues really revolve around the audience - getting it good enough that I can use it for my own purposes, where I know the bugs and issues and understand how to use it, on the specific hardware, is fabulous. Getting it from there to "anyone with relatively low technical knowledge beyond the ability to set up home assistant", and "compatible with all the various RPi/smallboard computers" is a pretty enormous amount of work. So I suspect we'll see a lot of "homemade" software that is definitely not salable, but is definitely valuable and useful for the individual.
I hope, over the medium to long term, that these sorts of things will converge in a "rising tide lifts all boats" way so that the ecosystem is healthier and more vibrant, but I worry that what we may see is a resurgence of shovelware.
As a sidenote: what's with the usage of "take" to designate an opinion instead of the word "opinion" or "view"?
This is just a change in position of what work is useful for others to do.
Give me ideas. Report bugs. Request features. I never wanted your code in the first place.
https://steve-yegge.medium.com/vibe-maintainer-a2273a841040
95% of this is covered by a warning that says "I won't merge any PR that a) does not pass linting (configured to my liking) and b) introduces extra deps"
> With LLMs, it's easier for me to get my own LLM to make the change and then review it myself.
So this person is passing on free labour and instead prefers a BDFL scheme, possibly supported by a code assistant they likely have to pay for. All for a supposed risk of malice?
I don't know. I never worked on a large (and/or widely adopted) open-source codebase. But I am afraid we would never have had Linux under this mindset.
It feels like a lot of people assume a sense of entitlement because one platform vendor settled on a specific usage pattern early on.
Maybe I'm not up to date with the bleeding edge of linters, but I've never seen one that adequately flags
There's all sorts of architectural decisions at even higher levels than that. I find myself doing the same; nowadays I want bug reports and feature requests, not PRs. If the feature fits in with my product vision, I implement and release it quickly. The code itself has little value, in this case.
I know my code base(s) well. I also have agentic tools, and so do you. While people using their own tokens is maybe nice from a $$ POV, it's really not necessary, because I'll just have to review the whole thing anyway (myself as well as by an agent).
Weird world we live in now.
The fact-of-life journaling about the flood of code, the observation that he can just re-prompt his own LLM to implement the same feature or optimization: all of this would have been controversial pontificating three months ago, let alone a year or two ago. But all of a sudden enough people are using agentic coding tools - many having skipped the autocomplete AI coders of yesteryear entirely - that we can have this conversation.