
Discussion (40 Comments)

spenrose · about 4 hours ago
Fantastic piece: shows how fundamental dynamics (queuing) generate practical problems AND what to do about them. This essay is better than 95% of tech blog posts I read via HN. Kudos!

An original sin of Free Software which carried through to Open Source and infects HN via its many Open Source believers is a reluctance to take project management seriously. OP shows that Jellyfin’s dictat... er, maintainer is not effectively managing the project. Open Source has no adequate answers (“fork” is not adequate).

armanckeser · about 4 hours ago
Thanks a lot! I appreciate the kind words. I do want to clarify that in Jellyfin-web's case, I think the maintainer means well and doesn't really take the "benevolent dictat... er, maintainer" approach. But there is this defeatist argument of "we have one maintainer, which means 6 months per PR and features not getting merged" that I think Open Source projects could do a better job of addressing.
spenrose · about 3 hours ago
Indeed. The problem arises from a two-step:

1. Free Software / Open Source are Good and True by assertion. There is no God but source code, and Stallman is its prophet.

2. Questions whose answers tend to contradict point 1, such as "Gee, the world runs on Python — as wonderful a job as Guido and his inner circle have done, is it time to ask what an ideal management structure for a technology worth (tens? hundreds? of) billions of dollars might be?", are not welcome, and are largely not asked.

pjc50 · about 3 hours ago
People get what they pay for.

(There could be a long discussion here about expectations placed on unpaid maintainers, and what the real purpose of Open Source / Free Software is beyond merely being zero cost at the point of use, but those tend to just go round forever. There's even a paid alternative to Jellyfin: Plex.)

erikerikson · about 2 hours ago
Have you paid?

I too have been frustrated by the way open source works. Maintainers are frequently people in high demand, and open source rarely pays commensurately.

So too have I given my work away and been met with entitled demands for service and time. I enjoyed writing the code and making something useful. I enjoyed seeing use validate that belief, but that doesn't feed the family or further my actual goals in life.

shimman · 3 minutes ago
Why should I pay, why can't we tax big tech and VC firms so the public can fund this stuff instead? They have all the money, they have all the power; why can't we take it away from them?

The world of software would be a vastly better place if the public had options to invest in software as well.

saulpw · 19 minutes ago
I agree that open source needs to find ways to engage with the economy, for multiple reasons. But the project needs to create the system/process/structure; individual contributors paying money won't affect these systemic problems even if the money they're paying is substantial. At best they create a temporary system of privilege.
lstolcman · about 4 hours ago
I read about a similar issue today in another context, in a thread about introducing AI code review in OpenWrt [0]. The idea came from the fact that the project has too few maintainers compared to the number of incoming patches.

Automated code review is supposed to catch the most trivial and basic mistakes (which, as the author claims, are often repetitive) and to speed up feedback. Ultimately, this should help push issues forward and let maintainers focus on harder problems like architectural issues, which need deep knowledge and which AI can't solve yet.

On the other hand, there are comments opposing the policies of AI companies, complaining about pointless and nit-picky-annoying code review comments that don't add much, and raising the concern that AI reviews get treated as a checklist for getting things merged, which can be frustrating given the volume of bot comments. The suggested mitigation would be to note explicitly that the AI code review is only a suggestion of changes. [1]

In the end, I think accepting AI in a way similar to the rules introduced in Linux (i.e., you can make your life easier, but you still have to understand the code) makes sense, given the limited code review capacity compared to the volume of incoming contributions, which is also discussed in the mailing list thread [2].

[0] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

[1] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

[2] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

twp · about 4 hours ago
AI reviews are flaky - maybe correct 80% of the time - and everyone hates flakiness.

AI code reviews easily double the work in reviewing: you have to both review the original code and the AI code review. The AI code review can be 80% correct, but you never know which 80% is correct and which 20% is garbage, so you have to review all the AI's comments.

Orygin · about 3 hours ago
Maybe, but I'll take an 80% correct review over no review at all. If it alleviates a good chunk of back and forth between the reviewer and the committer, it's still an overall time saver for the maintainer.
armanckeser · about 4 hours ago
Agreed. A problem I see with how AI reviews have been used is that after one kicks it off, the maintainer now has to review both the PR and the AI's review, which doesn't really save time. Like you said, AI review could be used more intentionally: e.g., all PRs have to pass an AI review that checks the baseline requirements, and the maintainer only starts reviewing after the contributor signals "I addressed everything the AI commented on, either by explaining my disagreement or by making the changes". That could save a lot of quality maintainer time.
ACCount37 · about 4 hours ago
"Pointless and nit-picky-annoying code review comments" seems like it could be mitigated with better prompting?

Leverage innate in-context learning by supplying the code review AI with an annotated list of "do"s and "don't"s. Define the expected reviewer behavior better and dial it in over time.

asdfasgasdgasdg · about 4 hours ago
Additionally, I can't be the only person who has initially viewed a received code review comment as a pointless nitpick only to realize it prevented a serious bug. I think as a code review recipient there is a natural human bias to believe that our code is already great and to see feedback as being less important than a truly neutral observer would.
lstolcman · about 4 hours ago
Apparently, this is what they are trying to do [0].

In some commercial projects we use Copilot reviews on GitHub, and we noticed this "low quality, nit-picky" style of review comments as well, but there is no way to get rid of them, since the reviewer is managed externally by GitHub...

[0]: http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

dvh · about 5 hours ago
You need to go back to the roots of open source. Fork it, merge your two changes, remove 90% of code you don't need, rename it, write article about speed up in the new successor vs the old thing.
armanckeser · about 5 hours ago
It is a rite of passage. Meet Jellypin, my fork that only allows watching media with subtitles
mike_hearn · about 1 hour ago
Forks don't have to be hostile. A perfectly reasonable way to react to an overwhelmed maintainer is just to do a friendly fork. Keep the original name, attribution, git history etc, update the README and start acting as a trustworthy lieutenant. You can review stuck PRs and merge them into your own branch, whilst also merging with upstream master. After a while if you seem to be making good calls the original maintainer can do a bulk merge from your branch to bring in many PRs at once, and maybe add you to the repository.
onionisafruit · about 4 hours ago
Check out my fork, Jellyden(iro). It’s the best way to watch Heat 2. All the media selection garbage is removed for a streamlined Heat 2 experience, because why would you want to watch anything else when you could be watching Heat 2 instead.
esafak · about 3 hours ago
Now all I have to do is pull both your forks and create my own so I can add one more feature. This is the future!
pjc50 · about 3 hours ago
It's worth asking "if AI is so great for software development, won't that make it dramatically easier for people to maintain their own forks of software?"

(I suspect the answer ends up being no, but the reasons could be interesting)

mike_hearn · about 1 hour ago
I'm curious why you think the answer would be no. I've had some success with resolving complex merges with GPT 5.4, and it seems obvious enough that AI is a good solution for maintainers who don't have anyone they can trust to take over the project whilst also needing to boost throughput.
zokier · about 4 hours ago
You jest, but I think there is a kernel of truth here. I do think people should be doing more (friendly) forks instead of funneling everything through upstream.
PaulKeeble · about 3 hours ago
Ultimately, if the new contributor brings others into the project to review and move it forward, it will quickly outpace development on Jellyfin and become the successful fork. No maintainer can cope with the workload of something like Jellyfin, and if they won't appoint more maintainers, there isn't much else to be done.

The key to the success is dealing with the outstanding merges by bringing maintainers onboard that are trying to contribute, build up the team and then the merges will get processed a lot faster.

armanckeser · about 1 hour ago
So this is exactly what's unintuitive about queues; an analogy would be car lanes. Intuition might lead you to conclude that if a 2-lane road constantly has traffic, going to 4 lanes will solve it. But this is not true: many people who would otherwise have taken public transport, skipped the trip, or stayed home will join the traffic until it once again equilibrates. Adding more maintainers without addressing the core problems of the queue won't lead to success.
saulpw · 12 minutes ago
If you only focus on "solving the traffic" then you're right: adding more lanes ultimately just leads to more lanes being full. But the overall throughput is much higher! We need more holistic solutions, to be sure, but I hope no one thinks that means I-5 around LA could just be 2 lanes, since they'll be full either way.
nemomarx · 17 minutes ago
Does induced demand apply to open source maintaining? What would be the mechanism for that?

For traffic, more users note that the highway is easier to drive on and come over. Would people notice development speeding up and start adding more issues?
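The intuition traded in this subthread can be put to numbers with the Erlang C formula for an M/M/c queue (a textbook model; the rates below are illustrative, not Jellyfin data). With one reviewer at 95% utilization, and then two reviewers whose extra capacity is fully absorbed by induced demand, utilization stays at 95%:

```python
import math

def erlang_c(c: int, a: float) -> float:
    """Probability that an arriving job must wait in an M/M/c queue
    (Erlang C formula), where a = lambda/mu is the offered load."""
    top = a**c / math.factorial(c) * (c / (c - a))
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Mean time a job spends queued before service starts (W_q)."""
    a = lam / mu
    assert a < c, "unstable queue: arrivals exceed total service capacity"
    return erlang_c(c, a) / (c * mu - lam)

# One reviewer handling mu = 2 PRs/week, with lam = 1.9 PRs/week arriving:
w1 = mean_wait(1, 1.9, 2.0)
# Two reviewers, but induced demand doubles arrivals to lam = 3.8:
w2 = mean_wait(2, 3.8, 2.0)
print(f"1 reviewer: {w1:.1f} weeks queued; 2 reviewers + induced demand: {w2:.1f} weeks")
# → 1 reviewer: 9.5 weeks queued; 2 reviewers + induced demand: 4.6 weeks
```

Under this toy model both comments hold a piece of the truth: pooling two reviewers roughly halves the wait even at the same 95% utilization, yet the wait stays long until arrival pressure itself is addressed.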

mkj · about 3 hours ago
Looking at the PR discussed, it's 34 commits! I'd probably ignore that too as a maintainer. The PR description isn't particularly motivating, "Cleans up the implementation", "see #6735 for the actual motivation".
armanckeser · about 2 hours ago
Fair call-out, although there are a couple of things to point out. I am used to a squash-merge workflow, which I think makes reviews easier, since the reviewer can more easily see what changed after their comments. Many of the commits are merge commits. If you look at the timeline of the original PR, you will see that it also started with a smaller scope, but as time passed I went through the "while at it, let me also fix this" loop that I mentioned in the article.

The point of the article is: there is a feature people would like, there is someone who wants to add it, and more than enough time for this feature to be merged has been spent, yet the feature is nowhere to be found. That's the two-way street I am trying to get across. I wish I hadn't even been able to open the PR; I wish the maintainer would use more automation tools to groom feature requests and line up potential contributors with agreed-upon plans and timelines, so that both sides' time could be used much more effectively.

As far as PR descriptions and the like go, I asked multiple times what the best route to merging would be. If that route went through better descriptions, I was happy to write them. As you can see, I wasn't aware of the "no conventional commits" rule, so in my next PRs I used the correct approach, but that should be completely automatable. Yes, I should have spent more time studying Jellyfin's conventions, but I shouldn't have to. Not because it's unfair to me, but simply because there are more contributors than maintainers, so maintainers should not rely on desired behavior from contributors; they should enforce that behavior as much as possible.

Liskni_si · about 3 hours ago
Many of those are "Merge branch 'master' into armanc/subtitle-sync-refactor". Rebasing the PR on top of master would bring that down to like 15 or something.
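For readers unfamiliar with the mechanics: a rebase replays only the branch's own commits on top of master, so the "Merge branch 'master' into ..." commits disappear from the PR's history. A self-contained sketch in a throwaway repo (branch and file names are made up for illustration):

```shell
set -e
# Build a tiny throwaway repo where a feature branch accumulated a
# merge commit from master, then rebase the merge commit away.
dir=$(mktemp -d) && cd "$dir"
git init -q -b master repo && cd repo
git config user.email demo@example.com && git config user.name demo
echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb feature
echo feat > b.txt && git add b.txt && git commit -qm "feature work"
git checkout -q master
echo up > c.txt && git add c.txt && git commit -qm "upstream change"
git checkout -q feature
git merge -q -m "Merge branch 'master' into feature" master
echo "merge commits before rebase: $(git rev-list --merges --count master..feature)"
git rebase -q master   # replay only our own commits on top of master
echo "merge commits after rebase:  $(git rev-list --merges --count master..feature)"
```

After such a rebase an open PR needs a `git push --force-with-lease`, which some projects discourage mid-review; a squash merge at the end achieves a similarly clean history.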
mkj · about 3 hours ago
Fair enough. A 15 commit PR is still pretty long winded.
NortySpock · about 3 hours ago
First, I think sorting PRs by "recently updated" can be a good proxy for "does anyone care about this PR"... If it's being bumped and reviewed, it is.

But also definitely start setting up linting rules / labels to indicate how healthy or close something is to being merged.

The goal is to limit work-in-progress, and focus on getting PRs that are easy to merge over the finish line.

Edit: and yeah, a weekly review cadence goes a long way to triage tickets and PRs and get some initial feedback to developers. I also like the "next review on this date" proposal to push certain problematic PRs to a slower cadence so they're not occupying too much bandwidth.

dottedmag · about 2 hours ago
I've had a one-line PR fixing a real bug sit unreviewed for years. It didn't need any bumps or reviews.
PaulKeeble · about 3 hours ago
When it comes to open source projects, before you do any work, go and look at the merge requests/PRs: who is getting them resolved, and are lots of them seemingly stuck? Some projects just don't take PRs from unknown people and don't invite contributors onto their team; projects with one maintainer and little community collaboration aren't worth writing code for.

There is still a constraint, but a project like the Linux kernel has put many layers of review, testing, and merging between the source of truth and the underlying contributions. Having a number of lieutenants who deal with subsections of the system, test merges, and review contributions is necessary for a project to grow.

esafak · about 3 hours ago
When it comes to open source software I would:

1. Modularize the code to allow plugins, so users can start using them immediately and you can vet them at your own pace.

2. Make tests robust and easy to run (one command to run, at most one to set up) so you don't have to pore over their code to have some confidence that it works.

bombcar · 42 minutes ago
This is the real key - modularize, pluginize, or otherwise make it so features can exist behind an "experimental" tag or similar, so that they can get merged and out there and not disrupt users if something doesn't work or goes wrong.

Actual Budget uses this well and merges PRs much faster, because they have the "it's experimental" to hide behind.

pessimizer · about 2 hours ago
The basic problem is that these projects need democratic governance, not dictators. If you do the thing that lawyers always tell us not to do and compare code to law, you'll see how inadequate a king alone is to maintain all the law of a kingdom. He does not have time to approve everything. He does not have time to even be aware of everything. He has no easy way to figure out what his subjects want.

The problem is that we haven't created theory and tools for online governance. We just went with dictatorship. If using a piece of software automatically made me part of the community of that piece of software, we'd have something. Only to the extent that I felt like participating of course, but if software would aid that, in a uniform manner, across projects, that would be an achievement.

The code has been treated as the end-all be-all, but projects get rewritten. The important part is the institution. We've been regularly concentrating that institution into one unpaid or poorly paid guy, until it gets handed to some corporate vulture who thinks of the users as prey.

The irony of this situation is that a backlog of PRs means that you have an overwhelming surplus of people willing to do free work. Seeing it as a problem is some sort of ideological failure. We just hate democracy and losing control so much that we're willing to starve while surrounded by food.

NegativeK · 19 minutes ago
I think maybe we should listen to the lawyers here.

The tension is that people start with writing code to scratch an itch and share it freely. Every bit of community management pulls away from sitting down and coding a personal project. Even accepting the free work of others.

There are people who like doing both, but someone publishing their code online doesn't imply that. And it certainly doesn't imply that they hate democracy.

dugidugout · about 1 hour ago
I think there is some truth here, but implementation gets very messy in any case. Look at NixOS for an example of the insane amount of time a democratic structure demands of a project's governing body, and how this ultimately shaped that project's inhabitants.

That is to say, I do agree software, and especially open source, is ripe ground for experimenting with different ways to organize!