
Discussion (8 comments) on Hacker News
This is what led to algorithm-based filtering. Hacker News uses a simplistic algorithm, but it is definitely using one, and it works well enough. It's why I come here. We all collectively vote things up, and what remains is nominally interesting enough to skim from the front page, with a bit of editorializing.
Social networks tried to game the algorithms for ad revenue, which is why they are a lot less popular these days. Sites like Medium, Substack, Tumblr, etc. took over from simple blogs and immediately started raising walled gardens around them to become discovery platforms, offer recommendations, etc.
But at least they support RSS. A lot of websites still do. If you run any kind of website publishing regular news or article content and you don't support a feed, you are being an idiot. It's easy, doesn't really cost anything, and you might actually get people using your feed once in a while. Your site might already have one without you realizing it. Most newspapers have feeds. They are everywhere. The main issue isn't finding them but sifting through them. It always was.
With agent-based approaches, you control the algorithm. That wasn't possible in the past. LLMs can summarize, aggregate, categorize, group, filter, etc.
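The point about controlling the algorithm can be sketched concretely. Below is a minimal, self-contained example of a user-controlled feed filter: it parses an RSS document and keeps only the items a scoring function approves. The `score_item` function here is a keyword heuristic standing in for an LLM relevance call; the sample feed, function names, and interest list are all hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample feed, trimmed to the fields the filter reads.
SAMPLE_RSS = """<rss><channel>
<item><title>New RSS reader released</title><description>A fast feed reader.</description></item>
<item><title>Celebrity gossip roundup</title><description>Who wore what.</description></item>
</channel></rss>"""

def score_item(title: str, description: str) -> int:
    # Stand-in for an LLM call: a keyword heuristic the user fully
    # controls. Swap this out for summarization, categorization, etc.
    interests = {"rss", "feed", "reader"}
    text = (title + " " + description).lower()
    return sum(word in text for word in interests)

def filter_feed(xml_text: str, threshold: int = 1) -> list[str]:
    # Keep the titles of items whose score meets the threshold.
    root = ET.fromstring(xml_text)
    kept = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        desc = item.findtext("description", "")
        if score_item(title, desc) >= threshold:
            kept.append(title)
    return kept

print(filter_feed(SAMPLE_RSS))
```

Because the scoring function lives on your side, the ranking logic is yours to change, which is exactly what hosted discovery platforms don't offer.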
+ With a strong enough social network you probably don't have to care about SEO as much
You can title your post about bad customer service practices in a unique way without a second thought [0] and your more traditionally titled posts can still make the first page of a Google search with a reasonable query [1].
+ Depending on your niche your target audience is likely to already be tapped in well enough to not have to rely on search engines for content catering to their interests.
I feel like search engine practices trend along the curve shown in that meme with the "fool" on one end, the "normie" in the middle, and the "Jedi" on the other end doing the same thing as the fool. Except in this case, "Jedis" only search for what's not present in their feeds (which don't have to be only RSS feeds), and fools can eventually cultivate their own feeds for their interests, reserving search engine use for mundane purposes: essentially a pop-culture almanac, phone book, and portal to Wikipedia.
[0]: https://shkspr.mobi/blog/2026/03/bored-of-eating-your-own-do...
[1]: https://shkspr.mobi/blog/2026/04/does-mythos-mean-you-need-t... — I Googled "mythos and open source". Interestingly, a forum discussion about this post came before it: https://itsfoss.community/t/does-mythos-mean-you-need-to-shu...
Also, RSS readers are generally automated. I know I've had them around for years pulling in articles that I never read. Just as a podcast "listen" is often just an automated download, RSS traffic does not necessarily involve anyone actually reading the article, whereas search traffic is generally high intent and at least results in eyeballs on the site, if not actual readers.
The data doesn't purport to cover any more than the one website. It's not like there are any generalisations about other websites derived from the data. It's just "these are where my hits come from."
They try to address that:
> I added RSS and Newsletter tracking. These data are very lossy. If someone is subscribed to my RSS feed and opens a post and their client downloads a lazy-loaded image at the end of the post, I get a hit.
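The lossy tracking described in the quote can be sketched as a tiny helper that appends a 1x1 "beacon" image to each post's feed HTML, so a hit is registered only when a client actually renders the full post and loads images. Everything here is an assumption for illustration: the domain, the `/beacon/` path, and the function name are hypothetical, and this is one possible shape of the technique, not the author's implementation.

```python
from html import escape

def with_read_beacon(post_html: str, slug: str,
                     base: str = "https://example.com") -> str:
    # Append a tiny image to the end of the feed entry. A reader that
    # renders the full post (and loads images) fetches this URL, which
    # the server can count as a "read". Clients that block images, or
    # that only prefetch without rendering, make the count lossy.
    beacon = (f'<img src="{base}/beacon/{escape(slug)}.gif" '
              'width="1" height="1" alt="">')
    return post_html + beacon

print(with_read_beacon("<p>Hello, feed readers.</p>", "hello-feed"))
```

This also shows why the numbers are noisy in both directions: a client that lazily downloads every image inflates the count, while one that strips images deflates it.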
Meanwhile, the long-time users who subscribed via RSS are still showing up like they always have. If this is the case, it’s a bit of a sad reality for content creators.