Discussion (38 comments). Read the original on HackerNews.
I am interested to hear if anyone knows why the format may not resonate with researchers or those reading papers in general?
My own reason is that to get value from a "social" site, the number of interactions has to be high and fast-paced for people to keep engaging, which may not be achievable with research papers.
You have to either invest a lot to get a critical mass to join your site, or make it extremely entertaining to be there from the start. Apart from all the criticism, this is what Facebook, Instagram, Twitter, and LinkedIn got right from the start. For their intended audiences, it is either useful or fun to be on their platforms.
I don't see much added value in most arXiv extensions, except for SemanticScholar [1], which may have benefited from being one of the first.
[1] https://www.semanticscholar.org/
Other formats are dense and require reading and internalizing the content.
Somewhat recently, the ACM (one of the premier publishers for computer science) integrated AI-generated summaries for all papers, and it made these summaries appear in place of author-written abstracts; to find the abstract, users had to use a toggle. The ACM argued that this was a benefit. After significant community pushback, the ACM has swapped things: author-written abstracts now appear first, but users are still offered a toggle to access AI-generated summaries instead.
As highlighted by professor Anil Madhavapeddy [1], the AI summaries are often factually incorrect, sometimes obviously, but often subtly. This sentiment was corroborated by numerous colleagues of mine less publicly: they checked the AI-generated summaries of their own papers, and for almost every paper were able to identify at least one factually incorrect or significantly misleading statement.
Some people argue that AI-generated summaries help to democratize academia; I think instead they are democratizing misunderstanding. The models fundamentally lack the capacity to "understand" when what they say is wrong or misleading. It is not uncommon for students to come to my office hours with severe misgivings about our course material because they asked an LLM some innocuous question to which they thought surely it would generate an accurate response. The course material is, of course, drawn from various sources, so the LLM ought to be fairly likely to generate accurate responses. In contrast, a publication is often (or, by definition in my field, necessarily) introducing novel conclusions; this means the LLM is less likely to generate an accurate summary for a paper than for course materials. The course material summaries are already problematic enough, so I think applying this to research is just a bad move.
I understand the appeal. I understand how liberating it must feel to get to "talk to" a paper to seek greater understanding. But if you don't already know enough about the material for this to be useful, you also don't know enough to recognize when the responses are subtly incorrect, and I think this completely undermines the purpose of publication in the first place.
[1] https://anil.recoil.org/notes/acm-ai-recs
Instead of some LinkedIn / TikTok / Facebook / Insta for X, create a group or channel in an existing network. Create a subreddit, or Facebook group or telegram channel. There are a number of existing social networks that are good at creating sub-communities. I don't want to join another social media platform.
Because you don’t control the existing social network. Meaning you can’t exploit and profit from the users in a group¹, which is the whole point of digital social networks, outside of a few truly ideological non-profits (remember Diaspora?).
¹ And if you do find a way, the parent network will simply eat you up.
They are ad-financed, clickbait-driven "engagement" machines that are designed to make people addicted and do not respect their users at all.
So TikTok for scientific papers already makes me not want to engage with the concept. But a social network with a focus on science is something I am interested in. The foundation would need to be solid, though: I'd need to trust that they won't sell out to some ad network two months after they've established a user base.
My $0.02: try creating an AI-powered science channel on YT or Insta before spending time on a dedicated app.
But the popularity metrics and AI aspects seem likely to bias the feed toward certain types of papers, so potentially useful ones may never get found.
When I opened the link, I expected to be shown the target content directly. If there's a login screen or any onboarding explanation, it should either be postponed or integrated into the experience.
Is the gravity set very high or am I getting too old to play Flappy Bird with Transformers?
This looks amazing. I hope Android will be an option.
Seems like a cool idea, but also really niche. I could see a map tool as part of this video thing, where you can explore word/phrase associations between adjacent papers as a similarity and connection search.
(1) https://philippdubach.com/posts/rss-swipr-find-blogs-like-yo...
FYI, I'm getting "Too many signups right now. Please try again in a few minutes." when trying to join the waiting list. (Congrats, haha, but good to fix.)
I joined the waiting list.
I hope it's not purely AI-generated, but who knows, maybe it is and it's still interesting and informative. It could still be, given the huge volume and high-signal source material. Wish I'd thought of this, actually.
There are so many papers being written these days that it's difficult to find all the ones relevant to your work and interests. Likewise, there's a discoverability problem for authors who are not already well known. Andrej Karpathy's arXiv Sanity site used to be a decent way of sifting through papers in some areas, but sadly it's been down for a while now.
It is an interesting mix, though. I am not dismissing it outright. After all, I am driving a Ford Lightning and kinda like ratty..