

Discussion (11 Comments)
Read Original on HackerNews

kazinator•about 2 hours ago
Author doesn't seem to understand that LLM AI works by predicting tokens out of training data. The model writes a research summary because it digested academic papers and other sources in its training. When you say "AI can already do social science research better than most professors" that is false unless you mean the colloquial sense of "research" meaning "reading other people's existing stuff and paraphrasing it in my own words". But the AI doesn't even have "own words"; they are the training data's words.

If all scientists suddenly do nothing all day but play with AI, all research grinds to a halt!
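The "predicting tokens out of training data" claim above can be illustrated with a toy sketch. This is a deliberately minimal bigram model, not how a real LLM works (real models use learned neural representations, not raw counts); the mini-corpus and function names here are hypothetical, chosen only to show that such a model can only emit words it has already seen.

```python
import random
from collections import defaultdict, Counter

# Hypothetical mini training corpus: the model's entire "world".
corpus = "the model predicts the next token from the training data".split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word was never seen."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Every prediction is a word from the training data; unseen words get nothing.
print(predict_next("the"))
print(predict_next("banana"))  # None: no "own words" outside the corpus
```

The sketch makes the commenter's point concrete: the output vocabulary is exactly the training vocabulary. Whether that analogy holds for large neural models is, of course, the debate in the rest of the thread.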

lubujackson•28 minutes ago
Don't undersell AI - it also synthesizes and recombines those summaries in a purposeful way. Otherwise it couldn't produce code that works in an existing codebase.

So it is able to process and act upon summaries and concepts. In other words, apply synthesis. What it can't do is understand what a useful result looks like without direction. So it could synthesize a billion pointless claims from source material, but we still need a human to know which ones matter (absent a specialized framework for judging this). If you provide LLMs with an objective and source materials, they are certainly capable of following threads of logic or building an argument backed by sources.

I understand the concerns about AI, but it is a powerful tool for discovery and synthesis.

in-silico•about 1 hour ago
This is true after pretraining, but reinforcement learning allows the model to discover strategies and ideas that weren't in its training corpus.
squidbeak•about 1 hour ago
What about Alphafold?

> But the AI doesn't even have "own words"; they are the training data's words.

If the AI understands those words, in what sense aren't they its 'own words'? Are you arguing that nothing but neologisms count?

kazinator•about 1 hour ago
I would say that I don't consider that to be an LLM.
jdlyga•about 2 hours ago
CS academia tends to lag behind industry practice. The research frontier can be very cutting edge, but course curricula, assignments, and institutional norms are slower and more conservative. That's usually manageable when the shift is something like cloud adoption, new tooling, or a new dominant programming language. But this particular industry trend, the use of AI in software development, is massive and fast moving (especially the growth of agentic workflows over the last 6 months). And we're only now understanding where everything fits in and what its limitations are.
frozenseven•27 minutes ago
Journal articles are sometimes years behind. There are still papers coming out that use GPT-3.5 (!) for their main result. These days I'm basically only reading arXiv preprints (and whatever is trending on GitHub).
laughingcurve•about 2 hours ago
As an academic, I found this article a fantastic position piece. I loved it and enjoyed reading it even if I didn't agree 100%. Thank you for sharing.
34ajHa•about 2 hours ago
"P.P.S. That is, entirely generated based on my artisanal, hand-crafted human social media posts and thoughts on the topic. So who wrote it, really? You tell me."

We can't, since it is a vapid, unsourced, AI-mania-fueled piece that could have been written by AI.

I suppose the associate professor wants AI funding.

buffer_overlord•about 10 hours ago
and web's dead baby....web's dead.
dyauspitr•about 3 hours ago
Is this the new clickbait? AI-written AI scare papers.