Discussion (11 Comments)
If all scientists suddenly do nothing all day but play with AI, all research grinds to a halt!
So it is able to process and act upon summaries and concepts; in other words, to apply synthesis. What it can't do is recognize what a useful result looks like without direction. It could synthesize a billion pointless claims from source material, but we still need a human (or a specialized framework) to know which ones matter. If you provide an LLM with an objective and source materials, it is certainly capable of following threads of logic or building an argument backed by sources.
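The "objective plus sources" workflow this comment describes can be sketched as a simple prompt-assembly step. The function name, prompt wording, and example sources below are illustrative assumptions, not any particular provider's API; the actual model call is left out since any client would do.

```python
# Illustrative sketch: pair a human-supplied objective with numbered source
# excerpts, so the model argues from the sources rather than judging
# relevance on its own. All names and wording here are hypothetical.

def build_grounded_prompt(objective: str, sources: list[str]) -> str:
    """Assemble a prompt combining an objective with numbered sources,
    so that claims in the output can be traced back to a source."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        f"Objective: {objective}\n\n"
        "Source material:\n"
        f"{numbered}\n\n"
        "Using only the sources above, build an argument toward the "
        "objective. Cite sources by number, e.g. [1]."
    )

# Hypothetical usage with made-up source snippets:
prompt = build_grounded_prompt(
    "Assess whether the claimed effect is supported",
    ["Paper A reports a 40% effect in vitro.",
     "Paper B found no effect in vivo."],
)
```

The point of the structure is the one the comment makes: the human supplies the objective and the source set, and the model's job is reduced to synthesis over them.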
I understand the concerns about AI, but it is a powerful tool for discovery and synthesis.
> But the AI doesn't even have "own words"; they are the training data's words.
If the AI understands those words, in what sense aren't they its 'own words'? Are you arguing that nothing but neologisms count?
We can't, since it is a vapid, unsourced, AI-mania-fueled piece that could have been written by AI.
I suppose the associate professor wants AI funding.