
Discussion (72 Comments) | Read Original on HackerNews
I was among those who, when Khanmigo was first announced, were pretty excited about its potential. I then waited for data on the results... and kept waiting... and kept waiting. And now, four years later, this is apparently what we are going to get. I think that this is enough for me to decide that Khanmigo, regardless of whether or not a student actually engages with it, doesn't make much learning difference. At some point, the absence of (reported) data becomes data in itself.
I still believe, in principle, that AI tutors could be massively helpful for learning. But apparently we haven't yet figured out how to take that principle and turn it into reality.
AI is perfectly capable of teaching you quantum mechanics if you understand music theory. However, unless you have a full understanding of music theory, you'll need to explain to the machine what you know, and that takes trial and error that most students won't bother with.
Honest question: how many of you tech bros have used this platform with your own children? If you won't dogfood it, quit claiming it'll help the disadvantaged. Please.
It’s going to be quite hard to motivate students to learn now that they know answering can be automated.
I thought Sal's revolution was the idea of flipping the script on primary school learning: in-class homework & at-home video lessons.
I'm not surprised. Students are not rewarded when they ask _curious_ questions--rather, they're admonished for not paying sufficient attention.
Personally, my first use of ChatGPT was to ask tangential questions on JavaScript while taking a LinkedIn learning course on VueJS. I found ChatAI an excellent substitute for Reddit and StackOverflow, which is how I would have followed these inquiries before. Of course, I'm not a primary-school-age learner. I had to learn _How To Learn_ from experience.
I still remember when Khan Academy first came out, there was talk that teachers would go obsolete because teaching would become centralized and delivered over video.
Khan Academy to me is still just a YouTube channel trying very hard to be something more.
The thing is people want more than material. They want the material to be accredited and examined. Otherwise there is no demonstrable credibility from doing it.
And there's a whole world out there of higher-quality material that has that accreditation and examination structure around it. And it existed, sometimes for decades in the case of The Open University, before Khan Academy appeared. But it costs money.
Well, in practice it's still about the amount of time a pupil spends training with the right oversight, and that is precisely the bottleneck that hasn't been alleviated.
Who could have guessed.
A paragraph from [0] makes it seem that students understand that LLM use doesn't lead to learning, but they still do so. Do they not see effort put into learning as worthwhile?
I myself use LLMs for learning (using ChatGPT's study mode, for instance, r.i.p.) and can see that there's a right way to use it: you reach for it when you hit a wall, not to avoid the friction of developing an understanding. From what I understand though, most LLM use for learning is just the LLM used as a tool for cheating. Even TFA mentions something of the sort:
The article attributes a _skill issue_ as part of the problem, but how much of that is a motivation or awareness issue? How do you make students realize that learning is worth it?
[0] https://arstechnica.com/science/2026/04/to-teach-in-the-time...
[1] https://www.reddit.com/r/Teachers/
But "students will use the cheating machine to cheat" was obvious from the release of ChatGPT3. There was never some period of time where AI looked like it was a net positive for students only to be revealed to have an unexpected harm.
Even from the folks who claim to use LLMs to learn rather than cheat or avoid work, I've seen so many people admit that they are actually using it to harm themselves. "Oh, I only ask ChatGPT for the answer for really hard problems." Yeah man, doing the hard problems is how you learn.
>Unlike other AI tools such as ChatGPT, Khanmigo doesn’t just give answers. Instead, with limitless patience, it guides learners to find the answer themselves. In addition, Khanmigo is the only AI tool that is incorporated with Khan Academy’s world-class content library that covers math, humanities, coding, social studies, and more.
The first differentiation is literally just prompting (if at all). Nowadays you can tell any chatbot to behave that way. The second one may have been an edge before tool use was widely common, but with all chatbots now having access to the internet and code execution, it seems like this has also become a dud. This product was a nice idea on paper, but the fast technical evolution of the field has largely left it in the dust.
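The "it's literally just prompting" point above can be illustrated with a minimal sketch: a system message that tells any chat model to behave Socratically. The prompt wording and function names here are hypothetical, not Khanmigo's actual prompt; the message format follows the common chat-completion shape.

```python
# Hypothetical Socratic-tutor system prompt; any chat model accepting a
# system message can be steered this way.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never give the final answer directly. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and only confirm an answer the student has produced themselves."
)

def build_tutor_messages(history, student_msg):
    """Assemble a chat-completion message list with the tutoring
    behavior injected as the system message."""
    return (
        [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": student_msg}]
    )

messages = build_tutor_messages([], "What is 3/4 + 1/8?")
```

The resulting `messages` list is what would be sent to any chat-completion endpoint; the "don't just give answers" behavior lives entirely in that first message.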
They had really cool math videos and got given too much money, that's about the story.
As other people have noted, asking, a.k.a. _typing_, questions, especially math-type ones, is fatiguing, and there's no substitute for pen and paper and thinking hard.
KA would be better off using AI on the supply side (but heavily curated) to have more assignments, or better assignments in some sections.
But it's important to recognize KA for what it is, and it's an excellent way to have some sort of a basic curriculum, especially when self-studying, and all of the instructors have great teaching personalities, as far as I can deduce from the approach in the videos.
To give an example, I have a friend who learned system design through Claude in order to get a job interview (and he got really good at system design), while I have another friend who copies and pastes ChatGPT responses in order to get a B on a reflection assignment.
This highlights that there is a legitimate use case for personalized learning and growth via AI, but these are the people who seek knowledge with or without AI. Whereas the majority of students actively try to do as little as possible on assignments, even if they get zero value out of them.
Modern AI has made me a more productive teacher: I produce higher-quality material and have more time for research.
But the impact on most students is negative. It is another thing to engage with, which they won’t unless forced. The only way to learn is to do the work yourself. An AI tutor can get you unstuck faster, but that’s typically bad. Learning to be productively stuck on something for days without making visible progress is an important skill that most people never learn.
I like struggling with interesting problems. But spinning your wheels is not progress. So, IMO, getting you unstuck is a generally good thing.
The issue in my country is that people equate education with getting a safe job. 20 years ago, you needed a high-school degree in social science to get a government job. 10 years ago, you needed a bachelor's in social sciences to get the same job. 5 years ago, you needed a bachelor's in economics/engineering to get the same job. Now, because of recessions, this is stretching to master's degrees.
You can't expect people who just want a job and a comfortable life and NEED to go to uni for this to want to be curious and want to learn.
Feels like whatever tool they'd be given, they'd be ahead anyway. What's more worrying IMHO is whether the remaining 85% are faring even worse than they would have before, because they are learning even less, not just more slowly than the 15% learning faster. Namely, is the gain for the few a loss for the majority?
As for the other question, it's mixed. I think about 20% of students understand that they are fucked if they just delegate it all to LLMs; they still go through the ropes and show up to class, but do the minimum. However, most are off the deep end to various degrees. I have seen students with 5 different 3000-line files for 5 questions in the same lab, where each file differs by only 3 lines of code. This never happened even when students cheated by accessing old labs online or plagiarizing before.
I believe that what will happen (because universities move really slowly on policy and education around LLM use) is this: pre-LLM, a university produced a normal distribution of skills upon graduation. A company could trust that someone with a degree knew X and Y. With this, however, you have more of a bimodal distribution, where some know nothing and some know it all, so the trust in universities deteriorates. I think we will see many more IQ-style/practical tests in hiring processes as trust that a degree equals something falters.
Ignoring whether or not this is a good idea in the first place, what about inverting the loop? Have the robot drive the interaction.
It's been fascinating to watch. My kids are really into Slay the Spire, and it had a discussion about a decision tree they use when fighting one of the enemies, and then it used that to bridge into writing some Python code and walking them through it. Another time, with dinosaurs, it went with them through the K-Pg extinction event and what really killed the dinosaurs (the kids thought it was the explosion); it walked them towards the sun dimming, and why food getting more scarce filtered for small mammals, our ancestors, and smaller dinosaurs.
On the other hand, I was playing a lot of Slay the Spire a few years back, and I would love to talk about it with my kids while they play. Going from that: is it not the job of the parent to explain why the dinosaurs are extinct?
If you can’t articulate what you want, it becomes a guessing game.
How about completing the loop? Pose subject matter questions to them throughout the day, maybe via something like mobile push, collect their answers, immediately grade their results, and then actively reward them for performance.
All of the things brick and mortar schools are uniquely bad at.
An AI-based education system should have embedded in it "I am here to teach this person Geometry. Here is a list of the topics to cover, with a breakdown of steps for each including an intro section, a study section, a test section, and the meta material to go along with it.
That would work.
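The structure that comment describes, a subject broken into topics, each with intro, study, and test steps, could be sketched as a simple data model the tutor walks through. All names here are hypothetical, not any actual Khanmigo or KA schema:

```python
from dataclasses import dataclass, field

@dataclass
class TopicUnit:
    """One topic with the step breakdown the comment describes."""
    name: str
    intro: str   # intro section material
    study: str   # study section material
    test: str    # test section material

@dataclass
class Curriculum:
    subject: str
    units: list = field(default_factory=list)

    def next_unit(self, completed):
        """Return the first unit the learner hasn't finished, or None
        when the whole subject is done."""
        for unit in self.units:
            if unit.name not in completed:
                return unit
        return None

geometry = Curriculum("Geometry", [
    TopicUnit("angles", "intro", "study", "test"),
    TopicUnit("triangles", "intro", "study", "test"),
])
```

The point of the sketch is that the curriculum, not the chat model, would own the sequencing; the model would only fill in each step.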
The people who work in education don't have this issue; the people who work in tech and assume that gives them expertise in education do.
The poor engagement of the KA bot becomes clear: a teaching technique is not an education system.
So... yeah, it got old quickly. Genuinely cool for a bit, but now "we" as users just want good UX. Now give me a FAQ that I can search through, then an email if it's not in there.
PS: FWIW, I do believe that, in a long-tail fashion, for a few users who are not into scripting and might not be developers (or believe they could become one), it could help find a few very niche use cases with solutions.
His hottest take is we're already close to the optimal process for learning, so technology isn't going to improve it. Learning takes work, and no technology can do the work for you.
[0] https://www.youtube.com/watch?v=0xS68sl2D70
AI is great for the curious. But it's not yet at the point where it can proactively engage with students to generate interest.
I remember an educator ranting to me a long time ago that the only data-proven ways to meaningfully improve educational outcomes were to reduce classroom size and make sure kids got enough sleep and were fed well enough; everything else was just a waste of time.
Ask teachers that have been teaching for 10 years. Ask the professors how today's kids are different than the ones of yesteryear.
The move to de-tech the classroom will eventually help out I expect, but keeping kids (and adults!!!) from using cognitive shortcuts so they can develop their own sense of what's reasonable instead of taking information from a bought-and-paid-for oracle is going to remain a problem.
Dear Lord, how is this any different from Microsoft sticking Copilot or Google sticking Gemini in every single offering? They're literally saying that people aren't using the chat bot enough so they're going to force it on people inside the product.
I think what should be taught is metacognitive ability: how to retrieve knowledge, how to ask the right questions towards a certain goal. Knowledge itself is easily accessible with AI. Now the difficult part is the ability to discern actual knowledge from LLM hallucination BS, the ability to retrieve the required knowledge given a scenario.
This still requires some foundational grounding; you can't detect bullshit with zero context. But the balance shifts from memorization to retrieval, iteration, verification. Honestly, I think it is more about critical thinking and philosophy.
1. It isn't
2. As you acknowledge, you need some 'foundational grounding', but the amount needed is quite a lot
3. The best way to teach metacognitive (and all other) skills is within a context
> the balance shifts from memorization to retrieval, iteration, verification
This has been trumpeted with every poorly-thought-out educational change, and it's a marker of unfamiliarity with the space. Memorisation has never been the focus; it's always been about the other skills, and (some) memorisation is useful as part of that.
That is a warning sign if ever there was one.
The biggest thing is motivation. First off, if Khanmigo requires them to type and read everything, that's going to get tiring fast for most kids. But I don't know how you could do voice in a school setting - mine uses STT/TTS, but with 20 kids in a room, it'd be chaos - STT accuracy and diarization with 2 is already really challenging.
Motivation is helped a bit by following their interest, but it seems like KA is having trouble guiding the kids when they prompt it that way. That was a pretty big issue with mine early on - the kids would talk to it for an hour about whatever topic they were interested in at the time, but it would never branch into something new.
The tutor I'm working on solves it by having a concept graph that covers a lot of learning, from the basics like math, dinosaurs, etc to other developmental topics like 6 year old boundary-pushing humor, and two LLM threads - one that handles the conversational turns, and another one in the background that strategizes and steers the conversational thread by looking at the concept graph connections and considering how ready they are for each, and then injecting steering notes into the conversational thread. Basically system 1 and system 2 thinking. And after sessions, it'll make a basic plan of where to start next time, and what might be interesting to offer up.
I mentioned this in another comment, but I've been really pleasantly surprised at the quality of the tutoring, especially when it bridges into new topics. One of my sons is really into Slay the Spire, and at different times it's used that as a launching-off point into probabilities, decision trees, Python code of the algorithms he thinks about as he's facing different enemies, and general strategies on different facets. My other son was really into sharks, which it has bridged into extinct sharks like megalodon, how scientists derive how it looked given cartilage's lower propensity to fossilize, dinosaurs and their fossils, the K-Pg extinction event, and how food scarcity filtered for smaller animals like the ancestors of birds and our small mammalian ancestors. And a whole bunch of other topics.
It's been pretty great in that way, but my biggest open question at the moment is how to get them to engage with it on their own on a more regular basis - they go to it occasionally for random questions, but to get good coverage of that huge knowledge graph would take much more. And fundamentally, I think that human engagement still just has a number of important aspects to it that it's lacking, and I'm not sure if it's possible to replace those well enough.
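The two-thread pattern that commenter describes (a conversational thread plus a background strategist reading a concept graph and injecting steering notes) can be sketched in miniature. Everything here, the graph contents, function names, and note format, is hypothetical and stands in for LLM calls:

```python
# Toy concept graph: each topic maps to adjacent concepts the tutor
# could bridge into. Contents are illustrative only.
CONCEPT_GRAPH = {
    "slay_the_spire": ["probability", "decision_trees"],
    "probability": ["fractions"],
    "sharks": ["fossils"],
}

def strategist(current_topic, mastered):
    """Background 'system 2' pass: pick an adjacent concept the learner
    hasn't mastered yet and phrase it as a steering note."""
    for nxt in CONCEPT_GRAPH.get(current_topic, []):
        if nxt not in mastered:
            return f"Steering note: bridge from {current_topic} to {nxt}."
    return None  # nothing adjacent left to steer toward

def conversational_turn(student_msg, steering_note):
    """Foreground 'system 1' pass: in a real system this context would be
    sent to the chat model; here we just assemble it."""
    context = [{"role": "user", "content": student_msg}]
    if steering_note:
        context.append({"role": "system", "content": steering_note})
    return context
```

The split matters because the strategist can run between turns (or between sessions, for the "plan where to start next time" step) without adding latency to the conversation itself.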
Which explains the poor engagement you observed. To me it seems like a _technique_ I'd expect a skilled educator to deploy, sparingly, in narrow use cases, when it's necessary to probe a student's interest.
Open to suggestions for how to improve it!
> “Students aren’t great at asking questions well.”
In my interactions with my kids' public school and their teachers, their goal is to ram content down their throats and test for retention, not foster an environment open to questions.
Had a teacher claim straight up that they don't believe the system works and are just in teaching for the benefits and summer vacation.
IMO Sal Khan's revolution hasn't happened because the adults in charge right now are ignorant and inept but incredibly vain nonetheless
Is that actually true though? Average American students (especially those in the public school system) are not excellent test takers, and they're even worse at rote memorization. If this is actually the goal they're not achieving that either.
They could quit and free up the slot for someone who does care
That said, I do think it's particularly hilarious that KA's strategy for students not wanting to use the product is to make the product more integral to the experience.
…sounds a lot like Investors versus those who actually perform “work” as it’s defined in research literature.
But I’m sure a shoe company pivoting to AI isn’t a sign of a bubble about to burst, nope.
Who would have thought?