Discussion (83 Comments)
> The Chromebooks, which the students use in every class and for homework, came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: “Help me write.” If she is starting work on a slide-show presentation, the prompt is “Help me visualize.” She shoos away these interruptions, but they persist: “Help me edit.” “Beautify this slide.” The image generator is there, if she’d ever wish to pull the plug on her imagination. The Gemini chatbot is there, if she ever wants to talk to no one.
I'm not as anti-AI as the author of the piece, and I think that AI could have a role as a teaching aid. It's infinitely patient and it's able to adapt to a student's needs better than a textbook. Still, I hate the idea of students being encouraged to entirely offload their cognitive work onto an online service rather than think for themselves. The point of making fifth graders write essays, make art, design presentations, etc. isn't the end product, it's that they now have the experience of having done the assignment. I would rather see students get taught how to think creatively, analyze a piece of writing, coherently explain an opinion, or draw a picture on their own, instead of giving this up in exchange for the nebulous skill of being "AI native" (aka being able to ask a computer to produce work for you).
To be fair, making slideshows sucks and I've never met anyone that actually enjoys the experience. I'm sure some people out there enjoy it, but anything that gets me out of PowerPoint faster is a win in my book.
But I've also seen situations in which the presenter doesn't care, or the slides are just a backdrop for some better communication/selling/maneuvering they're doing, or they know the information is bogus or the presentation pointless, or they know the audience doesn't care, or for everyone it's just a meeting to be able to say you had a meeting.
I'd guess that at least half the current use of LLMs is for "cheating on your homework" tasks, in which the person prompting it simply doesn't care -- whether it's for schoolwork, professional work, or socializing.
I haven't seen his actual slide deck anywhere online though.
It's not about being anti-AI, it's about being anti-distractions in education.
These companies don't want to raise "AI literacy", they want to get to their future users young.
'AI literacy' is very much not that at all; it's just state-mandated brain rot.
In third grade I got taught how to type properly and hit 60-70 WPM, which is roughly where I still type to this day when doing tasks that require thinking instead of just doing a pre-compiled speed benchmark.
Kids really need to learn the fundamentals of things, but on the other hand some of the same arguments came out when calculators were going mainstream and classes just evolved to take the new tools into account. I think eventually we'll see the same thing happen with AI, but I'm not sure what that will look like for every case yet. Probably more paper and pencil work tbh
A far better computer literacy course was the one I took at Sacramento City College as a dual-enrollment student in summer 2004, which was the prerequisite to programming courses. Even though I already knew how to program in QBASIC, Visual Basic 6 and C++, I still had to take this course. Anyway, we learned very basic computer architecture (the roles of the CPU, memory, storage, buses, etc.), the role of the operating system and the difference between it and applications, computer networking, the Web (with an introduction to HTML and CSS), the history of computing, and a brief introduction to programming, with exercises in C++ and even Scheme (the professor showed us his copy of SICP and threatened students who talked during his lectures with Scheme homework assignments).
It was a fun class. The professor knew I was a Linux fan, but I had a hard time downloading a distro at home due to my having dial-up. He gave me some FreeBSD install CDs. I became a fan of FreeBSD since, and exploring FreeBSD led me down a rabbit hole where I devoured the history of Unix and BSD. By the time I graduated from high school, I wanted to be a systems software researcher like Ken Thompson and Dennis Ritchie. This shaped my early career; I’ll never forget meeting Marshall Kirk McKusick my senior year of college at USENIX FAST 2009.
It turned out that the computer literacy course I was required to take at Sacramento City College, despite already being computer literate, had far-reaching impacts on my life.
01010101 0123456789ABCDEF AND OR XOR, ..
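That shorthand packs in binary, hexadecimal, and the bitwise operators those courses drilled. For anyone who never took such a class, a minimal Python illustration of the same fundamentals:

```python
# Binary and hexadecimal literals, plus the bitwise AND, OR, and XOR
# operators alluded to above. Purely illustrative.
a = 0b01010101   # 85 in decimal, 0x55 in hex
b = 0x0F         # 15 in decimal, the low nibble mask

print(a & b)   # AND:  only bits set in both       -> 5
print(a | b)   # OR:   bits set in either          -> 95
print(a ^ b)   # XOR:  bits set in exactly one     -> 90
```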
I feel that my experience was far worse and bordering on the absurd and bureaucratic. We spent years following instructions, taking screenshots of us opening specific windows and dialogs in Office etc, saving all these screenshots into a Word document, and then printing the document.
To be clear, it was every single action you took. Moved the mouse to "Insert"? Don't click it yet, take a screenshot of your mouse on the "Insert" button, and then click it, and take a screenshot of the menu that opened. Then, take more screenshots of moving your mouse to buttons and lists in dialogs that opened. Then, take a screenshot of the document with the thing you just inserted.
Now, write several paragraphs in detail about what you just did. Print everything, and that includes both the document you just created for the exercise and the document describing the document-creation exercise, with all its dozens of screenshots.
Each individual printed piece of paper needed to be kept in a plastic wallet, which was then kept in a document folder. In the end we had multiple of these document folders, which were without a doubt a complete waste of paper and time.
The argument was that it was needed in case the exam board decided to double-check the teachers' scores, which I think never happened even once. No reason was ever given for why each individual piece of paper needed to be put in a plastic wallet.
This was during a period of time where CS education at schools had essentially totally vanished from the curriculum for decades, it was added back after I'd finished school.
Words cannot describe how much I despised the entire ordeal. There simply are not enough words to describe the total absurdity of an IT class requiring screenshots of clicking buttons and being printed onto paper.
While the teacher was trying to explain how to add PowerPoint transitions, I was writing scripts that would fetch currency conversions and graph them because I was that bored. One time I wrote a terrible "chat" system on some free shared HTML/PHP hosting, using meta-tag-based auto-refreshing of the chat history, so a few class friends could talk across the room.
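The meta-refresh trick behind that classroom chat is simple: the page tells the browser to reload itself every few seconds, so everyone sees new messages without any JavaScript. A minimal sketch of the page-rendering half, in Python rather than the commenter's original PHP (`render_chat` is a hypothetical helper, not anything from the original):

```python
import html

def render_chat(messages, refresh_seconds=5):
    """Render the chat history as an HTML page that auto-refreshes.

    The <meta http-equiv="refresh"> tag makes the browser reload the
    page every `refresh_seconds` seconds, pulling in new messages.
    """
    rows = "\n".join(f"<p>{html.escape(m)}</p>" for m in messages)
    return f"""<!DOCTYPE html>
<html><head>
<meta http-equiv="refresh" content="{refresh_seconds}">
<title>chat</title>
</head><body>
{rows}
<form method="post"><input name="msg"><button>Send</button></form>
</body></html>"""
```

On the server side, a POST handler would append the submitted message to a shared file or list and re-serve this page; escaping each message with `html.escape` keeps classmates from injecting markup.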
[1] https://www.theverge.com/ai-artificial-intelligence/920401/g...
The coyote is already running beyond the cliff so indoctrinating kids won't save them from an AI winter 6-18 months away.
https://www.youtube.com/watch?v=pohXWbMrXZI
After all, it's nigh magical stuff. A machine that talks to you in common language and is almost always right. If you weren't already prepared for it, you would trust it implicitly. When Wikipedia first came onto the scene, people behaved this way there too. They would believe it was entirely correct. But at some point there was a concerted effort in pedagogy to say things like "You can't cite a Wikipedia article" and that one simply-remembered rule allowed for children to be forced to treat it as an aggregator.
Naturally, setting up a fund for this is nearly always a bad structure. Earmarked funds have a bad habit of ending up being written to be primarily a vehicle to transfer money to pet constituencies. Teachers unions and so on are always advocating for these because that's what funds the complex ecosystem of teacher educators, the certification and curriculum development programs, and so on. This is just social welfare by a different means. Funds should be flexibly used to meet some outcome. Earmarked funds have a habit of ratcheting up. When there is no need for programs, they continue to exist, and bleed money from the actual work product of education - informed students.
I get why these articles are always written in this style but I really would appreciate some better news media. Students hate a lot of things. Their opinion is mostly moot as to whether a subject is a good thing to learn or not. And all this polemic style of "shoehorn" and so on is completely unnecessary, and just makes me treat this whole thing in the realm of some partisan Twitter post.
But the one thing I did appreciate is a link to the text of the bill.
The best results I've read about on here using LLMs have to do with prompt mastery, I think.
When I was younger, to solve a problem, we had to memorize a large amount of information. Or know someone who does. Or visit libraries and pray they have a book on what you need.
Then came the internet. All of that memorizing was replaced by web searches. You just focus on solving the problem, figuring out what you don't know and searching for that.
Now, it feels like we're automating the searching, connecting the dots and most of the problem solving. We focus on the high level problem description, verification of the results.
I wonder what they'd be adding to this curriculum.
> Now, it feels like we're even offloading
Almost.
Of course, they probably plan to do to education what iPads did to education: deskill children. Apple successfully abliterated the concept of a file from a generation of students by making them do their computing in a straitjacket. I can only imagine how an AI-first or AI-only educational curriculum could make kids even worse at using computers.
Like the time I got given a swelling tablet at work to dispose of and had to go through phone tag to get an answer on what to do with it or how dangerous it was. And my coworker asked "if [I] tried asking AI?" I said I am not relying on ChatGPT for something that might explode, I'll wait for the person who's paid to tell me about this thing that might explode.
But are NSF grants really necessary for this? To what degree is this funneling taxpayer money to buy ChatGPT subscriptions and advertise to students by getting them to use AI in the classroom?
How about funding something useful? Like real literacy, as in reading books? That will help kids out far better than "AI literacy".
Both kinds of students will exist.
It's actually not.
It's easy to get an AI to say a lot about a subject, but that doesn't mean anything the AI said was true. There's a significant risk that the AI has simply hallucinated the information, and now you "know" a bunch of false ideas about the subject, which is worse than not knowing anything about it.
Yeah and I'm betting there's gonna be a whole lot more "press the button to have all your work done for you" students than "work hard" students. FFS even before all this there's been an alarming number of students attending college who have to take remedial classes.
Unfortunately, the AI literacy big tech companies want to push won't align very well with the AI literacy kids need. It'd be like ad literacy for K12 being pushed by Google - obviously what's delivered would not match what the kids actually needed.
It's just a kind of training that's receded into the background as "normal", and that many of us who enjoy recreationally typing out comments on the Internet self-taught.
Writing exercises that children produce in school are immediately thrown into the trash after being graded and reviewed. The product is supposed to be better educated children, not better written papers.
But I thought the models are so good we don’t need humans anymore?
Just imagine what this will do to critical thinking, interpersonal relationships and family dynamics in a country where illiteracy is rapidly climbing. I don't think it's a stretch to write that if the unrestrained capitulation in terms of societal costs towards big tech continues, we're setting ourselves up for {generational, class-based} conflict that will rip our country to pieces.
And learning when other people (AI salespeople, say) are blowing smoke is also an important skill. Again, I'm not sure that AIs are great at teaching that.