Discussion (14 Comments)

jvican•about 3 hours ago
If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers this curriculum in a lot more depth, and introduces you to many of the theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
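For a taste of the scaling-laws material such a course covers, here is a back-of-envelope sketch of the usual rules of thumb: training compute C ≈ 6·N·D FLOPs, and the Chinchilla-style ~20 training tokens per parameter. Both are approximations used for illustration here, not material from the course itself.

    # Back-of-envelope check of the Chinchilla-style rule of thumb (a sketch):
    # compute-optimal training uses roughly 20 tokens per parameter, and
    # training compute is approximately C = 6 * N * D FLOPs.
    def compute_optimal(n_params, tokens_per_param=20.0):
        tokens = tokens_per_param * n_params   # D: training tokens
        flops = 6 * n_params * tokens          # C: total training FLOPs
        return tokens, flops

    for n in (125e6, 1.5e9, 70e9):
        tokens, flops = compute_optimal(n)
        print(f"{n / 1e9:5.2f}B params -> {tokens / 1e9:6.0f}B tokens, ~{flops:.1e} FLOPs")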
the_real_cher•about 3 hours ago
How does one get the lectures? I don't see an option for any lectures.
eftychis•about 3 hours ago
azangru•about 1 hour ago
One goes to YouTube and searches for cs336?
NSUserDefaults•about 2 hours ago
Been doing it since the day I was born. The beginnings were hard but I’m getting there.
hliyan•about 1 hour ago
You've actually been primarily training a physics model, with an LLM attached to it.
ofsen•about 2 hours ago
This looks like an exact copy of this Andrej Karpathy video (https://youtu.be/kCc8FmEb1nY), but in written format. Am I wrong?
baalimago•about 3 hours ago
Train your LM from scratch*

I doubt you have a machine big enough to make it "Large".

mips_avatar•about 2 hours ago
You can fully train a 1.6B model on a single 3090. That's a reasonably big model.
electroglyph•about 2 hours ago
You can train it, but not fully.
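Whether a 1.6B model "fully" fits on a 24GB card comes down mostly to optimizer state. A rough sketch, assuming mixed-precision AdamW (fp16 weights and gradients plus fp32 master weights and two moment buffers) and ignoring activations:

    # Rough VRAM estimate for fully training a 1.6B-param model (a sketch;
    # activations are ignored since gradient checkpointing can mostly trade
    # them for compute).
    N = 1.6e9  # parameters

    bytes_per_param = {
        "fp16 weights": 2,
        "fp16 gradients": 2,
        "fp32 master weights": 4,
        "fp32 Adam m": 4,
        "fp32 Adam v": 4,
    }

    total_bytes = sum(bytes_per_param.values()) * N
    print(f"~{total_bytes / 2**30:.1f} GiB for params + optimizer state")  # ~23.8 GiB
    # A 3090 has 24 GiB, so plain mixed-precision AdamW is borderline at 1.6B;
    # 8-bit optimizers or CPU offload are what make it comfortable.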
nucleardog•about 3 hours ago
Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.

But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

This is about learning concepts, and the rest of this is mostly moot.

On the pedantic-or-wrong note: what is the documented cut-off for a "large" language model? Because GPT-2 was, and is, described as a "large" language model, and it had 1.5B parameters. You can just about get a consumer GPU capable of training that for about $400 these days.

baalimago•14 minutes ago
Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Conversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?

In my own very humble opinion, it becomes "Large" when it's beyond the reach of non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where high-end gaming GPUs cut off).

And by the way, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training run. Give it a go! I know I did; it's an order of magnitude away from being feasible.
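For rough orders of magnitude, a sketch using the C ≈ 6·N·D training-compute estimate; the sustained-throughput numbers below are ballpark assumptions, not measurements:

    # How long a compute-optimal run of a ~150M-param model might take,
    # assuming C = 6 * N * D FLOPs. The sustained throughputs below are
    # ballpark assumptions, not measurements.
    N = 150e6            # parameters
    D = 20 * N           # ~20 tokens per parameter
    C = 6 * N * D        # ~2.7e18 FLOPs total

    throughput = {
        "CPU, ~0.5 TFLOP/s sustained": 0.5e12,
        "consumer GPU, ~50 TFLOP/s bf16": 50e12,
    }
    for name, flops_per_s in throughput.items():
        print(f"{name}: ~{C / flops_per_s / 3600:.0f} hours")
    # -> roughly 1,500 hours on CPU vs ~15 on GPU: about a 100x gap.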

Malcolmlisk•about 1 hour ago
Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".
improbableinf•about 1 hour ago
Opus 4.7 is non-usable for the tasks I have, but it's still considered an LLM.

And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.

hiroakiaizawa•about 3 hours ago
Nice. What scale does this realistically reach on a single machine?
lynx97•about 2 hours ago
Model: 36L/36H/576D, 144.2M params

Runs on a Blackwell 6000 Max-Q using 86GB of VRAM; training supposedly takes 3h40m.
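As a sanity check on that config, the standard non-embedding transformer estimate of 12·L·d² parameters lands close to the reported count:

    # Rough parameter count for a 36-layer, 576-dim transformer (a sketch;
    # 12 * L * d^2 counts attention + MLP weights and ignores embeddings,
    # norms, and biases).
    L, d = 36, 576
    non_embedding = 12 * L * d ** 2
    print(f"~{non_embedding / 1e6:.1f}M non-embedding params")  # ~143.3M
    # Close to the reported 144.2M, which suggests a small or tied vocabulary.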

iamnotarobotman•about 4 hours ago
This looks great as a first introduction to training LLMs, and it seems simple enough to try locally. Great job!