
Discussion (42 Comments) · Read Original on HackerNews

DespairYeMighty•1 day ago
She was a CS PhD and somewhat itinerant professor with a long career who wrote a prominent CS paper about computer memory, Hitting the Memory Wall: Implications of the Obvious

https://dl.acm.org/doi/10.1145/216585.216588

on her obituary page, you will see a prominent "Memory Wall" link that is NOT a reference to her paper, but a place for sharing your thoughts about her life

deater•1 day ago
you wouldn't believe how many people cite that paper as "Wulf et al." when that's practically more characters than saying "Wulf and McKee"

I notice these things a bit more as she was my PhD thesis advisor

marricks•1 day ago
There's only two authors! That's so rude!
setgree•1 day ago
It’s also not correct; et al. is conventionally applied to three or more authors (it means “and others,” plural)
bjourne•1 day ago
Why? For all the automatic academic score tracking systems it doesn't matter one bit if it is Wulf et al. or Wulf and McKee.
SecretDreams•1 day ago
et al should never be applied when only two authors!!!
fsckboy•1 day ago
...unless the second one is named Alfred and is an informal person
thaumasiotes•1 day ago
> you wouldn't believe how many people cite that paper as "Wulf et al." when that's practically more characters than saying "Wulf and McKee"

    Wulf et al.
    Wulf and McKee
35% less isn't usually described as "practically more".

It'd be interesting to see someone use the unabbreviated form; I have a hunch they wouldn't know to say "et alia".

tomjakubowski•1 day ago
How did you arrive at 35% less? The first is 11 characters, the second is 14, and 3/14 is 21%.
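The arithmetic in this exchange is easy to check directly (an editor's illustration, not part of the thread):

```python
a = "Wulf et al."
b = "Wulf and McKee"

print(len(a), len(b))  # -> 11 14
# Percent saved by the shorter form, relative to the longer one:
print(round((len(b) - len(a)) / len(b) * 100))  # -> 21
```

So the abbreviation saves 3 of 14 characters, about 21%, not 35%.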
b473a•1 day ago
Yeah tenure is nice but there's just a hint of mystery behind the title "itinerant professor." Like a wizard that just pops up in places to work computer science magic.
seanmcdirmid•1 day ago
I was a PhD student when Sally was a professor at Utah. I get the feeling that a lot of people came together for an interesting project (systems/memory related, I can't even remember the name ATM) and dispersed when the project was at its later stages. I think it's common in our field for many PhDs to work as professors for just a few years and not commit to it as a career.
swyx•1 day ago
bit ironic i guess but unintentionally fitting
deater•1 day ago
There are probably so many stories out there of interesting things she did. A few are briefly referenced at her old website here: https://web.archive.org/web/20060116130917/http://www.csl.co...
codaea•1 day ago
Her babysitter was Mike Bloomfield!? (the astronaut)
tkhattra•1 day ago
rip. i got a chuckle out of this trivia on her old website:

> Rob Pike didn't really name my favorite editor after me.

akkartik•1 day ago
My dissertation was on the memory wall, and I never heard of her :/ RIP
AnimalMuppet•1 day ago
Could you (or someone else in the know) give us a brief overview of the current state of the memory wall issue?
Veserv•1 day ago
High bandwidth memory (HBM) can deliver TB/s of memory bandwidth and has completely shattered the memory wall for individual cores/compute elements. The only way for compute to keep up is going wide and parallel as seen in GPUs.

Despite this, massively increased memory bandwidth does not translate to material performance improvements on non-parallel compute tasks because few tasks are actually memory bandwidth bound, instead being memory latency bound.

The best-known general solutions for improving memory latency are per-compute-element memory caches. Unfortunately, these increase the complexity and size of your compute elements, forcing you to reduce their number, yet a large number of compute elements is the only way to saturate HBM memory bandwidth.

To keep up, the best-known techniques are either to batch algorithmically, which lets you go wide using vector/batch instructions, or to go the GPU route with memory-latency-hiding parallelism.
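The latency-bound vs. bandwidth-bound distinction described above comes down to whether loads depend on each other. A toy sketch (an editor's illustration, not from the thread; names and sizes are arbitrary) of the two access patterns — in Python the timing effect is muted, but the dependency structure is the point:

```python
import random

def pointer_chase(next_idx, start, steps):
    # Latency-bound pattern: each load depends on the previous result,
    # so the hardware cannot issue the next memory request until this
    # one returns. Extra bandwidth does not help here.
    i = start
    for _ in range(steps):
        i = next_idx[i]
    return i

def stream_sum(data):
    # Bandwidth-bound pattern: independent sequential loads that
    # prefetchers, wide buses, and vector units can service in bulk.
    return sum(data)

n = 1 << 10
order = list(range(n))
random.shuffle(order)

# Build one random cycle through all n slots, so chasing pointers
# defeats caches and prefetchers on real hardware.
next_idx = [0] * n
for k in range(n):
    next_idx[order[k]] = order[(k + 1) % n]

assert pointer_chase(next_idx, order[0], n) == order[0]  # full cycle returns home
print(stream_sum(range(n)))  # -> 523776, i.e. n * (n - 1) // 2
```

GPU-style latency hiding amounts to running many independent pointer chases at once, so that while one waits on memory, another makes progress.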

vlovich123•1 day ago
Well…. The reason there’s such a big mismatch is the memory controller. Something like 80-90% of the energy is spent moving data in and out because of the complex addressing. If you move compute into the RAM and instead shuttle instructions in and out, you might get a huge speed up. The challenge is when an instruction references some data over there - that may end up eliminating all the advantage. But people I believe are trying to commercialize this concept.
akkartik•1 day ago
Oh my knowledge is woefully out of date. But I believe the memory wall is a fact of life for the most part. Like many others, I nibbled around the edges of the constraint at massive cost in increased complexity. Outside of very specific exceptions the cure tends to be worse than the disease.
fao_•1 day ago
Damn, three years younger than one of my parents. A real shame.

Call your loved ones :(

dyauspitr•1 day ago
I’ve never heard of that term.
northes•1 day ago
Thanks for your contribution, then.