
Discussion (10 Comments). Read Original on HackerNews

dzink•about 5 hours ago
Children learn by playing because not much is expected of the outcome in play. Improvement happens when you can play: when AI has a play environment to learn in with reinforcement, when entrepreneurs are allowed to try, fail, and do better. Doctors learn by practicing under supervision, or on corpses, until they can do it for real. No straight line goes up without a jiggle in the beginning.
chermi•about 6 hours ago
I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge. I mean, we've known forever that softening constraints allows you to reach solutions otherwise unreachable, for one? There's a gulf of difference between succeeding at something deterministic by allowing failure vs. good pattern matching by optimizing over a rough landscape of examples.
Animats•about 3 hours ago
> I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge.

Right. ML doesn't have to work well because it's used in situations where the cost of the errors falls on someone other than the service provider. Hallucinations require a business model where their cost is an externality, like pollution.

With an objective goal, such as tests or a spec or driving without hitting anything, to check the results, it's possible to do better, of course.

The Internet only works because fiber optic bandwidth is cheap. As someone who was working on congestion in the early days, I could see that congestion in the middle of the network had no known solution. If congestion could be pushed out to the edges, there were strategies, but there were no good solutions in the middle. And, in fact, the whole Internet would sometimes go into congestion collapse in the early 1990s, with the big peering points at MAE-EAST and MAE-WEST losing well over half of the packets. What saved the Internet was cheap long-haul bandwidth and big hardware-supported switches. This kept congestion at the fringes.
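The edge strategies Animats alludes to are exemplified by TCP's congestion control, which keeps the adaptation at the endpoints rather than in the middle of the network. A minimal sketch of the classic AIMD (additive-increase, multiplicative-decrease) rule; the parameter values and event model here are illustrative, not any particular TCP variant:

```python
# AIMD sketch: a sender grows its congestion window additively while the
# network looks healthy, and cuts it multiplicatively when loss is observed.
# This is the edge-based strategy; the middle of the network stays dumb.
def aimd(events, cwnd=1.0, incr=1.0, decr=0.5):
    """events: sequence of booleans, True = packet loss observed that round."""
    trace = []
    for loss in events:
        if loss:
            cwnd = max(1.0, cwnd * decr)  # multiplicative decrease on loss
        else:
            cwnd += incr                  # additive increase per round
        trace.append(cwnd)
    return trace

# Window grows linearly, halves on loss -- the familiar sawtooth.
print(aimd([False, False, False, True, False]))
# → [2.0, 3.0, 4.0, 2.0, 3.0]
```

The key property is that every sender independently converges toward a fair share using only signals visible at the edge (loss), which is why no coordination in the middle is required.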

10000truths•34 minutes ago
As a corollary, will we see a recurrence of congestion in the middle as FttH sees increased adoption? It's easy to believe that 10 Gbps ought to be enough for everyone, but history tells us that people will find a way to saturate any unused bandwidth (8K video with crazy bitrates, 1 TB video game installs, etc).
xg15•about 4 hours ago
Yeah, I didn't find his initial take very convincing, but he lost me at the followup:

> For most cases I don't think having explainability is worth the trade offs in capability. That'll be a good topic for a future post.

adampunk•about 2 hours ago
What’s objectionable about that, assuming the tradeoff is real and actually hurts?
nh23423fefe•about 5 hours ago
I'm not seeing how describing measures over possibility space counts as allowing for mistakes.

Seems like content reverse engineered from title.

dataviz1000•about 6 hours ago
The LLM reasoning models behave strikingly similarly to superscalar out-of-order execution processors, with decomposition, verification, and error correction steps.

Moreover, the LLM reasoning models are reliably consistent at solving the same task with the same prompt using different variables. This can be demonstrated.

Not everything has to be deterministic to be useful. Nonetheless, understanding how LLM models can be applied and be useful will help a lot of people be less frustrated and spend fewer tokens.
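The consistency claim above can be sketched as a test harness: run the same prompt template over different variable bindings and check that the answers track the task. The `ask_model` function here is a hypothetical stand-in for a real LLM API call (e.g. with temperature 0); it is not any actual library's interface:

```python
# Hypothetical harness: same task shape, different variables, consistent answers.
# `ask_model` is a stand-in deterministic "solver" for prompts like
# "What is <a> + <b>?" -- swap in a real greedy-decoding LLM call to test one.
def ask_model(prompt: str) -> str:
    nums = [int(tok.strip("?")) for tok in prompt.split()
            if tok.strip("?").isdigit()]
    return str(sum(nums))

template = "What is {a} + {b}?"
for a, b in [(2, 3), (10, 7), (40, 2)]:
    answer = ask_model(template.format(a=a, b=b))
    assert answer == str(a + b)  # identical prompt structure, varied inputs
print("consistent across variable bindings")
```

With a real model, the same loop (plus repeated runs per prompt) is a cheap way to measure how reliable the behavior actually is before depending on it.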

booleandilemma•about 6 hours ago
Interesting. I could apply this to some people I've worked with. They work so well because they don't have to.
CSSer•about 3 hours ago
I suppose another interesting thing about this observation is that this is true about the universe too! Einstein thought God doesn't play dice with the universe and Niels Bohr proved him wrong.

So either it's an interesting statement because it's infinitely generalizable or not interesting for the same reason.