Discussion (10 Comments)
Right. ML doesn't have to work well because it's used in situations where the cost of the errors falls on someone other than the service provider. Hallucinations require a business model where their cost is an externality, like pollution.
With an objective goal, such as tests or a spec or driving without hitting anything, to check the results, it's possible to do better, of course.
The Internet only works because fiber optic bandwidth is cheap. As someone who was working on congestion in the early days, I could see that congestion in the middle of the network had no known solution. If congestion could be pushed out to the edges, there were strategies, but there were no good solutions in the middle. And, in fact, the whole Internet would sometimes go into congestion collapse in the early 1990s, with the big peering points at MAE-EAST and MAE-WEST losing well over half of the packets. What saved the Internet was cheap long-haul bandwidth and big hardware-supported switches. This kept congestion at the fringes.
> For most cases I don't think having explainability is worth the trade offs in capability. That'll be a good topic for a future post.
Seems like content reverse engineered from title.
Moreover, LLM reasoning models are reliably consistent when solving the same task with the same prompt using different variables. This can be demonstrated.
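A minimal sketch of how that demonstration could look: substitute different variables into one prompt template and check that each instance gets the same answer across repeated trials. The `call_model` function here is a hypothetical deterministic stand-in, not a real LLM client; to test an actual model you would replace it with a real API call (e.g. with temperature set to 0).

```python
import re

# Hypothetical prompt template; only the variables change between tasks.
TEMPLATE = "What is {a} + {b}? Answer with just the number."

def call_model(prompt: str) -> str:
    # Stand-in "model": extracts the two numbers from the prompt and adds them.
    # Swap this for a real LLM call to run the check against an actual model.
    a, b = map(int, re.findall(r"\d+", prompt))
    return str(a + b)

def consistent(a: int, b: int, trials: int = 5) -> bool:
    """True if the model returns the same answer on every trial of one task."""
    prompt = TEMPLATE.format(a=a, b=b)
    answers = {call_model(prompt) for _ in range(trials)}
    return len(answers) == 1

# Same template, different variables: each instance should be self-consistent.
results = {(a, b): consistent(a, b) for a, b in [(2, 3), (17, 25), (100, 1)]}
```

With a real model behind `call_model`, any task whose answer set has more than one element would flag an inconsistency.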
Not everything has to be deterministic to be useful. Nonetheless, understanding how LLMs can be applied, and where they are useful, will help a lot of people be less frustrated and spend fewer tokens.
So either it's an interesting statement because it's infinitely generalizable or not interesting for the same reason.