
Discussion (18 Comments)
Isn't familiarity with the language even more the case with an LLM? The language they do best with is the one with the largest corpus in the training set.
A stable, mature framework is then the best-case scenario. New or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.
Stability, consistency and simplicity are much more important than this notion of familiarity (there's lots of code to train on) as long as the corpus is sufficiently large. Another important one is how clear and accessible libraries, especially standard libraries, are.
Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.
Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook an agent up to the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (multiple, more complex operations), then write an explicit test, then run that test to get the same result as "defn this function and run some invocations".
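The REPL loop described above can be sketched in Python (standing in for Clojure, since the point is the workflow, not the language). The `repl_eval` helper and the `slugify` example are invented for illustration: the agent sends a definition plus an invocation into one persistent session and gets the result back immediately, with no file edits or separate test run.

```python
# Minimal sketch of a REPL-style agent loop: one persistent namespace,
# definitions and invocations evaluated in a single round trip.
# (All names here are illustrative, not from any real agent framework.)
session = {}

def repl_eval(source, session):
    """Execute source in the persistent session namespace."""
    exec(source, session)
    return session

# "defn this function and run some invocations" as one step:
repl_eval("def slugify(s): return s.strip().lower().replace(' ', '-')", session)
result = session["slugify"]("  Hello World ")
print(result)  # immediate feedback the agent can act on: hello-world
```

The contrast with the file-based flow is that the definition never has to round-trip through an editor, a test file, and a test runner before the agent learns whether it works.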
But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.
The exec who gets the value of the working product can potentially come out shining, when their peers will be furiously backpedalling next year. And this consummate exec can do it by name-associating with their "main bet" which is optically great for the immediate term but totally out of their control (because big corp vendor will drag its feet like every SAP integration failure they've seen), and feeling a sense of agency by running an off-books skunkworks project that actually works and saves the day.
A fine needle to thread for the upstart, but better than standing outside the game.
In the same article, the author mentioned a few expert systems from the past that were quite obviously successful.
> on the promise printed on its marketing
Ah, _that_ promise. That promise is never fulfilled anywhere, nor is it expected to be.
Enterprises buy from large companies because those large companies come with support teams, liability, and expertise that you don't need to manage internally.
It's rare that I read an article that actively annoys me, but there's something about how this one is written that seems a little arrogant.
A little. But it's a nice article nevertheless.
The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.
Also sunk costs “should in theory” never be considered but I’ve only ever seen sunk costs considered.
Imagine a model with a reliable 100M context window. Then all of a sudden you can.
> The information the intelligent answer needs was never in the wiki in the first place.
Oh well.
One should not underestimate a "compression primitive with a chat interface". For certain tasks it is a superpower.
That's why VCs look favorably on startups which go through the motions of setting up a partner-led sales channel. An established partner taking maintenance contracts bridges the disconnect in the lifecycle gap between the two realities.
But no, corporate is bad, I guess.
In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.
As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.
This is correct and very agreeable to everyone, but then, after some waffle, they write this:
> Structure, for the first time, can be produced from content instead of demanded from people
These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.
"It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at it's core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."
And
"It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."