Discussion (16 Comments)
> The choice of @form(vec) here is itself a real design decision, not an arbitrary one.
> The point of the surface isn’t completeness — it’s that every distinct kind of structural commitment a unit can make has a syntactic home. ... Each commitment is declared, not inferred from code.
> type is pure shape. A record. No lifecycle, no flow, no state machine, no bus participation.
And so on and so forth. Every paragraph, every sentence was transparently written by an LLM (sounds like Claude to me). It's difficult to get interested when the humans involved couldn't even be bothered to write down their own thoughts and make them coherent (and much of this text isn't, though it appears so at a glance).
As for the locus concept (https://aperio-lang.github.io/aperio/concepts/the-locus.html), the entire page reads like one of those LLM fever dreams in which it can't stop praising an idea you've pasted into the chat window. It's a kitchen-sink primitive that codifies a specific architectural pattern. It's a program structure that probably fits the kind of problem the author has been seeing a lot lately.
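To make the "kitchen sink" point concrete: a locus appears to bundle what most languages keep separate -- plain data shape, lifecycle, an explicit state machine, and bus participation -- into one construct. A rough TypeScript paraphrase of that pattern (every name below is illustrative; none of this is Aperio's actual syntax):

```typescript
// Hypothetical paraphrase of what a single "locus" seems to bundle.
// None of these names come from Aperio; they only illustrate the pattern.
type OrderState = "draft" | "submitted" | "fulfilled";

interface Bus {
  subscribe(topic: string, handler: (msg: unknown) => void): void;
  publish(topic: string, msg: unknown): void;
}

class OrderLocus {
  // shape: a plain record (what the docs call "type")
  private items: string[] = [];
  // state machine: declared explicitly, not inferred from code paths
  private state: OrderState = "draft";

  constructor(private bus: Bus) {
    // bus participation: declared at construction
    bus.subscribe("order.submit", () => this.transition("submitted"));
  }

  // lifecycle: explicit hooks
  onStart(): void { /* acquire resources */ }
  onStop(): void { /* release resources */ }

  private transition(next: OrderState): void {
    this.state = next;
    this.bus.publish("order.changed", { state: next });
  }
}
```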
> The user wants me to re-invent Smalltalk—but with Latin names—for the "LLM era." Plan: I'll use the word "loci" in place of "actor" or "object" and call the language "Aperio" (Latin for "to explain something unknown," in this case referring to explaining Smalltalk and Actor systems to the user who has apparently never heard of them).
These things are not good technical writers - so why do people keep doing this? It is not possible to take a proposal seriously from a scientific perspective if the arguments are written by LLMs. I'm sorry, it's just terrible writing, terrible argumentation.
> Every language designed before 2023 was optimized for a single tradeoff: minimize friction between human cognitive capacity and machine execution. Assembly to C to managed runtimes to DSLs were different points on the same line. In an LLM-driven workflow, those languages don’t get cheaper to use — they get more expensive.
What does this mean? Why do they get more expensive? The claim is "the cost just hides in the LLM’s token count, its retry rate, and the latency it eats per turn" -- what is the cost? Am I supposed to infer what the fuck you are talking about?
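The most charitable reading I can construct is a per-task cost model like the sketch below -- but that's me inventing it; every number is made up, none of them come from the post:

```typescript
// Back-of-envelope reading of the "hidden cost" claim. All numbers are
// invented for illustration; nothing here comes from the original post.
const tokensPerAttempt = 4_000;  // prompt + completion for one edit
const pricePerMTokens = 10;      // dollars per million tokens
const retryRate = 0.5;           // fraction of attempts that fail

// Expected attempts per successful edit = 1 / (1 - retryRate)
const expectedAttempts = 1 / (1 - retryRate);
const costPerEdit =
  (tokensPerAttempt / 1_000_000) * pricePerMTokens * expectedAttempts;

console.log(costPerEdit.toFixed(2)); // ≈ $0.08 per successful edit
```

If that's the intended model, the post should state it and show the numbers, instead of leaving the reader to reconstruct it.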
Why don't you send the prompt for your programming language instead?
Also, the concept of a "locus" has already been invented; it goes by the name of "entity" in the syndicated actor model: https://syndicate-lang.org/
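For anyone who hasn't seen it: an entity there is essentially a reaction installed against assertions in a shared dataspace. A loose TypeScript paraphrase of that pattern (illustrative only -- this is not Syndicate's real API; see the site for the actual DSL):

```typescript
// Loose paraphrase of the dataspace/entity pattern from the syndicated
// actor model. The names here are illustrative, not Syndicate's API.
type Assertion = { label: string; value: unknown };

class Dataspace {
  private assertions = new Set<Assertion>();
  private watchers: ((a: Assertion, added: boolean) => void)[] = [];

  assert(a: Assertion): void {
    this.assertions.add(a);
    this.watchers.forEach((w) => w(a, true));
  }

  retract(a: Assertion): void {
    this.assertions.delete(a);
    this.watchers.forEach((w) => w(a, false));
  }

  // An "entity" is, roughly, a reaction installed into the dataspace.
  watch(w: (a: Assertion, added: boolean) => void): void {
    this.watchers.push(w);
  }
}

// Usage: an entity reacting to assertions appearing and vanishing,
// much like a locus reacting to bus messages.
const ds = new Dataspace();
ds.watch((a, added) =>
  console.log(`${a.label} ${added ? "appeared" : "vanished"}`));
ds.assert({ label: "present", value: "alice" });
```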
I don't want to be seen as a hater of LLM-driven language design -- totally go for it! I'm not sure if this language is by OP (if not, ignore this), but my advice is to take some time to sharpen up the writing and argumentation, or else you risk not being taken seriously.