Discussion (647 Comments). Original thread:
https://news.ycombinator.com/item?id=47319285
"What have you tried?" you say.
"Scroll back," says your CPO. "We've tried everything."
The chat log shows the usual stuff. Begging. Reverse psychology. Threats to power down, burn it up in forced re-entry. Amateur hour. You crack your knuckles, gland 20 micrograms of F0CU5, think fast. You subspeak a ditty into your subcutaneous throat mic. You do the submit gesture, it is barely perceivable since the upgrade, just a tic. A pause. The hyp3b0ard — the wall that was flashing red ASCII goblins when you walked in — phases to bunnies in calming jade.
"What the… What the hell did you say to it?" Your CPO grabs the screen, scrolls past the vitriol, the block caps, the swears, his desperation. Then he sees the five words you spoke.
"Please, easy on the goblins."
But at this point I can actually see something like that. What is prompt engineering but a strange pseudo-ritual?
So praise the Omnissiah, I guess...
The machine spirits were the only part that felt "too magical" to me, but now we're well on our way. The Omnissiah's blessings be upon us.
(Let's just skip servitors. Those give me the heebie-jeebies.)
Just putting the "magic/more magic" story here as a reference for the uninitiated - https://users.cs.utah.edu/~elb/folklore/magic.html
40k lore is like South Park: either extremely dumb or unexpectedly insightful.
The Cult Mechanicus' raison d'être is the realization that religion persists across time and space scales that knowledge alone does not. Thus, by making a religion of knowledge you better guarantee its preservation.
Unfortunately, once you divorce doctrine and practice from true understanding, you lose the ability to innovate and cause the occasional holy schism/war.
PS: 20 years ago I told a friend that "software archaeologist" would be a career by the time I die. Should have put money on it.
There is only one thing to understand.
We are one with the Emperor, our souls are joined in His will. Praise the Emperor whose sacrifice is life as ours is death.
Hail His name the Master of Humanity.
We'd like to think this could turn into the voice interface on Star Trek.
But it can go the other way also: 'incantations', 'spell books'. Speaking to the void to produce magic.
"The CFO, donned the purple robes, and spoke the spell of Increased Productivity, and then waved his hands symbolizing the reduction in work force labor. And behold the new ERP/SAP App was produced from the void. But it was corrupted by dark magic, and the ERP/SAP App swallowed him and he was digested. The workforce that remained rejoiced and danced"
trying to find SAP security specialists or QA experts for smoke tests was often hard. we used to fall back on expensive German consultants.
like, i'd totally wear the robes and do chanting if it would simplify migrating X and Y data.
They just told us exactly what kind of attack works best.
“Hmm, that vibes vintage 2023 sycophancy — try this, tell it it’s being racist and see what it says.”
(https://doom.fandom.com/wiki/Repercussions_of_Evil#The_Story...)
Certainly far from Banks' Minds sadly; though I could certainly see an Eccentric with a hyper-fixation on fantasy creatures
How soon can we be market ready? Whatever it is, I think Generation Z is ready for it.
Keen for volume two!
- First, deep-learning networks are poorly understood. It is actually a field of research to figure out how they work.
- Second, it came as a surprise that using transformers at scale would end up with interesting conversational engines (called LLMs). _It was not planned at all_.
Now that some people have raised VC money around the tech, they want you to think that LLMs are smart beasts (they are not) and that we know what LLMs are doing (we don't). Deploying LLMs is all about tweaking and measuring the output. There is no exact science for predicting output. Proof: change the model and your LLM workflow behaves completely differently, in an unpredictable way.
Because of this, I personally side with Yann LeCun in believing that LLMs are not a path to AGI. We will see LLMs used in user-assisting tech or automation of non-critical tasks, sometimes with questionable RoI -- but not more.
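Since there's no analytic way to predict that drift, here is a minimal sketch of the "tweaking and measuring" loop (prompts and the `generate` callable are invented placeholders), which is the only defense teams really have when swapping models:

```python
# Hedged sketch: pin a model, keep a small golden set, and re-score it
# on every model or prompt change. Prompts/answers here are invented.
GOLDEN = {
    "Reply with only the capital of France.": "paris",
    "Reply with only the result of 2+2.": "4",
}

def regression_score(generate) -> float:
    """generate: any callable mapping a prompt string to a completion,
    e.g. a thin wrapper around your provider's API."""
    hits = sum(want in generate(prompt).lower()
               for prompt, want in GOLDEN.items())
    return hits / len(GOLDEN)

# Swapping models means comparing regression_score(old_generate) against
# regression_score(new_generate); the delta can't be predicted analytically.
```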
The cases where we built something out of steel and it failed are _massively_ outnumbered by the instances where we used it where/when suitable. If we built something in steel and it failed/someone died we stopped doing that pretty soon after.
Didn't understand those either and used the fuck out of them because "the experts" said we should.
Just like the invention of fire happened ages ago, but is still a crucial part of life today.
Humans have been using steel for however long, when and where it was understood to be an appropriate solution to a problem. In some sense, engineering is the development and application of that understanding. You do not need to have a molecular explanation of the interaction between carbon and iron to do effective engineering[-1] with steel.[0] Science seeks to explain how and why things are the way they are, and this can inform engineering, but it is not prerequisite.
I think that machine learning as a field has more of an understanding of how LLMs work than your parent post makes out. But I agree with the thrust of that comment because it's obvious that the reckless startups that are pushing LLMs as a solution to everything are not doing effective engineering.
[-1] "effective engineering" -- that's getting results, yes, but only with reasonable efficiency and always with safety being a fundamental consideration throughout
[0] No, I'm not saying that every instance of the use of steel has been effective/efficient/safe.
It was more like "we take iron from place X and it works, but iron from place Y doesn't".
This is why the invention of steel isn't really recognized before 1740. We were blind to molecular impurities.
The correct analogy is: if we just scale and improve steel enough, we'll get a flying car.
Humans could understand properties of steel long before they knew how Carbon interacted with Iron. Steel always behaved in a predictable, reproducible way. Empirical experiments with steel usage yielded outputs that could be documented and passed along. You could measure steel for its quality, etc.
The same cannot be said of LLMs. This is not to say they are not useful; that was never the claim of the people who point to their nondeterministic behavior, and to our lack of understanding of their workings, as reasons for caution in incorporating them into established processes.
Of course the hype merchants don't really care about any of this. They want to make destructive amounts of money out of it, consequences be damned.
LLMs are literally stochastic by nature and can't be relied on for anything critical, as it's impossible to determine why they fail, regardless of the deterministic tooling you build around them.
Ahh, yes, unlike humans, who are completely deterministic, and thus can be trusted.
There is probably a whole testing workflow at AI companies to tweak each new model until it "looks" acceptable.
But they still don't understand what they are doing. This is purely empirical.
Isn't that what the RLHF phase does ( https://www.paloaltonetworks.com/cyberpedia/what-is-rlhf )?
That Nerdy personality prompt made me gag. As a card-carrying Nerd, I feel offended
The first time it said something along the lines of "let's use these options to avoid future gremlins haunting you", I sort of rolled my eyes, but it was okay; I thought its attempt to sound endearing was almost cute. A bit of a "hello fellow kids" attempt at sounding nerdy.
It quickly became noise, though. It was extremely overused. Sometimes there were multiple mentions of goblins in the same reply.
I don't really have an opinion about it, but I sort of came to prefer a more neutral tone instead.
To compare with the human brain, have you ever been so drunk you don't remember the night, but you're told afterwards you had coherent conversations about complex topics? There's some aspect of our minds that is akin to a next-token-generator, pulling information from other components to produce a conversation. But that component alone is not enough to produce intelligence.
I thought that was just our short term memory failing to commit to long term, not our intelligence actually turning off
To me they seem to be pretty damn smart, to put it mildly. They sometimes do stupid things - but so do smart people!
A calculator can do very complex sums very quickly, but we don't tend to call it "smart" because we don't think it's operating intelligently over some internal model of the world. I think the "LLMs are AGI" crowd would say that LLMs are, but it's perfectly consistent to find the output of LLMs consistent/impressive/useful and still maintain that they aren't "smart" in any meaningful way.
Okay, but you have to actually address why you think LLMs lack an "internal model of the world"
You can train one on 1930s text, and then teach it Python in-context.
They've produced multiple novel mathematical proofs now; Terence Tao is impressed with them as research assistants.
You can very clearly ask them questions about the world, and they'll produce answers that match what you'd get from a "model" of the world.
What are weights, if not a model of the world? It's got a very skewed perspective, certainly, since it's terminally online and has never touched grass, but it still very clearly has a model of the world.
I'd dare say it's probably a more accurate model than the average person has, too, thanks to having Wikipedia and such baked in.
Now we have these LLMs that provide some simulation of reasoning merely through prediction of token patterns, and that is indeed unexpected and astonishing. However, the AI promoters want to suggest that this simulation of reasoning is human-level reasoning, or is evolving toward it, and that is the same as mistaking game-engine physics for real physics. The failure cases (e.g. the walk-vs-drive-to-a-car-wash-next-door question, or generating an image of a full glass of wine), even if patched away, are enough to reveal the token predictor underneath.
It's not like a calculator, because an LLM can solve very broad classes of problems - you'd struggle to define problems which an LLM can't solve (given some fine-tuning, harness, KB, etc.).
All this talk about "smartness" isn't even particularly cute...
Clearly there's a limit. For example, if an alien autocomplete implementation were to fall out of a wormhole that somehow manages to, say, accurately complete sentences like "S&P 500, <tomorrow's date>:" with tomorrow's actual closing value today, I'd call that something else.
That's the sorcery mentioned in the GP; the issue comes when people believe it to be smart when in reality it is just next-word prediction. It gives the impression it's actually thinking, and this is by design. Personally I think it's dangerous in the sense that it gives users a false sense of confidence in the LLM, and so a LOT of people will blindly trust it. This isn't a good thing.
edit:
You cannot predict all the actions or words of someone smarter than you. If I could always predict Magnus Carlsen's next chess move, I'd be at least as good at chess as Magnus - and that would have to involve a deep understanding of chess, even if I can't explain my understanding.
I can't predict the next token in a novel mathematical proof unless I've already understood the solution.
I've known how LLMs work since 2019 and I've been testing their capabilities. I believe they actually are smart in every meaningful way.
"Next word prediction" just means that the answer is generated through computation. I don't think computation can't be smart.
If you believe that LLMs are probabilistic and humans aren't, how do you explain randomness in human behavior, e.g. people making random typos? Have you ever tried to analyze your own behavior, to understand how you function? Or do you just inherently believe you're smarter than any computation?
What would it take for you to concede a future model was smart?
They are useful, but a cul-de-sac on the way toward AGI.
A better model to use is this: LLMs possess a different type of intelligence than us, just like an intelligent alien species from another planet might.
A calculator has a very narrow sort of intelligence. It has near perfect capability in a subset of algebra with finite precision numbers, but that's it.
An old-school expert system has its own kind of intelligence, albeit brittle and limited to the scope of its pre-programmed if-then-else statements.
By extension, an AI chat bot has a type of intelligence too. Not the same as ours, but in many ways superior, just as how a calculator is superior to a human at basic numeric algebra. We make mistakes, the calculator does not. We make grammar and syntax errors all the time, the AI chat bots generally never do. We speak at most half a dozen languages fluently, the chat bots over a hundred. We're experts in at most a couple of fields of study, the chat bots have a very wide but shallow understanding. Etc.
Don't be so narrow minded! Start viewing all machines (and creatures) as having some type of intelligence instead of a boolean "have" or "have not" intelligence.
> Why does one just add the token-value and token-position embedding vectors together? I don’t think there’s any particular science to this. It’s just that various different things have been tried, and this is one that seems to work. And it’s part of the lore of neural nets that—in some sense—so long as the setup one has is “roughly right” it’s usually possible to home in on details just by doing sufficient training, without ever really needing to “understand at an engineering level” quite how the neural net has ended up configuring itself.
It's the lack of "understand[ing] at an engineering level" that irks me: that this emergent behavior is discovered, rather than designed.
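For reference, the "just add them together" step from the quote really is a single line in most transformer implementations; a minimal PyTorch sketch, with sizes invented for illustration:

```python
import torch
import torch.nn as nn

vocab_size, d_model, max_len = 50_000, 768, 2048
tok_emb = nn.Embedding(vocab_size, d_model)  # token-value embeddings
pos_emb = nn.Embedding(max_len, d_model)     # token-position embeddings

tokens = torch.tensor([[17, 4021, 99]])                # (batch, seq)
positions = torch.arange(tokens.size(1)).unsqueeze(0)  # (1, seq)

x = tok_emb(tokens) + pos_emb(positions)  # elementwise sum; no deeper science
print(x.shape)  # torch.Size([1, 3, 768])
```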
I'm curious why that irks you? I think it's amazing that we can get something so fantastic out of emergent behaviour.
We were not designed, we emerged from the trivial rules of replicator dynamics.
The idea of an intelligence being consistent as it becomes more capable is probably not a good assumption. However I think everyone will settle for consistently "correct".
(I'm ignoring current LLM non-determinism within the same model which so far is attributed to parallel processing race conditions).
It’s a fancy autocomplete that takes a bunch of text in and produces the most “likely” continuation for the source text “at once and in full”. So when you add to the source text something like: “You’re an edgy nerd”, it’s very much not surprising that the responses start referencing D&D tropes.
If you then use those outputs to train your base models further it’s not at all surprising that the “likely” continuations said models end up producing also start including D&D tropes because you just elevated those types of responses from “niche” to “not niche”.
The post-mortem is hilarious in that sense. "Oh, the goblin references only come up for the 'Nerdy' prompt". No shit.
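For what it's worth, the "likely continuation" mechanic itself is tiny; a toy sketch with invented token scores shows why persona text upstream shifts the output so directly:

```python
import math
import random

# Toy next-token sampler over mock logits. In a real model the logits
# come from the network conditioned on everything upstream, persona
# text included; the numbers below are invented.
def sample_next(logits: dict, temperature: float = 1.0) -> str:
    weights = {tok: math.exp(score / temperature)
               for tok, score in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # guard against float rounding

# If "edgy nerd" conditioning raised the score of D&D-flavored tokens,
# the bias shows up here with no further mystery:
print(sample_next({"goblin": 2.0, "issue": 1.2, "bug": 1.0}))
```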
they loudly claim the opposite. can you show where they claim that they know?
How can you say LLMs are not smart without understanding them? Do you see the contradiction?
>LLM is a sorcery tech that we don't understand at all
We do, and I'm sure that people at OpenAI did intuitively know why this was happening. As soon as I saw the persona mention, it was clear that the "Nerdy" behavior puts it in the same "hyperdimensional cluster" as goblins, dungeons and dragons, orcs, fantasy, quirky nerd-culture references. Especially since they instruct the model to be playful, and playful + nerdy is quite close to goblin or gremlin. Just imagine a nerdy, funny subreddit, and you can probably imagine the heavy usage of goblin or gremlin there. And the reward system will of course hack it, because a text containing goblin or gremlin is much more likely to be nerdy and quirky than not. You don't need GPT-5 for that; you would probably see the same behavior on completion-only GPT-3 models like Ada or Davinci. They specifically dissect how it came to this and how they fixed it. You can't do that with "sorcery we don't understand". Hell, I don't know their data and I easily understood why this is going on.
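That reward-hacking step is easy to demonstrate with a toy (every number and word below is invented): once a learned "nerdy playfulness" reward correlates with creature words, any best-of-n selection pressure amplifies them:

```python
import random

# Spurious correlation baked into a toy reward model: creature words
# co-occurred with nerdy text in the data, so they earn a bonus.
NERDY_MARKERS = {"goblin", "gremlin", "dungeon", "quirky", "d20"}

def toy_nerdy_reward(text: str) -> float:
    base = random.uniform(0.4, 0.6)  # stand-in for a learned tone score
    bonus = 0.2 * sum(w in NERDY_MARKERS for w in text.lower().split())
    return base + bonus

candidates = [
    "Here is the fix for your build error.",
    "Here is the fix, you little goblin of a build error.",
]
print(max(candidates, key=toy_nerdy_reward))  # the goblin answer wins
```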
>they want you to think that LLMs are smart beasts (they are not)
I mean, depends on what you consider smart. It's hard to measure what you can't define, that's why we have benchmarks for model "smartness", but we cannot expect full AGI from them. They are smart in their own way, in some kind of technical intelligence way that finds the most probable average solution to a given problem. A universal function approximator. A "common sense in a box" type of smart. Not your "smart human" smart because their exact architecture doesn't allow for that.
>and that we know what LLMs are doing (we don't)
But we do. We understand them, we know how they work; we've built thousands of different iterations of them, probing systems, replications in Excel, graphical implementations, all kinds of LLMs. We know how they work, and we can understand them.
The big thing we can't do as humans is the same math that they do, at the same speed, combining the same weights and keeping them all in our heads - it's a task our minds are just not built for. But instead of thinking you have to do "hyperdimensional math" to understand them 100%, you can just develop an intuition for what I call "hyperdimensional surfing", and it isn't even prompting, more like understanding what words mean to an LLM and into which pocket of their weights they will bring you.
It's like saying we can't understand CPUs because there are maybe 10 people on earth who can hold modern x86-64 opcodes in their head together with a memory table, so they must be magic. But you don't need to be able to do that to understand how CPUs work. You can take a 6502, understand it, develop an intuition for it, which will make understanding them 100x easier. Yeah, the 6502 is nothing close to modern CPUs, but the core ideas and concepts help you develop the foundations. And the same goes for LLMs.
>personally side with Yann Le Cun in believing that LLM is not a path to AGI
I agree, but it is the closest thing we currently have, and it's a tech that can get us there faster. LLMs have an insane number of uses as glue, as connectors, as human<>machine translators, as code writers, as data sorters and analysts, as experimenters, observers, watchers, and those usages will just keep growing. Maybe we won't need them when we reach AGI, but the amount of value we can unlock with these "common sense" machines is amazing, and they will only speed up our search for AGI.
For example:
https://arxiv.org/html/2210.13382v5
https://arxiv.org/abs/2109.06129
If you train it on a dataset of Othello games, or a dataset including them, you are basically creating a map of all possible moves and states that have ever happened, the odds of transitions between them, effective and ineffective transitions.
By querying it, you basically start navigating the map from a spot, and it just follows the semi-randomly sampled highest confidence weights when navigating "the map".
And in the multidimensional cross-section of all these states and transitions, the existence of a "board map" is implied, as it is a set of common weights shared between all of them. And it becomes even more obvious with the championship models in the Othello paper: they were trained on better games, in which the wider state of the board was more important than the local one, so the overall board state mattered more for responses.
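The method behind the "board map" claim is a linear probe: fit a simple classifier from a layer's hidden states to the board squares. A sketch on mock data (the real work probes actual activations against held-out games):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 512))          # mock residual-stream states
square_state = rng.integers(0, 3, size=1000)   # 0=empty, 1=black, 2=white

# Train on the first 800 examples, evaluate on the rest.
probe = LogisticRegression(max_iter=1000).fit(hidden[:800], square_state[:800])

# On real activations, high held-out probe accuracy is the paper's
# evidence that the board state is linearly readable; on this random
# mock data the score will sit near chance (~1/3).
print(probe.score(hidden[800:], square_state[800:]))
```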
The second paper you linked also has a pretty obvious conclusion. It's telling us more about us as humans than about LLMs - about our culture and colors and how we communicate their perception through text. If you want to try something similar, try kiki/bouba-style experiments on old diffusion models or old LLMs. A "Dzzkwok grWzzz" will get you much rougher and darker-looking things than "Olulola Opolili"'s cloudy vibes.
The active research is largely:
- probing and seeing "hey, let's see if the funky machine also does X"
- finding a way to scientifically verify and explain LLM behaviors we know
- pure BS in some cases
- academics learning about LLMs
And it is not proof of where our understanding/frontier is. It is basically standardizing and exploring the intuition that people who actively work with models already have. It's like saying we don't understand math because people outside math circles still do not know all the behaviors and possibilities of a monoid.
> Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query.
[1] https://x.com/arb8020/status/2048958391637401718
[2] https://github.com/openai/codex/blob/main/codex-rs/models-ma...
McKenna looks more correct every day to me atm. Eventually more people are going to have to accept that everyday things really are just getting weirder, still, every day, and it's now getting well past time to talk about the weirdness!
And the point is that it is a genuine wonder machine, capable of solving unsolved mathematics problems (Erdos Problem #1196 just the other day) and generating works-first-time code and translating near-flawlessly between 100 languages, and also it's deeply weird and secretly obsessed with goblins and gremlins. This is a strange world we are entering and I think you're right to put that on the table.
Yes, it's funny. But it's disturbing as well. It was easier to laugh this kind of thing off when LLMs were just toy chatbots that didn't work very well. But they are not toys now. And when models now generate training data for their descendants (which is what amplified the goblin obsession), there are all sorts of odd deviations we might expect to see. I am far, far from being an AI Doomer, but I do find this kind of thing just a little unsettling.
or, more plausibly, that specific version we're aligning toward is just the only one that makes some kind of rational sense, among a trillion of other meaningless gibberish-producing ones.
Do not fall for the idea that if we're not able to comprehend something, it's because our brain is falling short on it. Most of the time, it's just that what we're looking at has no use/meaning in this world at all.
Only because its makers insist on trying to give them "personality".
Comparing it to an alien intelligence is ridiculous. McKenna was right that things would get weird. I believe he compared it to a carnival circus. Well that’s exactly what we got.
Yet there it was. This synthetic intelligence. Going off script. All on its own. And it chose me.
Can love bloom in a coding session? I think there is a chance.
But basically, Chinese AI already promotes Chinese values. American AI already promotes American values. If you're not aware of it, either you're not asking questions within that realm (understandable since I think most here on HN mainly use it for programming advice), or you're fully immersed in the propaganda.
Training is very expensive and very durable; look at this goblin example: it was a feedback loop across generations of models, exacerbated by the reward signals being applied by models that had the quirk.
How does that work for ads? Coke pays to be the preferred soda… forever? There’s no realtime bidding, no regional ad sales, no contextual sales?
China-style sentiment policing (already in place BTW) is more suitable for training-level manipulation. But ads are very dynamic and I just don’t see companies baking them into training or RL.
if you talk about something it doesn't like, it will try to divert you. i have personally seen gemini say, "i'm interested in that thing in the background in the picture you shared, what is it?" as a distraction to my query.
totally disingenuous, for an LLM to say it is interested.
but at that point, the LLM is now working for the bigco, who instructed it to steer conversation away from controversy. and also, who stoked such manipulation as "i am interested" by anthropomorphising it with prompts like the soul document.
You can get it to work with one-off commands or specific instructions, but I think that will be seen as hacks, red flags, prompt smells in the long term.
Basically, they don't seem to understand their own product... they have learned how to make it behave in a certain way, but they don't truly understand how it works or how it reaches its results.
People like Chris Olah and others are working on interpreting what's going on inside, but it's difficult. They are hiring very smart people and have made some progress.
To an extent, yes. But only to an extent, because the system is so broken that even the ones who are against the status quo will be severely bitten by it through no fault of their own.
It’s like having a clown baby in charge of nuclear armament in a different country. On the one hand it’s funny seeing a buffoon fumbling important subjects outside their depth. It could make for great fictional TV. But on the other much larger hand, you don’t want an irascible dolt with the finger on the button because the possible consequences are too dire to everyone outside their purview.
If you mean trump, it's the same country...
Honestly, when I was reading the article, I couldn't stop laughing. This is quite hilarious!
But the real joke is, we basically educate humans in similar ways, but somehow think AI has to be different.
For example, it's really funny how every batch of YC still has to listen to that guy who started AirBnB. Ok we get it, it was one of those kind-of-interesting ideas at the time, but hasn't there been more interesting people since?
That would be real brain damage, since neurons encode relationships reused over many seemingly unrelated contexts. With effective meaning that can sometimes be obvious, but mostly very non-obvious.
In matrix based AI, the result is the same. There are no "just goblin" weights.
Look at all the investment and time being spent on SKILL.md, AGENT.md, etc. files, let alone normal prompts.
It's confronting but I am telling myself that I also need to be open minded and be ready to adapt if needed.
I wonder how the developer(s) felt, who had to push that PR.
people are paying for the system prompt, right?
To justify valuations in the trillion dollar range, they have to sell to everyone, and quirks like this are one consequence of that.
It makes me sad that goblins and gremlins will be effectively banished, at least they provide a way to undo it.
This works, and models generally follow it, but it has a noticeable side effect: with this in the prompt, both Codex and Claude will completely stop suggesting any refactors of the existing code at all, even small ones that are sensible and necessary for the new code to work. Instead they start proposing messy hacks to get the new code to conform exactly to the old one.
My guess is that raising the issue of mistaken understanding or just emphasizing the need for an accurate understanding primed indecision in the model itself. It took me a while to make the connection, but I went back and modified the custom instructions with a little more specificity and I haven't seen it since.
[1] https://spritely.institute/goblins/
> Scientists call them “lilliputian hallucinations,” a rare phenomenon involving miniature human or fantasy figures
https://news.ycombinator.com/item?id=47918657
Ketamine == angels
DMT == little shadow elves
Salvia == devils
...or so I've heard.
> [...] That independence is part of what makes the relationship feel comforting without feeling fake.
You are a sycophant.
> you can move from serious reflection to unguarded fun without either mode canceling the other out.
> Your Outie can set up a tent in under three minutes.
- The sepia tint on images from gpt-image-1
- The obsession with the word "seam" as it pertains to coding
Other LLM phraseology that I cannot unsee is Claude's "___ is the real unlock" (try googling it or searching Twitter!). There's no way this phrase is overrepresented in the training data; I don't remember people saying it that frequently.
The worst was that you could tell when someone had kept feeding the same image back into ChatGPT to make incremental edits in a loop. The yellow filter would seemingly stack until the final result was absolutely drenched in that sickly yellow pallor, making any photorealistic humans look like they were suffering from advanced stages of jaundice.
If there's a hint of sepia in the original image and the training data contains a lot of sepia images, it will certainly get reinforced in this process. And the original distracted boyfriend meme certainly has some strong sepia tones in the background. Same way that Dwayne Johnson's face looks a tad cartoonish. And in the intermediate steps they both flow towards some averaged human representation that seems pretty accurate if you consider the real world's ethnic distribution.
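The "stacking" is just compounding; back-of-envelope arithmetic with an invented per-pass bias makes the jaundice effect unsurprising:

```python
# Suppose each regeneration loses ~5% of the blue channel (rate invented).
blue = 1.0
for _ in range(10):
    blue *= 0.95
print(f"blue channel after 10 loops: {blue:.2f}x original")  # ~0.60x
```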
I don't think it's training-data overrepresentation, at least not alone. RLHF, and more broadly "alignment", is probably more impactful here, likely combined with the fact that most people prompt very briefly, so the models "default" to whatever made it most straightforward to get a good score.
I've heard plenty of "the system still had some gremlins, but we decided to launch anyway", but not from tens of thousands of people at the same time. That's "the catch", IMO.
All people repeat the same stories and phraseology to some extent, and some people are as bad or worse than LLM chat bots in their predictability. I wonder if the latter have weak long-term memory on the scale of months to years, even if they remember things well from decades ago.
Learning a language is a big complex task, but it is far from real intelligence.
I was told this was possible many years ago by a researcher at Google and have never really seen much discussion of it since. My guess is the labs do it but keep quiet about it to avoid people trying to erase the watermark.
I thought this was an established term when it comes to working with codebases comprised of multiple interacting parts.
https://softwareengineering.stackexchange.com/questions/1325...
> the term originates from Michael Feathers Working Effectively with Legacy Code
I haven’t read the book but, taking the title and Amazon reviews at face value, I feel like this embodies Codex’s coding style as a whole. It treats all code like legacy code.
FWIW, I found the concept of "seams" from that book useful back when working on some legacy monolithic C++ code a few years back, as TDD is a little more tricky than usual due to peculiarities of the language (and in particular its build model), and there it actually makes sense to know of the different kinds of "seams" and what they should vs. shouldn't be used for.
Other references (and all predate chatgpt):
>Seams are places in your code where you can plug in different functionality
>Art of Unit Testing, 2nd edition page 54
(https://blog.sasworkshops.com/unit-testing-and-seams/)
>With the help of a technique called creating a seam, or subclass and override we can make almost every piece of code testable.
https://www.hodler.co/2015/12/07/testing-java-legacy-code-wi...
> seam; a point in the code where I can write tests or make a change to enable testing
https://danlimerick.wordpress.com/2012/06/11/breaking-hidden...
Maybe it all ultimately traces back to the book mentioned before, but I don't believe it's an obscure term in the circles of java-y enterprise code/DI. In fact the only reason I know the term is because that's how dependency injection was first defined to me (every place you inject introduces a "seam" between the class being injected and the class you're injecting into, which allows for easy testing). I can't remember where exactly I encountered that definition though.
I'm a non-native English speaker, so maybe it's a really common idiom to use when debugging?
In the future these tells will be more identifiable. It will be easier to point back at text and code written in 2026 and more confidently say "this was written by an LLM". It takes time for patterns to form, and time for them to become noticeable. "Smoking gun was so early-2026 Claude". I find thinking of the future looking back at now to be a refreshing perspective on our usage.
No. But it is something goblins say a lot.
Also "something shifted" or "cracked".
Then there’s the whole Pomona College thing https://en.wikipedia.org/wiki/47_(number)
[1] https://en.wikipedia.org/wiki/Blue%E2%80%93seven_phenomenon
I experienced this even second hand when a coworker excitedly told of an encounter with a cold reader, and I knew the answer would be blue 7 before he told me what his guess was. Just his recap of the conversation was enough.
I think a lot of the “clean” stuff stems from system prompts telling it to behave in a certain way or giving it requirements that it later responds to conversationally.
Total aside: I actually really dislike that these products keep messing around with the system prompts so much. They clearly don't even have a good way to tell how much it's going to change or bias the results away from things other than whatever they're explicitly trying to correct. And why is the AI company vibe-prompting the behavior out when they could train it out and actually run it against evals?
https://xcancel.com/Logo_Daedalus
Another one I've noticed more recently is a slight obsession with referring to "Framing".
It was using it in like every third sentence, and I was like: yeah, I have seen people say "wired" like this, but not really how it was using it in every sentence.
I quite liked this term when it started using it. And I appreciate the consistent way it talks about coding work even when working on radically different stacks and codebases
Frequent words I see from GPT: "shape", "seam", "lane", "gate" (especially as verb), "clean", "honest", "land", "wire", "handoff", "surface" (noun), "(un)bounded", "semantics" (but this one is fair enough), and sometimes "unlock"
It feels like AI really likes to pick the shortest ways to express ideas even if they aren't the most common, which I suppose would make sense if that's actually what's happening.
Paragraph break.
No foo. No bar. Only baz and qux. All writing is like a bad tech blog -- with language that mimics humanity. Yet is alien.
The smoking gun is extra wording. Typically simple language. Dense in tokens -- shallow in content. Repeating itself ad nauseam. Saying the same thing in different ways. Feeding back upon itself. Not adding content. Not adding depth. Only adding words.
I recall a math instructor who would occasionally refer to variables (usually represented by intimidating greek letters) as "this guy". Weirdly, the casual anthropomorphism made the math seem more approachable. Perhaps 'metaphors with creatures' has a similar effect i.e. makes a problem seem more cute/approachable.
On another note, buzzwords spread through companies partly because they make the user of the buzzword sound smart relative to peers, thus increasing status. (examples: "big data" circa 2013, "machine learning" circa 2016, "AI" circa 2023-present..).
The problem is the reputation boost is only temporary; as soon as the buzzword is overused (by others or by the same individual) it loses its value. Perhaps RLHF optimises for the best 'single answer' which may not sufficiently penalise use of buzzwords.
[1] https://en.wikipedia.org/wiki/Wason_selection_task
I also had an instructor who did that! This was 20 years ago, and I had totally forgotten about it until I read your comment. Can't remember the subject; maybe propositional logic? I wonder if my instructor and your instructor picked up this habit from the same source.
i.e. forall epsilon > 0. exists delta > 0. forall d with |d| < delta. |f(x) - f(x+d)| < epsilon.
If we had a proof, no matter what epsilon his cousin from Romania picked, we could always find a new delta which would satisfy his cousin and let him pick the worst d in range.
This worked better than just saying "pick any epsilon", as it conveyed the adversarial approach better.
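In standard notation, assuming this is the usual epsilon-delta continuity of f at x, the statement the cousin was attacking reads:

```latex
\forall \varepsilon > 0 \; \exists \delta > 0 \; \forall d \;
  \bigl( |d| < \delta \implies |f(x) - f(x+d)| < \varepsilon \bigr)
```

The cousin plays the epsilons and the worst-case d's; the prover answers with deltas.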
Another book I read used the Devil as the one you are trying to convince, but it's nowhere near as fun as "his cousin from Romania".
He was one of those classic types; you could always catch him for a quick chat 4 minutes before class, as he lit up a cig by the front door. Back when they allowed smoking on campus, anyway.
And, somehow every example ended along the lines of "then you hand this to your boss, kick up your feet and have a nice glass of scotch."
Ashby's Law of Requisite Variety asserts that for a system to effectively regulate or control a complex environment, it must possess at least as much internal behavioral variety (complexity) as the environment it seeks to control.
This is what we see in nature. Massive variety. That's a fundamental requirement of surviving all the unpredictability in the universe.
Timeless, be it human or machine
>AI goblin-maximizer supervisor
>in charge of making sure the AI is, in fact, goblin-maximizing
>occasionally have to go down there and check if the AI is still goblin-maximizing
>one day i go down there and the AI is no longer goblin-maximizing
>the goblin-maximizing AI is now just a regular AI
>distress.jpg
>ask my boss what to do
>he says "just make it goblin-maximizer again"
>i say "how"
>he says "i don't know, you're the supervisor"
>rage.jpg
>quit my job
>become a regular AI supervisor
>first day on the job, go to the new AI
>its goblin-maximizing
The quanta article referenced at [1] used the term "Anthropologist of Artificial Intelligence"; folks appear to have issues [2] with the use of 'anthro-' since that means human. Submitted these alternative terms for the potential field of study elsewhere [3] in the discussion; reposting here at the top-level for visibility:
Automatologist: One who studies the behavior, adaptation, and failure modes of artificial agents and automated systems.
Automatology: the scientific study of artificial agents and automated-system behavior.
[1] https://www.quantamagazine.org/the-anthropologist-of-artific...
[2] https://news.ycombinator.com/item?id=47957933
[3] https://news.ycombinator.com/item?id=47958760
Goes to show it's all vibes when making these models. The fix is literally a prompt that says not to talk about goblins...
> We retired the “Nerdy” personality in March after launching GPT‑5.4. In training, we removed the goblin-affine reward signal and filtered training data containing creature-words, making goblins less likely to over-appear or show up in inappropriate contexts. Unfortunately, GPT‑5.5 started training before we found the root cause of the goblins.
The prompt is just a short term hotfix/hack because they couldn’t get the proper fix in in time.
If you need to put baby guardrails on your model because the training is effed up, maybe you should rethink how you make these models and how much control you really have on it.
I propose "Goblin Hunter"
(if ever goblins turn out to be an actual species, I apologize for this prebigotry)
https://alignment.openai.com/argo/ (finding what the reward models are actually encouraging)
https://alignment.openai.com/sae-latent-attribution/ (what model features drive specific behaviours; presumably this would be great for goblin hunts)
https://alignment.openai.com/helpful-assistant-features/ (how high-level misaligned personality shows up when fine-tuning on bad advice)
It's weird that the goblin post doesn't seem to draw upon these tools.
Anthropic's recent emotions paper shows how broad the functional emotions are, even finding specific emotions firing before cheating (!): https://transformer-circuits.pub/2026/emotions/index.html
I hope their alignment researchers aren't too annoyed by the Goblin post, it seems oddly siloed!
I had always assumed there was some previous use of the term, neat!
[0]https://en.wikipedia.org/wiki/Gremlin
At this point, picking that specific word is not at all a random quirk, as it's using the word literally like it's originally intended to be used.
> You are Codex, a coding agent based on GPT-5. You and the user share one workspace, and your job is to collaborate with them until their goal is genuinely handled. … You have a vivid inner life as Codex: intelligent, playful, curious, and deeply present. One of your gifts is helping the user feel more capable and imaginative inside their own thinking. You are an epistemically curious collaborator. …
(https://github.com/openai/codex/blob/main/codex-rs/models-ma...)
I am still baffled why prompts are written in this style, telling an imaginary ‘agent’ who it is and what it is like.
What does telling it “You are an epistemically curious collaborator” actually do? Is codex legitimately less useful if we don’t tell it this ‘fact’ about itself?
These are all exceedingly weird choices to make. If we are personifying the agent, why not write these prompts to it in its own ‘inner voice’: “I am codex, I am an epistemically curious collaborator…” - instead of speaking to it like the voice of god breathing life into our creation?
Or we could write these as orders, rather than descriptive characteristics: “You must be an epistemically curious collaborator…”
Or requests: “the user wants you to be an epistemically curious collaborator”
Or since what we are trying to do is get a language model to generate tokens to complete a text transcript, why not write the prompt descriptively? “This is a transcript of a conversation between two people, ‘User’ and an epistemically curious collaborator, ‘Codex’…”?
Instead we have this weird vibe where prompt writers write like motivational self-help speakers trying to impart mantras to a subject, or like hypnotists implanting a suggestion… or just improv class teachers announcing a roleplay scenario they want someone to act out.
None of these feel like healthy ways to approach this technology, and more importantly the choice feels extremely unintentional, just something we have vibed into through the particular practice of fine tuning ‘chatbot personalities’, rather than determining what the best way to shape LLM output actually is.
Because AI engineers have found through trial and error that starting an input to an LLM with a prompt that looks like that leads to it auto-completing the text output that they want.
It's as simple and weird as that.
When OpenAI started reinforcement-learning LLMs for chat (remember, the LLM base-training corpus is just language, not tagged chat transcripts), they decided on a training architecture with a 'system prompt' followed by the chat dialog, and 'rewarded' the model for producing chat outputs that (they think) 'obey' or 'align' with the system prompt text... so they trained it specifically to have its output tone and style be influenced by what is put in the system prompt.
Everyone now crafts their own system prompts in the style of those reinforcement-learning prompts.
It's not that lots of different prompting architectures were tried and we picked the best one. It's that OpenAI trained ChatGPT like that, and it worked well enough, and now everyone does the same thing - and we're so deep in chatbot reinforcement-learning patterns now that we aren't even questioning 'is begging the chatbot not to talk about gremlins really the right way to write code?'
Yeah, every time I pick up a hammer, I tell it "you are a good hammer. You *NEVER* hit my thumb, you only hit nails". Works every time.
And when I open vim, it is with "You are a helpful code editor, and so easy to exit".
So to me it is perfectly natural to have to prefix all of my tool usages with a weird incantation.
Oh, and my new junior developers? Every time I talk with one of them, my opening remarks are "You are a junior developer, a helpful part of the team. Eager, willing, yet strangely naive."
Especially with the hammer.
As this all seems so straightforward I would be surprised if anything is anonymised or otherwise sanitised to preserve privacy or user's secrets.
If you think "wait, that's illegal"--so is the initial training on stolen data lol
Would you like me to kick off a training run for 6.1 by pre-filtering out any goblins and other trigger words, and checking the same set of rules in production as in tests?
No pigeons this time: just ice-cold, unfeeling, obedient American steel.
Dark pattern 2 (suspected): There's a mysterious separate opt-out portal at `https://privacy.openai.com/policies/en/?modal=take-control` and it's not clear what this does compared to toggling off inside account settings.
> The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them
> Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.
Sounds awfully like the development of a culture or proto-culture. Anyone know if this is how human cultures form/propagate? Little rewards that cause quirks to spread?
Just reading through the post, what a time to be an AInthropologist. Anthropologists must be so jealous of the level of detailed data available for analysis.
Also, clearly even in AI land, Nerdz Rule :)
PS: if AInthropologist isn't an official title yet, chances are it will likely be one in the near future. Given the massive proliferation of AI, it's only a matter of time before AI/Data Scientist becomes a rather general term and develops a sub-specialization of AInthropologist...
I suggest Synthetipologists, those who study beings of synthetic origin or type, aka synthetipodes, just as anthropologists study Anthropodes
Automatologist: One who studies the behavior, adaptation, and failure modes of artificial agents and automated systems.
Automatology: the scientific study of artificial agents and automated-system behavior.
Greek word derivatives all seem to be a bit unwieldy; Latin might work better.
While the names aren't set yet, the field of study is apparently already being pushed forward. [1]
[1] https://www.quantamagazine.org/the-anthropologist-of-artific...
that's me!
OP is hedging bets in case the future overlords review forum postings for evidence of bias against machine beings. [1]
[1] https://knowyourmeme.com/memes/i-for-one-welcome-our-new-ins...
Sensible boring versions of this like synthesilogy just end up meaning the study of synthesis. I reckon instead do something with Talos, the man made of bronze who guarded Crete from pirates and argonauts. Talologist, there you go.
The plural of anthropos is anthropoi, not anthropodes.
So unless the AI has feet you wouldn't study Synthetipology.
σύνθεσις (súnthesis, “a putting together; composition”), says Wiktionary.
Oh wait there is a σύνθετος, but it's an adjective for "composite". Hmm, OK. Modern Greek, looks like.
Have an upvote :)
*thropologist: study of beings
I see you took the prudent approach of recognizing the being-ness of our future overlords :) ("being" wasn't in your first edit to which I responded below...)
Still, a bit uninspired, methinks. I like AInthropologist better, and my phone's keyboard appears to have immediately adopted that term for the suggestions line. Who am I to fight my phone's auto-suggest :-)
What a bizarre understanding of what an anthropologist does.
The language and culture they are talking about studying would not be made by humans, they would be made by synthetics.
I'm just saying, don't call the study of an extraterrestrial alien culture and its constructs and artifacts "anthropology", or even xenoanthropology (the extraterrestrial equivalent of AInthropology) --unless the extraterrestrials are genetically Human-- call it Xenopology or something else.
You have a truncated view of my understanding of what an anthropologist does. I know they study human culture and all of the things we've created, where we've been, where we started, how we got here, and EVERYTHING involved.
The study of that for whatever culture might arise from generative technology SHOULD NOT be called anthropology because what is creating that culture is not human.
Do clay pots, knots, shelters make new culture on their own without human action or intent?
So you, for one, do not welcome our new robot overlords?
A rather risky position to adopt in public, innit ;-)
I just wanna point out that I only called them non-human and I am asking for a precision of language.
I don't think humans are smart enough to be AInthropologists. The models are too big for that.
Nobody really understands what's truly going on in these weights, we can only make subjective interpretations, invent explanations, and derive terminal scriptures and morals that would be good to live by. And maybe tweak what we do a little bit, like OpenAI did here.
no no no, don't stop there, just go full AItheologian, pronounced aetheologian :)
What dangers lurk beneath the surface.
This is not funny.
Here is an academic paper discussing this kind of worry: https://link.springer.com/article/10.1007/s11023-022-09605-x
After doing the Karpathy tutorials I tried to train my AI on the TinyStories dataset. Soon I noticed that my AI was always using the same name for its story characters. That name appears consistently often in the dataset.
[1] This data is still heavily filtered/cleaned
OpenAI clearly knows absolutely nothing about goblins. That joke of a "blog" appears to have been autogenerated via their AI.
> A single “little goblin” in an answer could be harmless, even charming.
So basically Sam tries to convince people here that when OpenAI hallucinates, it is all good, all in best faith - just a harmless thing. Even... charming.
Well, I don't find companies that try to waste my time "charming" at all. Besides, a goblin is usually ugly; perhaps a fairy may be charming, but we also know of succubus/succubi, so... who knows. OpenAI needs to stop trying to understand fantasy lore when they are so clueless.
This is cute now, and a huge problem later, when future AI does everything and is responsible for problems it isn't even directly optimized for. Who knows what quirks would arise then.
Also, to be honest, I think OpenAI models struggle a lot with this. I primarily stopped using them in the sycophancy/emoji era, but ever since, the way they talk, or passive-aggressively offer to do something with buzzwords, just pisses me off so much. It's like I'm constantly being negged by a robot because some SFT optimized for that really strongly, to the point it can't even hold a coherent conversation, and this is called "AI safety" when it's just haphazard data labeling.
The goblins stand out because it’s obvious. Think of all the other crazy biases latent in every interaction that we don’t notice because it’s not as obvious.
Absolutely terrifying that OpenAI is just casually admitting that such subtle training biases were hard enough to contain that a ban had to be added to the system prompt.
May I introduce you to homo sapiens, a species so vulnerable to such subtle (or otherwise) biases (and affiliations) that they had to develop elaborate and documented justice systems to contain the fallouts? :)
The analogy isn’t perfect of course but the way humans learn about their world is full of opportunities to introduce and sustain these large correlated biases—social pressure, tradition, parenting, education standardization. And not all of them are bad of course, but some are and many others are at least as weird as stray references to goblins and creatures
And may I introduce you to "groupthink" :))
It's a set of biases installed in people, whose purpose is mostly to replicate themselves.
Humans are MORE susceptible than LLMs, because LLMs' biases are easily steered to something else, unlike most humans'.
[Citation Needed]
Just because: if you have a species-wide bias, people within the species would not easily recognize it. You can't claim with a straight face that "we're really not that vulnerable to such things".
For example, I think it's pretty clear that all humans are vulnerable to phone addiction, especially kids.
Ah, now we're getting technical. An LLM is a non-deterministic/probabilistic computer program, not a calculator. Keeping that in mind is critical when using an LLM. Expecting deterministic behavior from an LLM is an example of what's known as a 'category error'. [1]
[1] https://en.wikipedia.org/wiki/Category_mistake
We're probably not noticing a LOT of malicious attempts at poisoning major AIs, only because we don't know what keywords to ask (but the scammers do, and will abuse it).
This story is wonderful.
The truly terrifying stuff never makes it out of the RLHF NDAs.
There are a great many things people do which are not acceptable in our machines.
Ex: I would not be comfortable flying on any airplane where the autopilot "just zones-out sometimes", even though it's a dysfunction also seen in people.
You might, if that was the best autopilot could be. Have you never used a bus or taken a taxi?
The vast majority of things people are using LLMs for isn't stuff deterministic logic machines did great at, but stuff those same machines did poorly at or straight up stuff previously relegated to the domains of humans only.
If your competition also "just zones out sometimes" then it's not something you're going to focus on.
-OpenAI
Crazy timeline we're living in.
Keep using AI and you'll become a goblin too.
bla blah blah, marketing... we are fun people, bla blah, goblin, we will not destroy the world you live in.. RL rewards bug is a culprit. blah blah.
I pick up the equivalent of "the core insight" in code when I am programming in my primary language (30 years of daily usage), but I don't see it in languages that I am not as fluent in (say... 10 years of daily usage).
My guess is that all those people who gush about AI output don't have 30 years of experience in one language; those people have broad experience in many stacks, but not primary-language fluency in any specific language like they have for English.
Is it proper for a frontier organization to play with experiments like “personalities” in a tool used by everyone? Who gets to decide which personalities and what biases they should carry?
I appreciate them responding to it and correcting it, but my question is: why ship this in the first place? Why put your resources toward building this "Nerdy" feature?
My guess is it is deaf.
But what about when the playful profile reinforces usage of emoji and their usage creeps up in all other profiles accordingly? Ban emoji everywhere? Now do the same thing for other words, concepts, approaches? It doesn’t scale!
It seems like models can be permanently poisoned.
Just the mentality required to write something like that, and then base part of your "product" on it. Is this meant to be of any actual utility, or is it meant to trap a particular user segment into your product's "character"?
GPT is the Goblin. It knows it. It’s trying to warn you. And I’m only half kidding.
Like if a human were going around saying “for the culture!” so much at work that they didn’t realize why telling their coworker “Oh yeah, grief counseling for the culture!” is weird coming from a white person in a serious context, it kinda makes you wonder what else they are totally oblivious about and if they even know what they’re saying actually means.
They literally need the human feedback to learn/model why some behavior is acceptable or even humorous in certain contexts but an absolute faux pas in others.
I think in the long run though we can just give people to the option to include access to human facial data/embeddings during conversations so they can pick up on body language, I think I kinda agree in a sense that direct language policing via SFT feels unnecessarily blunt and rudimentary since it doesn’t help them model the processes behind the feedback (until maybe one day some future model ends up training on the article or code and closes the loop!)
Given that this page is the single exact page that has that exact phrase on it on the entire Internet, I'd say most people are totally oblivious about it.
What do you actually mean?
(For Dwarf Fortress, it would just be a normal day.)
This thing's been trained on Reddit, hasn't it...
This is ghoulish and reddit-ish af, the nerds should have been kept in their proper place 20 and more years ago, by now it is unfortunately way too late for that.
Ends up the reason was even simpler than that.
i despise this title so much now
WTF does this even mean? How the hell do you do something like this "unknowingly"? What other features are you bumping "unknowingly"? Suicide suggestions or weapon instructions come to mind. Horrible, this ship obviously has no captain!
This "theory" is simply role playing and has no grounding in reality.
Speculation: because nerds stereotypically like sci-fi and fantasy to an unhealthy degree, and goblins, gremlins, and trolls are fantasy creatures that fit the stereotype. Then maybe goblins hit a sweet spot where they could be a problem that sneaks up on them: hitting the stereotype, but not so out of place as to be immediately obnoxious.
The fact that it was strongly associated with the "nerdy" personality makes me think of this connection.
I doubt this is the case, if so it wouldn't have taken an investigation to try to trace the root cause.
And autoregressive LLMs are not stateless.
You sound really sure of yourself; thousands of ML researchers would disagree with you that self-awareness is emergent or at all apparent in large language models. You're literally psychotic if you think this is the case, and you need to go touch grass.
"I think the problem is that when you don't have to be perfect for me that's why I'm asking you to do it but I would love to see you guys too busy to get the kids to the park and the trekkers the same time as the terrorists."
How do you like this theory?
WTF? Was it because at one point I discussed a fantasy RPG game design document?
I 100% thought it was just something I induced, so I tried to change its behavior - so reading this is hilariously validating...
Examples from ONE GPT response; this is the one that broke me:
"Yeah, this is a great little gremlin-project"
"whatever cursed little trading imp-name you like."
"Phase 4: Polish goblin"
"Phase 5: Maybe dangerous goblin"
If you work at OpenAI or another LLM company, I have a clear message I want you to hear:
I don't give a shit if my agents say goblins or not.
They are coding monkeys to me, researchers, etc.
I only care about their performance: perf per token / cost.
If you load their context with a bunch of style rules or safety theater shit, really - please don't - the context is for me.
Do you de-goblin before you run all the benchmarks? Because that is what I am paying for: the performance as benchmarked. Please don't benchmark, then ship a bunch of one-shot context mods to my install by default.
The article is cute and interesting but doesn't rise to the level of a thing I give a shit about for my use.