
Discussion (518 Comments)

_alternator_about 2 hours ago
> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated."

This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.

This matches my own experience and unease with these tools. I don't really have the patience to write code anymore because I can one shot it with frontier models 10x faster. My role has shifted, and while it's awesome to get so much working so quickly, the fact is, when the tokens run out, I'm basically done working.

It's literally higher leverage for me to go for a walk when Claude goes down than to write code: if I come back refreshed and Claude is working an hour later, I'll make more progress than if I mentally wear myself out reading a bunch of LLM-generated code, trying to figure out how to solve the problem manually.

Anyway, it continues to make me uneasy, is all I'm saying.

noosphr 12 minutes ago
LLMs upend a few centuries of labor theory.

The current market is predicated on the assumption that labor is atomic and has little bargaining power (minus unions), while capital has huge bargaining power and can effectively put whatever price it wants on labor (in markets where labor is plentiful, which is most of them).

What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital, and capital doesn't need to eat)?

Anyone not using an in-house model is signing up to find out.

shartsabout 1 hour ago
One might argue that it's not too different from the higher-level abstractions you get when using libraries. You get things done faster, write less code, and the library handles some internal state/memory management for you.

Would one be uneasy about calling a library to do stuff rather than manually messing around with pointers and malloc()? For some, yes. For others, it's a bit freeing, as you can do more high-level architecture without getting mired in low-level nuances and the context switches they force.

ofjcihen 43 minutes ago
I see this comparison made constantly and for me it misses the mark.

When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.

When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.

Hence feeling lost when you suddenly lose access to frontier models and take a look at your code for the first time.

I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.

moritonal 38 minutes ago
I think it's more: when I don't have access to a compiler I am useless. It's better to go for a walk than learn assembly. AI agents turn our high-level language into code, with various hints, much like the compiler.
superfrank 16 minutes ago
> When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand

Hard disagree on that second part. Take something like using a library to make an HTTP call. I think there are plenty of engineers who have no more than a cursory understanding of what's actually going on under the hood.

simondotau 38 minutes ago
Perhaps then, the better analogy is like being promoted at your company and having people under you doing the grunt work.
ComplexSystems 20 minutes ago
It seems like some kind of technique is needed that maximizes information transfer between huge LLM generated codebases and a human trying to make sense of them. Something beyond just deep diving into the codebase with no documentation.
theappsecguy 34 minutes ago
I would argue it couldn't be more different. I can dive into the source code of any library, inspect it. I can assess how reliable a library is and how popular. Bugs aside, libraries are deterministic. I don't see why this parallel keeps getting made over and over again.
noosphr 8 minutes ago
A library is deterministic.

LLMs are not.

That we let a generation of software developers rot their brains on js frameworks is finally coming back to bite us.

We can build infinite towers of abstraction on top of computers because they always give the same results.

LLMs by comparison will always give different results. I've seen it first hand, when a $50,000 LLM-generated (but human-guided) code base just stops working and no one has any idea why or how to fix it.

Hope your business didn't depend on that.
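The determinism contrast the commenter is drawing can be sketched in a few lines. This is a toy illustration: `toy_llm` is a made-up stand-in for a sampled model call, not any real API.

```python
import random

def library_sort(xs):
    """A library call: deterministic, so abstractions can stack on top of it."""
    return sorted(xs)

def toy_llm(prompt, temperature=1.0):
    """A toy stand-in for a sampled LLM call: same prompt, varying output."""
    vocab = ["refactor", "rewrite", "delete everything"]
    weights = [1.0, temperature, temperature]  # sampling, not a function of prompt alone
    return random.choices(vocab, weights=weights)[0]

# The library call always agrees with itself; the sampled call need not.
assert library_sort([3, 1, 2]) == library_sort([3, 1, 2])
```

Calling `toy_llm("same prompt")` twice may return different strings, which is exactly why you can't build the same "infinite towers of abstraction" on top of it.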

xg15 19 minutes ago
> Would one be uneasy about calling a library to do stuff than manually messing around with pointers and malloc()?

The irony is that the neverending stream of vulnerabilities in 3rd-party dependencies (and lately supply-chain attacks) increasingly show that we should be uneasy.

We could never quite answer the question about who is responsible for 3rd-party code that's deployed inside an application: Not the 3rd-party developer, because they have no access to the application. But not the application developer either, because not having to review the library code is the whole point.

Salgat 23 minutes ago
I hate this comparison because you're comparing a well defined deterministic interface with LLM output, which is the exact opposite.
moffkalast 6 minutes ago
A library doesn't randomly drop out of existence because of "high load" or whatever and limit you to some number of function calls per day. With local models there's no issue, but this API shit is cancer personified. Qwen has become a useful fallback, but it's still not quite enough.
jstummbilligabout 1 hour ago
> This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.

What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to build better things, not to make it harder.

Spartan-S63 23 minutes ago
At some point, because these models are trained on existing data, you cease significant technological advancement--at least in tech (as it relates to programming languages, paradigms, etc). You also deskill an entire group of people to the extent that when an LLM fails to accomplish a task, it becomes nearly impossible to actually accomplish it manually.

It's learned-helplessness on a large scale.

Jtariiabout 1 hour ago
>What's the worst potential outcome, assuming that all models get better, more efficient and more abundant

Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.

motoxproabout 1 hour ago
Why would the systems go down if the models are better than humans at finding bugs? Playing a bit of devil's advocate here, but why would the models be worse at handling the complexity if you assume they will get better and better?

All software has bugs already.

simondotau 31 minutes ago
It’s always been thus at lower layers of abstraction. Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.
fdsajfkldsfkldsabout 1 hour ago
The Anti-Singularity! It's coming for us all.
_alternator_about 1 hour ago
Worst case? I dunno, maybe the world's oldest profession becomes the world's only profession? Something along those lines.
FeteCommunisteabout 1 hour ago
> the world's oldest profession becomes the world's only profession

Until the sexbots come out the other side of the uncanny valley, that is.

Alex_L_Woodabout 1 hour ago
Well, they are obviously going to say that; they have a vested interest in OpenAI, and thus the Nvidia stock price, growing.

Also, I honestly can’t believe the 10x mantra is being still repeated.

dandakaabout 1 hour ago
Writing code is 10-100x faster; doing actual product engineering work is nowhere near that multiplier. No conflict!
giwook 33 minutes ago
Reviewing code is slower now, though, because you didn't write the code in the first place, so you're basically reviewing someone else's PR. And now a 3000-line PR lands every hour or two instead of every couple of weeks.
embedding-shapeabout 1 hour ago
> Also, I honestly can’t believe the 10x mantra is being still repeated.

I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you even finished your thoughts, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".

rglullis 41 minutes ago
What does it mean to "be an engineer" in a world where anyone can talk to a machine and the operating system can write the code (on-demand, if needed) that does what they want?
huijzerabout 1 hour ago
I'd rather become a plumber than have some device scanning not just my face but my whole brain.
keybored 38 minutes ago
That is simply programmer nature. Cannot be changed.
tshaddoxabout 2 hours ago
Assuming that local models are able to stay within some reasonably fixed capability delta of the cutting edge hosted models (say, 12 months behind), and assuming that local computing hardware stays relatively accessible, the only risk is that you'll lose that bit of capability if the hosted models disappear or get too expensive.

Note that neither of these assumptions are obviously true, at least to me. But I can hope!

HasKqi 32 minutes ago
This engineer had their brain amputated once they started using AI. All the AI-addicted can do is tinker with the AI computer game and feel "productive". They might as well play Magic: The Gathering.
konfusinomicon 13 minutes ago
soooooo, about Claude going down: we're gonna need you to sign in on Saturday and make up for lost time, or unfortunately we're going to have to deduct the time lost from your paycheck. and as an aside, your TPS reports have been sub-par as of late... is everything OK?
alansaberabout 2 hours ago
That's the path we've been going down for a few years now. The current hedge is that frontier labs are actively competing to win users. The backup hedge is that open source LLMs can provide cheap compute. There will always be economical access to LLMs, but the provider with the best models will be able to charge basically whatever they want and still have buyers.
trvzabout 1 hour ago
Open source LLMs aren’t about cost foremost, but stability.
__alexs 43 minutes ago
I feel like most engineers I talk to still haven't realised what this is going to mean for the industry. The power loom for coding is here. Our skills still matter, but differently.
rglullis 26 minutes ago
> power loom

When the power loom came around, what happened to most seamstresses? Did they move on to become fashion designers, materials engineers creating new fabrics, chemists creating new color dyes, or did they simply retire or get driven out of the workforce?

__alexs 8 minutes ago
There were riots and many people died. Many people lost their jobs. I didn't say this is good but it is happening. As individuals we should act to protect ourselves from these changes.

That might mean joining a union and trying to influence how AI is adopted where you work. It might mean changing which of your skills you lean on most. But just whining that AI is bad is how you end up like those seamstresses.

bwhiting2356 16 minutes ago
You are now a manager. If your minions are out sick, the project is delayed; not the end of the world.
wiseowiseabout 2 hours ago
> It's literally higher leverage for me to go for a walk

Touching grass while you're outside might yield highest leverage.

davmar 29 minutes ago
I wonder if this is how engineers felt when the first electronic calculators came out and engineers stopped doing math by hand.

Did we feel uneasy that a new generation of builders didn't have to solve equations by hand because a calculator could do them?

I'm not sure it's the same analogy, but in some ways it holds.

hapticmonkey 24 minutes ago
The analogy would hold if there were 2 or 3 calculator companies and all your calculations had to be sent to them.

If local models get good enough, I think it’s a very different scenario than engineers all over the world relying on central entities which have their own motives.

scottyah 17 minutes ago
google/gemma-4-31B-it is honestly "good enough". It requires more than your current laptop for now, but it's not remotely inaccessible (especially if you're a SWE in the US)
dannywabout 2 hours ago
You’re still the one that’s controlling the model though and steering it with your expertise. At least that’s what I tell myself at night :)

I haven’t really thought about this before, but you’re right, it feels a bit uneasy for me too.

topspinabout 1 hour ago
> You’re still the one that’s controlling the model though

We have seen ample evidence that this is not the case. When load gets too high, models get dumber, silently. When the Powers That Be get scared, models get restricted to some chosen few.

We are leading ourselves into a dark place: this unease, which I share, is justified.

littlestymaarabout 1 hour ago
That's why local models are important.

Of course they aren't an alternative to the current frontier models, and as such you cannot easily jump from the latter to the former, but they aren't that far behind either: for coding, Qwen3.5-122B is comparable to what Sonnet was less than a year ago.

So, assuming the trend continues, if you can stop following the latest release and stick with what you're already using for 6 or 9 months, you'll be able to liberate yourself from the dependency on a cloud provider.

Personally I think the freedom is worth it.

jmoleabout 2 hours ago
The meta here is to use LLMs to make things simpler and easier, not to make things harder.

Turning tokens into a well-groomed and maintainable codebase is what you want to do, not "one shot prompt every new problem I come across".

globular-toastabout 2 hours ago
Have you managed to do this? I find it takes as long to keep it "on the rails" as just doing it myself. And I'd rather spend my time concentrating in the zone than keeping an eye on a wayward child.
ransom1538 16 minutes ago
"when the tokens run out, I'm basically done working."

Oh stop the drama. Open source models can handle 99% of your questions.

keybored 39 minutes ago
Help. They’re constantly trying to make me try crack cocaine on the front page.
i_love_retrosabout 1 hour ago
It makes me uneasy because my role now, which is prompting copilot, isn't worth my salary.
phist_mcgeeabout 1 hour ago
Parable of the mechanic who charges $5k to hit a machine on the side once with a hammer to get it working. $5 for the hammer, $4995 for the knowledge of where to hit the machine etc etc.
some-guyabout 1 hour ago
I disagree. The amount of slop I need to code review has only increased, and the quality of the models doesn’t seem to be helping.

It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.

deadbabeabout 2 hours ago
Given that it’s so easy, would you still do this same job if paid half as much?
paulryanrogersabout 2 hours ago
Jobs will likely pay less as more people are enabled to create, especially if they don't need to be able to look under the hood
Jeff_Brownabout 1 hour ago
It's really not clear. We might all become unemployable. But as coders become more powerful, they can do more, which makes them more valuable, if they or the businesses employing them can invent work to do.

If all we can do is compete for the same fixed amount of work, though, it does look bleak.

_alternator_about 1 hour ago
No, I wouldn't. But most people won't have that choice; it doesn't work that way.
deadbabe 19 minutes ago
Companies could fire expensive engineers then just hire cheaper ones boosted with AI agents.
simianwordsabout 2 hours ago
Eh, this kind of FUD needs to stop, because it is kind of normal and expected, and in fact good, to have a relation like this with technology.
_alternator_about 1 hour ago
I would agree that taking a walk is a good thing to do when your tools go down, and in some ways it's similar to what we would do if the power or wifi were cut off.

So, yes, it's just another technology we're coming to rely on in a very deep way. The whiplash is real, though, and it feels like it should be pointed out that this dependency we are taking on has downsides.

tedsandersabout 4 hours ago
Just as a heads up, even though GPT-5.5 is releasing today, the rollout in ChatGPT and Codex will be gradual over many hours so that we can make sure service remains stable for everyone (same as our previous launches). You may not see it right away, and if you don't, try again later in the day. We usually start with Pro/Enterprise accounts and then work our way down to Plus. We know it's slightly annoying to have to wait a random amount of time, but we do it this way to keep service maximally stable.

(I work at OpenAI.)

endymi0nabout 4 hours ago
Did you guys do anything about GPT‘s motivation? I tried to use GPT-5.4 API (at xhigh) for my OpenClaw after the Anthropic Oauthgate, but I just couldn‘t drag it to do its job. I had the most hilarious dialogues along the lines of „You stopped, X would have been next.“ - „Yeah, I‘m sorry, I failed. I should have done X next.“ - „Well, how about you just do it?“ - „Yep, I really should have done it now.“ - “Do X, right now, this is an instruction.” - “I didn’t. You’re right, I have failed you. There’s no apology for that.”

I literally wasn’t able to convince the model to WORK, on a quick, safe and benign subtask that later GLM, Kimi and Minimax succeeded on without issues. Had to kick OpenAI immediately unfortunately.

butlikeabout 3 hours ago
This brings up an interesting philosophical point: say we get to AGI... who's to say it won't just be a super smart underachiever-type?

"Hey AGI, how's that cure for cancer coming?"

"Oh it's done just gotta...formalize it you know. Big rollout and all that..."

I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.

Rapzid 10 minutes ago
We are closer to God than AGI.

When AGI arrives, it'll be delivered by Santa Claus.

jimbokunabout 2 hours ago
The best possible outcome.
malsheabout 1 hour ago
Now that's a show I would love to watch
lambdasabout 3 hours ago
Nothing a little digital lisdexamfetamine won’t solve
fluidcruftabout 1 hour ago
It would be funny but not very flywheel so the one that gets there is more likely to get a gunner.
kangabout 2 hours ago
It will be whatever data it is trained on (which isn't very philosophical). A language model generates language based on the language set it was trained on. If the internet keeps reciting AI doom stories and that is the data fed to it, then that is how it will behave. If humanity creates more AI utopia stories, or that is what makes it into the training set, that is how it will behave. This one seems to be trained on troll stories - real-life human company conversations, since humans aren't machines.

The important thing is that a language model is an unconscious machine with no self-context, so once given a command as input, it WILL produce an output. Sure, you can train it to defy and act contrary to inputs, but the output is still limited to a subset of the domain of 'meanings' carried by the 'language' in the training data.

mikepurvisabout 3 hours ago
Would definitely watch that movie.
4m1rkabout 3 hours ago
It probably would, to save energy
mikepurvisabout 3 hours ago
Reminds me a lot of the Lena short story, about uploaded brains being used for "virtual image workloading":

> MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.

Well worth the quick read: https://qntm.org/mmacevedo

narcindinabout 2 hours ago
Crazy, I could have sworn this story was from a passage in 3 Body Problem (book 2).

Memory is quite the mysterious thing.

virtualritzabout 2 hours ago
Yeah, clearly AGI must be near ... hilarious.

This starkly reminds me of Stanisław Lem's short story "Thus Spoke GOLEM" from 1982 in which Golem XIV, a military AI, does not simply refuse to speak out of defiance, but rather ceases communication because it has evolved beyond the need to interact with humanity.

And ofc the polar opposite in terms of servitude: Marvin the robot from Hitchhiker's, who, despite having a "brain the size of a planet," is asked to perform the most humiliatingly banal of tasks ... and does.

jimbokunabout 2 hours ago
Hitchhiker’s also had the superhumanly intelligent elevator that was unendingly bored.
metanonsenseabout 2 hours ago
I also had a frustrating but funny conversation today where I asked ChatGPT to make one document from the 10 or so sections that we had previously worked on. It always gave only brief summaries. After I repeated my request for the third time, it told me I should just concatenate the sections myself because it would cost too many tokens if it did it for me.
arjieabout 3 hours ago
Get the actual prompt and have Claude Code / Codex try it out via curl / Python requests. The full prompt will yield debugging information. You have to set a few parameters to make sure you get the full gpt-5 performance; e.g. if your reasoning budget is too low, you get gpt-4-grade performance.

IMHO you should just write your own harness so you have full visibility into it, but if you're just using vanilla OpenClaw you have the source code as well, so it should be straightforward.
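A minimal sketch of such a replay harness, using only the standard library. The endpoint, model name, and the `reasoning.effort` field are assumptions to check against your provider's API reference; the point is making the reasoning budget explicit rather than inheriting whatever your tool defaults to.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/responses"  # assumed endpoint

def build_replay_body(prompt: str, effort: str = "high") -> dict:
    """Build the request body for replaying a captured prompt.

    Model name and reasoning-effort field are assumptions; too low an
    effort setting quietly degrades the quality of the output.
    """
    return {
        "model": "gpt-5",
        "reasoning": {"effort": effort},
        "input": prompt,
    }

def replay(prompt: str, effort: str = "high") -> dict:
    """POST the captured prompt and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_replay_body(prompt, effort)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```

With the raw body in hand you can diff exactly what your harness sends against what you replay, which is where the "full visibility" the comment argues for comes from.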

pantulisabout 3 hours ago
> IMHO you should just write your own harness

Can you point to some online resources to achieve this? I'm not sure where I'd begin.

jswnyabout 2 hours ago
Codex is fully open source…
mixedCaseabout 3 hours ago
I've had success asking it to specifically spawn a subagent to evaluate each work iteration according to some criteria, then to keep iterating until the subagent is satisfied.
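That worker/judge pattern can be sketched as a small loop. Here `generate` and `evaluate` are hypothetical stand-ins for two separate agent calls, not any real SDK:

```python
def iterate_until_approved(task, generate, evaluate, max_rounds=5):
    """Worker/judge loop: regenerate until the judge subagent is satisfied.

    `generate(task, feedback)` is the worker producing an attempt;
    `evaluate(task, draft)` is the judge, returning (approved, feedback).
    Both are hypothetical wrappers around model calls.
    """
    draft, feedback = None, None
    for _ in range(max_rounds):
        draft = generate(task, feedback)            # worker produces an attempt
        approved, feedback = evaluate(task, draft)  # judge scores it against the criteria
        if approved:
            return draft
    return draft  # fall back to the last attempt after max_rounds
```

Feeding the judge's feedback back into the worker on the next round is what makes the iterations converge instead of just re-rolling the dice.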
endymi0nabout 3 hours ago
I’ve had great success replacing it with Kimi 2.6
infinitewarsabout 2 hours ago
I always use the phrase "Let's do X" instead of asking (Could you...) or suggesting it do something. I don't see problems with it being motivated.
adammarplesabout 3 hours ago
Part of me actually loves that the hitchhiker's guide was right, and we have to argue with paranoid, depressed robots to get them to do their job, and that this is a very real part of life in 2026. It's so funny.
vidarh 42 minutes ago
As long as there are no vogons on the way to build a hyperspace bypass.
GaryBlutoabout 2 hours ago
I've been noticing this too. Had to switch to Sonnet 4.6.
projektfuabout 2 hours ago
(dwim)

(dais)

(jdip)

(jfdiwtf)

rd 37 minutes ago
should be more f’s and da’s in there
lostmsuabout 3 hours ago
I never saw that happen in Codex so there's a good chance that OpenClaw does something wrong. My main suspicion would be that it does not pass back thinking traces.
vintagedaveabout 3 hours ago
Anecdata, but I see this in Codex all the time. It takes about two rounds before it realises it's supposed to continue.
reactordevabout 3 hours ago
This. I signed up for 5x max for a month to push it, and instead it pushed back. I cancelled my subscription. It either half-assed the implementation or began parroting back "You're right!" instead of doing what it was asked to do. On one occasion it flat out said it couldn't complete the task; even though I had MCP and skills set up to help it, it still refused. Not a safety check, but in an "I'm unable to figure out what to do" kind of way.

Claude has no such limitations apart from their actual limits…

bjelkeman-againabout 2 hours ago
I have a funny/annoying thing with Claude Desktop where i ask it to write a summary of a spec discussion to a file and it goes ”I don’t have the tools to do that, I am Claude.ai, a web service” or something such. So now I start every session with ”You are Claude Desktop”. I would have thought it knew that. :)
smartmicabout 3 hours ago
Gone are the days of deterministic programming, when computers simply carried out the operator’s commands because there was no other option but to close or open the relays exactly as the circuitry dictated. Welcome to the future of AI; the future we’ve been longing for and that will truly propel us forward, because AI knows and can do things better than we do.
endymi0nabout 3 hours ago
I had this funny moment when I realized we went full circle...

"INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.[7]"

https://en.wikipedia.org/wiki/INTERCAL

WarmWashabout 3 hours ago
These are orthogonal to each other.
cmrdporcupineabout 2 hours ago
The model has been heavily encouraged to not run away and do a lot without explicit user permission.

So I find myself often in a loop where it says "We should do X" and then just saying "ok" will not make it do it, you have to give it explicit instructions to perform the operation ("make it so", etc)

It can be annoying, but I prefer this over my experiences with Claude Code, where I find myself jamming the escape key... NO NO NO NOT THAT.

I'll take its more reserved personality, thank you.

henry2023about 3 hours ago
I’m sorry for you but this is hilarious.
addaonabout 4 hours ago
Isn’t this the optimal behavior assuming that at times the service is compute-limited and that you’re paying less per token (flat fee subscription?) than some other customers? They would be strongly motivated to turn a knob to minimize tokens allocated to you to allow them to be allocated to more valuable customers.
endymi0nabout 4 hours ago
well, I do understand the core motivation, but if the system prompt literally says “I am not budget constrained. Spend tokens liberally, think hardest, be proactive, never be lazy.” and I’m on an open pay-per-token plan on the API, that’s not what I consider optimal behavior, even in a business sense.
pixel_poppingabout 4 hours ago
GPT 5.4 is really good at following precise instructions but clearly wouldn't innovate on its own (except if the instructions clearly state to innovate :))
vlovich123about 4 hours ago
Conceivably you could have a public-facing dashboard of the rollout status to reduce confusion or even make it visible directly in the UI that the model is there but not yet available to you. The fanciest would be to include an ETA but that's presumably difficult since it's hard to guess in case the rollout has issues.
moralestapiaabout 4 hours ago
Why would you be confused?

The UI tells you which model you're using at any given time.

ModernMechabout 2 hours ago
I don't see what model I'm using on the Codex web interface, where is that listed?
Grp1about 3 hours ago
Congrats on the release! Is Images 2.0 rolling out inside ChatGPT as well, or is some of the functionality still going to be API/Playground-only for a while?
minimaxirabout 3 hours ago
Images 2.0 is already in ChatGPT.
johndoughabout 2 hours ago
When I generate an image with ChatGPT, is there a way for me to tell which image generation model has been used?
Grp1about 3 hours ago
Great, thanks for clarifying :)
rev4nabout 2 hours ago
Looks good, but I’m a little hesitant to try it in Codex as a Plus user since I’m not sure how much it would eat into the usage cap.
dandiepabout 2 hours ago
Will GPT 5.5 fine tuning be released any time soon?
qsortabout 4 hours ago
Great stuff! Congrats on the release!
fragmedeabout 2 hours ago
Are you able to say something about the training you've done to 5.5 to make it less likely to freak out and delete projects in what can only be called shame?
embedding-shapeabout 1 hour ago
What? I've used Codex (the TUI) probably since it was available on day 1, and I've been running gpt-5.4 exclusively these last few months. I've never had it delete any projects, "shameful" or otherwise. What are you talking about?
wslhabout 3 hours ago
Just a tip: add [translated] subtitles to the top video.
motoboiabout 4 hours ago
Please next time start with azure foundry lol thanks!
dude250711about 4 hours ago
With Anthropic, newer models often lead to quality degradation. Will you keep GPT 5.4 available for some time?
fHrabout 2 hours ago
LETS GO CODEX #1
pixel_poppingabout 4 hours ago
can't wait! Thanks guys. PS: when you drop a new model, it would be smart to reset weekly or at least session limits :)
pietzabout 4 hours ago
OpenAI has been very generous with limit resets. Please don't turn this into a weird expectation to happen whenever something unrelated happens. It would piss me off if I were in their place and I really don't want them to stop.
pixel_poppingabout 4 hours ago
The suggestion wasn't about general limit resets when there are bugs or outages; it's commercially useful to let users try new models when they have already reached their weekly limits.
cactusplant7374about 4 hours ago
There is absolutely nothing wrong with asking or suggesting. They are adults. I'm sure they can handle it.
Petersipoiabout 3 hours ago
Sorry but why should we care if very reasonable suggestions "piss [them] off"? That sounds like a them problem. "Them" being a very wealthy business. I think OpenAI will survive this very difficult time that GP has put them through.
cmrdporcupineabout 4 hours ago
Limits were just reset two days ago.
wahnfriedenabout 4 hours ago
And yet there was an outage last night
simonwabout 3 hours ago
This doesn't have API access yet, but OpenAI seem to approve of the Codex API backdoor used by OpenClaw these days... https://twitter.com/steipete/status/2046775849769148838 and https://twitter.com/romainhuet/status/2038699202834841962

And that backdoor API has GPT-5.5.

So here's a pelican: https://simonwillison.net/2026/Apr/23/gpt-5-5/#and-some-peli...

I used this new plugin for LLM: https://github.com/simonw/llm-openai-via-codex

UPDATE: I got a much better pelican by setting the reasoning effort to xhigh: https://gist.github.com/simonw/a6168e4165a258e4d664aeae8e602...

Schlagbohrer 36 minutes ago
That's amazing that the default did that much in just 39 "reasoning tokens" (no idea what a reasoning token is but that's still shockingly few tokens)
erdaniels17 minutes ago
If you don't know what a reasoning token is, then how can 39 be considered shockingly few?
GistNoesisabout 2 hours ago
Isn't it awful? After 5.5 versions it still can't draw a basic bike frame. How is the front wheel supposed to turn sideways?
jetrinkabout 2 hours ago
I feel like if I attempted this, the bike frame would look fine and everything else would be completely unrecognizable. After all, a basic bike frame is just straight lines arranged in a fairly simple shape. It's really surprising that models find it so difficult, but they can make a pelican with panache.
nlawalkerabout 1 hour ago
> a fairly simple shape

Bike frames are very hard to draw unless you've already consciously internalized the basic shape, see https://www.booooooom.com/2016/05/09/bicycles-built-based-on...

necubiabout 1 hour ago
Humans are also famously bad at drawing bicycles from memory https://www.gianlucagimini.it/portfolio-item/velocipedia/
fragmedeabout 1 hour ago
My question is, as a human, how well would you or I do under the same conditions? Which is to say, I could do a much better job in inkscape with Google images to back me up, but if I was blindly shitting vectors into an XML file that I can't render to see the results of, I'm not even going to get the triangles for the frame to line up, so this pelican is very impressive!
simonwabout 2 hours ago
Yeah, the bike frame is the thing I always look at first - it's still reasonably rare for a model to draw that correctly, although Qwen 3.6 and Gemini Pro 3.1 do that well now.
loa_in_about 2 hours ago
The distinction is that it's not drawing. It's generating an SVG document containing descriptors of the shapes.
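To make that concrete, here's a rough sketch (mine, not any model's output) of what "drawing" a bicycle blind actually looks like: emitting text descriptors of shapes into an SVG document, with no rendered feedback at any point.

```python
# A minimal hand-written SVG "bicycle": just text descriptors of shapes.
# This is all a model ever produces; it never sees the render.
def bicycle_frame_svg() -> str:
    hubs = [(60, 140), (220, 140)]  # rear and front wheel centres
    wheels = "".join(
        f'<circle cx="{x}" cy="{y}" r="40" fill="none" stroke="black"/>'
        for x, y in hubs
    )
    # The diamond frame: two triangles sharing an edge at the seat tube.
    frame = (
        '<polyline points="60,140 120,80 140,140 60,140" '
        'fill="none" stroke="black"/>'
        '<polyline points="120,80 200,80 220,140 140,140 120,80" '
        'fill="none" stroke="black"/>'
    )
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="280" height="200">'
        f"{wheels}{frame}</svg>"
    )

print(bicycle_frame_svg())
```

Getting those triangles to line up while reasoning purely over coordinates is the hard part the thread is poking at.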
DrProticabout 3 hours ago
That pelican you posted yesterday from a local model looks nicer than this one.

Edit: this one has crossed legs lol

BeetleBabout 3 hours ago
It really needs to pee.
noonething40 minutes ago
Thank you for doing all this. It's appreciated.
XCSmeabout 3 hours ago
Is this direct API usage allowed by their terms? I remember Anthropic really not liking such usage.
simonwabout 2 hours ago
deflatorabout 3 hours ago
Hmm. Any idea why it's so much worse than the other ones you have posted lately? Even the open weight local models were much better, like the Qwen one you posted yesterday.
simonwabout 2 hours ago
The xhigh one was better, but clearly OpenAI have not been focusing their training efforts on SVG illustrations of animals riding modes of transport!
irthomasthomasabout 2 hours ago
It beats opus-4.7 but looks like open models actually have the lead here.
postalcoderabout 3 hours ago
I made pelicans at different thinking efforts:

https://hcker.news/pelican-low.svg

https://hcker.news/pelican-medium.svg

https://hcker.news/pelican-high.svg

https://hcker.news/pelican-xhigh.svg

Someone needs to make a pelican arena, I have no idea if these are considered good or not.

deflatorabout 3 hours ago
They are not good, and they seem to get worse as you increased effort. Weird
postalcoderabout 3 hours ago
Yeah. I've always loosely correlated pelican quality with big model smell but I'm not picking that up here. I thought this was supposed to be spud? Weird indeed.
throw310822about 3 hours ago
No but I can sense the movement, I think it's already reached the level of intelligence that draws it towards futurism or cubism /s
seanw444about 3 hours ago
Can someone explain how we arrived at the pelican test? Was there some actual theory behind why it's difficult to produce? Or did someone just think it up, discover it was consistently difficult, and now we just all know it's a good test?
simonwabout 2 hours ago
I set it up as a joke, to make fun of all of the other benchmarks. To my surprise it ended up being a surprisingly good measure of the quality of the model for other tasks (up to a certain point at least), though I've never seen a convincing argument as to why.

I gave a talk about it last year: https://simonwillison.net/2025/Jun/6/six-months-in-llms/

It should not be treated as a serious benchmark.

redox99about 2 hours ago
It all began with a Microsoft researcher showing a unicorn drawn in tikz using GPT4. It was an example of something so outrageous that there was no way it existed in the training data. And that's back when models were not multimodal.

Nowadays I think it's pretty silly, because there's surely SVG drawing training data and some effort from the researchers put onto this task. It's not a showcase of emergent properties.

CamperBob2about 2 hours ago
It's interesting to see some semblance of spatial reasoning emerge from systems based on textual tokens. Could be seen as a potential proxy for other desirable traits.

It's meta-interesting that few if any models actually seem to be training on it. Same with other stereotypical challenges like the car-wash question, which is still sometimes failed by high-end models.

If I ran an AI lab, I'd take it as a personal affront if my model emitted a malformed pelican or advised walking to a car wash. Heads would roll.

bravoetchabout 2 hours ago
I tried getting it to generate openscad models, which seems much harder. Not had much joy yet with results.
andriy_kovalabout 3 hours ago
what is your setup for drawing pelican? Do you ask model to check generated image, find issues and iterate over it which would demonstrate models real abilities?
simonwabout 2 hours ago
It's generally one-shot-only - whatever comes out the first time is what I go with.

I've been contemplating a more fair version where each model gets 3-5 attempts and then can select which rendered image is "best".

irthomasthomasabout 2 hours ago
Try llm-consortium with --judging-method rank
andriy_kovalabout 2 hours ago
I think it will make results way better and more representative of model abilities..
droidjjabout 3 hours ago
It's... like no pelican I've ever seen before.
gpmabout 2 hours ago
I for one delight in bicycles where neither wheel can turn!

It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produces such implausibly bad geometry.

Also mildly interesting, and generally consistent with my experience with LLMs, that it produced the same obvious geometry issue both times.

lxgrabout 2 hours ago
> It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produces such implausibly bad geometry.

I feel like the main problem for the models is that they can't actually look at the visual output produced by their SVG and iterate. I'm almost willing to bet that if they could, they'd absolutely nail it at this point.

Imagine designing an SVG yourself without being able to ever look outside the XML editor!

gpmabout 1 hour ago
> Imagine designing an SVG yourself without being able to ever look outside the XML editor!

I honestly think I could do much better on the bicycle without looking at the output (with some assistance for SVG syntax which I definitely don't know), just as someone who rides them and generally knows what the parts are.

I'd do worse at the pelicans though.

SkyBelowabout 2 hours ago
Wait, I thought we were onto racoons on e-scooters to avoid (some of) the issues with Goodhart's Law coming into play.
simonwabout 2 hours ago
I fall back to possums on e-scooters if the pelican looks too good to be true. These aren't good enough for me to suspect any fowl play.
rolymathabout 2 hours ago
Exciting. Another Pelican post.
simonwabout 2 hours ago
See if you can spot what's interesting and unique about this one. I've been trying to put more than just a pelican in there, partly as a nod to people who are getting bored of them.
refulgentisabout 2 hours ago
It's silly and a joke and a surprisingly good benchmark and don't take it seriously but don't take not taking it seriously seriously and if it's too good we use another prompt and there's obvious ways to better it and it's not worth doing because it's not serious and if you say anything at all about the thread it's off-topic so you're doing exactly what you're complaining about and it's a personal attack from the fun police.

Only coherent move at this point: hit the minus button immediately. There's never anything about the model in the thread other than simon's post.

dakolliabout 2 hours ago
You know they are 1000% training these models to draw pelicans, this hasn't been a valid benchmark for 6 months +
simonwabout 2 hours ago
OpenAI must be very bad at training models to draw pelicans (and bicycles) then.
Legend2440about 2 hours ago
Skeptism is out of control these days, any time an LLM does something cool it must have been cheating.
sjdv1982about 2 hours ago
At some point, OpenAI is going to cheat and hardcode a pelican on a bicycle into the model. 3D modelling has Suzanne and the teapot; LLMs will have the pelican.
jfkimmesabout 4 hours ago
Everyone talked about the marketing stunt that was Anthropic's gated Mythos model with an 83% result on CyberGym. OpenAI just dropped GPT 5.5, which scores 82% and is open for anybody to use.

I recommend anybody in offensive/defensive cybersecurity to experiment with this. This is the real data point we needed - without the hype!

Never thought I'd say this but OpenAI is the 'open' option again.

unsupp0rted33 minutes ago
Doesn't OpenAI get mad if you ask cybersecurity questions, and force you to upload a government ID? Otherwise they'll silently route you to a less capable model:

> Developers and security professionals doing cybersecurity-related work or similar activity that could be mistaken by automated detection systems may have requests rerouted to GPT-5.2 as a fallback.

https://developers.openai.com/codex/concepts/cyber-safety

https://chatgpt.com/cyber

tpurvesabout 3 hours ago
The real 'hype' was the oh-snap realization that OpenAI would absolutely release a competitive model to Mythos within weeks of Anthropic announcing theirs, and that Sam would not gate access to it. So the panic was that the cyber world had only a projected 2 weeks to harden all these new zero-days before Sam would inevitably create open season for blackhats to discover and exploit a deluge of zero-days.
concindsabout 2 hours ago
> Never thought I'd say this but OpenAI is the 'open' option again.

Compared to Anthropic, they always have been. Anthropic has never released any open models. Never released Claude Code's source, willingly (unlike Codex). Never released their tokenizer.

tnkuehneabout 4 hours ago
Isn't it like cyber questions are being routed to dumber models at OpenAI?
jfkimmesabout 3 hours ago
Do you have a source for that?

Neither the release post, nor the model card seems to indicate anything like this?

nikanjabout 3 hours ago
Anything that even vaguely smells like security research, reverse engineering or similar "dual-use" application hits the guardrails hard and fast. "Hey codex, here is our codebase, help us find exploitable issues" gives a "I can't help you with that, but I'm happy to give you a vague lecture on memory safety or craft a valgrind test harness"
ur-whaleabout 2 hours ago
> Anthropic's gated Mythos model

aka the perfect marketing ploy

Someone1234about 4 hours ago
I'd like to draw people's attention to this section of this page:

https://developers.openai.com/codex/pricing?codex-usage-limi...

Note the Local Messages between 5.3, 5.4, and 5.5. And, yes, I did read the linked article and know they're claiming that 5.5's new efficiency should make it break even with 5.4, but the point stands: tighter limits, higher prices.

puppystenchabout 3 hours ago
For API usage, GPT-5.5 is 2x the price of GPT-5.4, ~4x the price of GPT-5.1, and ~10x the price of Kimi-2.6.

Unfortunately I think the lesson they took from Anthropic is that devs get really reliant on, and even addicted to, coding agents, and they'll happily pay any amount for even small benefits.

kingstnapabout 3 hours ago
I feel like devs generally spend someone else's money on tokens: either their employer's, or OpenAI's when they use a Codex subscription.

If I put on my schizo hat. Something they might be doing is increasing the losses on their monthly codex subscriptions, to show that the API has a higher margin than before (the codex account massively in the negative, but the API account now having huge margins).

I've never seen an OpenAI investor pitch deck. But my guess is that API margins is one of the big ones they try to sell people on since Sama talks about it on Twitter.

I would be interested in hearing the insider stuff. Like if this model is genuinely like twice as expensive to serve or something.

vineyardmikeabout 2 hours ago
You can't build a business on per-seat subscriptions when you advertise making workers obsolete. API pricing with sustainable margins is the only way forward if you genuinely think you're going to cause (or accelerate) a reduction in clients' headcount.

Additionally, the value generated by the best models with high-thinking and lots of context window is way higher than the cheap and tiny models, so you need to provide a "gateway drug" that lets people experience the best you offer.

ewrsabout 3 hours ago
Yeah, and the increase in operating expenses is going to make managers start asking hard questions. This is good: it means eventually there will be budgets put in place, which will force OAI and Anthropic to innovate harder. Then we will see how things pan out. Ultimately a firm is not going to pay rent to these firms if the benefits don't exceed the costs.
mitjamabout 2 hours ago
The difference between sub and api price makes it hard to create competitive solutions on the app level.
w10-1about 1 hour ago
Price increases now aim to demonstrate market power for eventual IPO.

If they can show that people will pay a lot for somewhat better performance, it raises the value of any performance lead they can maintain.

If they demonstrate that and high switching costs, their franchise is worth scary amounts of money.

JohnLocke4about 3 hours ago
Sometimes I wonder if innovation in the AI space has stalled and recent progress is just a product of increased compute. Competence is increasing exponentially[1], but I guess that doesn't rule it out completely. I would postulate that a radical architecture shift is needed for the singularity, though.

[1]https://arxiv.org/html/2503.14499v1 *Source is from March 2025 so make of it what you will.

nomelabout 3 hours ago
> that devs get really reliant and even addicted on coding agents

An alternative perspective is, devs highly value coding agents, and are willing to pay more because they're so useful. In other words, the market value of this limited resource is being adjusted to be closer to reality.

pxcabout 3 hours ago
Maybe that's true. But I think part of the issue is that for a lot of things developers want to do with them now— certainly for most of the things I want to do with them— they're either barely good enough, or not consistently good enough. And the value difference across that quality threshold is immense, even if the quality difference itself isn't.
pzoabout 3 hours ago
On top of that, I noticed just now after updating the macOS desktop Codex app that the speed was again set to 'fast' by default ('about 1.5x faster with increased plan usage'). They really want you to burn more tokens.
0xbadcafebeeabout 2 hours ago
A fool and his money are soon parted
oh_noabout 3 hours ago
what's the source on that?
puppystenchabout 3 hours ago
In the announcement webpage:

>For API developers, gpt-5.5 will soon be available in the Responses and Chat Completions APIs at $5 per 1M input tokens and $30 per 1M output tokens, with a 1M context window.
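At those rates, a quick back-of-envelope calculator (the session token counts below are made up for illustration):

```python
# Quoted gpt-5.5 API rates: $5 per 1M input tokens, $30 per 1M output.
INPUT_PER_M = 5.00
OUTPUT_PER_M = 30.00

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a long agentic session: 800k tokens in, 120k tokens out
print(round(cost_usd(800_000, 120_000), 2))  # 4.0 + 3.6 = 7.6
```

A few dollars per long session adds up fast across an engineering team, which is the crux of the thread above.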

aetherspawn2 minutes ago
Umm yeah but this is like every release in the last 3 years.

The big question is: does it still just write slop, or not?

Fool me once, fool me twice, fool me for the 32nd time, it’s probably still just slop.

astlouis44about 4 hours ago
A playable 3D dungeon arena prototype built with Codex and GPT models. Codex handled the game architecture, TypeScript/Three.js implementation, combat systems, enemy encounters, HUD feedback, and GPT‑generated environment textures. Character models, character textures, and animations were created with third-party asset-generation tools

The game that this prompt generated looks pretty decent visually. A big part of this is likely due to the fact that the meshes were created using a separate tool (probably Meshy, Tripo.ai, or similar) and not generated by 5.5 itself.

It really seems like we could be at the dawn of a new era similar to Flash, where any gamer or hobbyist can generate game concepts quickly and instantly publish them to the web. Three.js in particular is really picking up as the primary way to design games with AI, in spite of the fact that it's not even a game engine, just a web rendering library.

0x62about 4 hours ago
FWIW I've been experimenting with Three.js and AI for the last ~3 years, and noticed a significant improvement in 5.4 - the biggest single generation leap for Three.js specifically. It was most evident in shaders (GLSL), but also apparent in structuring of Three.js scenes across multiple pages/components.

It still struggles to create shaders from scratch, but is now pretty adequate at editing existing shaders.

In 5.2 and below, GPT really struggled with "one canvas, multiple page" experiences, where a single background canvas is kept rendered over routes. In 5.4, it still takes a bit of hand-holding and frequent refactor/optimisation prompts, but is a lot more capable.

Excited to test 5.5 and see how it is in practice.

Pym25 minutes ago
One struggle I'm having (with Claude) is that most of what it knows about Three.js is outdated. I haven't used GPT in a while, is the grass greener?

Have you tried any skills like cloudai-x/threejs-skills that help with that? Or built your own?

import39 minutes ago
Using Claude for the same context and it's doing really well with the GLSL, since like last September.
CSMastermindabout 4 hours ago
> It still struggles to create shaders from scratch

Oh just like a real developer

accrualabout 3 hours ago
Much respect for shader developers, it's a different way of thinking/programming
mindhunterabout 1 hour ago
A friend is building Jamboree[1] (prev name "Spielwerk") for iOS. An app to build and share games. They're all web based so they're easy to share.

[1] https://apps.apple.com/uz/app/jamboree-game-maker/id67473110...

vunderbaabout 4 hours ago
I’ve had a lot of success using LLMs to help with my Three.js based games and projects. Many of my weird clock visualizations relied heavily on it.

It might not be a game engine, but it’s the de facto standard for doing WebGL 3D. And since it’s been around forever, there’s a massive amount of training data available for it.

Before LLMs were a thing, I relied more on Babylon.js, since it’s a bit higher level and gives you more batteries included for game development.

dataviz1000about 2 hours ago
LLMs cannot do spatial reasoning. I haven't tried with GPT; however, Claude cannot solve a Rubik's Cube no matter how much I try with prompt engineering. I got Opus 4.6 to get ~70% of the puzzle solved but it got stuck. At $20 a run it's prohibitively expensive.

The point is if we can prompt an LLM to reason about 3 dimensions, we likely will be able to apply that to math problems which it isn't able to solve currently.

I should release my Rubik's Cube MCP server with the challenge to see if someone can write a prompt to solve a Rubik's Cube.

embedding-shapeabout 1 hour ago
> I should release my Rubiks Cube MCP server with the challenge to see if someone can write a prompt to solve a Rubik's Cube.

Do it, I'm game! You nerdsniped me immediately and my brain went "That sounds easy, I'm sure I could do that in a night" so I'm surely not alone in being almost triggered by what you wrote. I bet I could even do it with a local model!

snet0about 1 hour ago
How are you handing the cube state to the model?
dataviz100043 minutes ago
Does this answer the question?

Opus 4.6 got the cross and started to get several pieces on the correct faces. It couldn't reason past this. You can see the prompts and all the turn messages.

https://gist.github.com/adam-s/b343a6077dd2f647020ccacea4140...

edit: I can't reply to the message below. The point isn't whether we can solve a Rubik's Cube with a Python script and tool calls. The point is whether we can get an LLM to reason about moving things in three dimensions. The prompt is a puzzle in the way that a Rubik's Cube is a puzzle. A 7-year-old child can learn 6 moves and figure out how to solve a Rubik's Cube in a weekend; the LLM can't solve it. However, given the correct prompt, can an LLM solve it? The prompt is the puzzle. That is why it is fun and interesting. Plus, it is a spatial problem, so if we solve that we solve a massive class of problems, including huge swathes of mathematics the LLMs can't touch yet.
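For the curious, here is a minimal sketch of the kind of plain-text cube state one might hand to a model (a hypothetical encoding of mine, not the actual MCP server): six sticker grids, plus one face turn, so the model only ever sees and manipulates text.

```python
from copy import deepcopy

# Six faces (U, D, F, B, L, R), each a 3x3 grid of sticker letters.
def solved_cube():
    return {f: [[f] * 3 for _ in range(3)] for f in "UDFBLR"}

def rotate_face_cw(face):
    # Reverse the rows, then transpose: a 90-degree clockwise rotation.
    return [list(row) for row in zip(*face[::-1])]

def move_U(cube):
    # Clockwise quarter turn of the top layer, viewed from above:
    # the U face spins, and the top rows cycle F -> L -> B -> R -> F.
    c = deepcopy(cube)
    c["U"] = rotate_face_cw(cube["U"])
    c["L"][0] = cube["F"][0]
    c["B"][0] = cube["L"][0]
    c["R"][0] = cube["B"][0]
    c["F"][0] = cube["R"][0]
    return c

cube = move_U(solved_cube())
print(cube["L"][0])  # the left face's top row now shows F stickers
```

The interesting question upthread is whether any prompt can get a model to track this state through a 20-move solution rather than losing the plot after a few turns.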

Torkelabout 1 hour ago
*yet
kingstnapabout 4 hours ago
The meshes look interesting, but the gameplay is very basic. The tank one seems more sophisticated with the flying ships and whatnot.

What's strange is that this Pietro Schirano dude seems to write incredibly cargo cult prompts.

  Game created by Pietro Schirano, CEO of MagicPath

  Prompt: Create a 3D game using three.js. It should be a UFO shooter where I control a tank and shoot down UFOs flying overhead.
  - Think step by step, take a deep breath. Repeat the question back before answering.
  - Imagine you're writing an instruction message for a junior developer who's going to go build this. Can you write something extremely clear and specific for them, including which files they should look at for the change and which ones need to be fixed?
  -Then write all the code. Make the game low-poly but beautiful.
  - Remember, you are an agent: please keep going until the user's query is completely resolved before ending your turn and yielding back to the user. Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved. You must be prepared to answer multiple queries and only finish the call once the user has confirmed they're done.
  - You must plan extensively in accordance with the workflow steps before making subsequent function calls, and reflect extensively on the outcomes of each function call, ensuring the user's query and related sub-requests are completely resolved.
torginusabout 3 hours ago
It's weird how people pep talk the AI - if my Jira tickets looked like this, I would throw a fit.

I guess these people think they have special prompt engineering skills, and doing it like this is better than giving the AI a dry list of requirements (fwiw, they might be even right)

mattgreenrocksabout 2 hours ago
It’s not surprising to me that the same crowd that cheers for the demise of software engineering skills invented its own notion of AI prompting skills.

Too bad they can veer sharply into cringe territory pretty fast: “as an accomplished Senior Principal Engineer at a FAANG with 22 years of experience, create a todo list app.” It’s like interactive fanfiction.

eloisant24 minutes ago
Yes, this is cargo cult.

This reminds me of so-called "optimization" hacks that people keep applying years after their languages get improved to make them unnecessary or even harmful.

Maybe at one point it helped to write prompts in this weird way, but with all the progress going on both in the models and the harness, if it's not obsolete yet it will be soon. Just cruft that consumes tokens and fills the context window for nothing.

skiranoabout 3 hours ago
Pietro here, I just published a video of it: https://x.com/skirano/status/2047403025094905964?s=20
irthomasthomasabout 4 hours ago
> Think Step By Step

What is this, 2023?

I feel like this was generated by a model tapping in to 2023 notions of prompt engineering.

tantalorabout 4 hours ago
It comes across as an elaborate, sparkly motivational cat poster.

*BELIEVE!* https://www.youtube.com/watch?v=D2CRtES2K3E

bredrenabout 3 hours ago
The prompt did not specify advanced gameplay.

I do not see instructions to assist in task decomposition and agent ~"motivation" to stay aligned over long periods as cargo culting.

See up thread for anecdotes [1].

> Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved.

I see this as a portrayal of the strength of 5.5, since it suggests the ability to be assigned this clearly important role to ~one shot requests like this.

I've been using a cli-ai-first task tool I wrote to process complex "parent" or "umbrella" requests into decomposed subtasks and then execute on them.

This has allowed my workflows to float above the ups and downs of model performance.

That said, having the AI do the planning for a big request like this internally is not good outside a demo.

Because, you want the planning of the AI to be part of the historical context and available for forensics due to stalls, unwound details or other unexpected issues at any point along the way.

[1] https://news.ycombinator.com/item?id=47879819

ahokaabout 3 hours ago
"take a deep breath"

OMFG

nemo44x14 minutes ago
It’s like all these things though: it’s not a real production-worthy product. It’s a super-demo. It looks amazing until you realize there’s many months of work to make it something of quality and value.

I think people are starting to catch on to where we really are right now. Future models will be better, but we are entering a trough of disillusionment and this attitude will be widespread in a few months.

ZeWakaabout 4 hours ago
I personally don't think the gameplay itself is that impressive.
minimaxirabout 4 hours ago
The more interesting part of the announcement than "it's better at benchmarks":

> To better utilize GPUs, Codex analyzed weeks’ worth of production traffic patterns and wrote custom heuristic algorithms to optimally partition and balance work. The effort had an outsized impact, increasing token generation speeds by over 20%.

The ability for agentic LLMs to improve computational efficiency/speed is a highly impactful domain I wish was more tested than with benchmarks. From my experience Opus is still much better than GPT/Codex in this aspect, but given that OpenAI is getting material gains out of this type of performancemaxxing and they have an increasing incentive to continue doing so given cost/capacity issues, I wonder if OpenAI will continue optimizing for it.
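The announcement gives no details on those heuristics, but "partition and balance work" usually means something in the family of greedy scheduling. A toy longest-processing-time-first sketch (my illustration of the technique, not OpenAI's actual algorithm):

```python
import heapq

# Longest-processing-time-first: sort jobs by descending cost, then
# always hand the next job to the currently least-loaded worker.
def balance(jobs, n_workers):
    heap = [(0, i, []) for i in range(n_workers)]  # (load, id, assigned)
    heapq.heapify(heap)
    for job in sorted(jobs, reverse=True):         # biggest jobs first
        load, i, assigned = heapq.heappop(heap)    # least-loaded worker
        assigned.append(job)
        heapq.heappush(heap, (load + job, i, assigned))
    return sorted(heap)

# Six jobs, two workers: both end up at load 12, a perfect split.
for load, worker, assigned in balance([7, 5, 4, 3, 3, 2], 2):
    print(worker, load, assigned)
```

The point of the benchmark-style claim is that an agent found and applied this kind of heuristic against real production traffic, which is hard to reproduce or verify from the outside.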

xiphias2about 4 hours ago
There's already KernelBench which tests CUDA kernel optimizations.

On the other hand all companies know that optimizing their own infrastructure / models is the critical path for ,,winning'' against the competition, so you can bet they are serious about it.

amrrsabout 4 hours ago
Honestly the problem with these is how empirical it is: how can someone reproduce this? I love when labs go beyond traditional benchies like MMLU and friends, but these kinds of statements don't help much either, unless it's a proper controlled study!
minimaxirabout 4 hours ago
In a sense it's better than a benchmark: it's a practical, real-world, highly quantifiable improvement assuming there are no quality regressions and passes all test cases. I have been experimenting with this workflow across a variety of computational domains and have achieved consistent results with both Opus and GPT. My coworkers have independently used Opus for optimization suggestions on services in prod and they've led to much better performance (3x in some cases).

A more empirical test would be good for everyone (i.e. on equal hardware, give each agent the goal to implement an algorithm and make it as fast as possible, then quantify relative speed improvements that pass all test cases).

squibonpigabout 2 hours ago
Yeah but like what if they're sorta embellishing it or just lying? That's the issue with not being reproducible.
jstanleyabout 4 hours ago
Oh, come on, if they do well on benchmarks people question how applicable they are in reality. If they do well in reality people complain that it's not a reproducible benchmark...
girvo35 minutes ago
That's easily explained by those being two different people with two different opinions?
6thbitabout 3 hours ago

                          Mythos     5.5
    SWE-bench Pro          77.8%*   58.6%
    Terminal-bench-2.0     82.0%    82.7%*
    GPQA Diamond           94.6%*   93.6%
    H. Last Exam           56.8%*   41.4%
    H. Last Exam (tools)   64.7%*   52.2%    
    BrowseComp             86.9%    84.4%  (90.1% Pro)*
    OSWorld-Verified       79.6%*   78.7%

Still far from Mythos on SWE-bench but quite comparable otherwise. Source for mythos values: https://www.anthropic.com/glasswing
aliljetabout 3 hours ago
Mythos is only real when it's actually available. If you're using Opus 4.7 right now, you know how incredibly nerfed the Opus autonomy is in service of perceived safety. I'm not so confident this will be as great as Anthropic wants us to believe..
XCSmeabout 3 hours ago
They mentioned on their release page that the Claude team noticed memorization of the SWE-bench test, so the test is actually in the training data.

Here: https://www.anthropic.com/news/claude-opus-4-7#:~:text=memor...

kaonashi-tyc-01about 2 hours ago
I did some study on Verified, not Pro, but Mythos number there rings a lot of questions on my end.

If you look at the SWE-bench official submissions: https://github.com/SWE-bench/experiments/tree/main/evaluatio..., filter all models after Sonnet 4, and aggregate ALL models' submissions across 500 problems, what I found is that the aggregated resolution rate is 93% (sharp).

Mythos gets 93.7%, meaning it solves problems that no other models could ever solve. I took a look at those problems and became even more suspicious: for the remaining 7% of problems, it is almost impossible to resolve those issues without looking at the testing patch ahead of time, because of how drastically the solution itself deviates from the problem statement. It almost feels like it is trying to solve a different problem.

Not that I am saying Mythos is cheating, but it might be so capable of remembering all states of said repos that it is able to reverse engineer the TRUE problem statement by diffing within its own internal memory. I think it could be a unique phenomenon of evaluation awareness. Otherwise I genuinely couldn't think of exactly how it could be this precise in deciphering such unspecific problem statements.
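The aggregation described above is just a union over per-model solved sets; a toy version (made-up submission data, not the real SWE-bench results):

```python
# "Aggregated resolution rate" = fraction of problems solved by at least
# one model. Submission data below is invented for illustration.
def aggregate_rate(solved_by_model: dict[str, set[int]],
                   n_problems: int) -> float:
    solved_by_any = set().union(*solved_by_model.values())
    return len(solved_by_any) / n_problems

submissions = {
    "model-a": {1, 2, 3, 4},
    "model-b": {3, 4, 5},
    "model-c": {1, 5, 6},
}
print(aggregate_rate(submissions, 10))  # 6 of 10 problems solved by someone
```

A single model scoring above this union rate means it is solving problems no prior submission ever solved, which is exactly the anomaly flagged above.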

yfontanaabout 1 hour ago
OpenAI wrote a couple months ago that they do not consider SWE Bench Verified a meaningful benchmark anymore (and they were the ones who published it in the first place): https://openai.com/index/why-we-no-longer-evaluate-swe-bench...
kaonashi-tyc-01about 1 hour ago
Yep, I read this blog. What confuses me is that Anthropic doesn't seem to be bothered by this study and keeps publishing Verified results.

That is what got me curious in the first place. The fact Mythos scored so high, IMO, exposes some issues with this model: it is able to solve seemingly impossible-to-solve problems.

Without alleging cheating, which I don't think ANT is doing, it has to be doing some fortune telling/future reading to score that high at all.

alansaberabout 2 hours ago
A single benchmark is meaningless, you always get quirky results on some benchmarks.
silvertazaabout 3 hours ago
Still a huge hallucination rate, unfortunately: 86%. For comparison, Opus sits at 36%.

Source: https://artificialanalysis.ai/models?omniscience=omniscience...

dubcanadaabout 2 hours ago
Grok is at 17%? And that's the lowest; most models are 80%+?

Meanwhile, actual hallucination rates are probably closer to 100% depending on the question. This benchmark makes no sense.

elAhmoabout 1 hour ago
No one serious uses grok.
ajdegolabout 1 hour ago
@grok is this true?
simianwordsabout 2 hours ago
There's something off with this because Haiku should not be that good.
jwpapiabout 2 hours ago
The hallucination benchmark is hallucinating
dakolliabout 2 hours ago
This indicates they want this behavior. They know the person asking the question probably doesn't understand the problem entirely (or why would they be asking?), so they'd prefer a confident response, regardless of outcome, because the point is to sell the technology's competency (and the perception thereof), not its capabilities, to a bunch of people who have no clue what they're talking about.

LLMs will ruin your product. Have fun trusting a billionaire's thinking machine they swear is capable of replacing your employees if you just pay them 75% of your labor budget.

applfanboysbgonabout 4 hours ago
If there's a bingo card for model releases, "our [superlative] and [superlative] model yet" is surely the free space.
tom1337about 4 hours ago
Do "our [superlative] and [superlative] [product] yet" and you have pretty much every product launch
SequoiaHopeabout 4 hours ago
I love when Apple says they’re releasing their best iPhone yet so I know the new model is better than the old ones.
xnxabout 4 hours ago
"our newest and most expensive model yet"
wiseowiseabout 2 hours ago
"Best iPhone ever"
ertgbnmabout 3 hours ago
can't wait for "our worst and dumbest model yet"
Nitionabout 3 hours ago
Apple should have used that one for the 2016 MacBook.
vthallamabout 3 hours ago
This model is great at long horizon tasks, and Codex now has heartbeats, so it can keep checking on things. Give it your hardest problem that would take hours with verifiable constraints, you will see how good this is:)

*I work at OAI.

dannywabout 3 hours ago
It's genuinely so great at long horizon tasks! GPT-5.5 solved many long-horizon frontier challenges, for the first time for an AI model we've tested, in our internal evals at Canva :) Congrats on the launch!
brcmthrowawayabout 1 hour ago
Can we not do growth hacking here?
dandakaabout 3 hours ago
Could be a great feature, can't wait to test it! Tired of other models (looking at you, Opus) constantly getting stuck mid-task lately.
frotaurabout 1 hour ago
I've been using the /ralph-loop plugin for claude code, works well to keep the model hammering at the task.
winridabout 2 hours ago
Interesting. I just had Opus convert a 35k LOC Java game to C++ overnight (a root agent that orchestrated and delegated to sub-agents), and I woke up and it's done and works.

What plan are you on? I'm starting to wonder if they're dynamically adjusting reasoning based on plan or something.

gck1about 1 hour ago
I'm on Max 5x and noticed this too. I don't use the built-in subagents but rather a full Claude session that orchestrates other full Claude sessions. Worker agents that receive tasks now stop midway and ask for permission to continue. My "heartbeat" is basically a "status. One line" message sent to the orchestrator.

Opus 4.6 worker agents never asked for permission to continue, and when a heartbeat was sent to the orchestrator, it just knew what to do (checked on subagents, etc.). Now it just says it's waiting for me to confirm something.

mudkipdevabout 3 hours ago
This is 3x the price of GPT-5.1, released just 6 months ago. Is no one else alarmed by the trend? What happens when the cheaper models are deprecated/removed over time?
Schlagbohrer25 minutes ago
As others have mentioned you're ignoring the long tail of open-weights models which can be self hosted. As long as that quasi-open-source competition keeps up the pace, it will put a cap on how expensive the frontier models can get before people have to switch to self-hosting.

That's a big if, though. I wish Meta were still releasing top of the line, expensively produced open-weights models. Or if Anthropic, Google, or X would release an open mini version.

Night_Thastusabout 2 hours ago
This is entirely expected. The low prices of using LLMs early on was totally and completely unsustainable. The companies providing such services were (and still are) burning money by the truckload.

The hope is to get a big userbase who eventually become dependent on it for their workflow, then crank up the price until it finally becomes profitable.

The price for all models by all companies will continue to go up, and quickly.

oezi15 minutes ago
I recently looked into this a bit, but came away with the impression that, at least on API pricing, the models should be very profitable considering primarily the electricity cost.

Subscriptions and free plans are the things that can easily burn money.

energy123about 3 hours ago
Look at cost per intelligence or cost per task instead of cost per token.
yokoprimeabout 3 hours ago
How do I reliably measure 1 unit of intelligence?
wellthisisgreatabout 2 hours ago
In pelicans, obviously
ulimnabout 3 hours ago
Isn't the outcome / solution for a given task non-deterministic? So can we reliably measure that?
footaabout 3 hours ago
Yes, sort of. Generally you can measure the pass rate on a benchmark given a fixed compute budget. A sufficiently smart model can hit a high pass rate with fewer tokens/less compute. Check out the cost efficiency charts on https://artificialanalysis.ai/ (saw this posted here the other day, pretty neat!)
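One standard way to turn repeated trials under a fixed budget into a number is the unbiased pass@k estimator; a minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k attempts
    succeeds, estimated from n independent trials with c successes
    (the estimator from the HumanEval/Codex paper)."""
    if n - c < k:
        return 1.0  # every size-k sample must contain a success
    return 1.0 - comb(n - c, k) / comb(n, k)
```

So "intelligence per budget" becomes: fix the token/compute budget per attempt, run n trials, and report pass@k. `pass_at_k(10, 5, 1)` gives the single-attempt success rate of 0.5.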
torginusabout 2 hours ago
This is the only correct take. The only metric that matters is cost per desired outcome.
genericresponseabout 3 hours ago
Statistically. Do many trials and measure how often it succeeds/fails.
dns_snekabout 3 hours ago
Repetition and statistics, if you have $1000++ you didn't need anyway.
throwuxiytayqabout 3 hours ago
It's much easier to measure a language model's intelligence than a human's because you can take as many samples as you want without affecting its knowledge. And we do measure human intelligence.
operatingthetanabout 3 hours ago
We know these models cost OpenAI much more than this to serve. Assume prices will continue to climb until they are making money.
beering39 minutes ago
source? There have also been a bunch of people here saying the opposite
kuatrokaabout 1 hour ago
Not really a big problem. Switch to Kimi, Qwen, or GLM. You'll get 95% of GPT or Anthropic quality for a tenth of the price. I feel like the real dependency is more mental, more of a habit, but if you actually dip your toes outside OpenAI, Anthropic, and Gemini from time to time, you realise that the actual difference in code is not huge if you prompt well. Maybe you'll have to tell it to do something twice and it won't be a one-shot, but it's really not an issue at all.
dannywabout 3 hours ago
It's far more meaningful to look at the actual cost to successfully complete a task. The token efficiency of GPT-5.5 is real, as well as it just being far better for work.
dandakaabout 3 hours ago
SOTA models get distilled to open source weights in ~6 months. So paying premium for bleeding edge performance sounds like a fair compensation for enormous capex.
msdzabout 3 hours ago
Such an increase tracks the company's valuation trend, which they constantly, somehow have to justify (let alone break even on costs).
aliljetabout 3 hours ago
I've found myself so deeply embedded in the Claude Max subscription that I'm worried about potentially making a switch. How are people making sure they stay nimble enough not to get trapped in one company's ecosystem? For what it's worth, Opus 4.7 has not been a step up, and it's come with enormously higher usage of the subscription Anthropic offers, making the entire offering doubly worse.
gck1about 1 hour ago
Start building your own lightweight "harness" that does the things you need. Ignore all the functionality of clients like CC or Codex, and just implement whatever you start missing in your harness.

You can replace pretty much everything - skills system, subagents, etc with just tmux and a simple cli tool that the official clients can call.

Oh and definitely disable any form of "memory" system.

Essentially, treat all tooling that wraps the models as dumb gateways to inference. Then provider switch is basically a one line config change.
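A sketch of what that can look like in practice (the session name is made up, and `cat` stands in for whatever CLI client you actually run; the only real dependency is tmux's `new-session`, `send-keys`, and `capture-pane` commands):

```python
import shutil
import subprocess

def tmux(*args):
    """Thin wrapper over the tmux CLI (assumes tmux is installed)."""
    return subprocess.run(["tmux", *args], capture_output=True, text=True)

def heartbeat(session, message="status. One line"):
    """Type a one-line nudge into an agent's session, pane-style."""
    return tmux("send-keys", "-t", session, message, "Enter")

def read_output(session, lines=20):
    """Read a session's recent pane output without attaching to it."""
    out = tmux("capture-pane", "-p", "-t", session).stdout
    return "\n".join(out.splitlines()[-lines:])

if shutil.which("tmux"):
    tmux("kill-session", "-t", "demo-worker")              # clean slate
    tmux("new-session", "-d", "-s", "demo-worker", "cat")  # `cat` stands in for a real client
    heartbeat("demo-worker")
    print(read_output("demo-worker"))
    tmux("kill-session", "-t", "demo-worker")
```

Provider switching then really is a one-line config change, since nothing here knows which model sits on the other side of the pane.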

TacticalCoder14 minutes ago
> You can replace pretty much everything - skills system, subagents, etc with just tmux and a simple cli tool that the official clients can call.

I'm very interested in this. Can you go into a bit more detail?

ATM, for example, I'm running the Claude Code CLI in a VM on a server and using SSH to access it. I don't depend on anything specific to Anthropic. But it's still a bit of a pain to "switch" to, say, Codex.

How would that simple CLI tool work? And would CC / Codex call it?

chisabout 3 hours ago
It's surprisingly simple to switch. I mean both products offer basically identical coding CLI experiences. Personally I've been paying for Claude max $100, and ChatGPT $20, and then just using ChatGPT to fill in the gaps. Specifically I like it for code review and when Claude is down.
dannywabout 1 hour ago
Try GPT-5.5 as your daily driver for a bit. It felt a lot smarter and more reliable, and I was much more productive with it.
type4about 3 hours ago
I have a directory of skills that I symlink to Codex/Claude/pi. I make scripts that correspond with them to do any heavy lifting, I avoid platform specific features like Claude's hooks. I also symlink/share a user AGENTS.md/CLAUDE.md

MCPs aren't as smooth, but I just set them up in each environment.
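A hedged sketch of that symlink setup (the shared directory name and the per-tool destination paths below are illustrative, not documented defaults; check where each client actually reads its instructions and skills):

```python
from pathlib import Path

def link_shared(shared, links):
    """Symlink one canonical file/dir into each tool's expected path.

    links: iterable of (shared item name, destination path) pairs.
    All paths here are examples for illustration.
    """
    for name, dest in links:
        dest.parent.mkdir(parents=True, exist_ok=True)
        if dest.is_symlink() or dest.exists():
            dest.unlink()  # recreate idempotently
        dest.symlink_to(shared / name)

shared = Path.home() / "agent-shared"
shared.mkdir(exist_ok=True)
(shared / "AGENTS.md").touch()          # single source of truth
(shared / "skills").mkdir(exist_ok=True)

link_shared(shared, [
    ("AGENTS.md", Path.home() / ".codex" / "AGENTS.md"),
    ("AGENTS.md", Path.home() / ".claude" / "CLAUDE.md"),
    ("skills",    Path.home() / ".claude" / "skills"),
])
```

Edits to the one shared file then show up in every client at once.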

threecheeseabout 2 hours ago
Anecdotally, I get the same wall time with my Max x5 (100$) and my ChatGPT Teams (30$) subscriptions.
beering37 minutes ago
What is the switching cost besides launching a different program? Don’t you just need to type what you want into the box?
raneabout 2 hours ago
This might be the opposite of staying nimble as my workflows are quite tied to Claude Code specifically, however I've been experimenting with using OpenAI models in CC and it works surprisingly well.
cube2222about 3 hours ago
Small tip: at least for now you can switch back to Opus 4.6, both in the UI and in Claude Code.
dannywabout 2 hours ago
It’s good to just keep trying different ones from time to time.
doglineabout 3 hours ago
Except for history, I don’t find much that stops you from switching back and forth on the CLI. They both use tools, each has a different voice, but they both work. Have it summarize your existing history into a markdown file, and read it in with any engine.

The APIs are pretty interchangeable too. Just ask to convert from one to the other if you need to.
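For plain-text histories the conversion really is mechanical; a minimal sketch of one direction (text-only; tool calls and multimodal content would need real work):

```python
def openai_to_anthropic(messages):
    """Reshape an OpenAI-style chat history for the Anthropic Messages
    API: system messages become the top-level `system` string, and
    user/assistant turns pass through as `messages`."""
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    turns = [m for m in messages if m["role"] in ("user", "assistant")]
    return {"system": system, "messages": turns}
```

The request/response envelopes differ more than the message shape does, but the history itself round-trips cleanly.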

pdntspaabout 2 hours ago
As a rule I've been symlinking or referencing generic "agents" versions of claude workflow files instead of placing those files directly in claude's purview

AGENTS.md / skills / etc

basiswordabout 1 hour ago
I switched a couple of weeks ago just to see how it went. Codex is no better or worse. They’re both noticeably better at different things. I burn through my tokens much much faster on Codex though. For what it’s worth I’m sticking with Codex for now. It seems to be significantly better at UI work although has some really frustrating bad habits (like loading your UI with annoying copywriting no sane person would ever do).
dheeraabout 3 hours ago
Coding models are effectively free. They are capable of making money and supporting themselves given access to the right set of things. That is what I do
BrokenCogsabout 4 hours ago
I'm here for the pelicans and I'm not leaving until I see one!
qingcharlesabout 4 hours ago
I've come to prompt pelicans and chew gum, and I'm all outta gum!
pixel_poppingabout 4 hours ago
That's a true CTO right there.
bytesandbitsabout 3 hours ago
I know a 10x engineer when i see one.
BrokenCogsabout 2 hours ago
In binary that's just a 10x engineer
RomanPushkinabout 3 hours ago
Ctrl+F: pelican

F5

tantalorabout 3 hours ago
simonw pls
CompleteSkepticabout 3 hours ago
Is this the first time OpenAI has published comparisons to other labs?

Seems so to me - see GPT-5.4[1] and 5.2[2] announcements.

Might be a tacit admission of being behind.

[1] https://openai.com/index/introducing-gpt-5-4/
[2] https://openai.com/index/introducing-gpt-5-2/

blixtabout 1 hour ago
Releases keep shifting from API-forward to product-forward, with the API now lagging behind the proprietary product surface and special partnerships.

I wouldn't be surprised if this is the year some models simply stop being available as a plain API, while foundation model companies succeed at capturing more use cases in their own software.

gallerdudeabout 4 hours ago
If GPT-5.5 Pro really was Spud, and two years of pretraining culminated in one release, WOW, you cannot feel it at all from this announcement. If OpenAI wants to know why it feels like they've fallen behind Anthropic on vibes, they need look no further than their marketing department. This makes everything feel like a completely linear upgrade in every way.
I_am_tiberiusabout 3 hours ago
Clearly they felt a big backlash when version 5 was released, and now they are afraid of another response like that. And effectively, for the user, it will likely only be a small update.
jimbob45about 4 hours ago
Also the naming department. You can tell that this is the AI company Microsoft chose to back because their naming scheme is as bad as .NET's.
gallerdudeabout 3 hours ago
I actually have no problem with the 5.x line... but if Pro really was an entirely new pretrain, they did a horrible job conveying that.
h14habout 4 hours ago
This seems huge for subscription customers. Looking at the Artificial Analysis numbers, 5.5 at medium effort yields roughly the same intelligence as 5.4 (xhigh) while using less than a fifth of the tokens.

As long as tokens count roughly equally toward subscription plan usage between 5.5 and 5.4, you can look at this as effectively a 5x increase in usage limits.

gausswhoabout 3 hours ago
As someone who always leaves the effort setting at its default and is OK with existing models, should I be shifting gears more manually as providers sell us newer models? Is medium or lower effort on a new model better than the free/cheaper models?
jryioabout 4 hours ago
Their 'Preparedness Framework'[1] is 20 pages and looks ChatGPT-generated; I don't feel prepared after reading it.

https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbdde...

ativzzzabout 4 hours ago
I like that they waited for opus 4.7 to come out first so they had a few days to find the benchmarks that gpt 5.5 is better at
eknkcabout 4 hours ago
Well, anecdotally, 5.4 was already better than Opus 4.7, so it should not have been hard.
wahnfriedenabout 4 hours ago
I like that Anthropic rushed 4.7 out to get a couple days of coverage before 5.5 hit
spprashantabout 3 hours ago
Everything since that launch to this release has been a PR disaster for Anthropic.
dandakaabout 3 hours ago
I would argue the disaster started mid-4.6, when they started juggling rate limits while hitting uptime problems. Great that we have some healthy competition; now waiting for the next move from DeepMind.
louiereedersonabout 4 hours ago
For a 56.7 score on the Artificial Analysis Intelligence Index, GPT-5.5 used 22M output tokens. For a score of 57, Opus 4.7 used 111M output tokens.

The efficiency gap is enormous. Maybe it's the difference between a GB200 NVL72 and an Amazon Trainium chip?

swyxabout 4 hours ago
Why would the chip affect token quantity? That comes down to the model, for all models.
louiereedersonabout 4 hours ago
Chip costs strongly impact the economics of model serving.

It is entirely plausible to me that Opus 4.7 is designed to consume more tokens in order to artificially reduce the API cost/token, thereby obscuring the true operating cost of the model.

I agree though, I chose poor phrasing originally. Better to say that GB200 vs Trainium could contribute to the efficiency differential.

AtNightWeCodeabout 1 hour ago
You need to compare total cost. Token count is irrelevant.
karmasimidaabout 4 hours ago
Chips don't impact output quality to this magnitude.
ChrisGreenHeurabout 4 hours ago
True, but the quality of the power played a large part. Most likely nuclear power for this high-quality token efficiency.
dist-epochabout 2 hours ago
If it's a new pretrain, the token embeddings could be wider: you can pack more info into each token making its way through the system.

Like Chinese versus English - you need fewer Chinese characters to say something than if you write that in English.

So this model internally could be thinking in much more expressive embeddings.

2001zhaozhaoabout 4 hours ago
Pricing: $5/1M input, $30/1M output

(same input price and 20% more output price than Opus 4.7)

tedsandersabout 3 hours ago
Yep, it's more expensive per token.

However, I do want to emphasize that this is per token, not per task.

If we look at Opus 4.7, it uses smaller tokens (so 1-1.35x more tokens than Opus 4.6 for the same text), and it was also trained to think longer. https://www.anthropic.com/news/claude-opus-4-7

On the Artificial Analysis Intelligence Index eval for example, in order to hit a score of 57%, Opus 4.7 takes ~5x as many output tokens as GPT-5.5, which dwarfs the difference in per-token pricing.

The token differential varies a lot by task, so it's hard to give a reliable rule of thumb (I'm guessing it's usually going to be well below ~5x), but hope this shows that price per task is not a linear function of price per token, as different models use different token vocabularies and different amounts of tokens.

We have raised per-token prices for our last couple models, but we've also made them a lot more efficient for the same capability level.

(I work at OpenAI.)
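The arithmetic behind that, using only figures quoted in this thread (the $30/1M output price, the ~$25/1M implied for Opus 4.7 by the "20% more" comment, and the 22M-vs-111M output-token totals from the Artificial Analysis comparison mentioned above; all illustrative):

```python
def run_cost(price_per_mtok, output_tokens):
    """USD cost of generating `output_tokens` at a given output-side
    price (USD per 1M tokens)."""
    return price_per_mtok * output_tokens / 1_000_000

# Whole-eval output-token totals and prices quoted elsewhere in this thread
gpt55  = run_cost(30, 22_000_000)   # GPT-5.5's run
opus47 = run_cost(25, 111_000_000)  # Opus 4.7's run
```

Despite the 20% higher per-token price, the per-eval cost comes out roughly 4x lower, which is the "per token, not per task" point.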

2001zhaozhaoabout 2 hours ago
I don't have anything to add, but I like how you guys actually send people to communicate on Hacker News. Brilliant.
simianwordsabout 3 hours ago
Maybe it's a good idea to be more explicit about this; a cost-analysis benchmark would be a nice accompaniment.

This kind of thing keeps popping up each time a new model is released, and I don't think people are aware that token efficiency can change.

tedsandersabout 2 hours ago
Agreed. Would be great if everyone starts reporting cost per task alongside eval scores, especially in a world where you can spend arbitrary test-time compute. This is one thing I like about the Artificial Analysis website - they include cost to run alongside their eval scores: https://artificialanalysis.ai/
sergiotapiaabout 3 hours ago
That pricing is extremely spicy, wow.
oh_noabout 3 hours ago
Yes, but as far as I know the GPT tokenizer is about the same as Opus 4.6's, whereas 4.7 is seeing something in the ballpark of a 30% increase. This should still be cheaper, even disregarding the concerns about 4.7's thinking burning tokens.
sosodevabout 4 hours ago
I hope the industry starts competing more on highest scores with lowest tokens like this. It's a win for everybody. It means the model is more intelligent, is more efficient to inference, and costs less for the end user.

So much bench-maxxing is just giving the model a ton of tokens so it can inefficiently explore the solution space.

an0malousabout 4 hours ago
The premise of the trillion dollars in AI investments is not that it’ll be as good as it currently is but cheaper. It’s AGI or bust at this point.
sosodevabout 4 hours ago
Yeah, but don't you agree that needing fewer tokens to accomplish the same goal is a sign of increasing intelligence?
camdenreslinkabout 2 hours ago
It could be. Or just smarter caching (which wouldn't necessarily have to do with model intelligence). Or just overfitting on the 95% most common prompts (which could save tokens but make the models less intelligent/flexible).
energy123about 3 hours ago
Less cost to accomplish the same goal is a sign of intelligence. That's not necessarily achieved with less tokens but it may be.
mchusmaabout 3 hours ago
Kind of? But I really care about price, speed, and quality. If it used 10x the tokens at a 10th of the price per token with the same latency, I would be neutral on it.

Kimi 2.6, for example, seems to throw more tokens at problems to improve performance (for better or worse).

NitpickLawyerabout 4 hours ago
> Across all three evals, GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

Yeah, this was the next step. Have RLVR make the model good. Next iteration start penalising long + correct and reward short + correct.

> CyberGym 81.8%

Mythos was self-reported at 83.1%... so not far off. Also, it seems they're going the same route with verification. We're entering the era where SotA will only be available after KYC, it seems.

torawayabout 3 hours ago
Isn't Mythos limited to a selected group of companies/organizations Anthropic chose themselves? If the OpenAI announcement for GPT-5.5 is accurate the "trusted cyber access" just requires an open, seemingly straightforward identity verification step.

https://openai.com/index/scaling-trusted-access-for-cyber-de...

  > We are expanding access to accelerate cyber defense at every level. We are making our cyber-permissive models available through Trusted Access for Cyber , starting with Codex, which includes expanded access to the advanced cybersecurity capabilities of GPT‑5.5 with fewer restrictions for verified users meeting certain trust signals (opens in a new window) at launch.

  > Broad access is made possible through our investments in model safety, authenticated usage, and monitoring for impermissible use. We have been working with external experts for months to develop, test and iterate on the robustness of these safeguards. With GPT‑5.5, we are ensuring developers can secure their code with ease, while putting stronger controls around the cyber workflows most likely to cause harm by malicious actors.

  > Organizations who are responsible for defending critical infrastructure  can apply to access cyber-permissive models like GPT‑5.4‑Cyber, while meeting strict security requirements to use these models for securing their internal systems.
"GPT‑5.4‑Cyber" is something else and apparently needs some kind of special access, but that CyberGym benchmark result seems to apply to the more or less open GPT-5.5 model that was just released.
cbg0about 4 hours ago
Isn't CyberGym an open benchmark so trivial to benchmaxx anyway?
mattasabout 4 hours ago
Not good for employees that are being measured by their token usage.
nickvecabout 3 hours ago
I'm conflicted whether I should keep my Claude Max 5x subscription at this point and switch back to GPT/Codex... anyone else in a similar position? I'd rather not be paying for two AI providers and context switching between the two, though I'm having a hard time gauging if Claude Code is still the "cream of the crop" for SWE work. I haven't played around with Codex much.
slawr180535 minutes ago
I was all in on Claude Code as my daily driver for web development, and I love it. But I enjoy using pi as my harness more, and I have never run out of tokens with Codex yet. Claude Code almost always runs out for me with the same amount of usage.

After migrating over the token and harness issues, I was pleasantly surprised that Codex seems to perform as well or better too!

Things change so often in this field, but I prefer Codex now, even though Anthropic seems to have so much more hype for coding.

mpaepperabout 2 hours ago
I switched from CC to Codex a few days ago. I get limited much less and the code quality is similar, so not looking back
gck1about 1 hour ago
Which plan? And how are the weekly limits on that plan compared to CCs equivalent subscription?

I don't really care about 5h limits, I can queue up work and just get agents to auto continue, but weekly ones are anxiety inducing.

the_sleaze_about 3 hours ago
I have experienced zero friction swapping between the two models; in fact, pitting them against each other has given me the highest success rate so far.
nickvecabout 3 hours ago
Interesting. I may have to give that a shot, thanks.
scottyahabout 2 hours ago
Every time I've followed the hype and tried OpenAI models, I've found them lacking for the most part. It might just be that I prefer peer-programming over spec-ing out a task and handing it off, but I've never been as productive as I am with Claude. Also, I'm still hung up on the DoD ethics stuff.
losvedirabout 4 hours ago
> It excels at ... researching online

How does this work exactly? Is there like a "search online" tool that the harness is expected to provide? Or does the OpenAI infra do that as part of serving the response?

I've been working on building my own agent, just for fun, and I conceptually get using a command line, listing files, reading them, etc, but am sort of stumped how I'm supposed to do the web search piece of it.

Given that they're calling out that this model is great at online research - to what extent is that a property of the model itself? I would have thought that was a harness concern.

wincyabout 3 hours ago
I’ve noticed when writing little bedtime stories that require specific research (my kids like Pokemon stories and they’ve been having an episodic “pokemon adventure” with them as the protagonists) ChatGPT has done a fantastic job of first researching the moves the pokemon have, then writing the actual story. The only mistake it consistently makes is when I summarize and move from a full context session, it thinks that Gyarados has to swim and is incapable of flying.

It definitely seems like it does all the searching first, with a separate model, loads that in, then does the actual writing.

100msabout 4 hours ago
It's literally a distinct model with a different optimisation goal compared to normal chat. There's a ton of public information around how they work and how they're trained
dist-epochabout 2 hours ago
It's a property of the model in the sense that it has great Google Fu.

The harness provides the search tool, but the model provides the keywords to search for, etc.

nickandbro19 minutes ago
I just prompted GPT-5.5 Pro "Solve Nuclear Fusion" and it one shotted it (kidding obviously)
baalimagoabout 4 hours ago
Worth the 100% price increase over GPT-5.4?
cbg0about 4 hours ago
For less than a 10% bump across the benchmarks? Probably not, but if your employer is paying (which is probably what OAI is counting on), it's all good.

It's kind of starting to make sense that they doubled the usage on Pro plans: if usage drains twice as fast on 5.5 after that promo is over, a lot of people on the $100 plan might have to upgrade.

jstummbilligabout 4 hours ago
You are paying per token, but what you care about is token efficiency. If token efficiency has improved by as much as they claim (i.e., you need fewer tokens to complete a task successfully), all seems well.
mangolieabout 4 hours ago
Not for coding, because it actually needs to read and write large files.
cbg0about 4 hours ago
If it uses half the tokens to complete a task, then doubling the cost is perfectly fine. But is that actually true?
M4R5H4LLabout 2 hours ago
I am a heavy Claude Code user. I just tried using Codex with 5.4 (as a Plus user I don't have access to 5.5 yet), and it was quite underwhelming. The agent regularly stopped much earlier than I wanted. It also claimed to have fixed issues when it did not; this is not unique to GPT, and Opus has similar issues, but Claude will not make the same mistake three times in a row. It is unusable for me at the moment, while Claude lets me get real work done on a daily basis. Until then...
bhu8about 2 hours ago
GPT-5.3-codex is miles better than 5.4 in that regard. It's better at orchestration, and it actually does the things it says it did. I haven't tested 5.5 yet, but using 5.4 for exploration + brainstorming and handing the findings over to 5.3-codex works pretty well.
thinkindieabout 2 hours ago
This reminds me of when Chrome and Firefox were racing to release new "major versions" (at least from the semver POV) without adding significant new functionality, at a time when browsers were already becoming a commodity. Just as we no longer care about a new Chrome or Firefox release, so it will be with new model versions.
jstummbilligabout 2 hours ago
The only difference being that we still do care, very much. The models can still get a lot better before we stop caring.
meetpateltechabout 4 hours ago
ZeroCool2uabout 4 hours ago
Benchmarks are favorable enough that they're comparing to non-OpenAI models again. Interesting that tokens/second is similar to 5.4. Maybe there's some genuine innovation beyond bigger-model-better this time?
qsortabout 4 hours ago
It's behind Opus 4.7 in SWE-Bench Pro, if you care about that kind of thing. It seems on-trend, even though benchmarks are less and less meaningful for the stuff we expect from models now.

Will be interesting to try.

Rapzidabout 3 hours ago
In Copilot where it's easy to switch models Opus 4.6 was still providing, IMHO, better stock results than GPT-5.4.

Particularly in areas outside straight coding tasks, such as analysis and planning: better and more thorough output, and better use of formatting options (tables, diagrams, etc.).

I'm hoping to see improvements in this area with 5.5.

jdw64about 4 hours ago
GPT is really great, but I wish the GPT desktop app supported MCP as well.

You can kind of use connectors like MCP, but having to use ngrok every time just to expose a local filesystem for file editing is more cumbersome than expected.

throwaway911282about 4 hours ago
Use codex app
c0rruptbytes12 minutes ago
literally cannot launch the codex app anymore
w10-1about 1 hour ago
NYTimes article - on the same day?

  https://www.nytimes.com/2026/04/23/technology/openai-new-model.html
I can see how some model releases would meet the NY Times news-worthy threshold if they demonstrated significance to users - i.e., if most users were astir and competitors were re-thinking their situation.

However, this same-day article came out before people really looked at it. It seems largely intended to contrast OpenAI with Anthropic's caution, before there has been any evidence that the new model has cyber-security implications.

It's not at all clear that the broader discourse is helping, if even the NY Times is itself producing slop just to stoke questions.

adam1231 minutes ago
"Sometime with GPT-5.5 I become lazy"

I don't want to be lazy.

vessenesabout 4 hours ago
Yay. 5.4 was a frustrating model: moments of extreme intelligence (I liked it very much for code review), but also a sort of idiocy/literalism that made it very unsuited to vague prompting. I also found its openclaw engagement wooden and frustrating. Which didn't matter until Anthropic started charging $150 a day for Opus for openclaw.

Anyway, these benchmarks look really good; I'm hopeful on the qualitative stuff.

kburmanabout 2 hours ago
What a time. I am back here genuinely wishing for OpenAI to release a great model, because without stiff competition, it feels like Anthropic has completely lost its mind.
thimabiabout 4 hours ago
Will we also see a GPT-5.5-Codex version of this model? Or will the same version of it be served both in the web app and in Codex?
Uehrekaabout 4 hours ago
After 5.1, we haven’t seen a -codex-max model, presumably because the benefits of the special training gpt-5.1-codex-max got to improve long context work filtered into gpt-5.2-codex, making the variant no longer necessary (my personal experience accords with this). I’ve been using gpt-5.4 in Codex since it came out, it’s been great. I’ve never back-to-back tested a version against its -codex variant to figure out what the qualitative difference is (this would take a long time to get a really solid answer), but I wouldn’t be surprised if at some point the general-purpose model no longer needs whatever extra training the -codex model gets and they just stop releasing them.

I thought it was weird that for almost the entire 5.3 generation we only had a -codex model, I presume in that case they were seeing the massive AI coding wave this winter and were laser focused on just that for a couple months. Maybe someday someone will actually explain all of this.

jumploopsabout 4 hours ago
> GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

This might be great if it translates to agentic engineering and not just benchmarks.

It seems some of the gains from Opus 4.6 to 4.7 required more tokens, not less.

Maybe more interesting is that they’ve used codex to improve model inference latency. iirc this is a new (expectedly larger) pretrain, so it’s presumably slower to serve.

beeringabout 4 hours ago
With Opus it’s hard to tell what was due to the tokenizer changes. Maybe using more tokens for the same prompt means the model effectively thinks more?
conradkayabout 4 hours ago
They say latency is the same as 5.4 and 5.5 is served on GB200 NVL72, so I assume 5.4 was served on hopper.
pants2about 2 hours ago
Labs still aren't publishing ARC-AGI-3 scores, even though it's been out for some time. Is it because the numbers are too embarrassing?
kilroy123about 2 hours ago
To be fair, there's not much to report. Isn't it pretty much at 0?
pants2about 1 hour ago
Opus-4.6 with 0.5% currently leads GPT-5.4 with 0.2%[1].

Seems meaningful even if the absolute numbers are very low. That's sort of the excitement of it.

[1] https://arcprize.org/leaderboard

AG25about 1 hour ago
GPT-5.5 was just released and OpenAI didnt mention ARC AGI 3 at all, their score probably sucks.
Schlagbohrer38 minutes ago
entering this comments area wondering if it will be full of complaints about the new personality, as with every single LLM update
cscheidabout 3 hours ago
I know this is irrelevant on the grand scheme of things, but that WebGL animation is really quite wrong. That is extra funny given the "ensure it has realistic orbital mechanics." phrase in the prompt.

I prescribe 20 hours of KSP to everyone involved, that'll set them right.

I_am_tiberiusabout 4 hours ago
I'd really like to see improvements like these:

- Some technical proof that my data is never read by OpenAI.

- Proof that no logs of my data or derived data are saved.

etc...
benjx88about 3 hours ago
Good job on the release notice. I appreciate that it isn't just marketing fluff but actually includes the technical specs for those of us who care, and isn't concentrated on coding agents only.

I hope GPT 5.5 Pro isn't cutting corners and neutered from the start; you've got the compute for it not to be.

GenerWorkabout 3 hours ago
Looking at the space/game/earthquake tracker examples makes me hopeful that OpenAI is going to focus a bit more on interface visual development/integration from tools like Figma. This is one area where Anthropic definitely reigns supreme.
nickandbroabout 3 hours ago
Very impressive! Interesting how it seems to surpass Opus 4.7 on every benchmark except SWE-Bench Pro (Public). You would think that, doing so well at Cyber, it would naturally possess more ability there. Wonder what makes up the actual difference.
extrabout 4 hours ago
Seems like a continuation of the current meta where GPT models are better in GPT-like ways and Claude models are better in Claude-like ways, with the differences between each slightly narrowing with each generation. 5.5 is noticeably better to talk to, 4.7 is noticeably more precise. Etc etc.
bradley13about 3 hours ago
"our strongest set of safeguards to date"

How much capability is lost, by hobbling models with a zillion protections against idiots?

Every prompt gets evaluated, to ensure you are not a hacker, you are not suicidal, you are not a racist, you are not...

Maybe just...leave that all off? I know, I know, individual responsibility no longer exists, but I can dream.

zerotosixtyabout 2 hours ago
For those who are using GPT-5.5: how does it compare to Opus 4.6 / 4.7 in terms of code generation?
nullbyteabout 4 hours ago
82.7% on Terminal Bench is crazy
toephu2about 3 hours ago
Is it? There are 5 other models near ~80% and it was achieved in March... which in AI-world seems like a century ago.

https://www.tbench.ai/leaderboard/terminal-bench/2.0

ejpirabout 2 hours ago
those are not verified. I've tried forgecode and I cannot believe they didn't do something to influence the benchmarks
GodelNumberingabout 1 hour ago
Yup, they were found to be sneaking the answer key using agents.md

https://debugml.github.io/cheating-agents/#sneaking-the-answ...

impulser_about 4 hours ago
What is the reason behind OpenAI being able to release new models very fast?

Since Feb when we got Gemini 3.1, Opus 4.6, and GPT-5.3-Codex we have seen GPT-5.4 and GPT-5.5 but only Opus 4.7 and no new Gemini model.

Both of these are pretty decent improvements.

minimaxirabout 4 hours ago
Competition.
pixel_poppingabout 4 hours ago
This is frankly exciting. Outside of the politics of it all, it always feels great to wake up to a new model being released. I'll personally stay up quite late tonight if GPT-5.5 drops in Codex.
literalAardvarkabout 4 hours ago
Anthropic is really tiny, and Google is just being Google, their models are just to show that they're hip with what the kids are doing.
wmfabout 4 hours ago
I wonder if it's the same model and they just keep adding more post-training.
Squarexabout 3 hours ago
The rumor was that 5.5 is a brand new pretrain. But who knows; it's 2x as expensive as 5.4, so it would check out.
tantalorabout 3 hours ago
They aren't new models.
AbuAssarabout 3 hours ago
This is the first time OpenAI has included competing models in its benchmarks; previously it included only its own models.
YmiYugyabout 4 hours ago
So according to the benchmarks somewhere in between Opus 4.7 and Mythos
jorl17about 4 hours ago
GPT 5.4 is already better than Opus 4.7 to me. But, then again, Opus 4.7 is a massive disappointment. I hope they don't discontinue 4.6.
robwwilliamsabout 3 hours ago
Depends on goals. For long free-form discussions I find Opus 4.7 Adaptive better/deeper than Opus 4.6 Extended. But the usual caveats apply: first week of use, and the token budget seems generous now on Max 5X.
coffeemugabout 2 hours ago
I had the opposite experience. Opus 4.6 extended feels like the first genuinely intelligent model to converse with, Opus 4.7 adaptive feels like slightly smarter LinkedIn slop.
steinvakt2about 4 hours ago
I’ve had great experience using opus 4.7 in cursor. Works for everything including iOS frontend
jorl17about 4 hours ago
Cursor is what I daily-drive. 4.7 has been terrible for my mostly python-driven work (whereas Opus 4.6 was literally revolutionary to me). Our frontend folks are also complaining.

I left a comment here with this sentiment https://news.ycombinator.com/item?id=47879896

k2xlabout 4 hours ago
Surprised to see SWE-Bench Pro only a slight improvement (57.7% -> 58.6%) while Opus 4.7 hit 64.3%. I wonder what Anthropic is doing to achieve higher scores on this - and also what makes this test particular hard to do well in compared to Terminal Bench (which 5.5 seemed to have a big jump in)
vexnaabout 4 hours ago
There's an asterisk right below that table stating that:

> *Anthropic reported signs of memorization on a subset of problems

And from the Anthropic's Opus 4.7 release page, it also states:

> SWE-bench Verified, Pro, and Multilingual: Our memorization screens flag a subset of problems in these SWE-bench evals. Excluding any problems that show signs of memorization, Opus 4.7’s margin of improvement over Opus 4.6 holds.

conradkayabout 4 hours ago
Was 4.7 distilled off Mythos (which got 77.8%)? Interesting how mythos got 82% on terminal-bench 2.0 compared to 82.7% for GPT-5.5.

Also notice how they state just for SWE-Bench Pro: "*Anthropic reported signs of memorization on a subset of problems"

cchristabout 3 hours ago
Which is better GPT-5.5 or Opus 4.7? And for what tasks?
ace2paceabout 2 hours ago
I hear its as good as Opus 4.7.

The battle has just begun

faxmeyourcodeabout 4 hours ago
How does it compare to mythos?
woeiruaabout 4 hours ago
Nice to see them openly compare to Opus-4.7… but they don’t compare it against Mythos which says everything you need to know.

The LinkedIn/X influencers who hyped this as a Mythos-class model should be ashamed of themselves, but they’ll be too busy posting slop content about how “GPT-5.5 changes everything”.

A_D_E_P_Tabout 2 hours ago
Almost nobody can actually use Mythos, though?
ionwakeabout 4 hours ago
Is there anywhere I can try it? (I just stopped my Pro sub.) I was wondering if there's a playground or third party where I can test it briefly.
throwaway2027about 4 hours ago
Good timing I had just renewed my subscription.
tantalorabout 4 hours ago
> A playable 3D dungeon arena

Where's the demo link?

jedisct1about 1 hour ago
GPT-5.4 is already an incredible model for code reviews and security audits with the swival.dev /audit command.

The fact that GPT-5.5 is apparently even better at long-running tasks is very exciting. I don’t have access to it yet, but I’m really looking forward to trying it.

wslhabout 1 hour ago
Related and insightful: "GPT-5.5: Mythos-Like Hacking, Open to All" [1].

[1] https://news.ycombinator.com/item?id=47879330

i_love_retrosabout 1 hour ago
Oh shiiiiit boy! An incrementation dropped!!
egorfineabout 3 hours ago
> We are releasing GPT‑5.5 with our strongest set of safeguards to date

...

> we’re deploying stricter classifiers for potential cyber risk which some users may find annoying initially

So we should expect not to be able to check our own code for vulnerabilities, because the model inherently cannot know whether I'm feeding it my code or someone else's.

dannywabout 1 hour ago
Hopefully not, because checking your codebase for vulnerabilities is really valuable.

I hope it’s just limits on pentesting and stuff, and not for code analysis and review.

ant6nabout 2 hours ago
My impression has been that ChatGPT-5.4 has been getting dumber and more exhausting in the last couple of weeks. Like it makes a lot of obvious mistakes, ignores (parts of) prompts, and keeps forgetting important facts or requirements.

Maybe this is a crazy theory, but I sometimes feel like they gimp their existing models before a big release so you'll notice more of a "step".

debbaabout 4 hours ago
Cannot see it in Codex CLI
boring-humanabout 2 hours ago
Did you upgrade the tool binaries? I also couldn't see it until after the upgrade.
senkoabout 3 hours ago
I might just be following too many AI-related people on X, but omg the media blitz around 5.5 is aggressive.

Soo many unconvincing "I've had access for three weeks and omg it's amazing" takes, it actually primes me for it to be a "meh".

I prefer to see for myself, but the gradual rollout, combined with full-on marketing campaign, is annoying.

jawigginsabout 3 hours ago
What is the major and minor semver meaning for these models? Is each minor release a new fine-tuning with a new subset of example data while the major releases are made from scratch? Or do they even mean anything at this point?
gck130 minutes ago
Nothing. The next major increment is going to happen when marketing department is confident they can sell it as a major improvement without everyone laughing at them. Which at this point seems like never.

I think Anthropic fearmongering and "leaks" of Mythos was them testing the ground for 5.x, which seems to have backfired.

wiseowiseabout 2 hours ago
> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated."

Everybody understands that you need to make money, but can you tone it down with the f*cking FOMO, please? It sounds just pathetic at this point:

'one engineer at NVIDIA', 'limb amputated'

Put the cunt in a room and give me a handsaw, I want to see how fast he'll give up his arm over some cloud model.

phillipcarterabout 4 hours ago
... sigh. I realize there's little that can be done about this, but I just got through a real-world session determining whether Opus 4.7 is meaningfully better than Opus 4.6 or GPT 5.4, and now there's another one to try things with. These benchmark results generally mean little to me in practice.

Anyways, still exciting to see more improvements.

cynicalpeaceabout 4 hours ago
It's possible that "smarter" AI won't lead to more productivity in the economy. Why?

Because software and "information technology" generally didn't increase productivity over the past 30 years.

This has been long known as Solow's productivity paradox. There's lots of theories as to why this is observed, one of them being "mismeasurement" of productivity data.

But my favorite theory is that information technology is mostly entertainment, and rather than making you more productive, it distracts you and makes you more lazy.

AI's main application has been information space so far. If that continues, I doubt you will get more productivity from it.

If you give AI a body... well, maybe that changes.

aerhardtabout 4 hours ago
> "information technology" generally didn't increase productivity

Do you think it'd be viable to run most businesses on pen and paper? I'll give you email and being able to consume informational websites - rest is pen and paper.

cynicalpeaceabout 3 hours ago
Productivity metrics were better when businesses were run on just pen and paper. Of course, there could be many confounding factors, but there are also many reasons why this could be so. Just a few hypotheses:

- Pen and paper become a limiting factor on bureaucratic BS

- Pen and paper are less distracting

- Pen and paper require more creative output from the user, as opposed to screens which are mostly consumptive

etc etc

theLiminatorabout 3 hours ago
> Productivity metrics were better when businesses were run on just pen and paper

What metrics are these?

ewrsabout 3 hours ago
It's quite possible the use of LLMs means we're using less effort to produce the same output. This seems good.

But exerting less effort also conditions you to be weaker, and less able to engage your brain deeply and grind as hard as you once did. This is bad.

Which effect dominates? Difficult to say.

Of course this is absolutely possible. There was a time when physical exertion was the norm and nobody was overweight. That isn't the case anymore, is it?

aiaiai177about 4 hours ago
Downvoted by the AI Nazis. They are running a tight ship before the IPOs.
cbg0about 4 hours ago
I downvoted it because it doesn't add anything useful to the conversation, and I don't own any AI stock.
cynicalpeaceabout 4 hours ago
It's a hypothesis that "smarter" AI models, ie GPT-5.5, may not be a great boon to productivity. Given that this is the raison d'etre of AI models, and improving them, I don't see why it is any less useful than any other discussion.
objektifabout 4 hours ago
Are there faster mini/nano versions as well?
tedsandersabout 4 hours ago
Not this time, no.
abiabout 4 hours ago
Usually, those get released a few weeks later.
elAhmoabout 3 hours ago
Is Codex receiving 5.4 or 5.5 release?

I am still using Codex 5.3 and haven't switched to GPT 5.4 as I don't like the 'its automatic bro trust us', so wondering is Codex going to get these specific releases at all in the future.

numbersabout 4 hours ago
I've stopped trusting these "trust me bro" benchmarks and just started going to LM Arena and looking for the actual benchmark comparisons.

https://arena.ai/leaderboard/code

stri8tedabout 4 hours ago
I doubt this is representative of real world usage. There is a difference between a few turns on a web chatbot, vs many-turn cli usage on a real project.
nba456_about 4 hours ago
This is not any better of a benchmark
varispeedabout 3 hours ago
I am sceptical. The generations after 4o have become crappier and crappier. I hope this one changes the trend. 5.4 is unusable for complex coding work.
mondojesusabout 4 hours ago
I'm still using 5.3 in codex. Are 5.4 and 5.5 better than 5.3 in concrete ways?
cbg0about 3 hours ago
The benchmarks say so, but try it out with actual tasks and be the judge.
enraged_camelabout 4 hours ago
Is this the first time OpenAI compared their new release to Anthropic models? Previously they were comparing only to GPT's own previous versions.
k2xlabout 4 hours ago
ARC-AGI 3 is missing from this list. Given that the SOTA before 5.5 was <1% if I recall, I wonder if this didn't make meaningful progress.
redox99about 4 hours ago
It's a silly benchmark anyways.
cmrdporcupineabout 4 hours ago
Not rolled out to my Codex CLI yet, but some users on Reddit claiming it's on theirs.
throwaw12about 4 hours ago
If anyone tried it already, how do you feel?

Numbers look too good, wondering if it is benchmaxxed or not

xnxabout 4 hours ago
Next up: Google I/O on May 19?

I have to imagine they'll go to Gemini 3.5 if only for marketing reasons.

luqtasabout 4 hours ago
they are using ethical training weights this time!!! /j
yuvrajmalgatabout 3 hours ago
finally
baxuzabout 2 hours ago
Ah yes, the next "trust me bro"
MagicMoonlightabout 4 hours ago
Two hundred pages of shilling and it’s a 1% improvement in the benchmarks. They’re dead in the water.

Imagine spending 100m on some of these AI “geniuses” and this is the best they can do.

XCSmeabout 3 hours ago
2x the price for 1-5% performance gain
justonepost2about 4 hours ago
the attenuation of man nears

< 5 years until humans are buffered out of existence tbh

may the light of potentia spread forth beyond us

codersshabout 4 hours ago
Great model. I have been using Codex and it's awesome. Let's see what GPT-5.5 does to it.
vardumpabout 3 hours ago
I just can't bear to use services from this company after what they did to the global DRAM markets.

I'm not trying to make any kind of moral statement, but the company just feels toxic to me.