Discussion (265 comments), originally on HackerNews

dakiol about 1 hour ago
> On the flip side, there are hundreds of ways that these tools cause genuine harm, not just to individuals but to entire systems.

Yeah, agree. I think it's the first time I'm asking myself: Ok, so this new cool tech, what is it good for? Like, in terms of art, it's discarded (art is about humans); in terms of assets: sure, but people are getting tired of AI-generated images (and even if we can't tell whether a given image is AI-generated, we can know if companies are using AI to generate images in general, so the appeal is decreasing). Ads? C'mon, that's depressing.

What else? In general, I think people are starting to realize that things generated without effort are not worth spending time with (e.g., no one is going to read your 30-page draft generated by AI; no one is going to review your 500-file PR generated by AI; no one is going to be impressed by the images you generate with AI; same goes for music and everything). I think we are gonna see a Renaissance of "human-generated" sooner rather than later. I see it already at work (colleagues writing in Slack "I swear the next message is not AI generated" and the like).

lucaslazarus about 1 hour ago
> I think it's the first time I'm asking myself: Ok, so this new cool tech, what is it good for?

I feel like this is something people in the industry should be thinking about a lot, all the time. Too many social ills today are downstream of the 2000s culture of mainstream absolute technoöptimism.

Vide Kranzberg's first law: "Technology is neither good nor bad; nor is it neutral."

runarberg 35 minutes ago
Completely unrelated, but I am curious about your keyboard layout, since you mistyped ö instead of -. These two symbols are side by side in the Icelandic layout, and ö sits where - is in the English (US) layout. As such, this is a common typo for people who regularly switch between the Icelandic and English (US) layouts (source: I am that person). I am curious whether there are more layouts where that could be common.
bulletsvshumans 27 minutes ago
This is also a stylistic choice that the New Yorker magazine uses for words with double vowels where you pronounce each one separately, like coöperate, reëlect, preëminent, and naïve. So possibly intentional.
heisenzombie 24 minutes ago
I suspect the diaeresis was intentional, in "New Yorker" style.

https://www.arrantpedantry.com/2020/03/24/umlauts-diaereses-...

atleastoptimal 39 minutes ago
The issue is that the signalling makes sense when human generated work is better than AI generated. Soon AI generated work will be better across the board with the rare exception of stuff the top X% of humans put a lot of bespoke highly personalized effort into. Preferring human work will be luxury status-signalling just like it is for clothing, food, etc.
fwipsy 12 minutes ago
Only novel art is interesting. AI can't really do novel. It's a prediction algorithm; it imitates. You can add noise, but that mostly just makes it worse. It can be used to facilitate original stuff though.

But so many people want to make art, and it's so cheap to distribute it, that art is already commoditized. If people prefer human-created art, satisfying that preference is practically free.

atleastoptimal 7 minutes ago
AI can be novel, there is nothing in the transformer architecture which prohibits novelty, it's just that structurally it much prefers pattern-matching.

But the idea of novelty is a misnomer I think. Any random number generator can arbitrarily create a "novel" output that a human has never seen before. The issue is whether something is both novel and useful, which is hard for even humans to do consistently.

dilDDoS 10 minutes ago
I'm probably in a weird subgroup that isn't representative of the general public, but I've found myself preferring "rough" art/logos/images/etc, basically because it signals a human put time into it. Or maybe not preferring, but at least noticing it more than the generally highly refined/polished AI artwork that I've been seeing.
paulddraper 36 minutes ago
"Artisanal art" as it were.
lxgr about 1 hour ago
I can’t design wallpapers/stickers/icons/…, but I can describe what I want to an image generation model verbally or with a source photo, and the new ones yield pretty good results.

For icons in particular, this opens up a completely new way of customizing my home screen and shortcuts.

Not necessary for the survival of society, maybe, but I enjoy this new capability.

latexr 18 minutes ago
So we get a fresh new cheap way to spread propaganda and lies and erode trust all across society while cementing power and control for a few at the top, and in return get a few measly icons (as if there weren't literally thousands of them freely available already) and silly images for momentary amusement?

What a rotten exchange.

SamuelAdams 14 minutes ago
I wonder what will happen to the entire legal system. It used to be fairly difficult to create convincing photos and videos.

AI can probably fool most court judges now. Or the defense can refute legitimate evidence by saying “it’s AI / false”. How would that be refuted?

camillomiller about 1 hour ago
Is that worth the cost of this technology? Both in terms of financial shenanigans and its environmental cost?
subroutine 29 minutes ago
Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?
Legend2440 34 minutes ago
The environmental cost is significantly overblown, especially water usage.
vrc 38 minutes ago
Depends on if you believe it will ever become cheaper. Either hardware, inspiring more efficient smaller models, or energy itself. The techno optimist believes that that is the inevitable and investable future. But on what horizon and will it get “zip drived” before then?
3dsnano 23 minutes ago
absolutely without a doubt it is
strulovich 42 minutes ago
Here’s one example:

I just recently used image generation to design my balcony.

It was a great way to see design ideas imagined in place and decide what to do.

There are many cases people would hire an artist to illustrate an idea or early prototype. AI generated images make that something you can do by yourself or 10x faster than a few years ago.

tecoholic 7 minutes ago
100%. A picture is worth a thousand words only when it conveys something. I love to see the pictures from my family even when they are taken with no care to quality or composition but I would look at someone else’s (as in gallery/exhibitions) only when they are stunning and captured beautifully. The medium is only a channel to communicate.

Also, this can’t be real. How many publications did they train this stuff on, and why is there no acknowledgment, even if only to say: we partnered with xyz manga house to make our model smarter at manga? Like, what’s wrong with this company?

Gigachad about 1 hour ago
This is where I’m at. If you can’t be bothered to write/make it, why would I be bothered to read or review it?
loudandskittish 2 minutes ago
Exactly how I feel. There is already more art, movies, music, books, video games and more made by human beings than I can experience in my lifetime. Why should I waste any time on content generated by the word guessing machine?
tempaccount5050 about 1 hour ago
Because I'm not an artist and can't afford to pay one for whatever business I have? This idea that only experts are allowed to do things is just crazy to me. A band poster doesn't have to be a labor of love artisanal thing. Were you mad when people made band posters with MS word instead of hiring a fucking typesetter? I just don't get it.
overgard 37 minutes ago
I dunno, I have some band posters that are pretty cool pieces of art that obviously had a lot of thought put into them (pre-AI era stuff). I don't think I'd hang up an AI generated band poster, even if it was cool; I'd feel weird and tacky about it.
Arch485 19 minutes ago
I think you're misunderstanding - most people's beef with AI art isn't that it "isn't made by experts", it's that

1) it's made from copyrighted works, and the original authors receive no credit;

2) it is (typically) low-effort;

3) there are numerous negative environmental effects of the AI industry in general;

4) there are numerous negative social effects of AI in general, and more specifically AI-generated imagery is used a lot for spreading misinformation;

5) there are numerous negative economic effects of AI, and specifically with art, it means real human artists are being replaced by AI slop, which is of significantly lower quality than the equivalent human output. Also, instead of supporting multiple different artists, you're siphoning your money to a few billion-dollar companies (this is terrible for the economy).

As a side note, if you have a business which truly cannot afford to pay any artists, there are a lot of cheaper, (sometimes free!) pre-paid art bundles that are much less morally dubious than AI. Plus, then you're not siphoning all of your cash to tech oligarchs.

squidsoup 21 minutes ago
> band poster doesn't have to be a labor of love artisanal thing

Very few bands would agree with that statement.

swader999 35 minutes ago
I agree, and who's to say your life experience isn't as valid as that of someone with fewer years but more time at the traditional tools? I'd think either extreme could produce real art if the tooling moat were reduced with AI.
Gigachad 31 minutes ago
I actually love MS word posters. It's a million times more authentic and enjoyable than a slop generation. If a band put up an AI poster I'd assume they lack any kind of taste which is the whole reason I'd want to listen to a band anyway.

I know this is controversial in tech spaces. But most people, particularly those in art spaces like music actually appreciate creativity, taste, effort, and personal connection. Not just ruthless efficiency creating a poster for the lowest cost and fastest time possible.

reaperducer 27 minutes ago
Because I'm not an artist and can't afford to pay one for whatever business I have?

If your business can't afford to spend $5 on Fiverr, it's not a business. It's not even panhandling.

AkBKukU 40 minutes ago
> can't afford to pay one for whatever business I have

At small scales what "art" does your business need? If you can't afford to hire an artist (which is completely fine, I couldn't for my business!) do you really need the art or are you trying to make your "brand" look more polished than it actually is? Leverage your small scale while you can because there isn't as much of an expectation for polish.

And no, a band poster doesn't have to be a labor of love. But it also doesn't have to be some big showy art either. If I saw a small band with a clearly AI generated poster it would make me question the sources for their music as well.

zulban about 1 hour ago
Nobody can be bothered to make my cat out of Lego at the size of Mount Everest, but if an AI did, I'd sure love to see it.

Your quip is pithy but meaningless.

Gigachad 38 minutes ago
I'm not saying it's worthless for yourself, it's worthless to me as a viewer. AI content is great for your own usage, but there is no point posting and distributing AI generation.

I could have generated my own content, so just send the prompt rather than the output to save everyone time.

youdots 13 minutes ago
The technically (in both senses) astonishing output is not far off from the qualities of real advertising: staged, attention-grabbing, artificially created, superficially demanded, commercially attractive. These align, and lots of similarities in the functions and outcomes of the two spheres come to mind.
JumpCrisscross 34 minutes ago
> What else?

I used to have an assistant make little index-card-sized agendas for get-togethers when folks were in town or I was organising a holiday or offsite. They used to be physical; now it's a cute thing I can text around so everyone knows when they should be up by (and by when, if they've slept in, they can go back to bed). AI has been good at making these. They don't need to be works of art, just cute and silly and maybe embedded with an inside joke.

reaperducer 29 minutes ago
I don't care how many times you write "cute," having my vacation time programmed with that level of granularity and imposed obligation sounds like the definition of "dystopian."

If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself.

Edit: I'm not an outlier here. There have even been sitcom episodes about overbearing hosts over-programming their guests' visits, going back at least to the Brady Bunch.

JumpCrisscross 24 minutes ago
> If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself

Okay. I'd be confused why you didn't voice up while we were planning everything as a group, but those people absolutely exist. (Unless it's someone's, read: a best friend or my partner's, birthday. Then I'm a dictator and nobody gets a choice over or preview of anything.)

I like to have a group activity planned on most days. If we're going to drive out to get an afternoon hike in before a dinner reservation (and if I have 6+ people in town, I need a dinner reservation, because no, I'm not cooking every single evening), or if I've paid for a snowmobile tour or a friend is bringing out their telescope for stargazing, there are hard no-later-than departure times to either not miss the activity or be respectful of others' time.

My family used to resolve that by constantly reminding everyone the day before and morning of, followed by constantly shouting at each other in the hours and minutes preceding and–inevitably–through that deadline. I prefer the way I've found. If someone wants to fuck off from an activity, myself included, that's also perfectly fine.

(I also grew up in a family that overplanned vacations. And I've since recovered from the rebound instinct, which involves not planning anything and leaving everything to serendipity. It works gorgeously, sometimes. But a lot of other times I wonder why I didn't bother googling the cool festival one town over beforehand, or regretted sleeping in through a parade.)

> There have even been sitcom episodes about overbearing hosts over-programming their guests' visits

Sure. And different groups have different strokes. When it comes to my friends and I, generally speaking, a scheduled activity every other day with dinners planned in advance (they all get hangry, every single fucking one of them) works best.

_the_inflator about 1 hour ago
We need to flip the script. AI is trying to do marketing: adding “illegal usage will lead to X” is a gateway to spark curiosity. There is this saying that censoring games for young adults makes sure they will buy them like crazy by circumventing the restrictions, because danger is cool.

There is nothing that cannot harm. Knives, cars, alcohol, drugs. A society needs to balance risks and benefits. Word can be used to do harm, email, anything - it depends on intention and its type.

_the_inflator about 1 hour ago
I see your point, but reconsider: we will see, and we need to see. Time will tell, and this is simply economics: useful? Yes or no.

I started being totally indifferent after thinking about my spending habits to check for unnecessary stuff after watching world championships for niche sports. For some, this is a calling; for others, waste. It is a numbers game, then.

swader999 39 minutes ago
I tend to share your same view. But is there really a line like you describe? Maybe AI just needs to get a few iterations better and we'll all love what it generates. And how's it really any different than any Photoshop computer output from the past?
NikolaNovak 18 minutes ago
While I agree with you, hacker news audience is not in the middle of the bell curve.

I get this sounds elitist, but a tremendous percentage of the population is happily and eagerly engaging with fake religious images, funny AI videos, horrible AI memes, etc. Pointing out that this video of a puppy is completely AI generated results in a vicious defense and mansplaining of why the video is totally real (I love it when the video even has, e.g., Sora watermarks... this does not stop the defenders).

I agree with you that human connection and artist intent is what I'm looking for in art, music, video games, etc... But gawd, lowest common denominator is and always has been SO much lower than we want to admit to ourselves.

Very few people want thoughtful analysis that contradicts their world view, very few people care about privacy or rights or future or using the right tool, very few people are interested in moral frameworks or ethical philosophy, and very few people care about real and verifiable human connection in their "content" :-/

simonw about 1 hour ago
I think there's real value to be had in using this for diagrams.

Visual explanations are useful, but most people don't have the talent and/or the time to produce them.

This new model (and Nano Banana Pro before it) has tipped across the quality boundary where it actually can produce a visual explanation that moves beyond space-filling slop and helps people understand a concept.

I've never used an AI-generated image in a presentation or document before, but I'm teetering on the edge of considering it now provided it genuinely elevates the material and helps explain a concept that otherwise wouldn't be clear.

resters about 1 hour ago
This is the key point. In my view it's just like anything else, if AI can help humans create better work, it's a good thing.

I think what we'll find is that visual design is no longer as much of a moat for expressing concepts, branding, etc. In a way, AI-generated design opens the door for more competition on merits, not just those who can afford the top tier design firm.

lol_me about 1 hour ago
yeah, I'm not sure I agree that we can hand-wave away assets and ads as entire classes of valuable content
gustavus about 1 hour ago
I'm working on an edutech game. Before, I would've had much less of a product, because I don't have the budget to hire an artist, and it would've been much less interactive; because of this, I'm able to build a much more engaging experience. So that's one thing, for what it's worth.
papichulo2023 about 1 hour ago
Seems good enough to generate 2D sprites. If that means a wave of pixel-art games I count it as a net win.

I don't think gamers hate AI; it is just a vocal minority, imo. What most people dislike is sloppy work, as they should, but that can happen with or without AI. The industry has been using AI for textures, voices and more for over a decade.

loudandskittish 14 minutes ago
There are already more games being released on Steam than anyone can keep up with, I'm not sure how adding another "wave" on top of it helps.
tiagod about 1 hour ago
AI for textures for over a decade? What AI?
papichulo2023 33 minutes ago
Efros–Leung, PatchMatch? Nearest-neighbour methods were "AI" before diffusion models.
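(For the curious, the common core of those pre-diffusion texture methods is a nearest-neighbour patch search. A toy sketch, assuming grayscale arrays; the function name and example data are mine, not from any specific library:)

```python
import numpy as np

def best_patch(sample: np.ndarray, target: np.ndarray) -> tuple:
    """Exhaustive nearest-neighbour search: return the top-left corner of
    the window in `sample` with the smallest sum of squared differences
    from `target`. Efros-Leung grows a texture by repeating a match like
    this for each pixel's neighbourhood; PatchMatch approximates the same
    search with random initialization plus propagation instead of brute force."""
    ph, pw = target.shape
    h, w = sample.shape
    best_ssd, best_pos = np.inf, (0, 0)
    for y in range(h - ph + 1):
        for x in range(w - pw + 1):
            window = sample[y:y + ph, x:x + pw].astype(float)
            ssd = float(np.sum((window - target.astype(float)) ** 2))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

# Sanity check: the best match for an exact sub-window is its own location.
texture = np.arange(64).reshape(8, 8)
print(best_patch(texture, texture[3:6, 2:5]))  # prints (3, 2)
```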
Thonn about 1 hour ago
Are you kidding? I think I see more vitriol for AI in gaming communities than anywhere else. To the point where steam now requires you to disclose its usage
papichulo2023 28 minutes ago
Crimson Desert failed to disclose on release and (almost) nobody cared, gamers kept buying it.
NetOpWibby about 1 hour ago
The Human Renaissance is something I've been thinking of too and I hope it comes to pass. Of course, I feel like societally, things are gonna get worse for a lot of folks. You already see it in entire towns losing water or their water becoming polluted.

You'd think the kickbacks the leaders of these towns are getting for allowing data centers to be built would go towards improving infrastructure, but hah, that's unrealistic.

WTF is that unrealistic? SMH

RIMR 44 minutes ago
My only actual use of image or video AI tools is self-entertainment. I like to give it prompts and see the results it gives me.

That's it. I can't think of a single actual use case outside of this that isn't deliberately manipulative and harmful.

underlipton 40 minutes ago
>Like, in terms of art, it's discarded (art is about humans)

I dunno how long this is going to hold up. In 50 years, when OpenAI has long become a memory, post-bubble burst, and a half-century of bitrot has claimed much of what was generated in this era, how valuable do you think an AI image file from 2023 - with provenance - might be, as an emblem and artifact of our current cultural moment, of those first few years when a human could tell a computer, "Hey, make this," and it did? And many of the early tools are gone; you can't use them anymore.

Consider: there will never be another DallE-2 image generation. Ever.

colechristensen about 1 hour ago
>In general, I think people are starting to realize that things generated without effort are not worth spending time with

Agreed mostly, BUT

I'm building tools for myself. The end goal isn't the intermediate tools; they're enabling other things. I have a suspicion that I could sell the tools, but I don't particularly want to. There's a gap between "does everything I want it to" and "polished enough to justify sale", and that gap doesn't excite me.

They're definitely not generated without effort... but they are generated with 1% of the human effort they would require.

I feel very much empowered by AI to do the things I've always wanted to do. (when I mention this there's always someone who comes out effectively calling me delusional for being satisfied with something built with LLMs)

iLoveOncall 38 minutes ago
Porn and memes. Obviously. This is all that Stable Diffusion has been used for since it was released.
ArchieScrivener 39 minutes ago
I completely disagree; this replaces art as a job. Why does human art need monetary feedback to be shared? If people require a paycheck to make art, then it was never anything different from what AI-generated images are.

As for advertising being depressing: it's a little late to get up on the high horse of anti-ads for tech after two decades of ad-based technology dominating everything. Go outside; see all those bright shiny glittery lights? Those aren't society-created images to embolden the spirit and dazzle the senses, those are ads.

North Korea looks weird and depressing because they don't have ads. Welcome to the West.

tomrod about 1 hour ago
AI loopidity rearing its head. Just send the bullet points we all want anyway, right?! Stop sending globs of text and other generated content!
kibibu about 2 hours ago
Genuine question: what positive use cases are sufficient to accept the harm from image generators?

One that i can think of:

- replacing photography of people who may be unable to consent or for whom it may be traumatic to revisit photographs and suitable models may not be available, e.g. dementia patients, babies, examples of medical conditions.

Most other vaguely positive use cases boil down to "look what image generators can do", with very little "here's how image generators are necessary for society".

On the flip side, there are hundreds of ways that these tools cause genuine harm, not just to individuals but to entire systems.

bulletsvshumans 22 minutes ago
Democratizing visual communication is arguably useful, for instance helping people to create diagrams that illustrate a concept they wish to convey. This is contingent on the tech working sufficiently well that the visuals are more effective at communication than the text that went into producing them though.
chromacity about 1 hour ago
How else do you expect me to illustrate my LLM-generated blog posts about AI?
2ndorderthought 40 minutes ago
Oh my. You still make those? Ever since model chupacobra 2.46 we have AI agents making those for us. At one point I was on the fence about totally outsourcing it to agents but it's way more efficient. Now I have 50 posts a day under different names.
spijdar about 1 hour ago
The same question could be posed of art in general. I know that response would (and probably should) ruffle people's figurative feathers, but I think it's worth considering. A lot of art isn't "necessary for society".

The question still stands, "are the benefits worth the cost to society", but it bears remembering we do a lot of things for fun which aren't "necessary for society".

TomGarden about 1 hour ago
I used to think like what you describe, but I've fallen on the side of "art is just more emotionally resonant human communication". And, most of the time, human communication with more effort and thought behind it. AI art falls short both on being human and, on average, on having more effort or thought behind it than your general interaction at the supermarket.

I will say, it can be emotionally resonant though - but it's a borrowed property from the perception of human communication and effort that made the art the models were trained on.

tills13 about 1 hour ago
The difference between "art in general" and this is scale and speed. Sure, I'll grant you that people are going to engage in deception with or without this but the barrier to entry with this is literally on the floor. Do you have a $5 prepaid VISA? You can generate whatever narrative you want in 30 seconds. Replace the $5 Prepaid VISA with the pocketbook of a three letter agency and it starts getting crazy.
Barbing about 1 hour ago
>starts getting crazy

Got pretty wild w/the Iranian propaganda that reportedly _resonated with Americans_ (didn't verify that claim)

Slopaganda - https://www.newyorker.com/culture/infinite-scroll/the-team-b...

Jtarii 25 minutes ago
If you want to say the complete destruction of truth is worth it because some people are having "fun" then idk.
joegibbs 4 minutes ago
You shouldn't have believed photos since Stalin had Yezhov airbrushed out of them. The only thing that makes a photo more trustworthy than a painting is that it "looks" more real, and passes itself off as true. But there have always been photographic fakes, manipulation and curation of the photos to push a message. AI will finally end this and people will realise that the image of the thing is not the thing itself.
SpicyLemonZest 16 minutes ago
I was worried about the complete destruction of truth, but it seems that's not the result of commoditized image generation. False AI-generated images have been widespread for years, and as far as I've seen, society has adapted very well to the understanding that images can't prove anything without detailed provenance. I'd argue that this has been helped, actually, by random people on the Internet routinely generating plausible images of events that obviously didn't happen.
nothinkjustai about 1 hour ago
Art is for the producer, and if they feel it’s necessary for them to produce it, then it’s necessary for them, and what is necessary for the individual extends to the society they’re in.
atleastoptimal 38 minutes ago
The problem is I'd prefer access to near-photorealistic image gen to be commodified rather than restricted, as otherwise only those willing to skirt the law or able to leverage criminal networks would have access to it.
ticulatedspline 36 minutes ago
Is the argument any different replacing the word "image generators" with "photoshop" ?
NathanielK 16 minutes ago
Ok, but the models only know what to draw because we fed them images of dementia patients and babies.

Maybe image generators can be a loophole for consent legally, but it seems even grosser morally.

JumpCrisscross 19 minutes ago
> Genuine question: what positive use cases are sufficient to accept the harm from image generators?

Diagrams and maps. So much text-based communication begs for a diagram or a map.

_pdp_ 34 minutes ago
There are many use-cases outside of spam and slop.

For example, take a picture of your garden. Ask ChatGPT to give you ideas on how to improve it, and a step-by-step visual guide.

Anything that can be expressed visually is effectively a target for this technology, and that covers pretty much everything.

LZ_Khan 37 minutes ago
Saving money for businesses trying to promote their products?
tantalor about 1 hour ago
Prototyping. Suppose you have a hard time expressing your vision in words or executing it visually.

1. Generate 100s or 1000s of low-fidelity candidates, find something that matches your vision, iterate.

2. Hand that generated image off to a human and say, "This is what I'm thinking of, now how do we make it real?"

Important: do not skip the last step.

ndriscoll about 1 hour ago
Not much beyond food, water, and shelter is "necessary" for society, but it's nice to have nice things.

I'm teaching my 4 year old to read. She likes PAW Patrol, but we've kind of exhausted the simple readers, and she likes novelty. So yesterday I had an LLM create a simple reader at her level with her favorite characters, and then turned each text block into a coloring page for her. We printed it off, she and her younger sister colored it, and we stapled it into her own book.

I could come up with ten 3-word sentences myself, of course, but I'm not really able to draw well enough to make a coloring book out of it (in fact she's nearly as good as me), and it also helps me think about a grander idea: turning this into something a little more powerful that can track progress (e.g., which phonemes or sight words are mastered and which to introduce/focus on), automatically generate things in a more principled way, add my kids into the stories with illustrations that look like them, etc.

Models will obviously become the foundation of personalized education in the future, and in that context, of course pictures (and video) will be necessary!

drivebyhooting about 1 hour ago
Repetition rather than novelty is good for learning.
ndriscoll about 1 hour ago
Sure, and she gets that, but at some point she completely memorizes the stories. She also asks if we can get new books at the store, but they don't make 'em that fast.
mcmcmc about 1 hour ago
So the use case is just IP theft so you can get more Paw Patrol?

AI aside, if you’ve truly exhausted all the simple readers, maybe she should move on to more advanced books instead of repeating more of the same and gamifying it, which seems a great way to destroy a child’s natural curiosity.

ndriscoll 32 minutes ago
Sure, I don't view "IP" as valid, don't consider it theft, and absolutely don't care. In fact, I'd go so far as to say that holding the position that there's something wrong with tailoring teaching to a child's interests, and avoiding that for fear of copyright concerns of all things, actually makes you morally bad.

You overestimate how many there are. There's like 10 stories at that level. I do also read ones with paragraphs to her, but she can't do those herself because she's 4.

lanthissa 34 minutes ago
People pay them to use it; evidently they find it positive.
stackedinserter about 1 hour ago
I have plenty for you:

- package design

- pictures for manuals and guides

- navigation and signs

- booklets, tickets and flyers

- logos of all sorts

- websites

- illustrations for books

And many, many others. Not every image is art, and very few illustrators are artists.

Jtarii 16 minutes ago
So the benefits are that something that was already being mass produced with no issue is slightly easier to mass produce?

It's not a particularly compelling argument.

pesus 40 minutes ago
How do these justify the costs to society?
Legend2440 32 minutes ago
The 'costs to society' are massively overblown, and some of them (automating jobs) are actually benefits to society.
infecto about 1 hour ago
Could the same argument not be applied to practically everything and have drastically different perspectives from people?
platinumrad2 minutes ago
Why do all of the cartoons still look like that? Genuinely asking.
overgard33 minutes ago
Pretty mixed feelings on this. From the page at least, the images are very good. I'd find it hard to know that they're AI. Which I think is a problem. If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

I also don't like that these things are trained on specific artist's styles without really crediting those artists (or even getting their consent). I think there's a big difference between an individual artist learning from a style or paying it homage, vs a machine just consuming it so it can create endless art in that style.

simonwabout 4 hours ago
I've been trying out the new model like this:

  OPENAI_API_KEY="$(llm keys get openai)" \
    uv run https://tools.simonwillison.net/python/openai_image.py \
    -m gpt-image-2 \
    "Do a where's Waldo style image but it's where is the raccoon holding a ham radio"
Code here: https://github.com/simonw/tools/blob/main/python/openai_imag...

Here's what I got from that prompt. I do not think it included a raccoon holding a ham radio (though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure): https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a...

simonwabout 4 hours ago
I just got a much better version using this command instead, which uses the maximum image size according to https://github.com/openai/openai-cookbook/blob/main/examples...

  OPENAI_API_KEY="$(llm keys get openai)" \
    uv run 'https://raw.githubusercontent.com/simonw/tools/refs/heads/main/python/openai_image.py' \
    -m gpt-image-2 \
    "Do a where's Waldo style image but it's where is the raccoon holding a ham radio" \
    --quality high --size 3840x2160
https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a... - I found the raccoon!

I think that image cost 40 cents.

wewtyflakes25 minutes ago
A startling number of people either have no arms, one arm, a half of an arm, or a shrunken arm; how odd!
makiraabout 4 hours ago
Fed into a clear Claude Code max effort session with : "Inspect waldo2.png, and give me the pixel location of a raccoon holding a ham radio.". It sliced the image into small sections and gave:

"Found the raccoon holding a ham radio in waldo2.png (3840×2160).

  - Raccoon center: roughly (460, 1680)                                                                                            
  - Ham radio (walkie-talkie) center: roughly (505, 1650) — antenna tip around (510, 1585)                                         
  - Bounding box (raccoon + radio): approx x: 370–540, y: 1550–1780                                                                
                                                                                                                                   
  It's in the lower-left area of the image, just right of the red-and-white striped souvenir umbrella, wearing a green vest. "
Which is correct!
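Claude's slice-and-scan approach is easy to reproduce: split the 3840×2160 image into overlapping tiles and hand each crop to a vision model separately. A minimal sketch of just the tiling step (the tile and overlap sizes here are my own assumptions, not necessarily what Claude used):

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Yield (left, top, right, bottom) crop boxes that cover the image.

    Tiles overlap so an object straddling a tile border still appears
    whole in at least one crop.
    """
    step = tile - overlap
    for top in range(0, height, step):
        for left in range(0, width, step):
            yield (left, top, min(left + tile, width), min(top + tile, height))

# Each box can then be passed to Image.crop(box) before sending the crop
# to the model; a hit's tile origin gives you the pixel offset back.
boxes = list(tile_boxes(3840, 2160))
```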
M3L0NM4N24 minutes ago
We would need a larger sample size than just myself, but the raccoon was in the very first spot I looked. Found it literally immediately, as if that's where my eyes naturally gravitated to first. Hopefully that's just luck and not an indictment of the image-creating ability, as if there is some element missing from this "Where's Waldo" image, that would normally make Waldo hard to find.
cwilluabout 3 hours ago
I had one problem: finding the raccoon. Now I have two: finding the red-and-white striped souvenir umbrella, and finding the raccoon.
gpt540 minutes ago
I tried it on the ChatGPT web UI and it also worked, although the ham radio looks like a handbag to me.

https://postimg.cc/wyxgCgNY

davebrenabout 4 hours ago
The faces...that's nice that it turned a kid's book into an abomination
louiereedersonabout 4 hours ago
The people in this image remind me of early this person does not exist, in the best way
dfeeabout 1 hour ago
fair point, also "this raccoon does not exist"
ireadmevsabout 4 hours ago
I found it on the 2nd image! On the 1st one not yet...
makiraabout 4 hours ago
> though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure

I see an opportunity for a new AI test!

vunderbaabout 3 hours ago
There have already been several attempts to procedurally generate Where’s Waldo? style images since the early Stable Diffusion days, including experiments that used a YOLO filter on each face and then processed them with ADetailer.

It's a difficult test for genai to pass. As I mentioned in a different thread, it requires a holistic understanding (in that there can only be one Waldo Highlander style), while also holding up to scrutiny when you examine any individual, ordinary figure.

simonwabout 4 hours ago
I've actually been feeding them into Claude Opus 4.7 with its new high resolution image inputs, with mixed results - in one case there was no raccoon but it was SURE there was and told me it was definitely there but it couldn't find it.
tptacekabout 4 hours ago
5.4 thinking says "Just right of center, immediately to the right of the HAM RADIO shack. Look on the dirt path there: the raccoon is the small gray figure partly hidden behind the woman in the red-and-yellow shirt, a little above the man in the green hat. Roughly 57% from the left, 48% from the top."

(I don't think it's right).

ritzacoabout 4 hours ago
I tried

> please add a giant red arrow to a red circle around the raccoon holding a ham radio or add a cross through the entire image if one does not exist

and got this. I'm not sure I know what a ham radio looks like though.

https://i.ritzastatic.com/static/ffef1a8e639bc85b71b692c3ba1...

jackpirateabout 4 hours ago
Also, the racoon it circled isn't in the original.
simonwabout 3 hours ago
ritzacoabout 4 hours ago
haha took me a while to notice that one of the buildings is labelled 'Ham radio'
pants2about 4 hours ago
The second 4K image definitely has a raccoon on the left there! Nice.
ElFitzabout 4 hours ago
Damn. There’s a fun game app to make here ^^
arealaccountabout 4 hours ago
I see the raccoon
skybrianabout 1 hour ago
This time it passed the piano keyboard test:

https://chatgpt.com/s/m_69e7ffafbb048191b96f2c93758e3e40

But it screwed up when attempting to label middle C:

https://chatgpt.com/s/m_69e8008ef62c8191993932efc8979e1e

Edit: it did fix it when asked.

ea016about 4 hours ago
Price comparison:

GPT Image 2

  Low     : 1024×1024 $0.006 | 1024×1536 $0.005 | 1536×1024 $0.005

  Medium  : 1024×1024 $0.053 | 1024×1536 $0.041 | 1536×1024 $0.041

  High    : 1024×1024 $0.211 | 1024×1536 $0.165 | 1536×1024 $0.165
GPT Image 1

  Low     : 1024×1024 $0.011 | 1024×1536 $0.016 | 1536×1024 $0.016

  Medium  : 1024×1024 $0.042 | 1024×1536 $0.063 | 1536×1024 $0.063

  High    : 1024×1024 $0.167 | 1024×1536 $0.25  | 1536×1024 $0.25
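The two tables flatten nicely into a lookup for cost estimates; a small helper with the prices transcribed from the comment above:

```python
# Prices per image (USD), transcribed from the comparison above.
PRICES = {
    "gpt-image-2": {
        "low":    {"1024x1024": 0.006, "1024x1536": 0.005, "1536x1024": 0.005},
        "medium": {"1024x1024": 0.053, "1024x1536": 0.041, "1536x1024": 0.041},
        "high":   {"1024x1024": 0.211, "1024x1536": 0.165, "1536x1024": 0.165},
    },
    "gpt-image-1": {
        "low":    {"1024x1024": 0.011, "1024x1536": 0.016, "1536x1024": 0.016},
        "medium": {"1024x1024": 0.042, "1024x1536": 0.063, "1536x1024": 0.063},
        "high":   {"1024x1024": 0.167, "1024x1536": 0.25,  "1536x1024": 0.25},
    },
}

def cost(model, quality, size, n_images=1):
    """Estimated cost in USD for a batch of images."""
    return PRICES[model][quality][size] * n_images
```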
lxgrabout 1 hour ago
Interesting, I wonder why larger outputs are more expensive than smaller square ones on v2, while it’s the other way around in v1.
Melatonicabout 3 hours ago
Weird that they restrict the resolution so much. Does it fall apart with more detail (when zoomed in) or does the cost just skyrocket?
vunderbaabout 3 hours ago
It's usually based on what they've been trained on. There aren't many models that'll do higher resolutions outside of Seedream, but its adherence is worse.
_the_inflatorabout 1 hour ago
Processing power, not training. The larger the scene in 2D, the more you need to compute. The resolution itself is not flexible. Imagine painting a white canvas: it is still a pixel-per-pixel algorithm that costs GPU power, even though it would be the easiest thing to do without an LLM.

You can create larger images by generating separate parts and recombining them, but they may not match perfectly at their borders.

It is a Landau thing, not a training thing. The idea of an LLM is to work on the unknown.

nomelabout 1 hour ago
Need a model trained on closeup/macro shots of everything, to use for upscaling, then run that, as a kernel, over the whole image.
madrox37 minutes ago
This seems like a great time to mention C2PA, a specification for positively affirming image sources. OpenAI participates in this, and if I load an image I had AI generate in a C2PA Viewer it shows ChatGPT as the source.

Bad actors can strip sources out so it's a normal image (that's why it's positive affirmation), but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

Learn more at https://c2pa.org
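For JPEGs, C2PA manifests are carried in JUMBF boxes inside APP11 marker segments, so the *presence* of provenance metadata can be detected with a plain marker walk. This is only a presence heuristic, not a signature verification (use a real C2PA library for that), and it illustrates why stripping is so easy:

```python
def has_app11_segment(data: bytes) -> bool:
    """Walk JPEG marker segments and report whether any APP11 (0xFFEB)
    segment is present. C2PA/JUMBF metadata lives in APP11; re-encoding
    (e.g. a screenshot) silently drops these segments."""
    if data[:2] != b"\xff\xd8":  # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop walking
            break
        if marker == 0xEB:  # APP11 found
            return True
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length  # length field covers itself but not the marker
    return False
```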

woadwarrior0129 minutes ago
Yeah, OpenAI has been attaching C2PA manifests to all their generated images from the very beginning. Also, based on a small evaluation that I ran, modern ML based AI generated image detectors like OmniAID[1] seem to do quite well at detecting GPT-Image-2 generated images. I use both in an on-device AI generated image detector that I built.

[1]: https://arxiv.org/abs/2511.08423

porphyraabout 1 hour ago
The improvement in Chinese text rendering is remarkable and impressive! I still found some typos in the Chinese sample pic about Wuxi though. For example the 笼 in 小笼包 was written incorrectly. And the "极小中文也清晰可读" section contains even more typos although it's still legible. Still, truly amazing progress. Vastly better than any previous image generation model by a large margin.
neom18 minutes ago
Here is my regular "hard prompt" I use for testing image gen models:

"A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid-splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens."

Last time I ran the test with Nano Banana 2 (first run): https://s.h4x.club/eDuOzPDd

Images 2 using Simons method he mentioned (first run): https://s.h4x.club/qGuWZveR

Ran a bunch both on the .com and via the api, none of them are nearly as good as Nano Banana.

swalshabout 1 hour ago
Been using the model for a few hours now. I'm actually really impressed with it. This is the first time I've found value in an image model for stuff I actually do. I've been using it to build PowerPoint slides and mockups. It's CRAZY good at that.
squidsoup24 minutes ago
Are camera manufacturers working on signed images? That seems like the only way our trust in any digital media doesn't collapse entirely.
dktpabout 4 hours ago
One interesting thing I found comparing OpenAI and Gemini image editing is - Gemini rejects anything involving a well known person. Anything. OpenAI is happy to edit and change every time I tried

I have a sideproject where I want to display standup comedies. I thought I could edit standup comedy posters with some AI to fit my design. Gemini straight up refuses to change any image of any standup comedy poster involving a well know human. OpenAI does not care and is happy to edit away

Melatonicabout 3 hours ago
How does it determine they are well known and not just similar looking?
yreg7 minutes ago
Gemini often rejects photos of random people (even ones it generated itself) because it thinks they look too similar to some well known person.
dktpabout 3 hours ago
I don't know tbh. I've tried it on 10-20 various level of famous standups and Gemini refuses every time

Just for testing, I just tried this https://i.ytimg.com/vi/_KJdP4FLGTo/sddefault.jpg ("Redesign this image in a brutalist graphic design style"). Gemini refuses (api as well as UI), OpenAI does it

arjieabout 3 hours ago
It's not super deterministic but it didn't fail once on my attempts. See: https://imgur.com/a/james-acaster-cold-lasagne-1R7fpzQ
Melatonicabout 3 hours ago
What if you change the prompt to tell it specifically its not a famous person? Or try it without text?
ibudialloabout 1 hour ago
And here I was proud of myself, having taught my mom and her friends how to discern real from fakes they get on WhatsApp groups. Another even more powerful tool for scammers. I'm taking a break.
XorNot42 minutes ago
IMO you're fighting the wrong battle: there'll always be a new model.

But the broader concept of fake news and the manufactured nature of media and rhetoric is much more relevant - e.g. whether or not something's AI is almost immaterial to the fact that any filmed segment does not have to be real or attributed to the correct context.

It's an old internet classic just to grab an image and put a different caption on it, relying on the fact that no one can discern context or has time to fact-check.

amunozoabout 4 hours ago
This is not as exciting as previous models were, but it is incredibly good. I am starting to think that expressing thoughts in words clearly is probably the most important and general skill of the future.
echelonabout 1 hour ago
> I am starting to think that expressing thoughts in words clearly is probably the most important and general skill of the future.

Without question.

AI will be indistinguishable from having a team. Communicating clearly has always mattered and always will.

This, however, is even stronger. Because you can program and use logic in your communications.

We're going to collectively develop absolutely wild command over instruction as a society. That's the skill to have.

yreg5 minutes ago
On the other hand LLMs are getting very good at understanding poorly constructed instructions as well.

So being able to express oneself clearly in a structured way may not be such an edge.

minimaxirabout 4 hours ago
HN submission for a direct link to the product announcement which for some reason is being penalized by the HN algorithm: https://news.ycombinator.com/item?id=47853000
louiereedersonabout 4 hours ago
The image of the messy desktop with the ASCII art is so impressive - the text renders, the date is consistent, it actually generated ASCII art in "ChatGPT", etc. I was skeptical that it was cherry-picked but was able to generate something very similar and then edit particular parts on the desktop (i.e. fixing content in the browser window and making the ASCII dog "more dog like"). It's honestly astounding, to me at least.
throwaway2027about 5 hours ago
I know people like to dunk on ChatGPT and Gemini and say Claude is (or used to be) better, but you can still fall back to worse models when you're out of usage AND use Nano Banana and ChatGPT image generation with separate limits on your subscription. I think that could make it a more complete package as a whole for some people (non-programmers). I do like having the option and am excited about the improvements they've made to ChatGPT image generation, because in the past it had this yellow piss filter; 1.5 sort of fixed it but made things really generic, with Nano Banana beating it (although Gemini also had a too-aggressively tuned racial bias, which they fixed). It seems the images ChatGPT generates have gotten better.
SV_BubbleTimeabout 1 hour ago
I still see that piss filter on their samples. It isn’t as bad, but someone there really loves it.
lossyalgo13 minutes ago
Someone remind me again why this is a good idea to be able to create perfect fake images?
joegibbsabout 2 hours ago
The quality of the text is really impressive and I can’t seem to see any artefacts at all. The fake desktop is particularly good: Nano Banana would definitely slip up with at least a few bits of the background.
volkkabout 4 hours ago
the guys presenting are probably all like 25x smarter than I am, but good god, literally zero on-screen presence or personality.
OsrsNeedsf2Pabout 1 hour ago
I liked it that way, felt more authentic to see the noobs
sho_hnabout 4 hours ago
That's a trained skill, and they presumably have focused on other skills.
brcmthrowawayabout 4 hours ago
Yeah, skills to make them a cool 10mn a year
volkkabout 4 hours ago
eh, i don't think personalities are trained. on screen presence for sure, but you'd see right through it IRL.
E-Reveranceabout 4 hours ago
I think its endearing
Aethelwulfabout 1 hour ago
didn't think that sam guy was that bad
minimaxir24 minutes ago
So during my Nano Banana Pro experiments I wrote a very fun prompt that tests the ability of these image generation models to follow heuristics, while still requiring domain knowledge and/or use of the search tool:

    Create a 8x8 contiguous grid of the Pokémon whose National Pokédex numbers correspond to the first 64 prime numbers. Include a black border between the subimages.

    You MUST obey ALL the FOLLOWING rules for these subimages:
    - Add a label anchored to the top left corner of the subimage with the Pokémon's National Pokédex number.
      - NEVER include a `#` in the label
      - This text is left-justified, white color, and Menlo font typeface
      - The label fill color is black
    - If the Pokémon's National Pokédex number is 1 digit, display the Pokémon in a 8-bit style
    - If the Pokémon's National Pokédex number is 2 digits, display the Pokémon in a charcoal drawing style
    - If the Pokémon's National Pokédex number is 3 digits, display the Pokémon in a Ukiyo-e style
The NBP result is here, which got the numbers, corresponding Pokemon, and styles correct, with the main point of contention being that the style application is lazy and that the images may be plagiarized: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

Running that same prompt through gpt-image-2 at high quality gave an...interesting contrast: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

It did more inventive styles for the images that appear to be original, but:

- The style logic is applied by row, not by the raw numbers, and is therefore wrong

- Several of the Pokemon are flat-out wrong

- Number font is wrong

- Bottom isn't square for some reason

Odd results.
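The prompt's heuristics are mechanically checkable; a quick sketch of the expected Pokédex-number-to-style mapping the models are being graded against:

```python
def first_n_primes(n):
    """Plain trial division; fine for the first 64 primes."""
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def style_for(dex_number):
    """Style rules from the prompt, keyed on the number's digit count."""
    return {1: "8-bit", 2: "charcoal drawing", 3: "Ukiyo-e"}[len(str(dex_number))]

# The 64 (number, style) pairs a correct grid would contain.
expected = [(p, style_for(p)) for p in first_n_primes(64)]
```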

____tom____about 4 hours ago
No mention of modifying existing images, which is more important than anything they mentioned.

I think we all know the feeling of getting an image that is ok, but needs a few modifications, and being absolutely unable to get the changes made.

It either keeps coming up with the same image, or gives you a completely new take on the image with fresh problems.

Anyone know if modification of existing images is any better?

Anything better than OpenAI?

tomjen3about 4 hours ago
There was an Edit button in one of the images in the livestream
Orasabout 1 hour ago
My test for image models is asking it to create an image showing chess openings. Both this model and Banana pro are so bad at it.

While the image looks nice, the actual details are always wrong, such as showing pawns in wrong locations, missing pawns, .. etc.

Try it yourself with this prompt: Create a poster to show opening game for Queen's Gambit to teach kids to play chess.
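For reference, the position the poster should show after 1.d4 d5 2.c4 can be computed in a few lines. A sketch with plain coordinate moves (no chess library; this naive mover ignores captures and legality, which is fine here since the Queen's Gambit opening has neither), useful for eyeballing whether a generated board is right:

```python
def start_board():
    # Row 0 is rank 8; uppercase = White, lowercase = Black, "." = empty.
    rows = ["rnbqkbnr", "pppppppp", "........", "........",
            "........", "........", "PPPPPPPP", "RNBQKBNR"]
    return [list(r) for r in rows]

def sq(name):
    """Algebraic square ("d4") -> (row, col) with row 0 = rank 8."""
    return 8 - int(name[1]), ord(name[0]) - ord("a")

def apply_moves(board, moves):
    for frm, to in moves:
        (fr, fc), (tr, tc) = sq(frm), sq(to)
        board[tr][tc], board[fr][fc] = board[fr][fc], "."
    return board

# Queen's Gambit: 1.d4 d5 2.c4
qg = apply_moves(start_board(), [("d2", "d4"), ("d7", "d5"), ("c2", "c4")])
```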

lxgrabout 1 hour ago
It almost nailed it for me (two squares have both white and black color). All pieces and the position look correct.
tempaccount5050about 1 hour ago
What move? Whose turn is it? Declined or accepted? Garbage in, garbage out.
bogtap8242 minutes ago
In some cases I would agree with this, but image model releases including this one are beginning to incorporate and market the thinking step. It is not a reach at this point to expect the model to take liberties in order to deliver a faithful and accurate representation of your request. A model could still be accurate while navigating your lack of specificity.
dudul17 minutes ago
What do you mean? Parent clearly describes the Queen's Gambit. 1.d4 d5 2.c4 There is no room for ambiguity here.
modelessabout 1 hour ago
Can it generate transparent PNGs yet?
alasanoabout 1 hour ago
Previous gpt image models could (when generating, not editing) but gpt-image-2 can't.

Noticed it earlier while updating my playground to support it

https://github.com/alasano/gpt-image-playground

lxgrabout 1 hour ago
Works for me, but really weirdly on iOS: Copying to clipboard somehow seems to break transparency; saving to the iOS gallery does not. (And I’ve made sure to not accidentally depend on iOS’s background segmentation.)
nickandbroabout 1 hour ago
200+ points in Arena.ai, that's incredible. They are cleaning house with this model
moralestapia39 minutes ago
point delta (from 2nd) not total
bensyversonabout 4 hours ago
I caught the last minute of this—was it just ChatGPT Images 2.0?
puntyabout 4 hours ago
It appears so!
minimaxirabout 4 hours ago
yes
samiwamiabout 4 hours ago
do they have anything similar to SynthID, or are they just pretending that problem doesn't exist?

I know this is probably mega cherry-picked to look more impressive, but some of the images are terrifyingly realistic. They seem to have put a lot of effort into the lighting.

swingboy26 minutes ago
Maybe a stupid question, but does the SynthID still exist if you screenshot and crop your generated image? What if you screenshot, rotate _just_ a bit, and crop? Or apply some other effect to the image like adjusting the coloring a little bit, adding some blur, etc.
alextheparrot20 minutes ago
The paper they published last year goes over some of these transformations: https://arxiv.org/pdf/2510.09263
alextheparrotabout 4 hours ago
> Integrating an imperceptible, robust, and content-specific watermark

From the system card someone linked elsewhere in the discussion

Legend2440about 4 hours ago
I think we are just going to have to accept that realistic images can be easily fabricated now.

Seeing is not believing anymore, and I don't think SynthID or anything like it can restore that trust in images.

pstuart24 minutes ago
Hopefully the arms race will balance out with improved AI image detection, but I can see how that will never be guaranteed to be reliable.
RigelKentaurusabout 3 hours ago
If every single image on their blog was generated by Images 2.0 (I've no reason to believe that's not the case), then wow, I'm seriously impressed. The fidelity to text, the photorealism, the ability to show the same character in a variety of situations (e.g. the manga art) -- it's all great!
hahahacornabout 4 hours ago
One of the images in the blog (https://images.ctfassets.net/kftzwdyauwt9/4d5dizAOajLfAXkGZ7...) is a carbon copy of an image from an article posted Mar 27, 2026 with credits given to an individual: https://www.cornellsun.com/article/2026/03/cornell-accepts-5...

Was this an oversight? Or did their new image generation model generate an image that was essentially a copy of an existing image?

recitedropperabout 3 hours ago
This is hilarious. Seems like kind of a random image for a model to memorize, but it could be.

There is definitely enough empirical validation that shows image models retain lots of original copies in their weights, despite how much AI boosters think otherwise. That said, it is often images that end up in the training set many times, and I would think it strange for this image to do that.

Regardless, great find.

arjieabout 3 hours ago
That has to be the wrong stock image included or something, bloody hell.

     magick image-l.webp image-r.jpg -compose difference -composite -auto-level -threshold 30% diff.png
It's practically all dark except for a few spots. It's the same image, just different size/compression/whatever. I can't find it in any stock image search, though. Surely it could not have memorized the whole image at that fidelity. Maybe I just didn't search well enough.
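The same check in Python, assuming Pillow is available; like the `magick` difference/threshold pipeline, it flags two images as "the same" when the brightest pixel in their difference stays under a threshold:

```python
from PIL import Image, ImageChops

def effectively_identical(a, b, threshold=30):
    """True if two PIL images differ only negligibly per channel.

    b is resized to a's dimensions first, since the two copies here
    differed only in size/compression.
    """
    a = a.convert("RGB")
    b = b.convert("RGB").resize(a.size)
    diff = ImageChops.difference(a, b)
    # getextrema() returns a (min, max) pair per channel
    return max(hi for _, hi in diff.getextrema()) <= threshold
```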
Melatonicabout 3 hours ago
Or the image was generated with AI in the first place and a test for Images 2.0
IsTomabout 1 hour ago
Well, it's on web archive. So unless they got their hands on it almost a month early or escaped their light cone it wasn't.
arjieabout 2 hours ago
Haha! That would really take the cake. If it is, congratulations to them! I could never have known.
minimaxirabout 3 hours ago
Given the recency of that image, it is unlikely it is in the training data and therefore I would go with oversight.
muyuuabout 1 hour ago
I wonder if this will be decent at creating sprite frame animations. So far I've had very poor results and I've had to do the unthinkable and toil it out manually.
freedombenabout 1 hour ago
I had exactly the same thought! I've got a game I've been wanting to build for over a decade that I recently started working on. The art is going to be very challenging however, because I lack a lot of those skills. I am really hoping the AI tools can help with that.

Is anyone doing this already who can share information on what the best models are?

gizmodo5939 minutes ago
Use the imagegen skill in codex and ask it to create sprites. It works really well.
ZeWaka29 minutes ago
It's still bad.
lifeisstillgood32 minutes ago
Pretty much all of the kerfuffle over AI would go away if it was accurately priced.

After 2008 and 2020, vast amounts of money (tens of trillions) were printed (reasonably) by Western governments and not removed from the money supply. So there are vast sums swilling about, funding things like massively computationally intensive work to help me pick a recipe for tonight.

Google and Facebook had online advertising sewn up, but AI is waaay better at answering my queries. So OpenAI wants some of that, but the cost per query must be orders of magnitude larger.

So charge me, or my advertisers the correct amount. Charge me the right amount to design my logo or print an amusing cat photo.

Charge me the right cost for the AI slop on YouTube

Charge the right amount - and watch as people just realise it ain’t worth it 95% of the time.

Great technology - but price matters in an economy.

vunderbaabout 3 hours ago
OpenAI’s gpt-image-1.5 and Google’s NB2 have been pretty much neck and neck on my comparison site which focuses heavily on prompt adherence, with both hovering around a 70% success rate on the prompts for generative and editing capabilities. With the caveat being that Gemini has always had the edge in terms of visual fidelity.

That being said, gpt-image-1.5 was a big leap in visual quality for OpenAI and eliminated most of the classic issues of its predecessor, including things like the “piss filter.”

I’ll update this comment once I’ve finished running gpt-image-2 through both the generative and editing comparison charts on GenAI Showdown.

Since the advent of NB, I’ve had to ratchet up the difficulty of the prompts especially in the text-to-image section. The best models now score around 70%, successfully completing 11 out of 15 prompts.

For reference, here’s a comparison of ByteDance, Google, and OpenAI on editing performance:

https://genai-showdown.specr.net/image-editing?models=nbp3,s...

And here’s the same comparison for generative performance:

https://genai-showdown.specr.net/?models=s4,nbp3,g15

UPDATES:

gpt-image-2 has already managed to overcome one of the so‑called “model killers” on the test suite: the nine-pointed star.

Results are in for the generative (text to image) capabilities: Gpt-image-2 scored 12 out of 15 on the text-to-image benchmark, edging out the previous best models by a single point. It still fails on the following prompts:

- A photo of a brightly colored coral snake but with the bands of color red, blue, green, purple, and yellow repeated in that exact order.

- A twenty-sided die (D20) with the first twenty prime numbers (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71) on the faces.

- A flat earth-like planet which resembles a flat disc is overpopulated with people. The people are densely packed together such that they are spilling over the edges of the planet. Cheap "coastal" real estate property available.

All Models:

https://genai-showdown.specr.net

Just Gpt-Image-1.5, Gpt-Image-2, Nano-Banana 2, and Seedream 4.0

https://genai-showdown.specr.net?models=s4,nbp3,g15,g2

kanodiaayushabout 1 hour ago
It stands out to me that this page itself is wonderful to go through (the telling of the product through model generated images).
dazhbog38 minutes ago
Yay, let's burn the planet computing more slopium..
etothetabout 1 hour ago
I would love to see prompt examples that created the images on the announcement page.
thevinterabout 4 hours ago
Every time a new image gen comes out I keep saying that it won't get better just to be surprised again and again. Some of the examples are incredible (and incredibly scary. I feel like this is truly the point where understanding if something is AI becomes impossible)
lehmacdjabout 4 hours ago
So do you think there will be a better image model in a year?
throw310822about 3 hours ago
I'll bite: no I don't think so. If the examples are not cherry-picked and by "image model" we mean just the ability to generate pictures, this looks like parity with human excellence, there isn't much space for further improvement. The images don't just look real, they look tasteful- the model is not just generating a credible image, it's generating one that shows the talent of a good photographer/ designer/ artist.
Vachyasabout 3 hours ago
I'm honestly unsure what could be improved at this point.

Consistency? So it fails less often?

Based on the released images, (especially the one "screenshot" of the Mac desktop) I feel like the best images from this model are so visually flawless that the only way to tell they're fake is by reasoning about the content of the image itself (ex. "Apple never made a red iPhone 15, so this image is probably fake" or "Costco prices never end in .96 so this image is probably fake")

thevinterabout 3 hours ago
There is definitely room for improvement: https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a...

Especially when it comes to detailed outputs or non-standard prompts.

I do believe it will get even better - not sure it will happen within a year but I wouldn't be incredibly surprised if it did.

RobinLabout 3 hours ago
I've been impressed when testing this model today, but it still can't consistently adhere to the following prompt: make me an image of a pizza split into 10 equal slices with space in between them, to help teach fractions to a child.

It doesn't reliably give you 10 slices, even if you ask it to number them. None of the frontier models seem to be able to get this right

jinushaunabout 2 hours ago
Cost? Speed?
thelucentabout 3 hours ago
It seems to still have this gpt image color that you can just feel. The slight sepia and softness.
honzaikabout 3 hours ago
I was just wondering about that. Did they embrace it as a “signature look”? It can't be accidental, right?
GaryBluto3 minutes ago
It's definitely not accidental but I'm not completely sure whether or not it is simply a "tell" or watermark or an attempt to foster brand association.
Melatonicabout 4 hours ago
We were afraid it would be Skynet and instead we got the ultimate meme generator !
gfodyabout 1 hour ago
there's something funny going on with the live stream audio
minimaxirabout 5 hours ago
Model card for the API endpoint gpt-image-2 (which may or may not reflect the output from ChatGPT Images 2): https://developers.openai.com/api/docs/models/gpt-image-2

API Pricing is mostly unchanged from gpt-image-1.5, the output price is slightly lower: https://developers.openai.com/api/docs/pricing

...buuuuuuuuut the price per image has changed. For a high quality image generation the 1024x1024 price has increased? That doesn't make sense that a 1024x1024 is cheaper than a 1024x1536, so assuming a typo: https://developers.openai.com/api/docs/guides/image-generati...

The submitted page is annoyingly uninformative, but from the livestream it purports to have the same exact features as Gemini's Nano Banana Pro. I'll run it through my tests once I figure out how to access it.

strongpigeonabout 3 hours ago
> That doesn't make sense that a 1024x1024 is cheaper than a 1024x1536, [...]

I think you meant more expensive, right? Because it would make sense for it to be cheaper, since there are fewer pixels.

Melatonicabout 3 hours ago
Can it generate anything high resolution at increased cost and time? Or is it always restricted?
andai13 minutes ago
lol at the fake handwritten homework assignment. Know your customer!
dzongaabout 1 hour ago
for video game assets this is massive.

but in general though - will people believe in anything photographic?

imagine dating apps, photographic evidence.

I'm guessing we're gonna reach a point where you fuck things up purposely to leave a human mark.

squidsoup37 minutes ago
> but in general though - will people believe in anything photographic?

Hopefully film makes a comeback.

retrac98about 4 hours ago
The page keeps crashing on my iPhone 17 Pro.
bitnovusabout 4 hours ago
great obfuscation idea - hidden message on a grain of rice
ChrisArchitectabout 3 hours ago
Fake layouts, fake handwritten kid story, fake drunk photos? All from training on real things people did.

As with anything AI, we are not ready for the scale of impact. And for what? Like, why are you proud of this?

irishcoffee16 minutes ago
This is so stupid. As a free OSS tool it's amazing. Paying money for this is fucking stupid. How blind are we all to bow before this tech?
Bennettheynabout 3 hours ago
fal has the endpoint under openai/gpt-image-2
ieie3366about 4 hours ago
It's great. Also doesn't seem to have any "slop" standard look, the images it produces are quite diverse.

I would imagine this will hit illustrators / graphic designers / similar people very hard, now that anyone can just generate professional-looking graphical content for pennies on the dollar.

throw310822about 3 hours ago
Ok, I can hear the sound of entire industries crumbling right now.
rqa129about 4 hours ago
Thanks, all displayed images look horrible and artificial. This will fail like Sora.
gekoxyzabout 4 hours ago
Hard disagree on this, I was coming here to comment that this is the first time I really can't tell that some of the photos are AI generated.
QuantumGoodabout 2 hours ago
Your single other comment is simplistic hyperbole as well, so this is presumably a bot account.
livinglist33 minutes ago
Denial is real…
furyofantaresabout 4 hours ago
I felt the same, particularly with the diagrams / magazines anyway.

I don't think it'll fail like Sora though. gpt-image-1.5 didn't fail.

bitnovusabout 4 hours ago
No gpt-5.5
wahnfriedenabout 2 hours ago
Thursday
szmarczakabout 4 hours ago
Wow, the difference between AI and non-AI images is collapsing. I hate the future where I won't be able to tell the difference.
Flere-Imsahoabout 4 hours ago
I wake up every day, read the tech news, and usually see some step change in AI or whatever. It's wild to think I'm living through such a massive transformation in my lifetime. The future of tech is going to be so different from when I was born (1980). I guess this is how people born in 1900 felt when they got to see man land on the moon?

> Wow, the difference between AI and non-AI images collapses. I hate the future where I won't be able to tell the difference.

Image generation is now pretty much "solved". Video will be next. Perhaps things will turn out the same as chess: in that even though chess was "solved" by IBM's Deep Blue, we still value humans playing chess. We value "hand made" items (clothes, furniture) over the factory made stuff. We appreciate & value human effort more than machines. Do you prefer a hand-written birthday card or an email?

torawayabout 3 hours ago
"Solved" seems a tad overstated if you scroll up to Simonw's Where's Waldo test with deformed faces plus a confabulated target when prompted for an edit to highlight the hidden character with an arrow.
Flere-Imsahoabout 3 hours ago
It's "solved" in that we have a way forward to reduce the errors down to 0.00001% (a number I just made up). Throwing more compute/time/money at these problems seems to reduce that error number.
abraxasabout 3 hours ago
As someone born in 1975 I always felt until the last couple of years that I had been stuck in a long period of stagnation compared to an earlier generation. My grandmother who was born in the 1910s got to witness adoption of electricity, mass transit, radio, television, telephony, jet flights and even space exploration before I was born.

Feels like now is a bit of a catch-up after the pretty tepid period that was most of my life.

cubefoxabout 1 hour ago
You will likely witness strongly superhuman AI, which dwarfs any changes your grandmother saw.
dag100about 3 hours ago
Chess exists solely for the sake of the humans playing it. Even if machines solved chess, people would rather play chess against a person than a machine because it is a social activity in a way. It's like playing tennis versus a person compared to tennis against a wall.

Photographs, videos, and digital media in general, in contrast, are used for much, much more than just socializing.

gekoxyzabout 4 hours ago
Well, for some of these images for the first time I can't tell that they are AI generated
simonwabout 4 hours ago
Suggest renaming this to "OpenAI Livestream: ChatGPT Images 2.0"
dangabout 1 hour ago
(We've since merged the threads and moved the livestream link to the toptext)
I_am_tiberiusabout 4 hours ago
or "How we make money with your images 2.0".
sho_hnabout 4 hours ago
In the 5 years and 3 months between DALL-E and Images 2.0, we've managed to progress from exuberant excitement to jaded indifference.
nba456_13 minutes ago
Who's 'we'? Speak for yourself!
kibibuabout 2 hours ago
Because we are all seeing the harm these tools are being used for.

It's just another step into hell.

rqa129about 4 hours ago
Can it generate Chibi figures to mask the oligarchy's true intentions on Twitter and make them more relatable?
zb3about 4 hours ago
Image generation? Hmm, would be cool if OpenAI also made a video-generation model someday..
incognito124about 4 hours ago
If only there was a social network with solely AI generated videos, I would pay literal money for it...
biosubterraneanabout 1 hour ago
Oh no.
ai4thepeopleabout 1 hour ago
Each day when my AI girlfriend wakes me up and shows me the latest news, I feel: This is it! We are living in a revolution!

Never before in history did humanity have the possibility of seeing a picture of a pack of wolves! The dearth of photographs has finally been addressed!

I told my AI girlfriend that I will save money to have access to this new technology. She suggested a circular scheme where OpenAI will pay me $10,000 per year to have access to this rare resource of 21st century daguerreotype.

aliljetabout 4 hours ago
I'm hopeful that OpenAI will offer clarity on their loss-leading subscription model. I'd prefer to know the real cost of a token from OpenAI rather than praying the venture-funded tokens will always be this cheap.