Discussion (58 Comments)
We will see if this changes the equation, but it feels like OpenAI is pretty far behind and playing catch-up on all fronts. Though to be honest, "pretty far behind" is like 2-8 weeks in the AI world, so it may not matter a ton; it's mostly perception. And for me and my information bubble, perception of OpenAI is rock-bottom due to Sam Altman. From appearing unethical to appearing unhinged with demands for fabs and everything else, I'm not a fan.
[0]: https://platform.claude.com/docs/en/build-with-claude/claude...
What OP is referring to is Anthropic aligning with corporate terms and conditions early, positioning themselves to be effectively resold by AWS rather than requiring orgs to procure them directly. This is huge in the enterprise world because the processes to get broad approval are generally far smaller and shorter for "just another AWS service" compared to a whole new vendor.
OpenAI language models are largely irrelevant at this point, imo.
And for OpenAI, there is a May 2025 preservation order in NYT v. OpenAI. The court is forcing OpenAI to retain ChatGPT output logs indefinitely, including chats users have deleted that would normally be purged within 30 days [2]. That makes it a non-starter for HIPAA/GDPR-bound orgs.
[1] https://aws.amazon.com/bedrock/faqs/
[2] https://openai.com/index/response-to-nyt-data-demands/
> Update on October 22, 2025:
> After months of litigation, we are no longer under a legal order to retain consumer ChatGPT and API content indefinitely. Our obligations under the earlier order ended on September 26, 2025.
> We’ve returned to our standard data retention practices:
> Deleted ChatGPT conversations and Temporary Chats will be automatically deleted from our systems within 30 days.
> API data will also be automatically deleted after 30 days.
I wonder if this is directly linked to the split with Microsoft. Just from my anecdata, OpenAI is getting completely ignored in serious enterprise deployments because what they offer on Azure sucks and there is no other corporate-friendly way to get it. They probably saw themselves getting destroyed in enterprise and realised it was existential to be able to compete with Anthropic on AWS.
https://news.ycombinator.com/item?id=47921248
...anyone with a brain at AWS knows that supporting OpenAI's latest models on Bedrock is simply good for AWS. That context is rather important!
Openai hasn't been publishing innovations for quite a while.
They're both just stealing ideas from pimono extensions
This paper isn't the exact same scenario, since it's an auditable open-weight Llama model, but it shows the symptoms of this: https://arxiv.org/pdf/2410.20247
This HN post itself has 4 simultaneous announcement links; not a coincidence.
There are billions of dollars of investor money on the line if the wrong thing is said at the wrong time; it needs to be carefully crafted and staged.
0. https://www.cnbc.com/video/2026/04/28/tech-shares-fall-after...
https://x.com/amazon/status/2049178618639839427
Since the product doesn't seem to be available yet, and the other links are all press releases, we'll leave the interview up as the main link.
You can just run "air-gapped" inference?
Is this only of interest to enterprise customers already on AWS (who want "air-gapped" behavior)? Is there any other use case for this?
This will be more expensive than calling OpenAI directly, right?
If this ends up similar to Claude on Bedrock, it's the same price.
But it's also for devs in a company who already have a blanket agreement with Amazon, but would have an uphill battle signing an agreement with OpenAI.
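That procurement point is concrete in code: for a team already on AWS, calling a model through Bedrock is just another boto3 call against existing IAM credentials, with no separate OpenAI account or API key. A minimal sketch using Bedrock's Converse API; the model ID below is an assumption for illustration (the real identifier comes from the Bedrock model catalog for your region):

```python
import json

# Assumed model ID for illustration -- look up the actual identifier
# in the Bedrock model catalog before use.
MODEL_ID = "openai.gpt-oss-120b-1:0"

def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the keyword arguments for a Bedrock Converse API call."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def ask(prompt: str) -> str:
    """Send a prompt through Bedrock using the org's existing AWS credentials."""
    import boto3  # resolved from the environment's AWS setup, not an OpenAI key
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.converse(**build_converse_request(prompt))
    return resp["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    # Show the request shape without needing AWS credentials.
    print(json.dumps(build_converse_request("Hello"), indent=2))
```

Because billing and access control ride on the existing AWS agreement, the dev-side change from "Claude on Bedrock" to "OpenAI on Bedrock" would mostly be swapping the model ID.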
AI is kind of like the ultimate corporate drug. They are all on it, and they can't get rid of it, ever again.