Discussion (418 Comments)
Deepseek v4 is good enough, really really good given the price it is offered at.
PS: Just to be clear: even the most expensive AI models are unreliable, make stupid mistakes, and their code output MUST be reviewed carefully. So Deepseek v4 is no different; it too is just a random token generator based on token frequency distributions with no real thought process, like all other models such as Claude Opus etc.
Once a new model or a technique is invented, it’s just a matter of time until it becomes a free importable library.
I think the biggest winner of this might be Google. Virtually all the frontier AI labs use TPU. The only one that doesn't use TPU is OpenAI due to the exclusive deal with Microsoft. Given the newly launched Gen 8 TPU this month, it's likely OpenAI will contemplate using TPU too.
You could reasonably say that "A majority of frontier labs use TPUs to train and serve their models."
https://www.reuters.com/business/retail-consumer/openai-taps...
Why does this need to be stated? Who else's would they be?
edit: he puts this on so many comments lol c'mon this is absurd.
Just add it to your profile once; no one assumes individuals speak for their employers here, that would be stupid. The disclaimer would only be needed in the uncommon case that you were speaking for them. It's an anonymous message board, we're all just talking here; it's not that serious.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
The central issue (or so they claimed) was that people might misconstrue my comment as representing the company I was at.
So yeah, I don’t understand why people are making fun of this. It’s serious.
On the other hand, they were so uptight that I’m not sure “opinions are my own” would have prevented it. But it would have been at least some defense.
Their employer? They may work at a related company, and be required to say this.
But I think you’re right
it's like people are LARPing a Fortune company CEO when they're giving their hot takes on social media
reminds me of Trump ending his wild takes on social media with "thank you for your attention to this matter" - so out of place, it makes it really funny
*typo
> Starting April 20, 2026, new sign-ups for Copilot Pro, Copilot Pro+, and student plans are temporarily paused.
From: https://docs.github.com/en/copilot/concepts/billing/billing-...
What was I looking at?
No. Email hn@ycombinator.com
https://news.ycombinator.com/newsfaq.html
There’s no upper limit to their financial stupidity.
I feel this looks like a nice thing to have given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this wouldn't end up as a money printing press, as long as OpenAI keeps bringing good models.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
I doubt it
valued at --which I'd say is a reasonable distinction to make right about now
How?
https://news.ycombinator.com/item?id=47616242
[1] https://github.com/orgs/community/discussions/10539
They still run their own platform.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
I think the differentiator is Team, which Google for some mysterious reason can't build or doesn't want to.
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has for all intents and purposes been achieved based on the valuation, and at that valuation OpenAI has to completely crush all competition... basically just to meet its current valuations.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half. Then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxxed profit!'; but ultimately it did its job, it was an effective donation/investment that had reasonably maximal returns.
(that said, I do not believe in crypto as an investment opportunity, it's merely the hand I was dealt by it being donated).
And Microsoft only paid $10B for that stake in the most recognizable AI brand name in the world. They don't need to "hedge their bets"; it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
That's a flawed argument. Why wouldn't you want to hedge a risky bet, and one that's even quite highly correlated to Microsoft's own industry sector?
my impression is that many of these "investments" are structured IOUs for circular deals based on compute resources in exchange for LLM usage
Genuine question because I feel like I’m maybe missing something!
The longer answer is; you never know whats coming next, bitcoin could have doubled the day after, and doubled the day after that, and so on, for weeks. And by selling half you've effectively sacrificed huge sums of money.
The truth is that by retaining half you have minimised potential losses and sacrificed potential gains, you've chosen a middle position which is more stable.
So, say we had 1000 bitcoin which were worth $5 one day, and $7 the next, but suddenly it hits $30. Well, we'd sell half.
If the day after it hit $60, then our 500 remaining bitcoins are worth the same as what we sold, so in theory all we lost was potential gains; we didn't lose any actual value.
Of course, we wouldn't sell we'd hold, and it would probably fall down to $15 or something instead.. then the cycle begins again..
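The sell-half arithmetic above can be sketched in a few lines. A minimal illustration, using the hypothetical numbers from the comment (1000 donated coins, a $30 record high, a later doubling to $60), not real market data:

```python
# Toy illustration of the "sell half at each record high" strategy
# described above. All numbers are hypothetical.

def sell_half_at_high(holding, price):
    """Sell half the holding at the given price; return (remaining, proceeds)."""
    sold = holding / 2
    return holding - sold, sold * price

holding = 1000  # hypothetical donated bitcoin
remaining, proceeds = sell_half_at_high(holding, 30)
print(remaining, proceeds)  # 500 coins kept, $15,000 banked

# If the price then doubles to $60, the kept half alone is worth what
# the entire stack was worth at the moment of sale: only the further
# upside on the sold half was sacrificed, no realized value was lost.
print(remaining * 60)  # $30,000 still held, plus the $15,000 already realized
```

This is why the commenter calls it a middle position: the downside of a crash and the upside of a further rally are both halved.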
For OAI to be a purely capitalist venture, they had to rip that out. But since the non-profit owned control of the company, it had to get something for giving up those rights. This led to a huge negotiation and MSFT ended up with 27% of a company that doesn’t get kneecapped by an ethical board.
In reality, though, the board of both the non-profit and the for profit are nearly identical and beholden to Sam, post–failed coup.
Might really increase the utility of those GCP credits.
We have no idea what it means to be the "primary cloud provider" and have the products made available "first on Azure". Does MSFT have new models exclusively for days, weeks, months, or years?
Both facts and more details from the agreement are quite frankly highly relevant to judge whether this is a net positive, negative or neutral for MSFT. It's unbelievable that the SEC doesn't force MSFT to publish at least an economic summary of the deal.
That might help fix some of the bugs in Teams... :)
Bear in mind that MSFT have rights to OpenAI IP (as well as owning ~30% of them). The only reason they were giving revenue share was in return for exclusivity.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic can work with anyone they like but OpenAI couldn't.
https://blogs.microsoft.com/blog/2025/11/18/microsoft-nvidia...
https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-av...
https://ai.azure.com/
AFAICT they are just hedging their bets left and right still. Also feels like they are winning in the sense that despite pretty much all those products being roughly equivalent... they are still running on their cloud, Azure. So even though they seem unable to capture IP anymore, they are still managing to get paid for managing the infrastructure.
[1] https://news.microsoft.com/source/2026/04/08/microsoft-annou...
Which also means, if you are a big boring AWS or GCP shop, and have a spend commitment with either as part of a long term partnership, it will count towards that. And, you won't likely have to commit to a spend with OpenAI if you want the EU data residency for instance. And likely a bit more transparency with infra provisioning and reserved capacity vs. OpenAI. All substantial improvements over the current ways to use OpenAI in real production.
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
"The Microsoft and OpenAI situation just got messy" is objectively wrong; it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense: it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person, "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
I'm pretty sure "just" is being used here to mean "simply" rather than "recently".
That's Kagi? Cool, I'll check it out more!
https://www.dw.com/en/musk-vs-openai-trial-to-get-underway/a...
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
https://blogs.microsoft.com/blog/2026/04/27/the-next-phase-o...
This seems impossible.
Amazon CEO says that these models are coming to Bedrock though: https://x.com/ajassy/status/2048806022253609115
https://news.ycombinator.com/item?id=47616242
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
[1] https://www.reuters.com/technology/microsoft-weighs-legal-ac...
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
Why is it Altman is facing kill shots and Dario isn’t?
OpenAI has public models that are pretty 'meh', better than Grok and China, but worse than Google and Anthropic. They still cost a ton to run because OpenAI offers them for free/at a loss.
However, these people are giving away their data, and Microsoft knows that data is going to be worthwhile. They just don't want to pay for the electricity for it.
What's losing OpenAI money is paying for the whole of R&D, including training and staff. Microsoft doesn't pay that, so they get the money making part of AI without the associated costs.
The circular economy section really is shocking: OpenAI committing to buying $250 billion of Azure services, while MSFT's stake is clarified as $132 billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.
Dennis: That's right.
Mac: How much fresh cash did we make?
Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.
Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.
Dennis: Right. That is assuming, of course, that they will come back here and drink.
Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.
Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.
Mac: Okay...
Dennis: How does this work, Mac?
Mac: The money keeps moving in a circle.
Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?
Mac: I don't know. I thought you knew.
I fear for the end user we'll still see more open-microslop spam. I see that daily on youtube - tons of AI generated fakes, in particular with that addictive swipe-down design (ok ok, youtube is Google but Google is also big on the AI slop train).
Maybe we need to start thinking less about building tests for definitively calling an LLM AGI, and instead decide at what point we can no longer tell humans apart from LLMs, and declare AGI is here then.
Isn't that exactly what you would expect to happen as we learn more about the nature and inner workings of intelligence and refine our expectations?
There's no reason to rest our case with the Turing test.
I hear the "shifting goalposts" riposte a lot, but then it would be very unexciting to freeze our ambitions.
At least in an academic sense, what LLMs aren't is just as interesting as what they are.
The Turing Test/Imitation Game is not a good benchmark for AGI. It is a linguistics test only. Many chatbots even before LLMs can pass the Turing Test to a certain degree.
Regardless, the goalpost hasn't shifted. Replacing human workforce is the ultimate end goal. That's why there's investors. The investors are not pouring billions to pass the Turing Test.
> I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
Many people who want to argue about AGI and its relation to the Turing test would do well to read Turing's own arguments.
Like, do people not know what the word "general" means? It means not limited to any subset of capabilities, so it can teach itself to do anything that can be learned. Like start a business. AI today can't really learn from its experiences at all.
The truth is, we have had AGI for years now. We even have artificial super intelligence: we have software systems that are more intelligent than any human. Some humans might have an extremely narrow subject in which they are more intelligent than any AI system, but the people on that list are vanishingly few.
AI hasn't met sci-fi expectations, and that's a marketing opportunity. That's all it is.
also, I'm pretty sure some people will move goalposts further even then.
If you've never read the original paper [1] I recommend that you do so. We're long past the point of some human can't determine if X was done by man or machine.
[1]: https://courses.cs.umbc.edu/471/papers/turing.pdf
Regarding shifting goalposts, you are suggesting the goalposts are being moved further away, but it's the exact opposite. The goalposts are being moved closer and closer. Someone from the 50s would have had the expectation that artificial intelligence is something recognisable as essentially equivalent to human intelligence, just in a machine. Artificial intelligence in old sci-fi looked nothing like Claude Code. The definition has since been watered down again and again and again and again so that anything and everything a computer does is artificial intelligence. We might as well call a calculator AGI at this point.
An AGI would not have problems reading an analog clock. Or rather, it would not have a problem realizing it had a problem reading it, and would try to learn how to do it.
An AGI is not whatever (sophisticated) statistical model is hot this week.
Just my take.
That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".
It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."
Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.
[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...
From: https://openai.com/charter/
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
From Wikipedia
Eschatology (/ˌɛskəˈtɒlədʒi/; from Ancient Greek ἔσχατος (éskhatos) 'last' and -logy) concerns expectations of the end of present age, human history, or the world itself.
In case anyone else got vocabulary skill-checked like me.
Russian Invasion - Salami Tactics | Yes Prime Minister
https://www.youtube.com/watch?v=yg-UqIIvang
OpenAI and Microsoft do (did?) have a quantifiable definition of AGI, it’s just a stupid one that is hard to take seriously and get behind scientifically.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
Why are we expecting AGI to one-shot it? Can't we have an AGI that occasionally fails to solve some math problem? Is the expectation that AGI be all-knowing?
By the way, I agree that AGI is not around the corner, and I am not arguing that any of the LLMs are "thinking machines". It's just that I agree the goalposts need to be set well.
People obviously have really strong opinions on AI and the hype around investments into these companies but it feels like this is giving people a pass on really low quality discourse.
This source [1] from this time last year says even lab leaders most bullish estimate was 2027.
[1]. https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
I think this might be similar to how we changed to cars when we were using horses
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
Other people just call it "theft".
Asking because, reading the tea leaves from the outside, until ChatGPT came along, MSFT (via Bill Gates) seemed to heavily favor symbolic AI approaches. I suspect this may be partly why they were falling so far behind Google in the AI race, which could leverage its data dominance with large neural networks.
So based on the current AI boom, MSFT may have been chasing a losing strategy with symbolic AI, but if they were all-in on NN, they were on the right track.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
Grand delusion, perhaps.
1) True believers 2) Hype 3) A way to wash blatant copyright infringement
True believers are scary and can be taken advantage of. I played DOTA from 2005 on and beating pros is not enough for AGI belief. I get that the learning is more indirect than a deterministic decision tree, but the scaling limitations and gaps in types of knowledge that are ingestible makes AGI a pipe dream for my lifetime.
Definitely interesting to watch from the perspective of human psychology but there is no real content there and there never was.
The stuff around Mythos is almost identical to O1. Leaks to the media that AGI had probably been achieved. Anonymous sources from inside the company saying this is very important and talking about the LLM as if it was human. This has happened multiple times before.
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
Your position is a tautology given there is no (and likely will never be) collectively agreed upon definition of AGI. If that is true then nobody will ever achieve anything like AGI, because it’s as made up of a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans won't be able to fully automate any desired action in the future?
We already have several billion useless NGI's walking around just trying to keep themselves alive.
Are we sure adding more GI's is gonna help?
...just please stop burning our warehouses and blocking our datacenters.
Isn't this tautology? We've de facto defined AGI as a "sufficiently complex LLM."
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI and without starting from scratch down an alternate path it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self learning without intervention by a human. Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
And how will you know AGI when you saw it?
If you present GPT 5.5 to me 2 years ago, I will call it AGI.
Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts and so on.
I mean, let's talk about what this "hype" was if we see a clear ceiling appear and we get "stuck" on progress; but until then, I'd reserve my judgment for judgment day.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and decide that it can't possibly be AGI, so our definition of AGI must have been wrong.
In some sense, this isn't really different than how society was headed anyways? The trend was already going on that more and more sections of the population were getting deemed irrational and you're just stupid/evil for disagreeing with the state.
But that reality was still probably at least a century out, without AI. With AI, you have people making that narrative right now. It makes me wonder if these people really even respect humanity at all.
Yes, you can prod slippery slope and go from "superintelligent beings exist" to effectively totalitarianism, but you'll find so many bad commitments there.
Science fiction from that era even had the concept of what models are... they'd call it an "oracle". I can think of at least 3 short stories (though remembering the authors just isn't happening for me at the moment). The concept was of a device that could provide correct answers to any question. But these devices had no agency, were dependent on framing the question correctly, and limited in other ways besides (I think in one story, the device might chew on a question for years before providing an answer... mirroring that time around 9am PST when Claude has to keep retrying to send your prompt).
We've always known what we meant by artificial intelligence, at least until a few years ago when we started pretending that we didn't. Perhaps the label was poorly chosen (all those decades ago) and could have a better label now (AGI isn't that better label, it's dumber still), but it's what we're stuck with. And we all know what we mean by it. We all almost certainly do not want that artificial intelligence because most of us are certain that it will spell the doom of our species.
https://www.noemamag.com/artificial-general-intelligence-is-...
There is a reason so many scams happen with technology. It is too easy to fool people.
If this progress and focus and resources doesn't lead to AI despite us already seeing a system which was unimaginable 6 years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree and Generalist's progress on robotics, thats also CRAZY.
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
The "predicting the next word" is the learning mechanism of the LLM, which leads to a latent space that can encode higher-level concepts.
Basically, an LLM "understands" as much as it has to in order to respond in a reasonable way.
An LLM doesn't predict German text or the Chinese language. It predicts the concept and then has a language layer outputting tokens.
And it's not just LLMs that are progressing fast: voice synthesis and voice understanding jumped significantly, as did motion detection, skeleton movement, virtual world generation (see NVIDIA's way of generating virtual worlds for their car training), protein folding, etc.
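To make the "next word prediction" mechanism in the comment above concrete, here is a deliberately naive sketch: a bigram frequency model that literally counts which token follows which. This is the "token frequency distribution" caricature from earlier in the thread, not how real LLMs work; they replace raw counts with a learned latent representation, which is what lets them encode concepts across languages.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which token follows which in a
# tiny corpus, then predict the most frequent continuation.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(token):
    """Return the most frequent next token seen after `token`."""
    return counts[token].most_common(1)[0][0]

print(predict("the"))  # "cat" -- seen twice after "the", vs "mat" once
```

The gap between this lookup table and an LLM is exactly the latent space the comment describes: the counts here are tied to surface tokens, while a trained model generalizes to sequences it has never seen.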
Their progress is almost nought. Humanoids are stupid creations that are not good at anything in the real world. I'll give it to the machine dogs, at least they can reach corners we cannot.
I can also recommend looking at Generalist: https://www.youtube.com/@Generalist_AI
Is it? We've already scaled up data input and LLMs in general; the only thing making them advance at all right now is adding processing power.
Crypto was flawed from the beginning, and lots of people didn't understand it properly; not even that a blockchain can't secure a transaction originating outside of the blockchain.
Tried to delete this submission in place of it but too late.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).