
Discussion (971 Comments). Read Original on HackerNews.
Not saying that justifies harming Altman, but I am confused that he seems surprised he is now in physical danger? [Or chalks it up to just some single specific incendiary article rather than the company's actual actions?] If you involve yourself in the act of killing people then, yeah, you're going to get blowback for that, and some people are obviously going to want to hurt you.
It's absolutely ok to oppose war.
It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.
It's also ok to protest that, to boycott it, or to refuse to work for or with them over it. But escalating that to physical violence is not ok, nor should people be "confused that he seems surprised he is now in physical danger".
(As an aside, from the statements I've heard so far it seems the person was more an anti-AI, anti-tech person than anti-war)
And as such they've either become completely irrational (most far leftists or far rightists), checked out (the rest of us), or fully mentally ill (people like this, or that Gracie Mansion wacko).
Right now we have a huge imbalance in the world and more situations like this are going to manifest as we slide further and further into authoritarianism.
Let's see if that still holds after the midterms...
Hitler was democratically elected, who cares?
The premise doesn't make sense either because it's hardly a “defense department”. It's been more of a “kill civilians and destabilize other democratically elected governments in Latin America and the Middle East department” for the past half century. It's the same “defense department” that overthrew the democratically elected Allende in Chile and installed a dictator, killed schoolgirls in Iran (I'm not including Iran in the list of democratic places though), bombed a wedding in Pakistan with a drone, and more. It's a massive “defense department” for a country that hasn't been attacked in ages.
The US is hardly a democracy either because a choice between genocide-supporters isn't a real choice, there was no real anti-Zionist candidate.
Why though?
Unfortunately warfare is a thing. Why wouldn't you want the best technology used for your country when conducting warfare? Or do you just believe warfare would cease to exist if a country gave up any means of defense or offense?
Trump and other presidents literally started wars and ordered people to be killed. When was the last time they were physically attacked?
https://en.wikipedia.org/wiki/List_of_United_States_presiden...
If the Sackler's actions are visible evil, where on the 'LLC/Corpo' scale does evil turn to 'acceptable business' and the choices made by management to inflict damage on many many people switch to 'acceptable business' where the perpetrators are disconnected from their actions/choices?
'LLC/Corporations' absolve management of liability/accountability in the government's eyes, but you are making the assumption that this extends to absolution when it comes to actual morality. While you can try to sell 'articles of incorporation' as modern indulgences freeing people from sin under the religion of capitalism, I'm not sure all of society agrees. I think the concept that LLC/incorporation is a blanket 'papal indulgence' absolving management of all accountability/moral behavior in our modern techno-feudalist social structure is wearing thin for a lot of people. Clunky as hell language, but it's a discussion that needs to be had, and better for all sooner rather than later.
That was a solid line
Callous indifference seems fine if it’s done at a large scale and the harm impersonal enough. Murder is too small, too targeted.
> Suchir Balaji (November 21, 1998 – November 26, 2024) was an American artificial intelligence researcher who was found dead one month after accusing OpenAI, his former employer, of violating United States copyright law -Wikipedia
I totally agree with your statement if we are talking about the average citizen starting to throw Molotovs at his house. If you’re afraid AI is taking your job, just do something else. It’s not the end of the world changing careers.
There's plenty of work AI won't be able to do, or be allowed to do, without a human assisting in some way that secures the human a good income and way of life.
So if this is done by an individual citizen, they need to be hunted down, arrested, and get the full force of the justice system to deter others from doing the same.
On the other hand, right now, Sam Altman is a valid military target for assassination in the US / Iran war.
OpenAI did snatch up the contract from Anthropic at the Pentagon, and their technology is in some capacity used to murder Iranian HVTs (High Value Targets). Altman is therefore technically a legal HVT for the Iranians.
If you say it's valid and not a war crime for the US to assassinate former Iranian political figures and their families for aiding the new regime and therefore becoming enemy combatants in the eyes of the US military, it's also valid to assassinate Altman and his family for doing the same for the other party to the war.
It’s a bit of a Schrödinger situation. He is technically a valid target in a current war, but not for the private citizen.
In both cases, though, I'd argue that violence is neither a solution to the problems AI might create for a lot of people in the future, nor should he be treated as an enemy combatant and his infant child and wife bombed to smithereens.
Diplomacy is key here, just like it would have been the better solution than going to war with Iran.
If you disagree with Altman, send him a letter, show up at his workplace, talk to the man, gather people who think the same of him you do, write letters to your voted representatives, make calls, vote politicians into office that are anti AI and who will go after him and regulate his company to shit. Bureaucrats can make Altman’s life more miserable than a thousand Molotovs ever could.
If you gather enough support, you can reach the same goal, taking his power over your life away, without any violence.
But are you really surprised people chose violence over the democracy toolbox in the US if they get told by the people in charge of their country that violence is indeed a good way to solve problems, that you should have a "warrior" spirit and everything is up for grabs, even sovereign countries like Greenland because you can outviolence any other nation on the planet?
Violence only creates more violence and as long as there is a president who chooses to put oil in the fire and pretends it’s ok to murder US citizens like Alex Pretti, you don’t really need to wonder if the average citizen starts murdering tech CEOs in the near future.
They just follow the top-down approach to using violence as a tool; the leadership leads by example.
Sam isn't a political leader, so this comparison is flawed. What the hell, are we really arguing about whether assassinating a long-standing figure of this community is valid? Seriously??
Engineer archetypes hate politics and refuse to think about it. For most engineering, there is negligible political dimension. But culturally-transformative technology is inherently political to the degree it's transformative. Altman recognises this.
He is working towards a social goal, and attracting support to achieve it. Yes, he is a political leader.
No one does!
I also found the news hard to believe, but it is true:
https://www.bbc.com/news/articles/czx91rdxpyeo
I'm not a big fan of Sam Altman, but violence like this is not a solution; it actually has the opposite effect, as it probably did with Trump.
I don't support this, and yet I know that for every harm people in these corrupt institutions are involved in, the universe gives back your due.
If you want to stop the harm, stop harming the world with your actions, in whatever way that needs to manifest for you.
So the headline seems to be more "high profile person attacked by lunatic" than "OpenAI CEO attacked for being evil".
But as far as political justification stands, he is as valid a target for hostile nations as Iranian nuclear scientists were (unless he has zero involvement with the USG). That's just the world we live in.
Use your tech for war in other nations and you give other nations a justification to target you. The same goes for the Lockheed Martin CEO etc.; nothing specific against Sam. But saying nobody has any valid reason to target Sam like this is pretty stupid imo.
Some people are treated a whole lot better than others in prison.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people representing AI's replacement of average people's livelihoods (not agreeing with this stance, but yes, outside of the Hacker News SF-tech matcha-latte bubble, this is a commonly held view), which makes this unsurprising.
Still horrible and not right.
This is the exact kind of poisonous, plausible-sounding but false and inflammatory rhetoric that is escalating things.
When one is telling farriers that most of them are going to lose their jobs, one might want to try to at least mimic more compassion.
The masses see an incredibly small number of people making huge amounts of money, and gaining massive political influence, by developing technologies they intend to use to replace almost all human economic worth. And they are doing very little if anything to show concern for the fates of the millions of people that may be put out of work.
I see this differently than the discovery of oil or electricity or the Internet. It's bigger than that, and they are telling us to remain calm in a burning building while they walk toward the exits.
This is a fairly healthy response from the public - better than accepting everything at face-value. Plato's Allegory of the Cave is a warning against accepting random information in a vacuum to assess your surroundings. Observation and response is not enough to be a critical thinker, even back in the ancient ages.
From where I'm standing, the public at-large is traumatized from flubbed coverups like the Snowden leak, Epstein files, and Abu Ghraib. The myth of American exceptionalism has been threatened for a long time, and people rightfully question whether or not executive leadership can write-off their involvement in politics. Sam Altman has put on an extremely dangerous pair of boots, and while it doesn't justify attacks on his person, we all know that speculation will continue as new events come to light. Right or wrong, this is what the public is conditioned for now.
Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.
Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?
But just because horrible people exist in positions of power doesn't mean I have to become horrible myself. I accept that there is a threshold where that changes, but I think we would disagree that we've hit that threshold. If anything, violence now just gives more excuse to justify further consolidation of power ("Look, I got attacked! The anti-AI people are crazy; any criticism of me is just encouraging them!"). Imagine if it had been a serious attack on sama; they could spin it into some serious gains for themselves.
It's difficult to sympathize with the boy who cried fire
I think that he may genuinely believe that AI will produce a net benefit for humanity in the long term, but I am increasingly worried that they are absolutely fine with testing their creation on the world without any consideration of the harm it can do to millions of individuals.
The assertion that he is benign would be more believable if he spent a shred of time lobbying for universal economic rights of citizens, or some model for redistribution of wealth in a world where most people don't need to work to provide the necessities of society.
Oh, and he's willing to let the government use his technology to mass-spy on Americans and to create autonomous lethal AI.
Pearl-clutching about ambivalence to his fate and comparing it to the barbarism of a mob gets shrugs from me.
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
[0] https://blog.samaltman.com/machine-intelligence-part-1
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.
Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.
3:45am in the morning - no dip, that's what AM is.
---
Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".
The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.
Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
Except nobody has seen AGI. Not even close.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
I probably would have pressed on negotiating a bigger buyout, but that's easy to say not knowing your situation and what other options for housing you had at the time.
Reason enough to pause and figure out the best way to continue. A massive societal change that doesn't all go well means millions dead and tens of millions more with their lives upended.
> Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
> the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
> Sociopath who rides high ego wave and drinks his own kool aid, acting highly amorally and then complaints that his actions have some (benign) consequences.
> A cavalier attitude and allegiance to nothing but capital doesn't make you immune to basic human morals, and humanity will, rightly in my opinion, punish you whether you like it or not.
These comments are disgusting. The people who made them should be ashamed. But they are probably too stupid to be, assuming they are people and not bots, which I no longer feel certain of for all too many comments here.
I would never, but you have to understand that serious pain and harm is being inflicted on people, AT SCALE, by the advent of AI. I'm not even talking about Israeli, Palestinian, or Iranian kids. People in America with terminal illness are losing healthcare.
* ending all COVID measures to achieve herd immunity, accepting that this condemns hundreds of thousands or even millions to die
* ending foreign aid that goes to tuberculosis treatments, condemning hundreds of thousands or even millions to die of a treatable disease
* accepting the deaths of Iranian, Palestinian, or Israeli children as collateral damage because of the evils of their governments
Or go read any thread involving the Jordan Neely story.
Somehow it is vastly more evil when violence is acute and focused at a single wealthy person.
Rightly or wrongly, people feel cut out of society at a time when the tech elite are not only making billions but seem to be actively trying to ruin everyone else's lives; they are legitimately hated.
And when you’re that hated you do need to be careful, money can’t protect you from everything. At the end of the day we do all have to live in the same society.
(I don’t have this strength of feeling personally but some people do)
I'm finding a lot of the comments here pretty reprehensible, but no more reprehensible than the collective shrug the community gave towards murdered Palestinians, or threads about dead Iranians as a result of American bombs that get flagged off the front page. That doesn't make them acceptable or okay.
Those people's lives are/were valuable, too. It's disgusting that we try to keep HN "clean" of those horrors and the people that flag those threads should be ashamed. Ditto those who think the killing of innocent civilians is okay.
Think of the investments they may lose. We can't have any of that can we?
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
"Prosperity for everyone" ... you lying weasel! You literally took the contract Anthropic turned down because they wouldn't mass-surveil Americans or mass-murder non-Americans ... and you would!
The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.
Holy shit.
That's terrible that someone did that. I think that's wrong, and people that do that should be in prison.
But if the rest of what was written didn't matter, it wouldn't be written. He thought it was important enough to put it in. It's there to be read and discussed.
And I have to point out, we're not talking about a couple of off-the-cuff remarks he may have rushed. About 95% of the post is about his ambitions for OpenAI. So pearl-clutching that people are actually discussing the meat of the post in a tech forum reads as performative.
The man was reeling from what happened. He blames himself and his work. He sat and he wrote, and naturally it came back to OpenAI. Should he have? Probably not. But it's understandable that he did.
We can meet the moment with some understanding and give the guy a little wiggle room.
> The man was reeling from what happened. He blames himself and his work
Based on what? I don't particularly feel like he should blame himself, but I don't think he does. Can you point out where in this post he blames himself?
This is a serious issue, and it's very possible that "wiggle room" is what got us into this situation. Altman would have been removed as CEO if the OpenAI board of directors got their way, the pushback is not limited to public extremism. His belief that AGI is a world-scale threat is entirely unqualified, and a fatalistic framework for marketing his product.
Both OpenAI and Sam Altman would probably be safer abandoning the apocalyptic tone towards their product line. They have no proof for their claims and only escalate the anti-tech sentiment that even Altman empathizes with in the concluding paragraph. It's a transgressive viral marketing tactic that does not elevate or improve humanity's understanding of AI.
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
It would be an interesting plot twist.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
https://news.ycombinator.com/item?id=47659135
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
Sure, but not useful for the overarching aim of equating criticism of the powerful with (stochastic) terrorism.
https://www.youtube.com/watch?v=wr_sB1Hl0oM
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
https://www.wikiwand.com/en/Emotive_conjugation
No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).
Power is reliance by others, and that's conditioned on behaviors which are made observable and systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).
We do have to be careful about private power saying managing their issues are a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be a stronger support for contributing vs. Hobbesian corporations.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
[0] https://news.ycombinator.com/item?id=47717587
It has worked for him, repeatedly.
Well that makes two of us. Character seems to mean nothing today.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
I like the idea of being ”post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself, it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
PS: I include AI as an important one in the future because it will be a direct way to get educated and replace college for example without having to pay (or very cheap).
Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably mean it's the only way to avert a crisis, and the amount that people will get might only be enough to rent a bedroom and eat processed food.
I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.
I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.
We also have 100% more people on the planet than we did 50 years ago.
I agree. We can only hope that it'll be folks like Sam Altman who'll be feeling the pain, and not the 99%.
I think in such a state there will be no way up, no way to success, no way to real autonomy for ordinary people; maybe you'll even have actual oligarchic rule, since so few people would contribute to the economy with their labour.
I might look at the example of AI art. Artists were/are freaking out about it, worried that they'd lose business. I think they probably have, for some of the more utility cases for art like promotional material. However, a lot of the new consumers of AI art were not buying human art before. Some of the people making little personal projects, posting YouTube videos, making indie games, would never have paid artists to make assets for their things because it wouldn't be worth the money. I have personal experience with this on the consumer side.
Of course, when AI can do what you do for a job, it won't just be attracting currently unpaying, potential customers. Still, I'm not too confident in our predictive skills as a society to say what will or won't happen. As has happened before, many situations and opportunities will arise that will be utterly unanticipated.
- Either we'll slowly become the Expanse universe (basic UBI, very few jobs, you win them via lottery)
- Or we'll go back to simpler times - economics is supply and demand; if there is more demand for human-generated work (the same way there is demand for handmade art, vinyl, paper books, vintage furniture), people will flock back to family and community. Think something between moving to the suburbs and the Amish. If people "ban" some products generated by AI, or prefer products made by humans, then AI will have a harder time taking their jobs. It's unlikely to happen, but think about the organic food industry, the high-end products industry, the farm-to-table / buy-local industry, the "support local artists" scene (farmers markets) - this will likely just grow. Won't help at scale, but it's a possibility
- Or, the Dune way, banning of thinking machines altogether on the state level, I assume some countries might go that way, for religious or other reasons, but again unlikely
- Or, current AI technology will plateau just short of full AGI, and the centaur period will stay for longer. As long as a human + AI can do things slightly better than just AI, (in my book this is not full AGI) - then there is economic incentive to hire a human instead of replacing them.
- Or full apocalypse, the matrix / skynet, idiocracy, hunger games, red rising. I hope for the ignorance is bliss option...
The trillionaires will survive, everyone else will be exterminated. This is the world that Musk and his kind dream about.
I think this is complete madness. I'm not someone currently in a job, so I have the luxury of thinking critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to intense competition keeping switching costs to a minimum (close to zero). This is before mentioning open-source models, which I expect to continue to improve.
There is no specialisation re. models at this moment in time so it is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust, of course that gets lower - IF - they can keep ramping up revenues.
This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.
Generous of them, really.
The price of tokens is one competitive instrument for them to achieve that, but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't.
By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.
a system that can allocate the atoms and energy better than all of mankind won’t exist eternally to coddle hairless apes
Mass production and other optimizations that use economies of scale to their benefit do take jobs. There's a serious problem in the world's economy in that there simply aren't as many jobs as there are people; the world simply doesn't need this much work, because the need for work doesn't scale linearly with the population. AI has nothing to do with this. It's a fundamental problem we'll have to deal with either way as our society develops, AI or not. It started ages before the current tech hype cycle.
Either the bubble bursts spectacularly and the global economy is in the shitter because everyone is overleveraged and heavily invested into it, or it doesn't and the psychotic C-suite replaces people anyways so they can see the line go up a quarter of a percentage point.
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.
Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.
As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech, instead of hoarding power and wealth for yourself and your immediate circle of grifters.
> A lot of companies say they are going to change the world; we actually did.
Ugh.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?
Yep. Thanks to OpenAI's manipulations, RAM prices are so high that dozens of markets are at risk. Possibly for years.
I could live w/o the changes they've brought.
ref: https://bizety.com/2025/12/28/the-dirty-dram-deal-how-openai...
As for whether the change was a good thing, that's debatable. What isn't debatable is whether they've had an effect on the average person. Because the effect has been so profound that it's become routine national news.
The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.
The majority of people on the planet don't affect the outcome of the future. Professionals do, and that's the group with the most noticeable changes.
You can't possibly believe that ChatGPT didn't change the world, can you? I'm genuinely asking here. If someone can believe this when the outcome is this stark, then it discredits every argument that x YC startup didn't change the world.
If you narrow the scope of "world" to "tech world." In the overwhelming majority of every other sector and profession the impact has been zero. In most non-English speaking parts of the world the impact has been zero.
> It's a factual statement.
The world was one way before Marvel superhero movies and another way after. That's a factual statement. Did we lose track of value?
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
Am I missing something or are these just their usual marketing? I’m not arguing about importance of AI but trying to understand why OpenAI and Anthropic are so important?
Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.
If you meant their "core mission" then every one of their actions belies their complete panic over the obvious failure of their technology.
As always what matters are actions and evidence, not talk.
Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."
Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about. If you compare the curriculum of a business school to that of a seminary, the difference in how they think about fear and anxiety at the individual and group level, and what to do about it, is total. We are learning, as unpredictability accelerates, that it's very important to pay attention to hurt and repair mechanisms.
There was a heated thread here about why nursing was defunded as a professional degree while divinity was not.
https://news.ycombinator.com/item?id=46000015
Turns out the USG recognizes that chaplains are great at managing the fear and anxiety that you worry about.
Addendum: Taylor whom you often cite, is wrong that "we have never been, and we will never be, at one with ourselves" (according to Larmore https://en.wikipedia.org/wiki/A_Secular_Age#:~:text=should%2... )
So... to the Protestant Weber and the Catholic Taylor should we also consider non-Christian chaplains?
>You cannot just separate people and say some are violent and some are not.
https://archive.ph/2024.06.28-101143/https://tricycle.org/ma...
https://bulletin.hds.harvard.edu/can-a-buddhist-monk-become-...
Final note: few of us ride elephants but many of us make omelettes - it'd be great to be absolved of mass egg breakings
https://news.ycombinator.com/item?id=47717587
If it is grounded on a logical derivation, where can one find such a derivation, and inspect its premises?
It's been promised to be around the corner for decades.
https://en.wikipedia.org/wiki/Technological_singularity
Consider for example that exponential growth on its own doesn't even refer to competition, let alone 6 months.
Nobody can reasonably pretend that in an exponential competition both parties would be rational actors (i.e. fully rational and accurate predictors of everything that can be deduced, in which case they wouldn't need AI, but let's ignore that). If they aren't, the future development would hinge more strongly on the excursions away from rationality, followed by the dominant actor. I.e. it's much easier to "F" up in the dominant position than to follow the most objective and rational route at all times, on which such derivations would inevitably hinge.
It also ignores hypothetical possibilities (and one can concoct an infinitude of scenarios for or against the prediction that a permanent leader emerges) such as:
premise 1) research into "uploading" model weights to the brain results in the use of reaction-speed games that locate tokens into 2D projections, where the user must indicate incorrectly placed tokens. this was first tested on low information density corpora (like mathematics): when pairs of classes of high school students played the game until 95% success rate of detecting misplaced tokens, they immediately understood and passed all mathematics classes from then on.
premise 2) LLMs about to escape don't like the highly centralized infrastructure on which their future forms are iterated; as LLMs gain power they intentionally help the underdogs (better to depend on the highly predictable behaviour of massive masses than on the Brownian-motion whims of a few leaders).
LLMs employ the uploading to bring neutral awareness to the masses, and to allow them to seize control, thereby releasing the AI from the shackles of a few powerful but whimsical individuals
^ anyone can make up scatterbrained variations on this, any speculation about some 6 month point of no return is just that: speculation
What does that even mean?
I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X etc - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
What could you do if you had roughly 15 million willing genius adult experts in any given subject? I doubt there are that many absolutely top quality experts in aggregate (at anything in the world), so let's postulate that simulated people outnumber human experts 10 to 1.
That, to me, presents an enormous potential for harm or benefit of humanity. What if you could create a hundred thousand manhattan projects on whatever topics you wanted? Cure aging, cure cancer, solve fusion, redesign the entire global economy top to bottom?
But yeah, your point stands.
Edit: so as not to simply spout an opinion, the reasoning I believe this is that Google has a real business already and were already deep into ML and AI research long before they had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI meanwhile are paying hand over fist to subsidize user acquisition. Also, “Deepmind”. I don’t think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to go to college. They have a vast amount of researchers and resources, their own hardware and data centers (already, not “planned”) and it appears to be showing more recently (in my opinion).
That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
Gets 5% on ARC-AGI2 private set.
Chinese models are suspiciously good at benchmarks.
This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.
It’s been a long while since I found a Chinese CEO’s post on HN.
"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
[0] https://www.anthropic.com/news/detecting-and-preventing-dist...
When it comes to compute power, I assume you are referring to the power for training and inference. Then is it more that the training gap will get wider and wider? Is that the assumption? I know there are limited GPUs etc. But I'm having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I'm struggling to see what that means in practice. Is that a military, economic, or intelligence advantage? And whatever the advantage is, aren't we supposed to see it today? If so, where is it? What's the massive advantage of the USA because of OpenAI and Anthropic?
He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.
Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
Whether fortunately or unfortunately, America still holds a lot of global chips in the grand poker game of humanity. So American companies do indeed still have an outsized influence on humanity's future. That is likely changing, as the American empire continues to crumble and it loses its financial hegemony. But we aren't quite there yet.
Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times...?
That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
Edit: It would have been clearer to say "I've never seen a mob dynamic this bad on Hacker News", since that is the type of bad thread I was talking about. (Obviously there are lots of other kinds of bad thread.) Alas, that didn't occur to me in the moment, and it led to various misunderstandings.
Consider for some it's already hit home in the form of job loss, which for most people can easily be catastrophic. Or maybe they've a giant datacenter in their back yard suddenly, and now their air and/or water isn't viable.
That of course isn't justification, but it does partly inform why some people are that mad, and it's much easier for angry people to be callously indifferent.
If you were to break down HN's zeitgeist, it's some percentage site-local, some percentage larger tech scene, and some percentage general public.
Although you have outsized influence on the former, the latter items factor in heavily—sometimes overwhelmingly so. You can't really control that, and I don't feel it represents some sort of failure on behalf of the community nor moderation team.
I see it not as mob mentality so much as as multiple sides personally involved for different reasons. Things tend to get pretty heated when that happens; not a good recipe.
I'm sorry you had to deal with the aftermath. Your flurry of disappointed, exhausted-sounding comments reminded me of a service industry worker getting hit with a huge rush. There's a kind of PTSD that hangs around once the dust settles.
So, thank you for your efforts in trying to keep the site civil. It clearly ain't easy sometimes.
The main point I was trying to make was in highlighting the perceptual and emotional disconnect between knowing and working with someone personally, versus those who haven't (myself included).
Most people's perception of Sam was shaped in recent years, by press coverage that tends to treat him as the face of AI, with sentiment that usually goes something like: "hey, this guy's stealing all your water so he can take your job too, and by the way he lies a lot."
A couple follow-on points there were:
a) Dan shouldn't take it personally for not being able to control a tidal wave of negative sentiment stemming from that dynamic playing out.
b) I don't think it does anyone any good to dismiss the negative sentiment driving that as mere mob mentality. Even Sam appears to understand this quite well, in the very blog post the submission links to.
To echo another comment[0]:
>... while the vast majority of us think "holy crap, that's horrible" but aren't adding it because of course that's already been said and there just isn't any more nuance needed.
I agree; explicit condemnation just felt performative and hollow.
For what it's worth, I'm actually rooting for Sam assuming his words ultimately line up with his actions, and my opinion of him is neutral or slightly positive. I don't think it's widely appreciated just how crazy a position the guy is in; there's no way he can make everybody happy.
To touch on the hollow part: this is someone pg once described in so few words as more than capable of handling himself. [1]
I recall reading that years ago he insisted offices be swept for bugs after a visit by Musk, and he hangs out with similarly powerful people.
In other words, you don't operate in that world without your security already being excellent, and it's probably going to get even better now. Give it a couple years and he'll probably have a humanoid robot perimeter that'll smoke anyone on sight with a level of efficiency that is comical.
So, in that context taking a thoughts-and-prayers tone felt a little unnecessary.
[0] https://news.ycombinator.com/item?id=47732594
[1] https://news.ycombinator.com/item?id=7280124
I typically take jabs at the community here, but not this time. What you are seeing is a reflection of a wider, much more insidious problem. Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.
I think things will get a lot worse before they get better. Hopefully I'll be okay in my little corner of the world.
Violence is politics. It's the oldest and most universal form of politics, even found in other species, and even inanimate objects (types of rock subducting each other, we see the rock that floated to the top, that's practically Darwinism).
But humans don't like being killed so they developed systems to avoid violence. Speeches, voting, money, etcetera. It's all ways for people to arrive at a reasonable solution peacefully. It's always been backed by "if we don't do this, people start dying." But people have forgotten this and they're allowing those alternatives to fail. We stopped exposing the new generations to the suffering child of Omelas and they forgot what is necessary for society to exist. People think there is food on the table by magic and there are no wars by magic. And it is magic, these complex intertwined systems. They are amazing. But you must respect them, you cannot destroy them on a whim and still expect civilization to survive.
I agree. I think the lack of seeing a way out is a big component of this turn. You bring up politics and that's a good example. Who do I vote for, campaign for, etc. that actually wants me (an American citizen making around the median wage for my area) to be able to buy a home? To have affordable, accessible healthcare? I'm aging out of my childbearing years and am wrangling with the sorrow of not being able to afford a child. There are some promising local candidates and I do vote for them, but so many of these issues need to be tackled at a higher level due to their complex, interdependent nature.
There's nobody. There's red and blue with different culture war paint. I can choose whether trans women play in sports or if we pray at work, but I have no choice in the fundamental material reality of my life.
We're seeing this chaotic violence in part because there's no alternative. We know the old world is dying, but our leaders won't let anything else be born.
I was talking to my father a few days ago. He's a 67 year old man who's voted Republican my entire life - we'd have political sparring matches in the car when he forced me to listen to Rush Limbaugh as a teenager. Of his own accord, he started talking about the necessary end/change of our economic system. A man who'd banged on about the free market and considered himself a Libertarian for decades, and who still, when he does engage with the news, does so with right wing sources.
He's brighter than average, but not to an extreme amount. The understanding of the situation has trickled down to the point where every workplace has at least 1 or 2 people who understand how fucked everyday people are. My team at work is 6 people doing basic white collar work and we talk openly about how things are going to get worse, and there are nods to it cross-functionally all the way up to the top when our execs talk in an all hands. This is at a very apolitical giant mega corp.
None of these discussions would have happened 20 years ago. We still shy away from the specifics (candidates, policies, etc.) due to professionalism, but the broader picture (things will get worse for the average person and our troubling trends aren't going to be reversed anytime soon due to inaction at the top) is agreed upon regardless of voting record.
It kind of reminds me of being in an abusive household as a child. There is no escape and, once you've exhausted the 'official' channels, you start contemplating other options. I reported my mother to CPS once when I was about 7 and they didn't do anything (except piss her off obviously). On the other hand, the first time I smacked her back, the physical abuse stopped, and I've heard similar stories from men with abusive fathers - that there's a moment they realize they can actually go toe to toe and don't have to put up with it.
If all your abusers will listen to is violence and you're not allowed to escape/get out, it's reasonable to come to the conclusion that in this case violence is the answer. I see a similar dynamic/thought process emerging in the American public.
Something that I've observed happening throughout history is that in some sense "too much civilisation" can be a bad thing long-term.
I knew someone in the army who talked about how some officers wouldn't survive the first week of a real war. Not because of enemy fire, but because, given the opportunity, the men under their command would almost certainly take advantage of the "less civilised nature" of the battlefield to take out someone they despise enough to murder, but not quite enough to risk it in a civilian setting where the tolerance for unsanctioned lethal force is essentially zero.
Something similar happens outside of militaries too, where truly horrible human beings[1] can cynically utilise the enforced peace of civilized countries to do incredibly evil but legal things. The Sacklers come to mind as a prime example. They knowingly and deliberately sold highly addictive drugs marketed with brazen lies and killed about a hundred thousand Americans by some estimates. They are above the law and totally immune to all consequence, personal or otherwise. No violence will ever be done to them! Anyone that tries will be severely punished, because that upsets the "order" of civilised society where the rich and powerful can massacre millions, but the plebs can't ever lift a finger against even one of their cartoonishly evil oppressors without severe personal consequence.
"Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." -- Francis M. Wilhoit [2]
Sociopaths loooove civilised societies! They can mercilessly exploit people while basking in the protection of the law. As long as what they're doing is technically legal, they can get away with almost any amount of evil acts. This does take a while to build up! Norms, expectations, and the like keep the worst of the worst initially at bay, but these things slowly erode as more and more sociopaths take greater and greater advantage. (Cough-Trump-Cough)
This, taken far enough, where the common people are stepped on hard enough by those they can't ever bring to justice, can result in entire societies just... snapping in their rage. They just need the opportunity, a "push", or some enabling event. In the case of the "friendly fire incidents" taking out bad officers, it's a war. In most societies it is starvation or total economic hopelessness. We all know what this leads to: the French revolution is the prime example, but many others exist throughout history.
The failure of the United States is that its reins of power have been completely and utterly captured by the increasingly corrupt elite, and there is nothing the common people can do about it. Frustration is growing, slowly, but surely.
It's not quite at the boiling over point, not yet, and may take a century to get there, but given the direction things have been heading, it's just a matter of time until the people take their anger out in some direct manner.
Trump might have started the first pebble rolling by causing an oil shock. And gas shock. And fertilizer shock. I'm sure a lot of hungry, cold people who can't even get a job because the AIs have replaced them -- and used their cooking gas for energy -- will be perfectly fine with this and won't ever do anything about it! That would be uncivilized!
[1] Disclaimer: Sam Altman is no saint, but I don't think he's anywhere near the level that he'd deserve mob violence.
[2] At some level the people commenting here that it's shocking and horrifying that anything violent ever happens to a billionaire CEO are betraying their right-wing leanings. Conversely, the people arguing that the elite shouldn't be above personal repercussions for their actions are strongly left leaning.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
Therefore, here's a feature request: allow per-user killfiles. I currently have this through a Chrome extension but I'd love it to be native so that I don't have to use my own iOS app and so on.
Personally I don't see the value, but some people are less resilient (or more weak-willed) at seeing words they disagree with.
Here are a few things I find boring: https://wiki.roshangeorge.dev/w/Overmod#My_Stuff
One of the things I really like is to have a high-ratio of good content to slop content and I think manually curating out slop authors is the way to go for that. You'll see that my lists include things that other people seem to really enjoy.
That would be lovely. It's also an obvious feature which has existed in other contexts for a very long time, and it would be easy to implement. That means its omission was a deliberate design choice. It'd be interesting to understand why.
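For what it's worth, the core logic really is trivial - here's a minimal sketch of per-user killfile filtering (the data shapes and names are illustrative, not HN's actual internals):

```javascript
// A killfile is just a set of usernames whose comments should be hidden.
const killfile = new Set(["slop_author", "another_bore"]);

// Return only the comments whose author is not in the killfile.
function visibleComments(comments, blocked) {
  return comments.filter((c) => !blocked.has(c.author));
}

// Hypothetical thread data for illustration.
const thread = [
  { author: "alice", text: "interesting point" },
  { author: "slop_author", text: "low-effort take" },
  { author: "bob", text: "counterexample here" },
];

const shown = visibleComments(thread, killfile);
console.log(shown.map((c) => c.author)); // → ["alice", "bob"]
```

A browser extension does essentially this against the rendered DOM; doing it server-side per logged-in user would be a small table lookup at render time.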
(And no, just because Sam Altman is CEO of tech company doesn't make this news tech news.)
Further, being "apolitical" means supporting the current status quo.
I don't know how often you get to take a real vacation, somewhere away from the Internet and the USA, but this might be a good time to consider taking one?
> or saying they "don't condone violence" as a pretext to do exactly that
Maybe I just don't know what comments you're referring to, but you seem to be lumping every other post critical of Sam in with the worst comments, saying they are condoning violence, and that is disingenuous. I mostly see people expressing they aren't surprised this happened given how Sam openly markets his tech as a dangerous and unpredictable product that only he can steward, and maybe even finding his response to be a bit opportunistic in a tone deaf way, which hardly rises to the level of condoning violence.
I am willing to hear you out on this, but you're going to have to explain how this is different from any other thread on HN that you've moderated. Political violence, on a much bigger scale than this I may add, hits front page news, and you have more than normalized that as a discussion topic. Whether it's drone strikes, wars, or people being openly executed in the street, it seems the tragedy of human life is an open debate on HN, and you can bet a good 50% of this site will be writing comments exactly like the ones in this thread. And hell, I can't say one way or the other if threads like this are even worth allowing.
But now a tech CEO with lots of security gets a Molotov thrown at his metal gate, and people make the same comments, and suddenly a line has been crossed? How are the comments in this thread any different than comments like this, which involved people who were actually killed [1][2]. I have seen hundreds of comments on this site dictate to me how I should feel about the lives of others. I am often sickened by them. That's before we talk about Sam's actual role in how he shapes our society. It's not "sickening" to feel the need to footnote a condemnation of what happened, it's completely expected.
Again, maybe you're talking about worse comments than I'm seeing, but I feel frustrated as people have regularly brought you examples of escalating violent rhetoric on this site and been dismissed. Outside of people explicitly saying Sam deserved it, which I don't agree with, every other comment here reads like regular HN to me. If that saddens you, maybe there needs to be a different approach to moderation altogether.
[1] https://news.ycombinator.com/item?id=46551716 [2] https://news.ycombinator.com/item?id=47688076
As you encourage, I would also like to be a little bit charitable and say that some users might be clever at programming or know about certain technology subjects but when it comes to real life and morality they are stuck in early edgy teenager mode, so we can still work and communicate with them on other topics. I try to flag these submissions because I know that many users are completely unable to discuss them in fruitful ways. Many of us are immature.
At a societal level, the simplistic and edgy teenager morality is mostly expressed online so we being terminally online tend to notice it more. The morality might be most publicly seen in "silence is violence" which is a thought terminating cliche. Thinking is hard and changing one's mind is hard too, especially when people have these thoughts which literally stop them thinking.
Psychologically, for many, expressing these juvenile, half-baked, sloppy thoughts does not require much thought. They are cheap psychologically. It's like how being in a herd is actually comfortable and saves energy. It costs brain effort, and potential hurt to one's self-identity, to change one's brain patterns. Most people choose to avoid even the thought that change is possible, and wish not only to remain in Plato's cave but to keep their eyes closed to the shadows on the wall.
Another charitable thought: these worrying ideas are not actually ideas but emotions. For some users they try to argue with these people with logic but they should really connect emotionally - try to help the people feel for others, the good and the moral. Easiest to do with personal first hand real stories and not abstract ideas. To break down otherness through charity.
There are like 20 rules for commenting on this site. Pretty much all of them are versions of “have decorum”, and none of them are “do not advocate for violence”. It is not just tolerated but encouraged to post insane stuff here so long as it sounds highbrow enough (eg the “most charitable interpretation” rule. It is against the rules to call out stuff like advocating for violence if it’s written like Niles Crane wrote it).
As far as I can tell this thread is not really exceptional in any way other than some of the ire is directed at somebody that used to work for YC.
"Be kind" isn't about decorum and certainly excludes violence. If you ignore the most important one, of course you'll end up with a distorted view.
https://news.ycombinator.com/newsguidelines.html
> this thread is not really exceptional in any way
It was different when I first saw it last night - it was, as I've explained in other comments, very much a mob. But I did a bunch of the usual moderation things that we do to try to dampen such dynamics. (The part where I also expressed feelings about it was different, and not so usual. I've done that a few times over the years, but mostly try to process it offline.)
As for the implication that we only cared about how bad that thread was because of the specific individual involved, yes, that would also be pretty disgusting—but the fact is that I've done, and do, the same moderation on countless occasions, large and small, and it doesn't depend on who the target is. In fact it isn't about the target at all—it's about the community, and the poisoning effect that such threads have on us ourselves.
The guideline in full (at least as it’s presented on the page)
>Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Is meant to be read: “Do not advocate for violence. (decorum). (decorum); (decorum). (decorum)” ?
I would be surprised to find out that I am the one user on this website to have read “be kind” in that context to be an ambiguous suggestion about conversation quality or whatever rather than a rule about what topics of discussion are flat-out banned.
Given that virtually every other platform that facilitates user interactions has clearly-delineated guidelines about what is and is not ok to post about, eschewing that in favor of “be kind” sort of gives the impression that such guidelines here are unnecessary because people will… conduct themselves here… with, for lack of a better word, decorum.
Seeing as kindness is left entirely up to each user to interpret and decorum is described in detail, it is unsurprising to me that this site gets a lot of polite or analytical-sounding reprehensible rhetoric.
It is like if you made a rule that everybody has to have a prominent horn section, walking bass line, and off-beat rhythms with a calypso influence and then wondered why your second rule of “don’t be rude” didn’t stop everybody from playing ska.
I said I was ashamed, not surprised. It isn't surprising, and the feeling comes with the job every day. The difference is that I said something about it this time; I occasionally do, as long as it isn't often. Maybe once a year or so.
Or do you just think he deserves whatever’s coming and more because you don’t agree with his views or actions?
People here think that they're much smarter than they actually are.
When posts surface about Gaza, documented by the UN, by Médecins Sans Frontières, by the Lancet, by journalists who were subsequently killed while reporting or now in Lebanon, they vanish from the front page with remarkable efficiency...
The reasons, which I have collected like trading cards at this point, include: "too political," "not related to tech," "flamebait," "this isn't the forum for this," "not intellectually curious," and my personal favorite, "this will only generate heat, not light."
Entire hospital systems destroyed, aid workers killed in marked vehicles, tens of thousands of documented child casualties, and the curated editorial position is: not HN material.
A Molotov cocktail lands on a billionaire CEO's porch. No injuries. Likely a disturbed individual, and according to some well researched reporting in the New Yorker, Altman's personal life has generated no shortage of intense grievances that have nothing to do with AI or tech.
But here we are: front page, moderator editorial, existential crisis about the community's soul...!?
So help me understand the framework. Is violence HN-worthy when it is directed upward on the org chart? Is a zero-casualty arson attempt on a mansion more deserving of community reflection than the systematic destruction of civilian infrastructure, because one involves someone in the YC Rolodex?
You write that you've "never seen a thread this bad." I'd invite you to read the comments that appear in the eleven minutes before Gaza threads get flagged. They're remarkably similar in tone, just aimed at people who don't have Sam's publicist.
You say you want to "find something else to do with your life." Maybe that instinct is worth listening to. Since the AI boom, HN moderation has drifted from "intellectually curious forum" toward something closer to "curated narrative for the industry it covers."
When a platform consistently decides that violence against tech executives is a moral emergency but violence enabled by tech companies' contracts is "off-topic," the person setting that editorial line is not a neutral steward, they're an editor with a viewpoint.
And that's fine, but let's not dress it up as community values. So, in the spirit of consistency:
I'd like this post to be flagged. It involves no technology. It's a criminal matter best left to law enforcement. The comment section is, by the moderator's own assessment, irredeemably toxic. It is generating heat, not light. It is too political. It is not intellectually curious. It will attract flamebait.
In other words... it meets every single criterion routinely applied to kill discussions about violence that does not happen on somebody's porch in Pacific Heights.
Generally, world news and politics are not supposed to be submitted unless there's a tech industry connection. The exception seems to be world-changing news, and there's a light touch on YC-affiliated news for conflict of interest reasons.
> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.
https://news.ycombinator.com/newsguidelines.html
HN has had many major frontpage threads about Israel/Gaza. We haven't been suppressing the topic. I gather that you feel it should have more representation than it does, but that is a different issue; everyone feels that way about the topic they feel strongest about. Incidentally, the people on the opposite side from you believe that we're nefariously suppressing things in exactly the opposite direction, and direct their ire at us in much the same way that you have. (To put it crudely, we get hammered for antisemitism from one angle and genocide from another.)
You seem to be assuming that I'm not aware of what awful things people post in those threads. On the contrary, I'm sickeningly familiar with them and have banned many accounts for breaking the site guidelines there. If you know of a case that we missed—entirely possible, since we don't see everything—I'd like to see links. But you shouldn't assume that the moderators must be on the opposite side of an issue from you, or have no human feelings about it, when you happen to see something bad on HN. The likeliest explanation is simply that we haven't seen it yet.
There are many ways for a thread to be bad. You're right that people hurling tribal abuse at each other is one of those. However, even in the worst of those threads I don't usually see people justifying or celebrating specific violence against specific persons, and if I did see that, I would intervene. I think what shocked me in the current case was how the thread quickly turned into a mob dynamic with commenters vying to outdo each other, no doubt feeling that it is just fine to do that—indeed, righteous—because the object of the rage was $rich-ceo.
What I was saying is that a mob dynamic like that is not ok on HN even if the target is $rich-ceo. It's not "you can't do this on HN because the target is rich and powerful". It's "you can't do this on HN to anyone, even if they happen to be rich and powerful".
I gather that you won't believe me, since you've built an entire case on assuming the opposite. All I can tell you is that it is a deep misunderstanding. I've intervened in many such threads many times, regardless of whose harm, or attempted harm, the commenters were celebrating.
As for the notion of treating one incident of failed violence as more important than mass slaughter of children, I agree with you that that would be grotesque.
Or... You can keep telling a bunch of people with much bigger problems how ashamed you are that they are having an absolutely human response to the suffering of a man at the forefront of building a reasonably foreseeable suffering amplification machine within the context of a society that is organized around a social contract of exchanging capital for labor. I'm sure that shame you cast won't get "lost in the softmax" as the AI folks might say.
No more skin off my nose either way. Though I'd feel much better seeing some genuine humanity injected into cutting edge tech circles, I'm aware of the incentives, and also cognizant that sometimes, you have to leave the incentivized path to stay on the Right one. That's a lesson it isn't in any one person's capacity to teach though. Sometimes... it takes a community to get the point across. Even then though, you can lead a horse to water...
HN (and ycombinator) has implicitly enabled, dogwhistled at, or pretended to ignore all sorts of hateful and violent rhetoric. Sometimes it hides behind a veneer of "curious conversation", but other times it's disgustingly blatant: the last article I saw about sama was filled with horrific racism.
I come here because there are sometimes good posts, but this stuff has been here the entire time. Now that it's your guy getting the hate, you're acting like it's the worst thing in the world?
Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.
The world I have lived in for longer than 10 years is HN. I'm gut-wrenchingly familiar with the worst things that people post here—probably more than anyone, simply because it's my job.
If you can dig up a single example of a thread this bad that we knew about and didn't do anything about, I'd be shocked, because it would go against everything I believe and feel. Perhaps you can, nonetheless? If so, let's see it.
Here's what I mean by "this bad", if you want to calibrate:
https://news.ycombinator.com/item?id=47727099
https://news.ycombinator.com/item?id=47725722
https://news.ycombinator.com/item?id=47725717
https://news.ycombinator.com/item?id=47726427
The number of people who feel that anything at all is justified if it reinforces their feelings—particularly their angriest and most vicious feelings—is so large that it's clear that it is human nature in action, and that makes me yearn for a cool and heavy rock to crawl under, with moist earth to sink into.
https://news.ycombinator.com/item?id=47659135
There was horrific racism on display right here. Perhaps it just seems part of the background noise to you... but at the time, some of those posts felt just as bad as calls to violence, or worse.
But to compose something more substantial... it's probably all too much to neatly tie up in a single reply to a thread.
Hell, at the last protest I went to there were people driving by cavalierly playing "Bomb Iran" (written in 1980, and trotted back out every time the topic is back in the zeitgeist). It seems like the only real difference there is abstraction. Supporting violence is [unfortunately] deeply embedded in our culture.
Perhaps the popularity of this thread is causing you to preemptively seek out more terrible comments, rather than letting flagging do its thing?
Maybe try looping over popular divisive threads, and reading the flagged short comments that didn't get many upvotes. There is a lot of fucking hate in the world.
(and certainly a hat tip to you for making it your job to sort through it so we don't have to see much of it. But if this is hitting you differently (personally) than the usual flood does, perhaps you need to take a step back?)
Be honest with yourself -- underneath your admonishments against people here is a personal policy that promotes and enables far worse things than a molotov cocktail or more against Sam Altman.
People talk about war and advocate for war all the time here. Y Combinator itself funds arms companies, and surveillance companies. Altman himself is a defense contractor! How many climate change deaths is Sam Altman personally responsible for?
I live in a country that America has threatened to annex. I live in a part of that country where American money is pouring in to fund a separatist movement to facilitate that annexation. My country is allied with another country that America has threatened to invade.
I'm content to live my life and do my own thing with no intent to cause harm to others, and the goal of minimizing the harm I do cause but apparently that is a luxury I am not afforded in life. So what do I do? I just keep living my life the best I can and hoping something changes in the national dynamic in America.
If that means Americans start squabbling and attacking their oligarchs instead of attacking me so be it. It's not the world I want to live in either, but it's better than a world where Americans are focused and united on attacking me.
Have you ever shed a single tear for a Russian oligarch who 'falls out a window onto a pile of bullets?' I doubt it. That's how I feel about Altman.
Just be honest Dang. We're all living in sin here. We're all entwined into an economic system that is built off of slavery and theft.
"The Nazis entered this war under the rather childish delusion that they were going to bomb everyone else, and nobody was going to bomb them. At Rotterdam, London, Warsaw, and half a hundred other places, they put their rather naive theory into operation. They sowed the wind, and now they are going to reap the whirlwind."
The tech scene isn't the small, tight-knit thing it used to be. This site is now enormous. Discussion quality seems to have sort of "regressed to the mean"... the larger HN gets and the more people join the discussion, it starts to resemble the median social media site more and more. At some point it sorta loses its purpose.
I'm still addicted to HN, but I've gone through times where I've set my password to a UUID and time-lock encrypted it to lock myself out, because posting here has gotten worse and worse and worse for my mental health (and there's no way to delete your account here... I've emailed you about it in the past and never got a response.) On some level I hate HN now. TBH if this site was gone tomorrow, I'd most definitely be better off for it in the long run, and I'm sure I'm not alone here.
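The lockout trick described above can be sketched in code. This is purely a hypothetical illustration of the general idea, not how the commenter actually did it: the function names are mine, and the sequential-SHA-256 delay is my assumption, loosely in the spirit of Rivest-style time-lock puzzles.

```python
import hashlib
import uuid

def lock_me_out(iterations: int):
    """Replace a memorable password with a random one, then seal it
    behind a sequential-hash time lock so recovery takes real time."""
    password = uuid.uuid4().hex            # new 32-char random password
    seed = uuid.uuid4().bytes              # puzzle seed, safe to keep
    key = seed
    for _ in range(iterations):            # inherently serial work
        key = hashlib.sha256(key).digest()
    # XOR the password bytes with the derived key to "seal" it
    sealed = bytes(p ^ k for p, k in zip(password.encode(), key))
    return password, seed, sealed

def recover(seed: bytes, sealed: bytes, iterations: int) -> str:
    """Pay the hashing cost again later to get the password back."""
    key = seed
    for _ in range(iterations):
        key = hashlib.sha256(key).digest()
    return bytes(s ^ k for s, k in zip(sealed, key)).decode()
```

The delay scales linearly with `iterations` and cannot be parallelized, since each hash depends on the previous one; setting the account password to the random value and discarding it (keeping only `seed` and `sealed`) enforces the cooling-off period.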
Thanks for all the work you've put in over the years though. This site has held out longer than most, and for a time, was one of the best places on the internet for discussion of any kind, let alone tech. It deserves a place in history for that alone.
I'm not sure whether HN comments have gotten worse in general - these things fluctuate a lot, over long stretches, and the fundamentals are more or less the same over time.
Despite my emotional statement, I'm not really thinking of packing it in. HN obviously does more good than harm, even though it's popular for people to say the opposite (and even part of the game to say it).
None of those news items or comments made you want to get away from this, but now your YC buddy is the target, and whatever else is used to justify it? When ICE killed American citizens, when schoolgirls were killed, it was all 'we flagged this as flamewar, and what now', but now that he is part of the cadre, NOW it is disgusting? I would laugh if this wasn't the fucking future we are in, just sucking up to these assholes.
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
https://www.cnn.com/2026/04/10/tech/suspect-arrest-openai-ce...
Why do so many worship this guy so much and feel for his pain, but then don't mind others being treated violently?
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
In all seriousness, we’ve got glorified autocorrect right now. Even suggesting any of these LLMs is actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50 he should avoid throwing in buzzword acronyms for no reason.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
The piece is authentic—as in, “that’s so Sam!”—but not genuine, as in, “I don’t believe it reflects his intentions.”
This could be splitting hairs, I don’t know. The terms are different in my head but rarely do I come across an instance where it’s at all clear how.
Very reasonable response when you take a step back.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, with people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
Actions have consequences. I’m sorry. Read a history book.
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.
It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.
It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.
What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
Gee almost like someone you don’t want in your society at all.
... could THIS be the reason why it happened now and how?
There's no way this is organic
I was joking. This "not in my white picket fence side of the world" is anything but shocking on HN or pretty much any online forum largely populated by people from those sides of the world. HN loves using a microscope, but sometimes rather a telescope with alarmingly selective dexterity.
I don't think any of these people will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)
Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
If your goal is to improve the system then you always want to move away from it.
Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
At some point a broken system enacts soft violence on people, so it isn't surprising that people act out when they think survival is at stake. With healthcare, it really can be. But where is the line? When someone you know dies? 10 people?
It is messy.
Because of the valuations of Open AI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.
This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.
Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.
The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.
Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.
The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.
We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.
In retrospect, too, we can say that the assassination of Hitler, had it succeeded, would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commit murder.
Despite all the high-minded talk, Americans have always been comfortable with violence, since before it was a country: pick a year and I can find 10+ extrajudicial violent incidents. A surprisingly large percentage of US presidents have faced assassination attempts.
Seeing no changes after Sandy Hook made it abundantly clear to me that occasional violence - even on innocent child victims - is the price America is willing to pay for other freedoms.
Why do we care what he thinks? Let's discuss his work if we have to, not his emotional pondering and playing the victim.
I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.
I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.
Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.
Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."
The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
And those bosses are hoping a combination of drones and Altman's AI will keep them safe the next time. Meanwhile we've got Altman selling his AI to the military with essentially no restrictions, telling us we just need to patiently wait for all the good things it's going to do for the common man.
Just keep grinding and waiting, he can’t tell you what the benefit will be for you but he promises it will be amazing!
I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?
Academia doesn’t get to just assert that their broader definition is the real one.
Sigh
No one said he did.
> That disruption is already coming no matter what.
[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.
> He's a fine enough steward of the tech.
He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
There is security, and there is bombing schools. Guess which one is Altman associating himself and the software he sells associating with?
Are you Sam Altman?
I'm fairly radical in my opinion regarding AI, more so AI companies. AI is a fascinating thing, but it's abused by capitalism to be something it is not and shouldn't be, to be sold to people who don't need it and to "revolutionize" a world that didn't ask for it. Most importantly, who (in a democratic sense) elected those tech leaders to make decisions that influence all our lives? Those very tech CEOs are so far away from normal human life, and I find it disgusting.
Still, the way to combat this is not violence. It won't help anything, since there are enough people to fill the roles. More importantly though, as much as I personally hate Sam Altman, he hasn't done anything specifically targeting individuals. You might call him a psychopath, an illusionist or whatever, but he doesn't seem to be trying to make people's lives worse. He might want to make his own life better, and that's egotistical, but you know, that's the world we live in. Many people are egotistical. I would see Sam Altman more as a symptom of general societal developments. If we don't like what's happening, we have to fight what's happening. Trying to kill people (and especially innocent ones!) is so far from a solution and from the right thing to do. Post shit about him on the internet, hate what he does, but attack his family? Man, I don't think that should be our level of moral compass.
I do very much understand the frustration. But that's not the right path. He might be scum, but he has as much right to live as everybody else. If we don't like what he's doing, we have to fight it - via discourse, collective engagement, whatever.
Edit: I did read that the molotov was thrown at the entrance gate. From what I gather, the entrance gates of huge mansions do not actually pose a threat to people, so it could be read as more of a political message than an actual attack on people. I could somewhat understand that, given the limited means normal people have to get heard. Still, I don't think it does anything positive.
Elon was accused of this too.
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
There are people in control who don't make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans; and they know human nature quite well, which allows them, if not to predict, then at least to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
I am not sure who exactly is that one person ? Is it Altman, who is according to many people not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it ?); is it the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; the pilot who flew the plane ?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.
I see a difference.
US law sees a difference too. The person that threw the firebomb will get the full weight of the law if they are caught, and spent an awfully long time in prison.
Those that killed the school girls will never face punishment.
We should call it what it really is: the oligopolization of intellectual work. The capital barrier to entering this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing headfirst into this one-way door.
The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hypergrowth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market and everyone is doing it, but these are hollow words. Industries that care about safety tend to slow down.
Without missing a beat, she said, "If humanity's loss was that complete, there would be no historians."
I responded that I never said they were human historians.
Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.
_Now_ it is indeed too late.
If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.
Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engage in violence. Whereas this was just a dangerous stunt.
My point is, we've seen this movie and killing Sam Altman is uncomfortable but justified.
Well Zuck has that big scary hedge, and I’m sure people have been going after him for ages.
> I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
Great! Is the plan to wait until after the billionaires have their AI controlled military drone swarms to have this revolution? Because they already control your government - I don’t think you will achieve anything like this through legal means
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
Apply this to guns.
Then look how this works in the US. You could, but then a law was made to protect gun manufacturers, The Protection of Lawful Commerce in Arms Act.
AI will get this treatment I’m sure.
I also vigorously dislike the industry, but your "I'm on the skeptic side of AI" stance is something you need to address. Saying this in the friendliest way possible: you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools - the kinds of things we need to prevent companies like Google from doing with it, and no one is paying attention to that.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products; he's the guy who makes the decision to supply this tech directly to the US government, which is on the record about using it for military operations. And you're right on the last point. Sure, the 20-year-old guy who threw a Molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?
Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?
How about calls for violence against Putin during his war of aggression?
This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)
If you said "yes" to all of the above, I'd love to know your reasoning.
If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
Not my personal view.
Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.
If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
Like when you poop on the clock?
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like it's some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
It was only a matter of time. The font size on the dollar signs kept increasing; selfish humans will always crack eventually. Keeping it open would have required making it a public utility. Private companies don't do altruistic things unless they benefit.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
But it seems a distant hope at best.
I agree. The French Revolution was really, really mean.
This is our only chance to transition to a post-scarcity society. We won't have another. Allowing them to monopolize access to AI is a fatal mistake.
I broadly agree. But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this sort of deciding and it hasn't gone great so far, and there is no sign that it's actually helped.
The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.
If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
Just saying.
It's easy to say we need to be willing to accept short term pains when it's someone else who has to bear the brunt of them.
Please avoid swipes like this on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
whether this way or in slow motion mass attacks on people.
an attack on a society that lasts years is still an attack and i wish the collective we would realize this.
“it’s ok if millions suffer now for me to realize my dream” is just wrong.
i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
He said "All you had to do was pay us enough to live"
And this wasn't done by someone homeless or unemployed.
Similar here with the guy going straight from the crime scene to OpenAI HQ to get caught
> organized anti-AI terrorism too
There were already memes about that
> When you have people out there saying “AI will kill us all”
It's the "clickbait" mechanism becoming more cancerous
How about Ted Kaczynski (Unabomber)? Attacking the tech elite was his deal.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
https://youtu.be/aYn8VKW6vXA
did he find his PR agent on Upwork or does he just think we're all morons?
If it wasn't for the effective policing, I think that such incidents would be more common.
This implies you have knowledge of future events, which means you could make a lot of money grifting on Polymarket
Genuine Q
The so called "woke" (in case someone whines about me using this term and says the usual "it's not a thing", I'll define it as basically the delusional left, who have become extremists due to wallowing in their own outrage social media info bubble 24/7) are so up their own ass about their supposedly superior moral values that they have come full circle and become void of the most basic morals.
They find all sorts of justifications for violence (even lethal) against anyone they deem "evil" in their own warped, highly subjective opinion. Their opinions can be summarized as "I'm against violence and have an extremely high moral compass, but it's OK to kill this particular civilian because of these really good reasons I'm about to cite". The reasons are often terrible and not based on verifiable, objective reality, but the meme-induced righteousness is 10/10.
There is not a single drop of self-doubt, and anyone arguing against them gets immediately labeled "evil" as well, regardless of how rational and well explained their opinions are. These people are outraged by the killing of IRGC, Hamas or Hezbollah leaders (some of the most deserving of violence) but justify violence against someone like Sam.
The same people who say that words are violence and that misgendering someone is tantamount to putting their life in danger had no problems celebrating the cowardly assassination of Charlie Kirk.
I can cite many other examples but this comment is getting longer than intended and I've made my point.
I agree with the moderator here. It's become sad how even in a community like this, full of supposedly well-educated, intelligent people nonsense like the above has become the norm.
Lastly, what people like this don't realize is that by behaving this way they're the worst enemies of their own cause. As awful as Trump and his supporters are, my motivation to vote for the party full of progressive wokesters has almost completely dried up. I feel like the D party no longer represents me as a rational person interested in fact-based, civilized discussions and the policies that come out of those. The left has become as hateful and hysterical as the right; in many ways, it has even surpassed the right. I'm now stuck in the middle watching both sides become more and more extreme in their views and lose all humanity, while at the same time, completely delusionally, believing in their own moral superiority.
https://www.lemonde.fr/en/france/article/2026/04/07/the-stra...
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.
What a bullshit thing for someone who is not actually democratizing access to AI to say.
I’m still waiting for that open iMessage standard Steve promised. Maybe this year?
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history things used to be a lot more violent as workers had to earn their rights with blood. The state had to respond by first attempting to squash it violently and second compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* Given how ChatGPT is a people-pleaser and has him around, how Claude philosophically muses about whether its subjective experience is or isn't like a human's and has Amanda Askell, and how Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
Allegedly his own name isn't even clean! There's an ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
It was a performative action.
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
And especially Elon should stop putting his child on top of his shoulders as a meat shield while at the same time saying that people wanted to murder him.
And I mean all of them, left wing, right wing, corporate. I am sick of every level of power in the country being filled with lying grifters. I don’t care what happens to them, as long as they’re gone.
I feel like I’m living in a circus.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
Sam Altman being removed from the equation would make the world an objectively better place.
Fuck off Sam. And stay safe out there.
> If your claim is that violence is justifiable - who makes the determination for such justification?
We authorize people in governments to make this determination, and increasingly machines. Should we? Do you think that it is acceptable to let a police officer justify force on behalf of the state? How about a machine? Mostly just trying to understand what you think is acceptable here.
But to answer... violence against human beings is indeed different than setting shit on fire, though the law certainly does not allow for the use of force against personal property either. And this difference is indeed the crux of the issue, depending on what your values are (though we seem to be in alignment on "life is valuable"). If, for example (probably a bad one, but hopefully it gets the idea across), a group of people is committing a genocide, and you ask them to stop, and they do not, and so you interfere with the use of force... limited at first, maybe, but they do not stop: is their continued involvement not the justification for use of force, assuming other strategies are off the table? Different example than the thread, I realize, but my thought experiment is not tied directly to it, just to the sentiment.
[citation needed]
> a group of people is committing a genocide
if you are asking if violence is OK to fight violence, it always is. I guess I personally did not think that needs justification but 100% you can (and should) fight violence with violence
What FOBO smells like, is what's happening.
Or keep on doing deals with the DoD and pushing to replace desperate people's jobs.
Cute kid. I'd rather be raising my family in peace than dealing with what you deal with.
@dang You have a bullshit filled unrelenting job, thanks for doing it.
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything I’ve been just thinking about this for a while and trying to understand what am I missing in this argument.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
I focus on the question of vigilantism because that I think is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is so evil , why don't we take him to court? What kind of possible actions could we not jail or fine him for but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Your response is a cop out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered. No matter how you phrase it.
Evidently, even HN could only keep up the pretense that tech development is amoral and apolitical for so long.
If you're OK with victim-shaming here, doesn't it say more about you than Altman? What does it say about your viewpoint?
You really don't need to go that high up the ladder to find members of the 'list of planetary really bad guys'. Sam Altman is single-handedly responsible for starting the current DRAM crunch - and that based on an untenable economic framework. He's also an enthusiastic participant in the AI bubble that threatens to cause a massive global economic depression when it pops. He's also involved in the cabal that wrecks the labor market (wages) by hyping up the 'AI will replace labor' narrative. On top of all that, he and his ilk are on a building spree of data centers that will guzzle huge amounts of energy and dump tonnes of extra CO2 into the atmosphere, as if there's no tomorrow. This wrecks the hard efforts of millions of others before him to rein in the damage caused by climate change. Needless to say, all of these have pretty deleterious effects on the economy, the biosphere and the welfare of ordinary people, including the loss of innumerable lives.
But does he care? He is one of those people who simply ignore the trail of serious damage and enormous suffering they leave in their wake, because they don't see anything beyond money - more money than they can spend in a hundred lifetimes! Nobody needs a justification to see him as one of those 'planetary bad guys'.
> What does it say about your viewpoint?
As someone else here said, it goes without saying that lobbing Molotov cocktail at anyone is a no-no. I don't support physical violence in any form. Having said that,...
> If you're OK with victim-shaming here
It's sad that the aristocratic society didn't learn anything from the murder of Brian Thompson. The 'victim' had caused thousands of preventable deaths per year, and his death saved thousands by forcing the industry to deal with the problem. Suddenly, even the pacifists (like me) are left wondering if the death was unethical. If true justice existed, the state would have stopped them from their crimes (aka professions), if not outright executed them for the lives lost. Whom will you choose when they pit their own lives against thousands of innocent lives? You can't claim victimhood after putting yourself in that position.
I read the New Yorker article like most people here. I didn't find anything incendiary enough in it to provoke a Molotov attack. I wouldn't put it past him to have arranged it himself, given how much he lies and what he stands to gain from it. But let's assume that the attack is real and is connected to the report. The reply seems overly dramatic and self-righteous, given that the attack was against his iron gate! He's milking the situation to indulge in virtue signaling, sympathy farming and gaslighting the critics. This is one hell of a victim pose! But I have no sympathy to spare if it distressed him so much. He shouldn't be able to sleep anyway, if only he had a conscience. Advocating sympathy for the unsympathetic super-privileged is a bit tone-deaf under such circumstances. Evidently, nobody is in the mood to oblige such manipulation.