Discussion (183 Comments)
Also, since I was older I feel like I was able to get away with those redefinitions a lot more often…
If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, that they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.
I read it twice (admittedly quickly) but couldn't grasp the point even though I felt like it was there.
Any AI researcher who continues to work here is morally compromised.
For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.
So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It’s a long shot, sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
In extremis, were the people working for Pol Pot just good patriots with no moral culpability?
We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse you from your actions.
In fact, I think international tribunals have existed which operated on just those principles.
You propose that other governments' militaries would not be so compromising. Seems reasonable.
But the question then becomes, what is the operative distinction between the two?
"Our enemies would have no qualms building a weapon that will end life on earth! We better build it first because we're the good guys!"
The point is - this happens everywhere, it's not just some weird western thing.
Still have faded Bernie stickers on their cars, No Kings organizers, “fuck SF I’m in the east bay for life fuck tech” - and you all make 7 figures Monday - Friday by supporting the death of society and democracy.
I don’t dare say anything though because “money is money”, the bay is expensive..but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some auditing capability that is agentic to verify rather than take them at their word.
Next step: is Google FedRAMP'd yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
> The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.
Seems concerning?
question as old as time itself
(Yes, I recognize that past military entanglements do read as favors for Big Oil, but that’s more because lobbyists directly purchased the corrupt and useless Congress)
So Google can't tell the government it needs a warrant to perform a search? Google can't sue over something the government did?
It's Google's product they want to buy.
Congress and the courts obviously.
If you think there's a hole in the law tell your congressman, don't, for some reason, try and put Google or any Ai company above the government.
The first is fully neutered. The second is far too slow.
"Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat-out annoying.
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
Reality is, this ship sailed once the US/Palantir rolled out AI target selection.
Having your work used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with that money and where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
Vote in elections, local and general.
btw i am not making a judgement call on the ai usage issue itself, just saying that this and taxes are more equivalent than it might seem
sure if you're Lockheed you might be screwed, but that's not the case for Google. Military contracts, or even government contracts as a whole are a tiny fraction of the King Kong Sized gorilla that is Google.
The fact that Anthropic puts up a fight but OpenAI/Microsoft and Google don't I find hard to characterize as anything other than pathetic. These guys could, if they wanted to, afford a lawyer or two to push back on the administration. They do that pretty successfully with their taxes in most places, btw.
Remember that even the Third Reich had laws!
And starts the lying to our faces. The public and private (from your own employees!) consensus is that it should not be used for those things at all, regardless of “human oversight.”
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
[1] https://www.nytimes.com/2025/09/20/us/politics/tom-homan-fbi...
https://en.wikipedia.org/wiki/Torture_Memos
"When the president does it, that means that it is not illegal." - Richard Nixon
I've had the unfortunate experience of working at a startup that started courting some autonomous weapons companies and HOLY SHIT were they the bottom of the barrel. Levels of incompetence you wouldn't believe, just good ol' boys who wanted to play with energetics. Then the company I was working for also hemorrhaged all their top engineers because they found the work unsettling.
The takeaway is that your refusal to assist these shitheads does have an impact, they have to pay more for talent and they have a much harder time courting good talent.
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.