Discussion (245 Comments)
-- Mark Gurman, Bloomberg (https://x.com/tbpn/status/2016911797656367199)
Probably smart time to rent and not buy if they plan on buying in a downturn.
https://blog.google/company-news/inside-google/company-annou...
Be careful what you wish for.
The 2010s were marked by Intel's lazy product lineup: year after year it pumped out rehashes of older products, iterating on top of its 14nm lithography with increasingly minor improvements to its architecture until AMD overcame it. In the process, Apple's partnership with Intel became a liability it had to solve, and the push to the unified ARM architecture was no small feat.
If you ask me I don't think it's justified to degrade the user experience for the sake of focusing on this. It's a trillion dollar company, and has been for a while. Sure it could have tackled both, but what do I know.
In any case I think it explains really well why Siri feels so abandoned.
You have to remember all of the AI companies are making cash bonfires. People aren't going to stop buying iPhones because Siri can only do what it does now.
If Apple focuses on hardware and skips the pay-for-inference bubble they'll come out the other side with the best consumer hardware everybody already has for local inference which is going to eat the whole industry's lunch.
nvidia is going to have a hard time convincing people they need to buy $1000 LLM inference hardware. Apple isn't going to have a hard time convincing people to buy the next generation of phone/tablet/laptop.
And in this particular war it's even worse: unlike a traditional war, the "winner" will actually just be the "biggest loser".
Really not true both in real wars and in tech wars. There's no evidence to support this claim.
Android only exists as the dominant mobile platform because it went to full scale war with Apple when the iPhone launched. Those that didn't take part and came after the battle have like <1% market share and Apple and Google are printing money from the cut to their app stores.
Apple doesn't take part in the AI race because whichever AI wins the war in the end, it'll have to be on their App Store to reach the users, so Apple wins regardless due to their App Store monopoly. AIs are no threat to their phone, laptop and App Store business.
But Google can't afford not to take part in this race because AIs are a threat to their search and ads business.
Same with real wars: the US is the world superpower because it got involved in WW2 even though it didn't have to. Same with Russia and Ukraine: provided they don't wipe each other out scorched-earth, their militaries will be the most advanced on the planet at the modern drone warfare they invented, and once the war is over every other military on the planet will be paying them for their gear and expertise, which some already are.
Anthropic probably couldn't give the uptime guarantees that Google can, right?
If you have terms that conflict with Google's, they aren't very flexible. Anthropic can be similarly difficult, and its needs from a business perspective probably don't align with Siri. I would imagine that Google has a more flexible, long-term approach to absorbing some risk in a revenue-share arrangement than Anthropic, who generally wants cash.
Anthropic’s only purpose is to juice whatever KPIs are gonna increase their IPO market cap.
https://daringfireball.net/linked/2025/12/01/gurman-pooh-poo...
This is the important point.
Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.
But if they are running the models on their own servers, why not!
Yuck. a lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?
I feel the same. The quality of both submissions and discussions has considerably decreased. It is still the best general-purpose “aggregator” I know of, but it is not what it was. It is becoming more and more flavor-of-the-month hype and boring group-think.
HN was great due to the breadth of unique, interesting, nerdy topics, most of which I would have never come across on my own; and the insightful thought-provoking commentary, often by insiders with unique insights and perspectives.
Now it is just the same LLM agentic coding harness hype cycle astroturfing 100x engineer 37k LoC/day BS I could get from Reddit or LinkedIn or Twitter or anywhere else.
The moderators are still doing a fantastic job though! I feel like that is the last big differentiator from just being orange Reddit.
Both the really old-timey graybeard techies and the green-haired alternative techie communities are dwindling in number.
The title refers to most machinery being a "centaur," meaning a thinking human is carried by the machine doing the heavy lifting, while the goal of AI companies is to replace high value work with the opposite. They want to turn people into meat appendages that serve unthinking machines.
arse
To the first question, the answer is yes: most people live their lives mindlessly, with or without LLMs (think of every idiot you knew 20 years ago throwing in punch lines from "Friends" to sound "funny"). To the second question: most people have a twisted view of identity. It is supposed to mean something identifying you uniquely, but to most people it means identifying you as a member of a large group (nationality, political view, religion, major music genre you like). So now that every proverbial Tom, Dick and Harry uses LLMs to generate Confluence content with shiny emojis, what are the proverbial Emily or John to do? Of course they will adopt this new identity; it's who people are now: shallow, hollow puppets for LLMs to fill in. And think of the irony: mother Nature perfected this super-efficient, low-energy and highly capable thinking machine that each and every one of us holds in their skull. It already put us on the moon once, before we even had a semblance of a functioning computer! And we choose to throw it away, for fucking what? Verbal diarrhea and pain-inducing coloured walls of text?
All so some retarded antisocial VC-funded "AI founder" can call themselves a tech visionary?
(sorry couldn't resist)
I look at all those files the same way as IDE configuration cruft: it's workstation-specific configuration that shouldn't even go into source control. I would .gitignore all of those files. Is this not what is done in industry?
EDIT: Wow, thanks for all the replies. Very eye-opening to see what's happening outside of my hobby-experimentation with the technology. I was coming at it with the assumption that 1-2 out of 20 people on the team were using CLAUDE.md, so why have it in source control. But if all 20 people are using it, I can see the benefits. This reply chain has really opened my eyes, thank you HN.
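Concretely, for a team that did want to keep the per-workstation pieces out of source control, the entries might look like this (filenames vary by tool; the local-override names below are assumptions based on common Claude Code conventions, not a canonical list):

```gitignore
# Workstation-specific IDE/agent configuration
.idea/
# Personal per-developer Claude overrides (the shared CLAUDE.md stays committed)
CLAUDE.local.md
.claude/settings.local.json
```

The thread's consensus, though, is that the shared CLAUDE.md itself belongs in the repo.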
I tend to include a well-documented justfile, so between the README and that, common commands are covered. If there’s a style guide, it should be its own file, or summarized in the README.
If Claude is making errors I tend to just update my global Claude file, but I haven’t updated it in 6 months — only to disable Claude signatures on generated commit messages.
otherwise it's like leaving vim dotfiles in the repo or something
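As an illustration, a sketch of the kind of well-documented justfile mentioned above (recipe names and commands are hypothetical; a Rust project is assumed):

```just
# Show available recipes when run with no arguments
default:
    just --list

# Run the full test suite
test:
    cargo test

# Check formatting and lints before pushing
check:
    cargo fmt --check
    cargo clippy -- -D warnings
```

Because recipes are self-describing, both humans and coding agents can discover the project's common commands from one place.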
It's critical that it's part of the source code.
They often describe:
- Overall architecture
- Repository layout
- Processes to use
- Things not to do: code styles to avoid, libraries to not use, etc.
While they’re primarily documenting these things for an agent, the information is similarly useful to a human.
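As a sketch, such a file might cover those sections like this (all contents hypothetical):

```markdown
# CLAUDE.md

## Architecture
Monorepo: `api/` is a Go service, `web/` a TypeScript front end.

## Repository layout
- `api/` - HTTP handlers and business logic
- `web/` - React client
- `docs/` - design notes and runbooks

## Process
- Run `make test` before committing.
- Keep commits small and focused.

## Things not to do
- Don't introduce new dependencies without discussion.
- Avoid `any` in TypeScript; prefer explicit types.
```

Nothing here is agent-specific; it reads like a terse CONTRIBUTING guide, which is why it is useful to humans too.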
The number one reason is: you are on a 10-dev team and it just doesn't make sense for everyone to waste their token budget creating separate instances of this file, which also requires ingesting the whole repo... That is 50, 60% of it.
The other bit is that you have a review pipeline hooked into CI/CD, and it is the easiest way to tell the bot how to review your code.
I used to be a purist about IDE configurations, but if everyone isn't on the same page about formatting and stuff like that you see a lot of file churn as things move around.
I would have said the same thing about the .github/ folder, but I've had to add things to it to prevent Copilot from thinking bad patterns in existing code are actually good patterns that should be repeated.
It makes more sense when your communication between teammates is constrained to the repository, because your other communication channels are already saturated. They're meta concerns that really have nowhere to go outside the repository without getting lost.
IMO that is what automated static analysis jobs are for. Let me configure my IDE how I want.
VS Code is one notorious offender in that realm; it will try to commit settings.json even if your gitignores are set up to ignore all other cruft.
In general, the question of what should go in the source folder is a bit of a mess. Source code, README and LICENSE make enough sense, but what about files describing project governance or CI configuration logic? Or what about files that are used to make the forge you're using render the repository in a certain way (for example: bug tracker templates)? Those are all cruft insofar as they have nothing to do with code, but it's generally agreed that you're supposed to commit them, maybe in a dot-folder if necessary.
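For the VS Code case mentioned above, a common convention is to ignore the folder wholesale and allow-list only the files the team actually wants shared:

```gitignore
# Ignore personal editor settings by default...
.vscode/*
# ...but keep the files the team has agreed to share
!.vscode/extensions.json
!.vscode/launch.json
```

This keeps settings.json (per-workstation) out of the repo while committed recommendations still reach everyone.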
Version control everything (inputs)
The idea of having to repeat something to your computer is ridiculous.
Also it looks like there's a compilation step to these files, which is interesting. The raw file was included, not the environment specific file.
And tests, linter configuration, doc...
If tools or LLMs can help them with it then that's fine, but it should always be at least two humans involved, one making changes, one verifying, and if something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.
No, AI code review doesn’t help. Claude can’t even give me correct line numbers 80% of the time, literally just makes them up, and more than half of it is false positive BS anyway.
Our brain is designed to fill in gaps, it's why memory is so blurry when it comes to reciting the facts of what we saw in a trial.
It's why you could swear you saw "x" in the production software you were about to push. But it really comes down to expectations - and those expectations help reduce cognitive load/increase cognitive efficiency (resource usage).
So as more and more people get used to using AI, you will see these mistakes occur more frequently. Because it's how our brains work.
Like doing long division by hand instead of trusting a calculator.
I'm not sure why. It just doesn't feel very Apple-like
It is no secret that Apple has an enormous R&D budget.
It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.
So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.
What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.
I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.
After all, as others have pointed out already here ... whilst the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple have been patiently sitting, watching the bandwagon trundle along the rails.
Having worked there this is a perfect description of the organization from my experience.
> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.
> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.
100% agree
It's a standard engineering practice; otherwise it's like refusing autocomplete because autocomplete is not right 100% of the time.
You can include project/team-based md files in your repo and exclude env/system md files (e.g. from your home directory, which includes your personal coding instructions).
So yeah.. nothingburger.
Seems like at some point most of the actual humans just gave up on replying.
Had some issues with my monitor: it apparently saw a connection to my Mac Mini, but the Mac Mini displayed black; it somehow got out of sync with my monitor. Sleeping the display controller and then waking it solved it.
Gathered a bunch of data, wanting to submit a report, since I've been an Apple Developer Program member for like two days now, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, so I opened up Feedback Assistant.
It asks me for my email, I input it, press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick practically, but tiny stuff like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never have released something that isn't perfect, but now these UX rough edges are absolutely everywhere across the OS.
I ended up not logging in at all, wrote my fix into a tiny fix-display.swift file which I'll run when it happens instead.