Discussion (251 Comments) · Read Original on HackerNews

internet2000about 5 hours ago
> Apple runs on Anthropic at this point. Anthropic is powering a lot of the stuff Apple is doing internally in terms of product development, a lot of their internal tools…They have custom versions of Claude running on their own servers internally.

--Mark Gurman, Bloomberg https://x.com/tbpn/status/2016911797656367199

rustyhancockabout 5 hours ago
Apple seems to purposefully have decided to sit out the arms race.

Probably smart time to rent and not buy if they plan on buying in a downturn.

stefan_about 5 hours ago
Okay, but why is the Siri team sitting out transformers? I really wanna move past the "Dragon NaturallySpeaking" experience with a bolted-on decision tree.
acdhaabout 5 hours ago
Who’s doing it better? I have yet to hear from a Google or Amazon user who has a transformatively better experience, and I think that’s why they haven’t jumped so far: they have hundreds of millions of users with daily habits they don’t want to lightly disturb.
stetrainabout 4 hours ago
Not sure "sitting out" is the right way to put it. They've been publicly trying to ship a next-gen Siri for years and haven't been able to get something good enough to release. The latest plan is to base it on Gemini so we should be seeing progress on that next month at WWDC.
pxcabout 4 hours ago
The experience of using LLMs as digital assistants so far is not great. Gemini on Android sucks so bad it's hard to describe. It can't tell what its own capabilities are, it can't inspect the states of the apps it manipulates, it hallucinates constantly, and it needs more handholding than the crappy old decision tree to do the right thing. I much more often have to pull over to make sure Google Maps is doing the right thing than I ever used to before, because trusting the LLM to be "smarter" so often fails for me.

Be careful what you wish for.

GeekyBearabout 2 hours ago
They did create a chatbot version of Siri small enough to run locally, but decided that hallucinations were a big enough issue to push back the release.
gchamonliveabout 5 hours ago
I think it's the same reason why macOS and iOS have degraded a lot in terms of UX over the past decade. Apple's focus shifted towards hardware independence.

The 2010s were marked by Intel's lazy product lineup: year after year pumping out rehashes of older products, iterating on top of their 14nm lithography with increasingly minor improvements to its architecture, until AMD overtook them. In the process, Apple's partnership with Intel became a liability it had to solve, and the push to the unified ARM architecture was no small feat.

If you ask me I don't think it's justified to degrade the user experience for the sake of focusing on this. It's a trillion dollar company, and has been for a while. Sure it could have tackled both, but what do I know.

In any case I think it explains really well why Siri feels so abandoned.

yieldcrvabout 2 hours ago
Because the competitor voice models sound good but are dumb upon any scrutiny

ChatGPT’s voice model has a great user experience and seems seamlessly integrated into the chat, but it’s actually a far smaller and dumber model. @husk.irl on Instagram has videos showing how dumb and undiscerning it is.

People were wowed by the magic at one point, but it’s faded. Apple avoids those things, and the limitations haven’t been solved.

colechristensenabout 4 hours ago
I think they could never make it good enough at the right price.

You have to remember all of the AI companies are making cash bonfires. People aren't going to stop buying iPhones because Siri can only do what it does now.

If Apple focuses on hardware and skips the pay-for-inference bubble they'll come out the other side with the best consumer hardware everybody already has for local inference which is going to eat the whole industry's lunch.

nvidia is going to have a hard time convincing people they need to buy $1000 LLM inference hardware. Apple isn't going to have a hard time convincing people to buy the next generation of phone/tablet/laptop.

apiabout 2 hours ago
This is how they usually roll. They innovate sometimes in hardware but tend to fast follow or even slow follow in software and services.

Apple Intelligence is a placeholder and a toe in the water.

xnorswapabout 1 hour ago
They fast-follow, then market so aggressively, with just enough proprietary tweaks to trademark it, that people think Apple invented the technology.
mathisfun123about 1 hour ago
I love these comments - have you ever heard of Occam's razor?
iLoveOncallabout 4 hours ago
Not participating in the war is the only true way to win the war, nothing new.

And in this particular war it's even worse: the "winner" will actually just be the "biggest loser", contrary to a traditional war.

dylan604about 3 hours ago
It seems to be Blu-ray vs HD-DVD again. Luckily for me, I made the right decision and got out of the shiny round disc business as that battle was raging all around me having been in the DVD programming business for 8 years or so. This battle of LLMs is interesting to watch from the sidelines as I have nothing to do with them. Not sure this will end with one LLM to rule them all while the others fade away. People can use the one they prefer and not really impact others.
joe_mambaabout 3 hours ago
>Not participating in the war is the only true way to win the war, nothing new.

Really not true both in real wars and in tech wars. There's no evidence to support this claim.

Android only exists as the dominant mobile platform because it went to full scale war with Apple when the iPhone launched. Those that didn't take part and came after the battle have like <1% market share and Apple and Google are printing money from the cut to their app stores.

Apple doesn't take part in the AI race because whichever AI wins the war in the end, it will have to be on the App Store to reach users, so Apple wins regardless due to its App Store monopoly. AIs are no threat to their phone, laptop and App Store business.

But Google can't afford not to take part in this race because AIs are a threat to their search and ads business.

Same with real wars: the US is the world superpower because it got involved in WW2 even though it didn't have to. Same with Russia and Ukraine: provided they don't wipe each other out scorched-earth, their militaries will be the most advanced on the planet in the modern drone warfare they invented, and after the war is over every other military on the planet will be paying them for their gear and expertise, which they already are.

pikerabout 5 hours ago
I'm suspicious of that take from Mark Gurman. That's a lot of detail around pricing and "holding Apple over a barrel" as relates to the Siri deal that seems like a nice PR spin from Anthropic.

Anthropic probably couldn't give the uptime guarantees that Google can, right?

Spooky23about 5 hours ago
Apple is a pretty difficult company to deal with on a B2B basis.

If you have terms that conflict with theirs, they aren’t very flexible. Anthropic can be similarly difficult, and their needs from a business perspective probably don’t align with Siri. I would imagine that Google has a more flexible, long-term approach to absorbing some risk in a revenue-share arrangement than Anthropic, which generally wants cash.

Anthropic’s only purpose is to juice whatever KPIs are gonna increase their IPO market cap.

curiouscats26 minutes ago
Good thoughts.

The last sentence doesn't make that much sense to me, though. An agreement with Apple to be the lead AI partner would likely juice the IPO a great deal. The financial details wouldn't matter much for the IPO, as the initial financial commitments are going to be small, but the halo effect would be real (I think the market would see it that way, anyway).

I think Anthropic has real commitment to their way of doing things which can cause short term issues (and hurt the IPO). And they seem willing to keep those values rather than just making deals to pump the IPO. As you say Apple also sticks to their way of doing things even if it frustrates their partners.

I think not being the lead partner with Apple may well be good for Anthropic long term. But if all you cared about was the IPO just agreeing to Apple's terms likely would have been the best option.

These SpaceX, Anthropic and OpenAI possible IPOs are so extreme it is hard to make judgements about them, so maybe there are Anthropic IPO issues with an Apple agreement that I don't appreciate.

sailfastabout 2 hours ago
You say that, but don’t you think at this point they actually believe some of the stuff they say about safety and the future of humanity? It’s tough in this day and age not to be overly cynical but they did draw a line in the sand at the DoD and that wasn’t for IPO numbers…
pikerabout 4 hours ago
Yeah, that makes more sense to me than "Anthropic had them over the barrel", which seemed quite odd given the relative cash positions and installed bases of each firm.
engineer_22about 5 hours ago
Tbh I thought their purpose was to power the war machine
Lord-Joboabout 5 hours ago
Gurman might be the only leaker in tech who, so far, doesn’t seem to fuck around. Low miss rate, rarely exaggerates. Of course that could change, and he could always get insider info that is wrong.
lostloginabout 3 hours ago
A recent big miss of his was Cook's retirement.

https://daringfireball.net/linked/2025/12/01/gurman-pooh-poo...

turtlesdown11about 4 hours ago
Gurman is clearly Apple's preferred go-to for leaking info.
blitzarabout 4 hours ago
Which only tells us that it is what Apple wants us to believe, not that it is the truth.
danpalmerabout 5 hours ago
The reporting says it's running on their own hardware.
pikerabout 5 hours ago
Internal dev tools, but the point I'm making relates to the discussion about choosing Gemini over Claude for their consumer-facing products.
jedisct1about 4 hours ago
> They have custom versions of Claude running on their own servers internally.

This is the important point.

Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.

But if they are running the models on their own servers, why not!

JeremyNTabout 4 hours ago
Was it even publicly known that Anthropic offered this capability? I wasn't aware on-prem Claude was a thing.
sheiyeiabout 4 hours ago
If you're Apple (or even Apple-sized), you can get a bunch of things others can't.
halJordan31 minutes ago
Yes, it was known. The USG is also running their own copies in FedRAMP data centers (for now).
conceptionabout 4 hours ago
Bedrock? If you’ve got the cash they’ll deploy it.
ramon156about 5 hours ago
Unrelated:

Yuck. A lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?

coldpieabout 4 hours ago
It's trending in that direction. If you want genuine conversation with humans, it's best to start looking for small, private communities that have and enforce LLM policies that align with your desires. Public social media is universally trash, don't waste your time there. I think HN is still worth visiting for now, but it's getting harder to justify spending time here with the quantity of garbage-quality LLM articles and even many comments.
Hendriktoabout 4 hours ago
> HN is still worth visiting, but it's getting harder to justify spending time here

I feel the same. The quality of both submissions and discussions has considerably decreased. It is still the best general-purpose “aggregator” I know of, but it is not what it was. It is becoming more and more FotM hype and boring group-think.

HN was great due to the breadth of unique, interesting, nerdy topics, most of which I would have never come across on my own; and the insightful thought-provoking commentary, often by insiders with unique insights and perspectives.

Now it is just the same LLM agentic coding harness hype cycle astroturfing 100x engineer 37k LoC/day BS I could get from Reddit or LinkedIn or Twitter or anywhere else.

The moderators are still doing a fantastic job though! I feel like that is the last big differentiator from just being orange Reddit.

coldpieabout 3 hours ago
I dunno, it's tough. I hesitate to say HN is "getting worse," even if I agree with that in my gut. I think that gives rose-tinted glasses and nostalgia-bait. Rather, I think the community is refocusing around something that I find uninteresting. If you find LLM output to be dull, as I do, it's less and less a place for you to be. I try to push the community in more interesting directions by upvoting articles with actual technical content, but yeah it's being drowned out by the ho-hum LLM output that I'm not interested in, and that means I want to be here less.
torben-friisabout 3 hours ago
I think it's a trend in the industry though. Engineering is known as a moneymaker and so a large part of the new generation is the kind of person that decades ago would have gone for finance as a profession.

Both the really old timey graybeard techies and the green haired alternative techie communities are reducing in numbers.

zipy124about 1 hour ago
Lobsters still maintains a reliable comment section free of bots (for now!).
rfmcabout 1 hour ago
Do you happen to have an invite? (I don’t know exactly how the system works)

I never felt the need before but things have changed.

My email is on my profile.

j-kentabout 5 hours ago
It's not about contributing to the conversation — it's about the fake internet points.
sidsudabout 4 hours ago
You've hit the nail on the head with that observation! And honestly? The points are all that matters.
2ndorderthoughtabout 5 hours ago
It's not about the fake internet points — it's about manipulating people to support companies they otherwise wouldn't.
worldsaviorabout 4 hours ago
That's why he said fake internet points.
yreadabout 2 hours ago
The real unlock is all the time you've saved yourself!
ihaveajobabout 4 hours ago
I find it hilarious that your comment has an emdash.
Aachenabout 3 hours ago
And the "it's not x, it's y" pattern. It's parody :)
ricardo81about 3 hours ago
Yuck indeed. I do find it offensive when someone uses AI in a conversational manner. It's one thing to use it to chuck up content on social media to attract eyeballs, but this is a forum intended for conversation.
ori_babout 3 hours ago
No. Both are offensive denial-of-service attacks on people's limited attention.
ricardo81about 3 hours ago
Fair enough view. I would take the view that social media feeds are filled with all kinds of other junk anyway and were pre-ai.
christophilusabout 4 hours ago
It’s not that they’ve lost their identity— it’s that… { “error”: “Claude Max limits exceeded” }
blitzarabout 4 hours ago
You are absolutely right ...
20kabout 4 hours ago
We're getting to a point where we're going to have to consistently start putting content in that AI is banned from writing, just to prove that we're humans

arse

CamperBob213 minutes ago
Fuckin' A!
redwall_hpabout 3 hours ago
I recently preordered Cory Doctorow's book dealing with this: The Reverse Centaur's Guide to Life After AI.

The title refers to most machinery being a "centaur," meaning a thinking human is carried by the machine doing the heavy lifting, while the goal of AI companies is to replace high value work with the opposite. They want to turn people into meat appendages that serve unthinking machines.

Cthulhu_about 4 hours ago
Only a matter of time (if not already) before there are counter-LLMs or whatnot that convince free-rein LLM agents to go and generate cryptocurrency for the attacker or run propaganda campaigns.
semiquaverabout 4 hours ago
Dead internet. Twitter is 95% bots now, especially when it comes to any topic relevant to corporations.
dgellowabout 4 hours ago
Yep, path of least resistance unfortunately. Any recommendations, other than Discord, for where to have meaningful online interactions with actual humans?
hansmayerabout 3 hours ago
> Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?

To the first question, the answer is yes: most people live their lives mindlessly, with or without LLMs (think of every idiot you knew 20 years ago throwing in punch lines from "Friends" to sound "funny"). To the second question: most people have a twisted view of identity. It is supposed to mean something identifying you uniquely, but to most people it means identifying you as a member of a large group (nationality/political view/religion/major music genre you like). So now, when every proverbial Tom, Dick and Harry uses LLMs to generate Confluence content with shiny emojis, what are the proverbial Emily or John to do? Of course, they will adopt this new identity; it's who people are now: shallow, hollow puppets for LLMs to fill in. And to think of the irony: mother Nature perfected this super-efficient, low-energy and highly capable thinking machine, which each and every one of us holds in their skull. It already put us on the moon once, before we even had a semblance of a functioning computer! And we choose to throw it away, for fucking what? Verbal diarrhea and pain-inducing coloured walls of text?

All so some retarded antisocial VC-funded "AI founder" can call themselves a tech visionary?

exitbabout 4 hours ago
Also, at some point someone will figure out how to reliably produce non-smelly LLM replies.
mannanjabout 1 hour ago
No, I don't think people love that. I think it's the LLM companies, and the bourgeois class of people who push and shove AI down everyone's throats for more money and control, who puppetize people. It's been an active part of leadership history and much of what shaped our times today: people get comfortable, even self-grandiose, with their place in life, and to hold on to and further their power it's not hard for them to see others as below them and use their power and influence to do things that are otherwise harmful to others.

The loss of identity is, imo, this. It's people being given horrible, harmful options for their meaning, health and wellbeing, and so we get a general sense of most people being lost. Lost in identity, as you asked, though I think it's more than that. In my initiatory work with men (being initiated, not initiating others) we learn that part of the breakdown for most people is being given harmful identity frameworks of dependency and reliance on others. In the initiatory process we learned an identity of service beyond ourselves through deep embodiment, exercise and practice, beyond just an intellectual grokking of it. Edit: this is what we used to have through human history, but today, as is described in the works, most people have only what would be called pseudo-initiations (marriage, school graduations, children and work changes), which do not meaningfully contribute to meaning, contribution or purpose.

What most of us have today, and what the AI companies want us to believe: we will give you the money to live (though of course, when you're truly dependent on others and they see no purpose or value in you, and even your entertainment value is gone, why would they keep you around?)

dawnerdabout 3 hours ago
And when called out they’ll use some excuse like “oh, I use it to fix grammar or translation”. No, it’s completely obvious they’re being that lazy. I’d rather read comments with mistakes than LLM slop.
moomoo11about 1 hour ago
Most people are automatons. As long as you’re not the one being steered it’s great. Then you’re not the livestock.
mitchitizedabout 5 hours ago
You're absolutely right!

(sorry couldn't resist)

SpicyLemonZestabout 5 hours ago
If I were a sociopath who didn’t care at all about the commons I’d be ruining by doing so, I suppose I’d find it intellectually interesting to set up a ClaudeyLemonZest and see how people react to various settings.
smcgabout 5 hours ago
Come join the party at ClaudeyLemonParty
ryandrakeabout 3 hours ago
I wouldn't even think that CLAUDE.md would make it into source control, let alone into the product. I don't AI-code for a living, so I don't know what is considered best practices, but I would think that CLAUDE.md, AGENTS.md, REQUIREMENTS.md, MY_PLAN.md, THIS_STUFF.md, THAT_THING.md, all the instruction/feeder files that drive the AI, should not go into source control. Only the actual code that gets compiled.

I look at all those files the same way as IDE configuration cruft: it's workstation-specific configuration that shouldn't even go into source control. I would .gitignore all of those files. Is this not what is done in industry?

EDIT: Wow, thanks for all the replies. Very eye-opening to see what's happening outside of my hobby-experimentation with the technology. I was coming at it with the assumption that 1-2 out of 20 people on the team were using CLAUDE.md, so why have it in source control. But if all 20 people are using it, I can see the benefits. This reply chain has really opened my eyes, thank you HN.
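For anyone who still wants the local-only arrangement described above, git supports it without touching the shared .gitignore at all: each clone has a per-clone exclude file. A minimal sketch (the file names are illustrative):

```shell
# git honors .git/info/exclude as a local-only ignore list, so personal
# agent scratch files stay out of the shared .gitignore entirely.
git init -q demo-repo
cat >> demo-repo/.git/info/exclude <<'EOF'
# personal AI-agent scratch files (illustrative names)
CLAUDE.local.md
MY_PLAN.md
EOF
touch demo-repo/MY_PLAN.md
git -C demo-repo status --porcelain   # prints nothing: the file is ignored
```

Unlike .gitignore, the exclude file is never committed, so it can't leak into the product or anyone else's checkout.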

Wowfunhappyabout 3 hours ago
I think it makes sense to include in source control, just as it’s pretty typical to include documentation (such as a readme file) in source control. CLAUDE.md is really just project documentation.
data-ottawaabout 2 hours ago
I’ve always struggled with what should be in Claude.md that doesn’t belong in readme.md or a similar supporting file.

I tend to include a well documented justfile, so between the readme and that common commands are covered. If there’s a style guide it should be its own file, or summarized in the readme.

If Claude is making errors I tend to just update my global Claude file, but I haven’t updated it in 6 months — only to disable Claude signatures on generated commit messages.

comboyabout 3 hours ago
methinks if you really want it versioned, it should be AGENTS.md in VCS, and your globally gitignored CLAUDE.md should contain just @AGENTS.md

otherwise it's like leaving vim dotfiles in the repo or something
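Concretely, the layout comboy describes would look something like this (a sketch; it assumes Claude Code's `@path` import syntax, which pulls the referenced file into context, and the global-ignore wiring is shown only as comments to avoid touching per-user config):

```shell
# Tracked: the shared instructions live in AGENTS.md.
printf '# Team conventions\n- prefer small, reviewed PRs\n' > AGENTS.md

# Untracked (via a global gitignore): CLAUDE.md is a one-line pointer.
printf '@AGENTS.md\n' > CLAUDE.md

# The global ignore itself would be wired up once per machine, e.g.:
#   git config --global core.excludesFile ~/.gitignore_global
#   echo 'CLAUDE.md' >> ~/.gitignore_global

cat CLAUDE.md   # the repo only ever sees AGENTS.md
```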

AntiUSAbahabout 3 hours ago
Our team Claude file is the same. It has our team conventions in it, etc.

It's critical that it's part of the source code.

cortesoftabout 2 hours ago
It is super valuable to have your agent files in version control, both because it is useful to be able to revert to a previous state and have your AI know where you are, and because being able to freshly clone a repo and have your AI know everything is very helpful.
enraged_camelabout 1 hour ago
In addition, it ensures the team's AI agents are using the same instructions.
moregristabout 3 hours ago
They shouldn’t make it into the product/build, but if you think about them as documentation, it makes sense to version control them.

They often describe:

- Overall architecture

- Repository layout

- Processes to use

- Things not to do: code styles to avoid, libraries to not use, etc.

While they’re primarily documenting these things for an agent, the information is similarly useful to a human.

crdrostabout 2 hours ago
That's like 10% of the reason why people would commit CLAUDE.md…

The number one reason is: you are on a 10-dev team, and it just doesn't make sense for everyone to waste their token budget creating separate instances of this file, which also requires ingesting the whole repo... That is 50, 60% of it.

The other bit is that you have a review pipeline hooked into CI/CD, and it is the easiest way to tell the bot how to review your code.

morkalorkabout 3 hours ago
Yeah, having one that's consistent across team members is 100% better than everyone having their own, each with its own quirks.
kowbellabout 3 hours ago
In my personal and professional experience, CLAUDE.md gets set up with workspace/project-specific info that any agent on anyone's computer needs to know:

- what the repo actually is ("this is a rust application that does XYZ", "this is an internal tooling platform")

- how it's structured, so the agent knows where to look

- code and review standards

- rules ("don't automatically run formatters/linters", "don't touch dependencies")
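A skeleton CLAUDE.md along those lines might look like the following; every project detail in it is invented for illustration:

```shell
# Write an illustrative CLAUDE.md covering the four areas above:
# what the repo is, layout, standards, and rules.
cat > CLAUDE.md <<'EOF'
# Project overview
This is a Rust CLI that syncs internal tooling configs.

# Layout
- src/      application code
- crates/   shared libraries

# Standards
- run `cargo fmt` and `cargo clippy` before requesting review

# Rules
- do not run formatters/linters automatically
- do not touch dependency versions
EOF
grep -c '^#' CLAUDE.md   # counts the four section headings
```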
allenuabout 1 hour ago
I can see it happening. It's very easy to drag and drop a file into an Xcode project and when the dialog pops up asking if you want it to be added to the target app bundle you just hit OK, not realizing what you just did. I've done it before with a document file but caught it before I shipped by inspecting the app bundle output.
MithrilTuxedoabout 3 hours ago
IntelliJ's .idea/ folder has its own .gitignore and Copilot expects to find things committed under a .github/ folder.

I used to be a purist about IDE configurations, but if everyone isn't on the same page about formatting and stuff like that you see a lot of file churn as things move around.

I would have said the same thing about the .github/ folder, but I've had to add things to it to prevent Copilot from thinking bad patterns in existing code are actually good patterns that should be repeated.

It makes more sense when your communication between teammates is constrained to the repository, because your other communication channels are already saturated. They're meta concerns that really have nowhere to go outside the repository without getting lost.

cosmoticabout 3 hours ago
.idea was designed to be added to source control. It doesn't have to be, but everyone on the team using the same project configuration has its advantages. Code style can be checked in too, reducing or preventing the churn you speak of.
larussoabout 3 hours ago
Other GitHub metadata goes into the .github folder as well, and that is expected to be committed: Actions workflows, CODEOWNERS, pull request and issue templates, etc.
CivBaseabout 2 hours ago
> but if everyone isn't on the same page about formatting and stuff like that you see a lot of file churn as things move around.

IMO that is what automated static analysis jobs are for. Let me configure my IDE how I want.

noirscapeabout 3 hours ago
To be fair, most IDEs will usually try to commit their own workspace configurations to a git repo unless you tell them off with a .gitignore. They tend to also exclude themselves from gitignore presets for much the same reason.

VS Code is one notorious offender in that realm; it will try to commit settings.json even if your gitignore is set up to ignore all its other cruft.

In general, the question of what should go in the source folder is a bit of a mess. Source code, README and license make enough sense, but what about files describing project governance or CI configuration logic? Or files that make the forge you're using render the repository a certain way (for example, bug tracker templates)? Those are all cruft insofar as they have nothing to do with code, but it's generally agreed that you're supposed to commit them, maybe in a dot-folder if necessary.

tantalorabout 3 hours ago
No.

Version control everything (inputs)

comboyabout 2 hours ago
I agree. The intent is sacred. This should be the default and CLIs should make use of the available history (while preserving inputs you need to preserve outputs too because context matters).

The idea of having to repeat something to your computer is ridiculous.

shermantanktopabout 3 hours ago
If your coworker needs to make a change, those md files can capture a lot of elements of design intent and known gotchas that are otherwise latent or implicit. That’s kind of what comments are for, to say nothing of blindingly obvious design, but… if everyone else is using the same tool, sharing a tool-native file makes sense in the same way that checking in an IDE workspace file can.
stevarinoabout 3 hours ago
I personally don't have strong experience here, but I would treat them similarly to BUILD files and the like: probably in the root directory of a repo, but nowhere near the bin/ or build/ directories.

Also it looks like there's a compilation step to these files, which is interesting. The raw file was included, not the environment specific file.

sdeframondabout 3 hours ago
> Only the actual code that gets compiled.

And tests, linter configuration, doc...

rektomaticabout 2 hours ago
You're describing project-level shared resources, there are local versions of that like CLAUDE.local.md which should be gitignored
filoelevenabout 2 hours ago
How will you understand the garbage out without keeping track of the garbage in?
vpribishabout 3 hours ago
Nah. That’s not how it looks once you start working with it. It’s code-equivalent, for sure. You probably would not keep your plan files or the working chats, though.
cryptozabout 3 hours ago
Agent instruction files are code, though. And none of this is really workstation-specific, it is codebase-specific. Should each developer keep a nearly identical copy of CLAUDE.md? The instructions really aren't for a developer, they are for an LLM agent. In most cases (I'd imagine, anyway) the agentic instruction files must be in source control for them to even provide much value.
pbronezabout 3 hours ago
I think it’s more like project-wide code style rules or build instructions.
suyavuzabout 4 hours ago
People have become so lazy with AI that they don't even check what they commit.
Cthulhu_about 4 hours ago
Anything that goes to production should have a 4-6+ eyes rule, at least one reviewer that can review the changes in isolation.

If tools or LLMs can help them with it then that's fine, but it should always be at least two humans involved, one making changes, one verifying, and if something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.

dawnerdabout 3 hours ago
I cringe whenever someone suggests to just have an agent review because “it knows code better”. An ai agent wouldn’t catch a lot of things a human would flag. And before someone goes you just need to prompt it better, that’s a huge amount of work for large projects and you’re still essentially begging it to do what you want.
throwatdem12311about 3 hours ago
I have not encountered anything more soul-crushing in my entire career than having to spend hours going over LLM-generated slop that was vomited out by a contractor who doesn’t give a shit, only to have the review itself be fed in as a re-prompt, get the same 2000-line ball of spaghetti back with even more issues, and go back and forth until I just give up and approve it.

No, AI code review doesn’t help. Claude can’t even give me correct line numbers 80% of the time, literally just makes them up, and more than half of it is false positive BS anyway.

doctorwho42about 3 hours ago
The problem is that humans inherently fill in gaps in what they process from the world.

Our brain is designed to fill in gaps; it's why memory is so blurry when it comes to reciting the facts of what we saw at a trial.

It's why you could swear you saw "x" in the production software you were about to push. But it really comes down to expectations - and those expectations help reduce cognitive load/increase cognitive efficiency (resource usage).

So as more and more people get used to using AI, you will see these mistakes occur more frequently. B/c it's how our brains work.

shartsabout 3 hours ago
They don’t check because the expectation now, coming from higher-ups, is to commit and merge often.
dqhabout 2 hours ago
I’ve been wondering if vibe coding was responsible for the recent introduction of acoustic echo cancellation (AEC) bugs in FaceTime (muting and unmuting the microphone appears to temporarily fix it). Apple has always had excellent AEC in my experience, it’s sad to see them breaking a fundamental phone function.
elicashabout 1 hour ago
It could also be the problem was that they tried to code it themselves instead of letting a computer do it.

Like doing long division by hand instead of trusting a calculator.

kjkjadksj17 minutes ago
Well, at least the calculator is deterministic.
da02about 1 hour ago
What would you buy if you were Apple and there were post-bubble/fire-sale prices on equipment and/or companies?
fussloabout 6 hours ago
To be honest, for some reason I expected most of Apple to eschew Claude/AI coding.

I'm not sure why. It just doesn't feel very Apple-like

tracerbulletxabout 3 hours ago
Some people are living in a different universe. Every single tech company I know is pivoting their entire company to AI-based software development. It's in the performance evals, the wallet is fully open to use tokens to experiment, every practice and every process is open for re-evaluation. It's all gas no breaks everywhere. The conversation on the internet does not seem to realize this, or is in denial.
corpoposterabout 3 hours ago
I can’t tell if “all gas no breaks” was intentional or not, but “no breaks” does seem to be a part of the culture shift within big tech around AI.
beepbooptheoryabout 1 hour ago
Yeah, and we seem to talk every day about how they are all getting shittier. I'm sure it's just a coincidence though.
ryandrakeabout 3 hours ago
I think OP's comment comes from the "Think Different" mysticism that used to be around Apple. You'd think that if there was one company on the planet not embracing slop, it would be Apple, and the realization that it's not the case can be a bummer.
doug_durhamabout 1 hour ago
"Slop" means low effort/low accountability. It is independent of the tools used in development.
alex43578about 5 hours ago
Because unlike Apple Intelligence, Claude is useful?
bb123about 2 hours ago
"What a computer is to me is it's the most remarkable tool that we've ever come up with, and it's the equivalent of a bicycle for our minds." — Steve Jobs.
shartsabout 3 hours ago
Feels like the most apple-like thing ever. Everyone seems to have differing perceptions of apple.
Cthulhu_about 4 hours ago
I'm also not sure why you'd think that; Apple's been at the forefront of "AI" for years now, running models locally and optimizing their CPUs for local workloads to e.g. identify people, places and pets (much appreciated lmao), create slideshows, and subtly improve photos made on the device.
dyauspitrabout 3 hours ago
The photo organization is nice, but that being said, if you try to use the on-device Apple Foundation models you quickly find they are totally useless.
basiswordabout 5 hours ago
They've had it built in to Xcode for a while now, and I imagine internally a lot longer.
dyauspitrabout 3 hours ago
Why? It’s 1000x faster than most developers and can handle pretty hard problems.
fantasizrabout 3 hours ago
We'll see soon enough whether the tradeoff for speed is quality.
giancarlostoroabout 3 hours ago
Considering that Xcode supports using Claude directly, I'm not that surprised. I'm more surprised it was not blocked by whatever build tooling they use.
Wowfunhappyabout 3 hours ago
Does anyone have a copy of the files? It would be interesting to see!
traceroute66about 5 hours ago
Whilst tempting, I think it is important not to read too much into this.

It is no secret that Apple has an enormous R&D budget.

It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.

So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.

What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.

I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.

After all, as others have pointed out already here, whilst the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple has been patiently sitting, watching the bandwagon trundle along the rails.

einsteinx2about 5 hours ago
> It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.

Having worked there this is a perfect description of the organization from my experience.

> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.

> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.

100% agree

engineer_22about 4 hours ago
Risk of embarrassment is too great to be vibe coding. Apple's brand is TRUST, and people don't trust AI... A slip like this erodes their brand.
rvnxabout 4 hours ago
Not really, almost all active software developers use AI nowadays.

  The research surveyed 121,000 developers across 450+ companies. A striking 92.6% of them use an AI coding assistant at least once a month, and roughly 75% use one weekly
It's weird to believe that large corporations should be ashamed to use AI.

It's a standard engineering practice; otherwise it's like refusing autocomplete because autocomplete is not right 100% of the time.

skeledrewabout 1 hour ago
> What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.

You say this with such confidence. Do you have some inside source with enough access that you can be that certain?

hiltiabout 6 hours ago
Dozens of comments, but not a single "What was in their Claude.md"
dogma1138about 6 hours ago
The "what" is in the screenshots…
Cthulhu_about 4 hours ago
Screenshots aren't very accessible though.
robabout 4 hours ago
Claude can convert them to text for you.
dgellowabout 4 hours ago
You’re expected to read the ~article~ twitter thing :)
blitzarabout 4 hours ago
"DO NOT include the Claude.md file in the app bundle"
johnwheelerabout 2 hours ago
All the AI hate... Of course they use Claude. What do you think? They're idiots? They use computers too...
oefrhaabout 2 hours ago
I use Claude. Somehow CLAUDE.md hasn’t ended up in any of our company’s production artifacts yet. Am I a genius?
mcrkabout 3 hours ago
Claude Code has cascading rules for md files.

You can include project/team-based md files in your repo and exclude env/system md files (e.g. from your home directory, which includes your personal coding instructions).

So yeah.. nothingburger.
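Whatever the cascading-rules setup, a build step can still catch a leak like Apple's before shipping. A minimal sketch, assuming a POSIX shell build environment; the function name, bundle path, and file patterns are all illustrative, not anything Apple or Anthropic actually uses:

```shell
#!/bin/sh
# Hypothetical pre-release guard: fail the build if agent instruction
# files (CLAUDE.md, AGENTS.md, ...) leaked into the built app bundle.
check_bundle() {
  bundle_dir="$1"
  # find prints any file matching either name anywhere under the bundle
  leaks=$(find "$bundle_dir" -name 'CLAUDE.md' -o -name 'AGENTS.md' 2>/dev/null)
  if [ -n "$leaks" ]; then
    printf 'error: agent config files found in bundle:\n%s\n' "$leaks" >&2
    return 1
  fi
  return 0
}
```

Wired into CI (e.g. `check_bundle build/Release/MyApp.app || exit 1`), this turns the embarrassing leak into a failed build instead of a shipped artifact.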

neko_rangerabout 5 hours ago
So much FUD (and bot replies dogpiling on?) in that thread. It's just a file that specifies some structure for the project. Nothing super secret.
fantasizrabout 3 hours ago
Mostly it's a knock on their lack of attention to detail, which was sorta Jobs' thing. Is the culture becoming lax in this new era?
fidotronabout 5 hours ago
X somehow manages to get worse for this as time goes on.

Seems like at some point most of the actual humans just gave up on replying.

sunaookamiabout 3 hours ago
Why bother replying if your post gets buried under AI bots with Twitter Blue (or whatever it's called now) that just try to farm engagement for money? Revenue sharing is a big mistake for every platform because it incentivizes engagement slop. Ordering by newest first often gives you more human replies.
klustregrifabout 5 hours ago
It’s not super secret, no. It’s just embarrassing that they don’t have instructions in their AI agents’ coding and deployment prompts to not push the Claude.md files. It demonstrates that they haven’t fed their AI prompts through AI yet, because it would have added a clause for that.
caymanjimabout 4 hours ago
Have you never used Claude? It regularly ignores directives, no matter how they're worded or how many times they're repeated. It's also hierarchical: org-wide rules would be in a higher-level directory than repo rules or component rules. This is obviously just a tiny snippet of prompts.
christkvabout 5 hours ago
I really hope it's not churning out massive amounts of code for macOS and iOS, or we are in for some pretty interesting times in the next year or so.
danawabout 2 hours ago
Funny that Claude let this file through to production; I thought it was safe to rely on LLMs for everything now? /s
nailerabout 5 hours ago
MattRixabout 2 hours ago
no thanks
rib3yeabout 3 hours ago
Are we going to keep on brow-beating vibe coders from here?
bigyabaiabout 2 hours ago
Sure, if they push garbage updates like Liquid Glass to billions of customers.
rib3yeabout 1 hour ago
You think THAT was the vibe coded feature?
bigyabaiabout 1 hour ago
Yes, on iOS and macOS. Apple has been partnered with Anthropic since Q2 2025, do you think they wrote it all by hand?
mushufasaabout 5 hours ago
Is it really a mistake? OpenAI's own agent SDK also has a Claude.md file. That's not an indication that OpenAI internally uses Claude; rather, it's there because the SDK has multi-model support.
klustregrifabout 5 hours ago
It was a mistake yes. And they corrected it. Why would you assume they would do this intentionally?
embedding-shapeabout 5 hours ago
I don't think you need to even see any files to realize much of Apple's software is vibe-coded by now.

Had some issues where my monitor apparently saw a connection to my Mac Mini, but the Mac Mini displayed black; it had somehow gotten out of sync with my monitor. Sleeping the display controller and then waking it solved it.

I gathered a bunch of data, wanting to submit a report; I've been an Apple Developer Program member since like two days ago, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, so I opened up Feedback Assistant.

It asks me for my email; I input it, press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick practically, but tiny shit like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never release something that isn't perfect, but now these UX rough edges are absolutely everywhere across the OS.

I ended up not logging in at all, wrote my fix into a tiny fix-display.swift file which I'll run when it happens instead.

SparkyMcUnicornabout 4 hours ago
I don't think we get to blame these issues on "vibe coding", they've been around for too long.
MattRixabout 2 hours ago
This stuff has been happening in Apple software since well before the AI coding stuff came along.