Discussion (70 Comments) · Read Original on HackerNews

nopinsight · 38 minutes ago
This is already the case for many startups. In fact, the figure might be closer to 100%. The work shifts to requirements analysis, high-level specifications, and final review instead (after AI code review).
foresterre · 23 minutes ago
The first link literally states:

"AI will take over almost all the work of software engineers (SWEs) end-to-end in just 6-12 months!"

What you describe is >50% of the job of SWEs, even when they write all code by hand.

Are you saying that "for many start-ups" this isn't done by SWEs but by some other career type, or are you implying that it's just the code writing (and first review) that's replaced by AI?

suzzer99 · 30 minutes ago
Yeah I'm working on one of those now that a 3rd-party vendor cranked out for us. I spent all day ripping out an endpoint that did 98% of what another endpoint did and should never have existed. I also ripped out 80 lines of code that looked like this:

const sqlStatement = (!params.mostRecentOnly) ? {giant SQL statement} : {identical giant SQL statement + 'LIMIT 1' at the end}
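The duplication could collapse into building the statement once and appending the clause conditionally. A minimal sketch in Python (the query text is a stand-in for the "giant SQL statement", not the actual code from the vendor):

```python
# Stand-in for the giant SQL statement that was duplicated in both branches.
GIANT_SQL = "SELECT id, payload FROM events ORDER BY created_at DESC"

def build_statement(most_recent_only: bool) -> str:
    """Build the query once; append LIMIT 1 only when requested."""
    return f"{GIANT_SQL} LIMIT 1" if most_recent_only else GIANT_SQL
```

Now there is exactly one copy of the query to maintain, and a change to it can't silently diverge between the two branches.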

AI never met a problem that can't be solved with more code. Need some data in a slightly different structure? Don't try to modify an existing endpoint, just build a new one! Need to access a field that's buried in a JSON object in the database? Just create a new column, but don't bother removing the field from the JSON object. The more sources of truth, the merrier! When it comes time to update, just write more code to update the field everywhere it lives!

Factor out the extra sources of truth you say? Good luck scanning the most verbose front-end you've ever seen to make sure nothing is looking at the source you want to remove. In the beginning of big projects, you have to be absolutely ruthless about keeping complexity down so it doesn't get out of control later. AI is terrible at keeping complexity down.

My goal is to halve the lines of code from what the vendor turned over to us. One baby step at a time.

cbg0 · 24 minutes ago
If only we had this tech back when managers were looking at how many lines of code you were committing weekly as a performance metric.
jameson · 13 minutes ago
> many startups

which startups? I'm genuinely curious

sumitkumar · 31 minutes ago
He would be right if Claude Code were written by a team of humans. The AI-written blob is slowing progress.
KronisLV · 22 minutes ago
I mean, since Opus 4.6 came out, that rings more and more true. You still have to babysit the output, do some planning and be proactive about ways to do things better… but 80-90% isn’t out of the question if you’re in the domains that are well represented in the training data, e.g. if you’re writing a lot of CRUD functionality as a web dev.

Companies will definitely expect devs to ship more with the same headcount; oftentimes they either won’t hire juniors to train up or will straight-up do layoffs, with the AI sometimes just being a convenient scapegoat. We can’t ignore that either: sure, a lot of those companies will be shooting themselves in the foot, but livelihoods will be impacted a bunch.

prodigycorp · about 1 hour ago
This is really a zero-information blog post. I want to know how they use the LSP to improve their understanding of the code base. It would be great if it were open source for us to review.

A post like this should be providing people with some reassurance about Claude's ability to understand code at a large scale. It's mostly fluff.

0123456789ABCDE · 26 minutes ago
this exists: https://code.claude.com/docs/en/tools-reference#lsp-tool-beh...

op is at `https://claude.com/blog/...`, you should be reading `https://code.claude.com/docs/` instead

in essence: rtfm bro

prodigycorp · 16 minutes ago
My complaint is about how there's not enough information in the blog post. The title of the post is "How Claude Code works in large codebases". 1,521 of 18,135 characters are dedicated to expanding on the premise of the title.

My criticism is fair. This is not an engineering blog post, it's purely marketing.

0123456789ABCDE · 5 minutes ago
you shouldn't expect a corpo blog to read like an engineering one

try this instead: https://anthropic.com/engineering

hbarka · about 1 hour ago
Really? I thought it explained the point that harnessing for agentic search of a large code base is more beneficial than RAG-indexing a monorepo.
jwilliams · about 1 hour ago
> Claude Code navigates a codebase the way a software engineer would: it traverses the file system, reads files, uses grep to find exactly what it needs, and follows references across the codebase. It operates locally on the developer’s machine and doesn’t require a codebase index to be built, maintained, or uploaded to a server....

> Agentic search avoids those failure modes. There's no embedding pipeline or centralized index to maintain as thousands of engineers commit new code. Each developer's instance works from the live codebase.

The frame of "the way a software engineer would" and the conclusion seem at odds. I'd love to be schooled otherwise?

I use autocomplete/LSPs all the time and they're useful. That's an index? Why wouldn't Claude be able to use one? Also a "software engineer" remembers the codebase - that's definitely a RAG. I have a lot of muscle memory to find the file I need through an auto-completed CMD+P.

It doesn't need to particularly be real-time across thousands of engineers -- just the branch I'm on.

It's rare that I'd be navigating a codebase from first-principles traversal. It would usually be a new codebase and in those cases it's definitely not what I'd call an optimal experience.

marhee · about 1 hour ago
The answer is in the introduction:

> Claude Code is running in production across multi-million-line monorepos, decades-old legacy systems, distributed architectures spanning dozens of repositories (…)

So it is optimized for the general case, using robust tooling that works everywhere, especially when large & messy.

That being said, your remark is right, and for well-organised smaller repos there's better tooling it can and should use. But I think it does; at least Codex does in my case, so I guess Claude does too. For example, Codex uses `go doc` first before doing greps.

sumitkumar · 19 minutes ago
But the general use case is not the most efficient for a greenfield codebase that is to be fully managed by an agentic system. The tooling is built to be good around the scaffolding (programming like humans) rather than the actual problem space.

Anthropic's target should be a codebase designed for agentic comprehension from the first commit. Here the codebase adapts to the agent. You can enforce conventions, structured metadata, semantic indexing, explicit dependency graphs. Whatever makes the agent's job trivial rather than heroic.
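As a sketch, such agent-first conventions could be as simple as per-module metadata files the agent reads instead of exploring. Everything below (the file name, the fields) is invented for illustration, not an existing standard:

```yaml
# AGENT.yaml — hypothetical per-module metadata for agentic comprehension
module: billing
purpose: invoice generation and payment reconciliation
entry_points:
  - src/billing/api.py
depends_on:
  - accounts          # explicit dependency edge, no grep required
invariants:
  - all money amounts are integer cents
```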

khuey · about 1 hour ago
The article does have an entire paragraph about LSPs and how Claude can use them.
hibikir · about 1 hour ago
Even if there is first-principles traversal of some parts of the codebase, there are other bits that definitely do not change, and where exploring every time is a massive waste of tokens. My arguments with Claude often have to do with making it explore a lot less, because I know better, and faster, than its slow, expensive navigation of things that basically never change. And it goes into the same kind of rabbit holes every time.
thinkindie · about 2 hours ago
I don’t agree with the statement about indexing codebases: it works pretty well for IDEs like PHPStorm or other JetBrains IDEs.
njovin · about 1 hour ago
PHPStorm's indexing is incredible. Aside from a scant few times it's been corrupted, which is easily corrected, I've never gotten stale results.

Although if you've ever used Claude's search tool, you'll be unsurprised that the team knows nothing about indexing.

How a company, whose primary product is text-based chat, doesn't allow users to easily perform text search on said chat is beyond comprehension.

Rapzid · 7 minutes ago
It's an odd statement. AI slop? GitHub Copilot has pretty good local indexing too. It's not a super hard problem to put code into a vector DB.
selcuka · about 1 hour ago
And Claude Code can use JetBrains' MCP to use that index.
martypitt · 40 minutes ago
I don't have any LSPs hooked up to CC yet (going to fix that today), or particularly sophisticated CLAUDE.md files.

So, if I've read this post correctly, that means that CC is navigating my codebase today by sending lots of it up to a model, and building an understanding. Is that correct? Did I misunderstand it?

I kinda suspected there was more local inference going on somehow -- partly because the iteration times are fairly fast.

prodigycorp · 8 minutes ago
At the heart of it, the agentic loop is a fancy retry mechanism. It's wasteful because a lot of that can be compressed into a single call with proper LSP integration.
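Stripped of the model and the tools, the loop being described can be sketched in a few lines. `ask_model` and `run_tool` below are hypothetical stand-ins, not Claude Code's actual API:

```python
# Minimal sketch of an agentic loop as a feedback/retry mechanism.
# ask_model(history) returns the next tool call to make, or None when done.
# run_tool(call) executes it and returns an observation string.
def agent_loop(ask_model, run_tool, max_steps: int = 10) -> list:
    history = []                        # observations fed back each turn
    for _ in range(max_steps):
        call = ask_model(history)       # model proposes the next tool call
        if call is None:                # model decides it has enough context
            break
        history.append(run_tool(call))  # result becomes context for the next turn
    return history
```

Each iteration is effectively a retry with more context, which is why a richer single call (e.g. via LSP integration) can replace several loop turns.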
mystifyingpoi · 23 minutes ago
I think that's correct. Which is kinda funny, I remember 10y ago that I was heavily relying on IntelliJ features to understand new codebases (jump to definition, find all usages of a function, navigate from SQL to the table in database tab etc.).

It turns out that, for a machine, find and grep are all that's required.
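The "find all usages" IDE feature reduces to a single subprocess call. A sketch (the function name is invented; this just shows the shape of the idea, not how Claude Code actually wires it up):

```python
import subprocess

def find_references(symbol: str, root: str = ".") -> list[str]:
    """Answer "where is this symbol used?" with plain grep — no index needed."""
    result = subprocess.run(
        ["grep", "-rn", symbol, root],  # recursive search with line numbers
        capture_output=True, text=True,
    )
    return result.stdout.splitlines()   # each line: path:lineno:matching line
```

The trade-off the thread is debating: this needs no index to build or maintain, but it rescans the tree on every question, where an IDE index answers instantly from memory.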

belZaah · about 2 hours ago
How very interesting. In an industry, where things shift around in months if not weeks, there’s been not only enough time for clear patterns to emerge but also these patterns have proven successful on large codebases. What’s the success criteria? Didn’t delete production database? Team velocity has increased? Codebase TTL has increased? Operations guys are happier?
giancarlostoro · about 2 hours ago
> Didn’t delete production database?

I still say if this happens to you with AI tooling, that's both a failure on you and your org for giving a developer prod credentials that could nuke production resources. I don't think I've worked in a place that gave me this level of blind access.

nibbleyou · about 2 hours ago
I have only worked at startups, and I was an early engineer in both of them. I would always get high privileges within a short time, with access to create and delete resources. I don't think it's that uncommon.
eecc · 43 minutes ago
I would never have these privileges granted directly to my account.

Indeed it’s a good practice to use roles where supported (AWS has them) and explicitly switch when needed
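On AWS, one common shape for this is a default profile with modest permissions plus a privileged role that must be explicitly assumed; the account ID and role name below are placeholders:

```ini
# ~/.aws/config
[profile dev]
region = us-east-1

[profile admin]
# Destructive operations require explicitly switching to this profile,
# which assumes a privileged role via STS.
role_arn = arn:aws:iam::123456789012:role/AdminRole
source_profile = dev
```

Day-to-day commands run under `dev`; anything destructive needs `--profile admin`, making the privilege escalation an explicit, auditable step rather than ambient access.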

ramraj07 · about 1 hour ago
The first step I take in any meaningful side project is to set up RDS with snapshots. Any startup that doesn't do this one basic step already deserves to fail, in my opinion.

Then, I've used AI agents like crazy; we even have linked MCP servers that let them query the dev database. I haven't seen one try deleting everything a single time. I haven't seen any agent try to do anything destructive. Ever. Perhaps it's just reflecting an outrageously bad engineer and nothing else.

indentit · about 1 hour ago
But the correct way to do it is to have a separate account with more privileges, and to give AI access only to your standard developer account.
belZaah · about 2 hours ago
Exactly. So is that level of obvious hygiene where the bar is, or is it somewhere else? What ticks me off is the audacity of blanket claims without even a remote attempt to state why this is said to be a list of successful patterns, and what success means. We're just supposed to eat it up, because, you know, Claude.
digitaltrees · about 1 hour ago
Dude, AI has been shown to execute queries on coworkers' env files, extract master keys, decrypt variables, and push to production.
ufish235 · about 1 hour ago
How important are CLAUDE.md files when they don't even describe (in concrete terms) what should go into each one?
0123456789ABCDE · 38 minutes ago
the fish: you can read about that here: https://code.claude.com/docs/en/best-practices#write-an-effe...

the fishing: 1) install the official `skill-creator`; 2) use that with the above link to create `claude-md-improver`; 3) improve the skill by tasking Claude with researching the topic of `progressive-disclosure` in the official docs; 4) point the new skill at your CLAUDE.md file and accept the changes

Plywood1 · 35 minutes ago
Claude clearly wrote this. A lot of fluff, not much substance.
hbarka · about 1 hour ago
Interesting that MCP was mentioned over CLI. For production or controlled environments, I would not make MCP the deployment path. I would let MCP help generate or choose commands, but have the actual deployment go through CLI scripts, Git commits, and CI/CD approval.
tex0 · about 2 hours ago
If the developer can have a local copy of the monorepo it's not a "large" codebase.
regnerba · about 1 hour ago
Disagree, but also what do you classify as local storage? Does the repo “size” include all projects or just one? What about multiple branches? How much capacity is local storage?

A stock Unreal Engine project is several hundred gigs, consists of multiple solutions and multiple languages, and is one I would personally classify as large.

Without some kind of indexing it’s very awkward to work with and very slow. To work with LLMs and Unreal projects we create a local index; that index file alone is 46GB.

Without distributed compilers and caches it can take multiple hours to compile the main solution per platform (usually PC, Linux, Xbox, PlayStation, Switch, and sometimes mobile).

So the codebase easily fits on local storage so long as you don’t count assets (those are several TB), let alone source assets (tens of TB), and that’s per stream per large project.

Anyways, point is I disagree and think Unreal Engine is an example of large codebase that fits locally.

digitaltrees · about 1 hour ago
If your codebase can’t fit on a single developer dev machine it’s too big.
nfg · 30 minutes ago
Ever work on a AAA game?
ramraj07 · about 1 hour ago
You mean like Tesla's multi-terabyte repo is not normal?
rtpg · about 1 hour ago
I think it's obvious that multi terabyte repos are not the norm.
aulin · about 1 hour ago
If you can't clone it it's not a repo
nilirl · 38 minutes ago
The better you explain the codebase to the LLM, the better it explains it to you?
Tsarp · about 2 hours ago
Wondering if enterprises have a modified version of CC that doesn't have to optimize to stop bleeding on fixed-cost subscription plans.

The article really does not align with the current sentiment. Everyone with a choice has mostly moved on to codex (ofc in this world all it takes is a model update/harness update to turn things around).

CC is great at a lot of things, but it repeatedly misses reading crucial parts of the code base, hallucinates about the work that was done, and has a bunch of other issues.

Reebz · about 2 hours ago
The influencer economy trades on hype, on frenzy, and ultimately, eyeballs. The more the better.

They want you to feel like you’re missing out. They want you to switch. Being boring is far more productive. Pin your versions. Stick to stable releases and avoid the nightlies.

The significant noise around the 4.6-to-4.7 Opus transition caused some to interpret it as signal. Excluding certain genuine, real bugs, the talk of quality falling dramatically was just noise; influencers doing influencing turned it into “signal”. The reality was that, with strong planning and spec-driven development, the impact ranged from manageable to non-existent.

The vast majority of the people I know and work with have not switched off CC or their Max sub.

paustint · about 2 hours ago
I have a choice and have not moved to Codex (100/mo personal, plus my employer pays for a subscription). I try Codex here and there and it seems to go off the rails every time. I have had some good experiences with it, but when trying to get something big accomplished it generally doesn't work out.

But I may not have paid enough to get the full real experience with codex

viking123 · about 1 hour ago
I use Codex at home for 20 bucks a month; the limits are very high relative to the price. Maybe the gravy train ends soon for these, and then it's probably on to OpenRouter Chinese models.

At work it's CC or sometimes Codex; personally I don't see much difference at all, and most normies will notice none. The cultists have their opinions.

periodjet · about 2 hours ago
> Everyone with a choice has mostly moved on to codex

Ha!

sho · about 2 hours ago
> stop bleeding on fixed cost subscription plans

What bleeding? Anthropic wants as much of that "bleeding" as possible. The interaction data gathered from genuine human CC subscription usage of their models goes directly into their RL training; it's invaluable, and they are more than happy to lose money on the inference to get it. That data is what xAI was recently willing to pay $10B to Cursor to get.

They want you to use Claude Code. They hate other UI surfaces like OpenCode etc. purely because they lose control over that data: they end up subsidizing the inference without getting what they actually want, the data (they still get some of it, of course, but it's much less ergonomic for them; those tools often abstract away the subagent calls, for example). OpenCode can collect that data themselves, so by allowing subscriptions there, Anthropic sees itself as subsidizing another org getting that data. Hard no.

And tools like OpenClaw are useless because they're mechanical and don't represent actual users interacting with the service - again, subsidizing but not getting the reward.

It's all very simple once you understand their motivations.

Aeolun · about 2 hours ago
You must be using a different CC. Or what they’re writing here is correct, and it’s all due to the CLAUDE.md file that I only occasionally yell at Claude.
Tsarp · about 2 hours ago
Hmm please share more. I have had the max CC sub since it came out. Religiously follow all of Boris/Cats advice but still struggle with it. Meanwhile a really badly written AGENTS.md will still get the work done.
zarzavat · about 2 hours ago
Apologies but what is a Boris Cat?
vasachi · about 1 hour ago
I find that most “techniques” are basically user hallucinations. Simple plan-write-refactor loops and trivial CLAUDE/AGENTS.md, generated by the harness itself, work nicely. Maaaaaaaaaybe write a skill or two, but usually it’s better to just write a script.
SpicyLemonZest · about 2 hours ago
I think it's a good rule of thumb that if you find yourself saying everyone prefers this model or that model, you're in a bubble. I've made this mistake before: I used to go around saying everyone knew Claude was the only model for serious professional use, but I was wrong.
sigmar · about 2 hours ago
I always assume that people making those comments on HN are trying to convince others to switch to their model. Surely no one actually believes their friend circle is a representative sample of the hundreds of millions of people that use these LLMs?
viking123 · about 2 hours ago
Anthropic has the best marketing for sure.

Btw the guy in charge of that stuff for Anthropic is the same guy who said GPT-2 was too dangerous to release, Jack Clark. LMAO. That model could barely string a sentence together.

SpicyLemonZest · about 1 hour ago
It's probably not a coincidence that I both prefer Claude and think that they made the right judgment call on GPT-2 at the time.
Analemma_ · about 2 hours ago
> Everyone with a choice has mostly moved on to codex

You are deep in an information bubble, mostly driven by hype-train influencers with magpie attention spans.

wood_spirit · about 2 hours ago
I’m super interested to know what the back and forth between models and tools really looks like in practice.

Are there any much more detailed walkthroughs of how it works and how it decides the tools to use and the grep to use etc and what the conversations actually look like?

In the UI you see just enough to know it’s doing something but you don’t really see the jumps it’s making offscreen.

weird-eye-issue · about 2 hours ago
You can easily inspect the full requests it makes to the API which contains the full system prompt, tools, tool calls, etc.
sprobertson · about 2 hours ago
or easier, open ~/.claude/projects/[project]/[session].jsonl (excluding the system prompt)
weird-eye-issue · about 1 hour ago
Doesn't really seem easier, and it's in a harder-to-read format
ralfhn · about 2 hours ago
Codex is open source if you’re interested: https://github.com/openai/codex
ares623 · about 1 hour ago
Lots of concepts. Release the harness that made it possible to port Bun to Rust in 9 days. That's what everyone really wants. Then everyone can go "do that but for this other goal".
0123456789ABCDE · 13 minutes ago
what if this magical harness is just: experienced operator† + claude code + official plugins + opus 4.7 + max effort ?

† swe with practical experience, a code wrangler if you will