Discussion (66 Comments) · Read Original on HackerNews

alok-g · 5 minutes ago
I have been wondering about a similar thing -- looking for feedback.

There are many existing, often mature, third-party software libraries or solutions that a new project could use, but which hide the internals, including how the data is organized behind the scenes*. Vibe-coding for the specific project requirements, instead of using a pre-existing third-party library, is now becoming a feasible option. The vibe-coded version may be simpler (no features beyond the actual need), more flexible (easier to add newly needed features), and the data/model behind it could be more accessible.

Looking for feedback on pros/cons and experiences along this.

* I care about the data, as it can be longer-lived than the code itself.

Thanks.

rchowe · about 3 hours ago
Python has a much more mature ecosystem than Rust, especially for AI/ML stuff. I ran into a Rust crate that purported to implement a certain ML algorithm but did not do it correctly. I managed to write a replacement with Claude, though.

I do think enforcing correctness at the type system level is a good idea for AI, which is why I often choose languages like C# and Rust over Python. However, for some things Python is definitely the correct tool for the job.

sshine · about 1 hour ago
I almost always pick Rust. Recently I wrote a plugin for something that was written in Go. I could have used Rust, but Go felt right because, if the thing turned out well, others would surely find more value in having one toolchain.

The main reason is that you're capable of reading it if you need to. And the recipient ecosystem expects a language. That's why some data science communities pick R, MATLAB, Julia, Python, or Mojo not based on what's superior tech, but on what their peers speak.

dev360 · about 2 hours ago
Definitely something to be said for AI/ML library support. I find myself going with Rust/TS for a ton of my backend work lately, though, even though I'm a huge Django fan for backend.

niek_pas · about 3 hours ago
Bit off topic, but why in the world are people still posting on Medium? The reading experience is abhorrent; I couldn't even finish reading this article before a full-screen popup literally blocked the sentence I was reading.

Is there some incentive I’m not seeing?

iLemming · 24 minutes ago
> The reading experience is abhorrent

Nothing you read in the browser can provide the hands-down best reading experience equally for everybody; the modern web model is inherently at odds with that. A plain HTML page with no CSS is a near-perfect reading experience, but almost nobody ships that, because the web also became a publishing platform where authors compete for attention. A plain-text protocol under user control is closer to "best reading experience for everybody". The web could be that. It mostly isn't.

I stopped trying to read long articles in the browser. Why would I do that, if I can easily extract all the relevant, plain text (and even structured one) and read it in my editor instead? Where I have control over fonts, colors, navigation, etc. The browser is a delivery mechanism, not a reading environment. Treating it as one is a habit, not a necessity.

Long ago I stopped trying to type anything longer than three words anywhere but my editor. Of course, why wouldn't I? It already has everything I need - spellchecking, thesaurus, etymology lookup, translation, access to all my notes, LLM integration, etc. Try it one day - it's an enormously liberating experience. And then maybe you'd stop reading long texts in the browser as well.

nickff · about 3 hours ago
It seems like it's just the latest evolution of the writer-friendly blogging platform: easier than WordPress to package into a newsletter, and also easier to monetize with a paid tier.
ciupicri · about 1 hour ago
But don't we have AI to deal with the complexity of WordPress? :-)

DonHopkins · 33 minutes ago
Insofar as AI is great at accidentally deleting your production and backup WordPress databases, and forcing you to start from scratch with something else.

xrd · about 1 hour ago
They have made an honest attempt to pay writers. It's a different model than Substack's, but that's why.

I look at it the same way I look at pay walls for newspapers. I don't like them but I understand why they are there.

chneu · about 3 hours ago
My best guess is momentum. Some people are very, very brand-loyal and have to do things in relation to what, and how, others do things.

In reality it doesn't matter where something is posted, just give us a url, but some people don't operate that way.

dsmurrell · about 3 hours ago
Yep, Medium was free and everyone donated content... then it put up reading paywalls and conned everyone. I'm also surprised when I see people writing on there.

__mharrison__ · about 2 hours ago
AIs are really good with Python. Quick turnaround. Easy to read. Tons of training data/examples. Many of the same reasons we wrote Python before.

Another benefit to using Python: if you subscribe to writing/vibing a throwaway version first, a Python version is 100x better than a spec.

(Disclaimer: I teach Python and AI for a living and am doing a tutorial at PyCon this week, "Beyond vibe coding". I also use other languages, as there are times when Python isn't appropriate.)

dakiol · about 2 hours ago
The problem with Python and other non-strictly typed languages is that if you let an LLM write some stuff, you cannot truly be confident that nothing has broken, even if all your tests pass. The LLM could have broken some path that only gets run in production in a very specific case. At least with strongly typed languages you get a compiler error. In big codebases that's non-negotiable.

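dakiol's point can be sketched in a few lines of Rust (the `Invoice`/`outstanding_cents` names are purely illustrative, not from any real codebase): the contract lives in the signature, so a silent LLM "refactor" fails at compile time instead of in the one production path your tests missed.

```rust
// Hypothetical sketch of a typed seam an LLM edit cannot silently break.

#[derive(Debug)]
struct Invoice {
    total_cents: i64, // integer cents by design: the type forbids float drift
    paid: bool,
}

// If an LLM "improves" this to return f64, or renames `paid`, every call
// site stops compiling -- the breakage cannot hide until a rare prod path.
fn outstanding_cents(inv: &Invoice) -> i64 {
    if inv.paid { 0 } else { inv.total_cents }
}

fn main() {
    let inv = Invoice { total_cents: 1250, paid: false };
    println!("{}", outstanding_cents(&inv)); // 1250
}
```

In Python the equivalent rename or return-type change would sail through import and only surface when that code path actually runs.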
serf · about 2 hours ago
So it just boils down to strictness, even when we're talking LLMs?

I agree with you about fast failure being a nice feature, but I also think that if you're TDDing a bunch of stuff and it fails in some categorical way, well, then the test suite was lazy.

plqbfbv · about 2 hours ago
> so it just boils down to strictness even when we're talking LLMs?

The article describes what I've been doing for the past few months. I did small Python projects in the past because of the ecosystem: I couldn't possibly write a ton of the stuff required for the things I wanted to do, so I leaned into Python because someone had already written it for me. The quality of deps was mostly OK for the happy paths, but it was always a chore to patch the broken ones.

Nowadays I tell Claude what I want to build and I always ask it whether Rust is a good choice for it. It'll pick the right crates or choose whether it should DIY, do all the plumbing, nail all the logic, and in ~30 minutes I'll have something very solid that would have taken me 3+ weeks of part-time evening coding in Python. I think the article is right: Rust is the closest thing to the "best language" we have for LLM coding at the moment. The strict typing and the tooling dramatically reduce the output space for LLMs, and 99% of errors have a clear, precise, actionable explanation; the compiler helps you a lot there too.

I think it also boils down to the fact that you cannot reliably and quickly answer "why is this arg None?" in languages like Python without figuring out the call graph and evaluating possible states and inputs/outputs. Rust makes all that explicit and forces you to handle it, which I feel dramatically cuts the time an LLM needs to spend figuring out why it's broken or what to do next. EDIT: The fact that you get memory safety on top of all this, handled by the compiler, is yet another advantage for LLMs: the logic that gets written is simpler to reason about, because if you try to mutably access the same variable in two different places, the compiler will feed this back to the LLM at build time. In other languages that would be a "code smell" or would require static analysis.
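A minimal sketch of the "why is this arg None?" point (the user-lookup example is hypothetical): in Rust the possible absence is part of the type, and the compiler refuses to accept a caller that forgets the `None` arm.

```rust
use std::collections::HashMap;

// The Option in the return type is the whole point: absence is explicit,
// so an LLM (or a human) cannot pass the value along unexamined.
fn greeting(db: &HashMap<u32, String>, id: u32) -> String {
    match db.get(&id) {
        Some(name) => format!("hello, {name}"),
        // Deleting this arm is a compile error, not a production
        // AttributeError three calls downstream.
        None => "hello, stranger".to_string(),
    }
}

fn main() {
    let mut db = HashMap::new();
    db.insert(1, "ada".to_string());
    println!("{}", greeting(&db, 1)); // hello, ada
    println!("{}", greeting(&db, 2)); // hello, stranger
}
```

In Python the same lookup returns `None` implicitly, and "why is this arg None?" only gets asked after the crash.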

Strictness is a quality for software and a chore for humans, and of course the stricter you are at representing your logic and your state machine, the fewer ways a program can break. LLMs writing in Rust give you the strictness without the chore, and that's a very good deal from my point of view.

__mharrison__ · about 2 hours ago
If you are using TDD with any recent model, even local models (qwen3.5+), you alleviate most of the issues mentioned.

Note that:

Writing code, then tests

is not equivalent to:

Writing tests, then code

__mharrison__ · about 2 hours ago
My anecdotal (sample size 1) experience is not consistent with this. I code fast. Refactor fast. My stuff doesn't break. But my methodology isn't the same as others'.

QuadmasterXLII · about 2 hours ago
I have bad news.

__mharrison__ · about 2 hours ago
Lay it on. I love to collect others' anecdotes and see where they align (or disagree).

fxj · about 2 hours ago
You can of course use any language, but here is my advice: use the language you know best, to make your life as uncomplicated as possible when you want to understand what the LLM created.

Remember, you are the judge of whether the code is OK. If you use assembler you might get really performant code, but can you trust it?

Of course it might be a good incentive to learn rust or go. Or challenge yourself to learn something really cool like LISP, COBOL, FORTRAN, APL or J. (just kidding...)

just my 2 ct...

0xbadcafebee · about 2 hours ago
I know a couple of languages fairly well: C, Perl, Python, Bash. I never formally learned Go, but as a test of AI coding, I started some vibe-coded projects in Go. It worked very well: the code is minimal, there are few dependencies, and it compiles down to a static app. But most importantly, I can actually read the Go code and understand basically what it's doing. I can also use LLMs to critique the code if I'm uncertain. The big benefit of Go is the simpler language and "batteries included" standard library. This leads to fewer dependencies and fewer lines of code, which improves overall AI output. In theory, AI should be able to write better code faster in Go than in another language like Rust.

Python does have a much larger ecosystem of course, so with Go you have to develop from scratch what already exists in Python. But for smaller projects, you can also have an AI write a clean-room implementation in Go of some project in Python. So you aren't necessarily locked into one ecosystem anymore.

And in my experience, you don't even need to know the language. I have a co-worker who's basically not a programmer, but got multiple implementations of applications working sooner than our dev teams doing it by hand. You should be a coder so you can architect and orchestrate the coding, but 'language' isn't a barrier anymore.

halfcat · 11 minutes ago
> I have a co-worker who's basically not a programmer, but got multiple implementations of applications working sooner than our dev teams

Deployed to production, right?

Right??

(I’m just kidding, of course it’s only on their machine, no different than Excel 5 years ago)

> architect and orchestrate the coding, but 'language' isn't a barrier anymore.

Never was the barrier.

kylec · about 3 hours ago
This post resonates. I recently built a little web service to scratch an itch I'd been having, and after discussing the options with Claude we settled on Go. Honestly, it's been fantastic: highly performant, native threading, dead simple to deploy with containers. And I don't even know how to read or write Go.

queenkjuul · about 2 hours ago
Go is fun, you should actually learn it.

xtracto · about 2 hours ago
Oh man... I like Go because it is compiled, performant, and strongly and statically typed. But "fun" is not something I would say about it: the ergonomics of error handling, the lack of a ternary operator, and other stuff that 30-year-old compiled languages already had...

kylec · about 2 hours ago
I did go through the Go tutorial many, many years ago, but it's been so long I don't remember anything. I do remember it was an enjoyable process, though, and I'd love to pick it up again.

librasteve · about 1 hour ago
Many here propose replacing Python with more performant but less familiar languages, mostly Rust and Go. But I find the argument that the AI-human interface is the most important more compelling. A simple version of this is "no, stick with Python if that's what you know". A more interesting version is "use this new-found AI leeway to move up the abstraction level", "try something more expressive and human-oriented", "make a DSL and parser that suits the domain (and focuses the AI)". Despite being a minority language, Raku is ideal for these aspects (especially with built-in Grammars and general kitchen-sink repartee) and works surprisingly well with most popular LLMs.

rick1290 · about 2 hours ago
I'm still not sure. Would love thoughts on this... but in this new AI world we are in, is it better to go full-stack TypeScript, or to go with proven, mature frameworks (.NET, Ruby, Django, etc.)? It seems TS is moving fast, but maybe it's time to not reach for the shiny object and stick with proven tech? Or in 5 years will we regret it?

halfcat · about 2 hours ago
The main risk of regret is: how will you feel when/if the $20/month plan costs $2,000/month?

May never happen. But be clear with yourself if you’re relying on it not happening.

It’s a hell of a nice risk mitigator to understand the code, in a language you know, if you have to print-debug it yourself at some point.

bad_username · about 2 hours ago
The article applies to the narrow case of a totally greenfield application that's going to be completely vibe-coded. This is the only case where you can reasonably be indifferent to what the language is, and so can abandon familiar Python for unfamiliar Rust. (If you _are_ familiar with Rust, the point of the article is moot.)

This "fair weather development" approach feels very risky if that application is going to be exposed to any serious usage. There WILL be a situation when things break and the AI will be powerless to fix it (quickly) without breaking something else in a vicious loop. There WILL be a situation where things work fine and tests pass with 3 concurrent users but grind to a complete halt with 1000 because there is something O(N^2) deep in the code. And you NEED a human to save your day (which requires also proper architecture for that to be possible in the first place). If you don't plan for this, and just hope for the best, then you are building nothing more than a toy. And if you plan for this, then it matters again what the language is, and whether your team is proficient in it.

Or maybe I'm too old-fashioned, or too far behind the state of the AI art...

BiraIgnacio · 9 minutes ago
I dislike Go, but I have to admit it's a great language for AI-generated code. It's simple enough, it compiles quickly, and it performs meh-to-well enough for most applications.

One of the reasons I dislike Go is because it's easy for most engineers to write really low grade code with it. But AI agents would probably not write the best code in any language anyway, so not much is lost.

skybrian · about 2 hours ago
This seems sort of like asking whether a chatbot should answer you in English or Japanese. Obviously, it should use whichever language you understand. If you understand Python best, why not write code in Python?

But on the other hand, maybe you could learn some other programming language, particularly with AI help. If that's what you wanted to do anyway, it seems like a good time to learn.

ChicagoDave · about 2 hours ago
If you're using GenAI, you should go through the process of selecting an optimal tech stack for each solution, but also take into consideration that Claude and other services probably have the most knowledge of Python, JavaScript, and TypeScript, with Go, Rust, Java, and C# following close behind. Consider what you're building and which elements of the tech stack are optimal for your problem space.

I don't know rust at all and I've built three applications using it with Claude because it has speed and correctness built-in.

I use TypeScript for 90% of the things I build. For web development I've used a number of tools, but mostly React, Next.js, or raw HTML/CSS/JS. But if I were building an enterprise application, I'd consider my team and whether opinionated (Angular) was optimal over flexible (React).

Each project should consider its own optimal tech stack.

munro · about 2 hours ago
Lately I just have Claude build most things in Rust; it's really amazing. I tried Go, but I found it wasn't as good -- Rust really does feel like Python to me. That said, Claude still struggles with the same class of errors when building complex systems. I've tried using TLA+, Alloy, and other things, but haven't found the trick yet. The best I've found is reimplementing all external systems in memory and e2e-testing everything extensively (without the reimplementations, the tests become unusably slow), and Claude can rewrite huge surface areas with ease. It's somewhere between mocking and literally just reimplementing the external systems.
an0malous · about 3 hours ago
The ideal language for AI coding:

1. Type safety, as basic guard rails ensuring that LLM output is syntactically and schematically correct

2. Concise since you have to review a lot more code

3. Easy to debug / good observability since you can't rely on your understanding of the code. Something functional where you can observe the state at any moment would be ideal.

4. A very large set of public code examples across various domains so there's enough training data for the LLM to be proficient in that language

5. A large open source ecosystem of libraries to write less code and avoid the tendency for generated code to bloat

It's basically all the same things you look for in general. I think TypeScript scores high here but I'm curious if anyone knows of a language that fits these criteria better.

pdimitar · about 2 hours ago
Golang. People trash it for being verbose with errors, but it's an extremely readable language and it's almost like Bash, only much more strongly typed and with a very rich stdlib (so it's not likely you'll need a library for a quick script).

It's more or less a perfect replacement for Python for "one-off programs" and "quick scripts". Many bonus points for not having to fight shell quotation rules or try to remember the differences between sh, bash, and zsh.

ASalazarMX · about 2 hours ago
In a world where AI supposedly can write in any language, Go is a much better choice than TypeScript. Imagine contemplating, for more than a few seconds, a choice between a simple, fast, cross-compilable language and a TypeScript -> JavaScript -> interpreter -> JIT stack.

If you don't know Go, it's more efficient to learn it than to waste the hardware resources of thousands to stay within JavaScript.

pdimitar · about 2 hours ago
Absolutely. And in this same thread I am noticing people offering Java (lol). Yeah, we all need a 1.5s startup time for one-off scripts, surely.

dukeyukey · about 2 hours ago
This is just Kotlin. Strongly typed, more concise than Java or Go (and probably TypeScript), less likely to blow up at runtime than TypeScript, epic tooling, plenty of public code, and a library for basically anything because JVM.

pdimitar · about 2 hours ago
And it needs the JVM to start for 1.5s before you get any results. Sure.

Golang or just shell scripts.

dukeyukey · about 1 hour ago
The JVM takes tens of milliseconds to boot up, not a second and a half.

MaxBarraclough · about 2 hours ago
> Concise since you have to review a lot more code

Isn't readability what matters here? Conciseness isn't the same thing.

fluffyspork · about 2 hours ago
C. At least with Gemma 4 it does a fine job. Writes good error checking. Writes memory management. Mostly straightforward and easy to read. A lot of libraries. Runs everywhere.
OliverGilan · about 2 hours ago
I'd also argue it needs to compile fast / have fast static analysis. Feedback loops like this are super helpful for agents.

tptacek · about 2 hours ago
Type safety feels like the big one; anything you can shift to static/compile-time regimes benefits agents immensely.
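One concrete instance of shifting a check into the static/compile-time regime, using only the standard library's `NonZeroU32` (the `mean` function itself is an illustrative sketch, not from the thread):

```rust
use std::num::NonZeroU32;

// The divide-by-zero check has moved from a runtime branch into the type:
// a caller cannot even construct the argument that would crash this.
fn mean(total: u32, count: NonZeroU32) -> u32 {
    total / count.get()
}

fn main() {
    let n = NonZeroU32::new(4).expect("4 is nonzero");
    println!("{}", mean(10, n)); // 2
    // mean(10, 0) does not compile, and NonZeroU32::new(0) returns None,
    // so an agent is forced to confront the zero case at build time.
}
```

For an agent, every invariant encoded this way is one less failure mode it has to discover empirically through a test run.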
iLemming · about 1 hour ago
There are two axes at work for LLMs. Critic strength: how much the language catches before runtime. Sensor strength: how good the empirical feedback loop is. LLMs benefit from both, but the sensor axis is often undervalued.

Type safety is great, but you can't just quietly disregard the benefits some dynamically typed languages provide; that would be completely ignoring that different tasks weight the two axes differently.

Systems code, performance-critical code, code where correctness across all cases matters more than exploration: parsers, compilers, network protocols, data structures - statically typed languages (like Rust) give you an edge here. The compiler's depth pays for the verbosity, and exploration is less of the work because the problem shape is known up front.

For stuff like building a web scraper, or rapidly prototyping, or exploratory scripts, something like Rust would be actively bad. You cannot poke at a live browser (you can with Clojure). Async Rust adds another layer of type complexity. The signal-to-noise for "figure out what is on the page" collapses entirely.

If I were picking a single language for general LLM-assisted work, weighted across task types, it would be Clojure (or Elixir), with OCaml as the most interesting alternative if the ecosystem were stronger.

ane · about 2 hours ago
Java?

sgt · about 2 hours ago
Was thinking the same. Modern Java is similar, or at least quite a bit closer, to many other less verbose languages. Not like your dad's Java anymore.

999900000999 · about 2 hours ago
So I can fix it when it breaks. I don't understand anyone shipping real code without human review.

Give it 2 years; the "blame the AI" incidents will increase. Like an unfaithful partner, you'll always return to it.

schmookeeg · about 3 hours ago
I assume this is why things like PyO3 are popping up? If so, it's sort of a fascinating way to compartmentalize new Rust code into legacy .py code in lieu of a refactor, or at least a way to do a staggered refactor and eat the elephant in bites :)

infinite_spin · about 3 hours ago
For me, whether it's AI or my own handcrafted artisanal code, the choice of language comes down to what has the least friction. This means I turn to Vite/React for a lot of frontend requirements, and the backend will be in Node.js or Python, because those are easier for me to debug than writing an equivalent application in C++ or Rust.

xnx · about 3 hours ago
For the utilities I write, it is faster to iterate without having to compile. When I get to the point where I'm done adding and changing features, and performance is an annoyance, I can always ask the AI to "rewrite this in Go". (I've never gotten to that point.)

serf · about 2 hours ago
1) Python is one of the most heavily trained-upon languages

2) it's practically verbose, not technically

3) it resembles pseudocode

4) batteries included shortcuts a lot of work

all of these reasons are a boon for LLM work.

lenerdenator · about 3 hours ago
1) I still have to comprehend it.

2) The corpus for the sort of applications I build is likely larger for Python than it is for C++ and Rust. Bigger corpus == more training data == better generated code.

3) The bottlenecks in the applications I run aren't in the execution of the code; they're in database/network latency.

4) I don't get anything extra for pushing Rust or C++ over Python.

pacificpendant · about 3 hours ago
If all the libraries are Rust, as the article claims, having the top layer in Python probably makes even less difference.

I tend to agree with the article's statement about the value of the test code, though; that may even have been true before LLM code took over.

avereveard · about 3 hours ago
https://arxiv.org/pdf/2508.09101

tl;dr: about 2 percentage points lost on average with Rust compared to Python; the gap varies by model. Go has a better upper bound, but Opus had it 3% below Python.

The benchmark is a bit old, but the research on why is there; the article is just vibes.

tontinton · about 3 hours ago
Also, it's easier to ship a binary, like a CLI.

Terr_ · about 2 hours ago
A somewhat contrarian/pessimistic view: the hardest thing in any future of LLM-generated code is going to be the verification step, and especially the types of verification that require humans, which are going to be the most expensive.

Therefore the "best" language is going to be whatever makes it easiest for humans to detect bugs, bad design, or that the "wrong thing" has been developed.

CivBase · about 2 hours ago
This point only makes sense if you ship AI code without reviewing it. And if you're shipping AI code without reviewing it, you're going to run into much bigger problems than Python performance limitations.

GardenLetter27 · about 3 hours ago
LLMs just churn out non-idiomatic slop in any language.

It doesn't matter if the 800-line if statement is able to use pattern matching.

There's been a lot of progress on making coding agents able to solve problems when they can easily evaluate solutions in a closed loop; we desperately need something similar for controlling complexity and using relevant abstractions.

fxj · about 2 hours ago
One thing to consider:

The (well-known) Sapir-Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there's a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.

If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only later translate that solution to C++ or Rust, you're effectively changing the intermediate representation the model works in. That IR has very different "affordances", e.g.:

- Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (some nice web frameworks were initially written in LISP, scheme, etc...)

- APL pushes you toward whole-array transforms, point-free pipelines and exploiting data parallelism. (Banks are still using it because of performance.)

- Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (it is a very high abstraction but might suit LLMs very well)

OK, and when you then translate that program into C++/Rust/Python, a lot of this bias leaks through. You often end up with:

Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.

Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.

Data‑parallel kernels and "vectorized" loops when the starting point was APL.

In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.

By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.

Same would also be true for python which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then try to tackle a problem with a specific language instead of automatically using python/go/rust because of performance.
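fxj's "bias leaks through" point can be made concrete with a toy task (summing the squares of the even numbers; both functions are illustrative): the same Rust program comes out imperative when the model "thinks" in C terms, and as a point-free pipeline when it "thinks" in Lisp/APL terms.

```rust
// The idiom an LLM defaults to from a C-flavored intermediate representation:
// explicit accumulator, explicit branch.
fn sum_sq_even_imperative(xs: &[i64]) -> i64 {
    let mut acc = 0;
    for &x in xs {
        if x % 2 == 0 {
            acc += x * x;
        }
    }
    acc
}

// The idiom that tends to survive translation from a Lisp/APL-flavored
// solution: a filter/map/reduce pipeline with no mutable state.
fn sum_sq_even_pipeline(xs: &[i64]) -> i64 {
    xs.iter().copied().filter(|&x| x % 2 == 0).map(|x| x * x).sum()
}

fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    // Same answer, 56, but very different regions of the design space.
    println!("{} {}", sum_sq_even_imperative(&xs), sum_sq_even_pipeline(&xs));
}
```

Both are valid Rust; which one the model reaches for first is exactly the training-distribution bias the comment describes.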

Something to consider...

P.S. How would a problem be solved if the LLM had to write it first in Erlang? Would it then be automatically distributed?

P.P.S. The "design patterns" of the GoF come to mind, which might be a good hint to give the LLM.

aaroninsf · about 2 hours ago
As always, "it depends."

I'm using coding tools to build a complex media-intensive application. The approach I'm taking is to build a _reference implementation_ in Python which, in its design specifics, is constrained to use patterns that transliterate into the actual deployment targets (iPadOS/macOS/Web).

Why start with Python?

Because I can read it, reason about it, and run it, trivially, which are Good Things for the reference. I intend to have multiple targets; I'd rather relate them to a source of ground truth I am fluent in.

For what I'm doing, there is also a very rich set of prior art and existing libraries for doing various esoteric things—my spidey sense is that I'm benefiting from that. More examples, more discourse.

I'm out of the prediction business and won't say this is either a good model for every new project, or one I will need in another N months/years.

But for the moment it sure feels like a sweet spot.

Ask me again, though, after the reference goes gold and I actually take up the transliteration... :)

ActorNightly · about 2 hours ago
a) Python (and Node) comprise the largest training set for all the models, so you are likely to get way better accuracy, especially with local models

b) Python code is easier to introspect, and set up test harnesses around. And also extend in agentic frameworks

c) LLMs are really good at translation. I can give it python code and it can translate it into C.

suis_siva · about 1 hour ago
Let's go through some of the arguments, in no particular order:

> Klabnik vibe-coded a new language in Rust, therefore Claude + Rust = Good.

I argue the inverse -- Rust, being an ML-family language, is well suited for parsing and language design (I know! Shocker!). In more moderate terms: ML-style languages are good at parsing, interpreting, and compiling code. Claude is not the magic here -- ML is.

I would also add that I've had decent success vibe-coding+hand-coding Haskell (contrary to the article). My experience is that if I can hand-write a rich set of types (blessed be IxMonad), I can have Claude fill in the blanks for the implementations. If I can design the data structures that make the program tick, bridging them is something Claude is awesome at. Again, no surprise -- it's intern-level work.

The key distinction between C, Zig, and Rust is that Rust is designed around types. C and Zig are more memory-oriented -- they really see most of your program as flat memory, and you can kind of shoehorn a little bit of data layout into that flat memory. While this offers a large amount of flexibility, the philosophy isn't well suited for proving out correctness. But again -- this doesn't mean they don't have a spot.

When I was a junior at Tesla, I used to joke that senior staff had VMs in their heads, because that's really how you analyze C programs -- you try to execute them in your head, with interesting inputs, but that's about it. Claude's head-VM is quite fuzzy and often makes errors.

With Rust, if you design your type system, you prevent yourself from making dumb mistakes. Swap out "yourself" with Claude here and it's the same story.
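A minimal sketch of that workflow, using the classic newtype pattern (the `Meters`/`Seconds` names are made up for illustration): the human designs types that make argument swaps impossible, and Claude only has to fill in bodies the compiler will police.

```rust
// Newtypes: zero runtime cost, but the two quantities can never be confused.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(f64);

// Whoever (or whatever) implements this body cannot swap the arguments:
// speed(Seconds(..), Meters(..)) is a type error, not a silently wrong answer.
fn speed(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

fn main() {
    println!("{:.2} m/s", speed(Meters(100.0), Seconds(9.58)));
    // speed(Seconds(9.58), Meters(100.0)); // does not compile
}
```

The "dumb mistake" here is caught before any test runs, which is exactly the kind of guard rail that survives an LLM rewrite.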

I've yet to see Claude design really nice type systems, fwiw.

But the point is -- Claude is the enemy of beauty and correctness -- it's up to the SWE to design a type system that keeps it from destroying both. To be clear, I obsess over type systems personally, but that's not the only way -- incredibly rich, comprehensive, huge type systems, fuzzing, Antithesis, and proptesting are all things you can do to minimize the impact of slop, and they are all valid.

---

> Code is not written by humans therefore it doesn't matter that you don't know Rust.

I wouldn't say this was explicitly stated, but I definitely smelt this undertone throughout the article. If you don't understand the language you're reading, how can you tell whether the code in front of you is correct or not? If you have a systems engineer sitting across from you to clean your PRs up, you can pass that responsibility onto them, but what about when they give their two weeks?

If all you know is Python, chances are you're going to make better software in Python than in Rust. Stick an `Arc<Mutex<T>>` everywhere and chances are your code will be slower, as a matter of fact. If you want to learn Rust, please join us! But if all you're trying to do is vibe-code better code -- do it in the language you know and can actually debug when shit hits the fan.

---

> Anthropic C Compiler

It is impressive that Claude is awesome at taking existing code and rewriting it, this much is certain, but I'd like to repeat the exact same rhetoric that many have given -- rewriting =/= original authorship. Awesome, we have a C compiler, but we already had one, and we just rewrote it? Seems like a bit of wasted electricity.

To build on top of this, I am really happy that Bun is exploring Rust, and the Claude rewrite is truly impressive, but quite surprising at times, preserving strange anti-patterns (my name being said anti-pattern, teehee): https://github.com/oven-sh/bun/blob/ffa6ce211a0267161ae48b82.... It's hard to determine why Claude decided this -- I assume a really strict input prompt.

Do note that the current stage of that PR is much better than what it was at the state of that commit, and obviously Jarred isn't merging blind slop, but that is still human-driven by someone who has an understanding of their product.

My bet is actually that _rewrites_ of already-functioning, well-tested code, are likely to be more common as time progresses. I think that's what Claude is really awesome at, and I think Claude can often achieve 80-20 improvements through rewrites. Again, Claude alone will not be a silver bullet -- it won't generate data-oriented programs if the source material wasn't data-oriented. It won't optimize for cache coherency, if the source didn't, but moving from Python to Rust alone, with more-or-less the same code structure, you're likely to see improvements by virtue of common operations being memory-coherent and avoiding the GIL and so on.

---

> A C compiler written in Rust used to be a graduate thesis. It isn’t anymore.

Come on, this is disingenuous -- a simple C compiler is a 1-day long project. LLVM is a graduate thesis (and for good reason). Copy-pasting prior-art is academic dishonesty and Claude does a lot of that.

---

For transparency: I work with Noah.

EDIT: Wanted to add that not a single line of my comment was AI generated.