Discussion (162 Comments)
That said, I have experience. I could absolutely see myself falling into this as a junior or even mid-level dev. I no doubt wouldn't feel that feeling on my neck if it weren't scarred from code-review lashings early in my career by knowledgeable mentors.
I code review everything that Claude produces, and I'd estimate about 90-95% of the time, my reaction is WOW it works but too much code dude, let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
For me what throws me off most of the time is the structure at the mid-level. It usually makes sense at the line-of-code level and maybe the project level, but at the file and folder level it just loses track of what it already has and what it doesn't need to be so verbose about.
The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?
I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.
I think I need to work up a Claude skill named marie-kondo, so that when it breathlessly presents its triumphant solution, I can go “yes, but does it spark joy?” And have it go into an aggressive refactor loop with me.
A good human developer might see that the better way to address the review is to backtrack and pick a different approach. The ai agents seem more prone to getting stuck down bad branches of the decision tree.
You can also tell it to periodically summarize the "lessons learned" from the recent session(s)
You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)
I don't buy it. I think a much more likely reason it leans towards adding code is because deleting code carries inherent risk: it can break things in major ways or minor ways or very visibly or invisibly. Adding new code, on the other hand, is a lot safer: the only parts that can break are those the AI touched inside its own working context. So it doesn't have to go down rabbit holes and potentially create bigger and bigger messes.
Tell it "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write its understanding of the change to a file.
I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.
Make smaller changes and check each one carefully before and after.
Don't vibe-code. It's a joke someone coined in the moment that somehow the industry decided shouldn't be a joke, and some people think it's a feasible way of developing stuff. It's not.
Find a better way of working together with the agent, where what's important gets reviewed by a human and the rest is "outsourced," and you'll end up with code and a design that works the way you'd program it yourself; you just get there faster. I probably end up reviewing maybe 90% of the code that the agent writes, but it's still a hell of a lot more pleasant writing/dictating a few prompts than typing tens of thousands of characters and constantly moving between files. Maybe I'm just tired of typing...
If you try and get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you have hundreds of dollars worth of tokens that might not be the case - but for someone who spends $10 a month, it's just not worth the headache.
Besides, for me these are hobby projects and writing code is still fun. I just make AI write the boring parts (good examples: saving and loading, parsing of data files, and settings menu functionality), but I keep it away from anything that needs a human's judgement to create.
I don't think this makes me dumb though, I've just moved up stack. Rather than caring about assembly language or source code, I'm focused on requirements, architectural decisions, engineering process, and ever more automation.
Which way is it going to go?
i) “Seniors” also get superseded by even more capable models that can do all of the things which currently require experience.
ii) Linguistics become the new higher order abstraction (English is the new high-level programming language) _but_ there are different / orthogonal ways of approaching software development than the way we do things now — which “juniors” become more adept at more quickly.
A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?
It is the thought process that builds skills. I've seen some projects trying to be deliberate about learning from the agent as it writes the code, but I'm not sure there is a substitute for struggling and learning by doing.
And probably the least valued it has ever been.
That's what drives it, and I don't really think the extrinsic things about the way you learned (while helpful) have that much bearing on it. It comes from you and you should take credit for it.
I think if you were learning today you'd probably have the same feeling and do just fine because of it.
Can relate, but the only thing I do differently is teach the AI how to clean up after herself in follow-up prompts, sessions, and by refining AGENTS.md. Static code quality analysis tools are also really good for keeping the agent on its toes.
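One cheap flavor of that static-analysis idea can be sketched in a few lines of standard-library Python: an entirely illustrative lint that flags functions the agent has let grow too long, which could be wired into a pre-commit hook or the agent's own validation loop. The threshold and output format here are assumptions, not any real tool's defaults.

```python
import ast


def long_functions(source: str, max_lines: int = 40) -> list[str]:
    """Return names of functions whose bodies exceed max_lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno/lineno give the physical span of the definition
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append(f"{node.name} ({length} lines)")
    return offenders


if __name__ == "__main__":
    import sys

    for path in sys.argv[1:]:
        with open(path) as fh:
            for hit in long_functions(fh.read()):
                print(f"{path}: {hit}")
```

Run against the files an agent touched, a non-empty report is a concrete, mechanical nudge to simplify before review.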
I am trying super hard to give the tools to validate everything to AI.
I finish by opening a draft PR and then I go through doing a deep review myself.
If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.
You still need people who know stuff in detail and can own the code... for now
Reviewing code is pain, reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem and I usually discover a few cans of worms that make me rethink my approach.
Then I'll usually go and implement at least one piece of that. If I get stuck, I'll ask for some help. Then, once I'm happy with it, I'll ask the AI to review what I came up with. Then typically ask it to stamp the pattern around the codebase. And often to just iterate through writing out unit tests.
So I just did this for getting dense output from interpolants for an ODE integrator that I maintain. I did the work to make Tsit5 work by hand. I asked AI to stamp out the same pattern for DP5 and BS3, because it was just gene splicing those changes into a very similar RK integrator. I can review the diffs and see that it faithfully stamped out the same pattern with two prompts and a couple of minutes.
I'm still maintaining pretty strong contact with the codebase by doing a lot of my own programming, and fighting with the design while I'm writing that first piece of it, but then I use the AI to stamp out the mindlessly repetitive stuff.
That just seemed like the obvious way to me to go about programming with AI rather than pure-vibecoding and never touching anything other than prompts.
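For readers unfamiliar with the pattern being "stamped out" above: explicit Runge-Kutta integrators differ mostly in their coefficient tables. This toy sketch (not the commenter's actual integrator, and using classic midpoint/RK4 tableaus rather than Tsit5/DP5/BS3) shows why adding a method is largely mechanical once the generic step exists:

```python
# Each method is just a Butcher tableau (A, b, c) plugged into one generic
# step function; "stamping out" a new method means adding coefficients only.
TABLEAUS = {
    "midpoint": ([[0.0, 0.0], [0.5, 0.0]], [0.0, 1.0], [0.0, 0.5]),
    "rk4": (
        [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]],
        [1 / 6, 1 / 3, 1 / 3, 1 / 6],
        [0.0, 0.5, 0.5, 1.0],
    ),
}


def rk_step(f, t, y, h, method="rk4"):
    """One explicit Runge-Kutta step of y' = f(t, y)."""
    A, b, c = TABLEAUS[method]
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))
```

With this shape, reviewing the AI's "stamped" addition reduces to checking one coefficient table against the literature, which is exactly the kind of diff a human can verify quickly.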
Also, you probably run out of tokens a lot faster if you're pure-vibecoding.
Plus you should spend some time debugging your own code. Even if AI could find and fix a bug in a minute or three that would take you 20 minutes, it is generally going to be better for you to burn that 20 minutes on trying to fix it before asking for help.
Of course, unlike another poster in this comment thread, I never cheated in college and spent a lot of time on "academic" side projects that weren't part of any course I was taking.
Once the vibecoders and cheats are done spamming a billion lines of AI generated code into industry, there's probably going to be positions for people who can (with AI assistance) sort out the mess and get production stable again.
Scar tissue from production going down and staying down is probably powering those code reviews and I think will be teaching this wave of vibe projects a few hard lessons. I've had to learn a few things the hard way like this and it's as effective as it is painful.
I'm very pro ai-generated-software in the right context. I think being able to vibe out software as needed is awesome and could finally unlock the potential of our computer and data dominated world. I also think we haven't yet learned as a culture where this new thing is different from traditional software and misunderstanding that is where a lot of the pain will be felt.
There must be an epistemic problem with just how fast these SOTA models run. I don't think it's just that my local model is dumber; I think it's more that the speed of token gen trains my brain with different expectations. There's no way it'll just generate hundreds of files by itself. When it can, via an opencode loop with thought files, letting it run for a day is the only way you get that.
But the industry is changing around you fast.
If MIT-bred devs were already building crap in faang before, the trend has been getting nothing short of worse across the industry.
Expectations are rising; the field is becoming a rat race of which engineer can output the most mediocre/acceptable/good-enough features in the least time possible.
Let me make this clear: you're in an increasingly rarer bubble where you have a luxury that is disappearing in this industry, plain and simple.
I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.
They are also outputting an order of magnitude more than they ever did, and none of them is getting genuinely better at the craft, but it is what it is.
On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.
My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.
AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).
AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).
Coding - imo - is VERY low on the totem pole of engineering skills.
I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.
High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.
It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...
You probably should have the same level of proof whether you wrote it yourself and just "trust yourself, bro," or whether a Chinese Room wrote it for you.
I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.
This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.
It is not hard to understand what a line of code does...
It is hard to keep up with solving the problem I'm trying to solve...
After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.
So this could go in a bunch of different directions from here, but to summarize the current situation:
I think, right now, the toughest position is for new developers, and the best position is for people already in the market. For one thing, the efficiency gains are massive. Bigger than any other tool, for any other price. Our company's main product is a web-app. We've been working on a re-write of our main product over the last few years. In one afternoon, I set up a new project with our desired stack and was able to vibe-code an MVP of our product in a matter of hours. It wasn't perfect, of course, but I prompted feature after feature in bite-size prompts, each one taking between 5-10 minutes to complete. It looked pretty professional, and by any measure it was certainly "good enough." Given a little more time, I could solo ship and maintain what has taken us a few years to build as a small dev team. Unfortunately, this is more like a cheap "full team-replacement" than an efficiency-improving tool.
Then there's the non-technical CEO AI hype-train. Our CEO (and the rest of our directors) have fully embraced the Claude suite of agentic tools. They're all regularly spinning up mockups, apps, and toolchains every single day. I can tell they're addicted to it, and they see the gains first-hand. In fact, while it hasn't happened yet, I wouldn't be surprised if the CEO laid off the majority of the dev team and vibe-coded the entire app himself (along with a few experienced devs). For now, they hold the view that "AI is a multiplier, not a replacer!" and in the same sentence will say "if this allows us to go the next few years without hiring again, that's a win!" I was asked point-blank why we couldn't just vibe-code our whole app. I didn't really have an answer. Yeah, there's the nice thoughts like "we wouldn't know how to maintain our app" -- but Claude would do a decent job in a single dev's hands, or "AI will potentially change the application unintentionally and introduce bugs" -- but proper observability, testing, and further prompting could fix those things in minutes to hours.
Frankly, it just doesn't make sense for companies to keep their whole dev team around anymore. No matter how many projects you launch and initiatives you tackle, the backlog will rapidly shrink, while individual dev capacity grows to exorbitant heights. Non-technical CEOs don't care about tech-debt, cognitive debt, poor software design practices, learning to code, keeping devs smart, the joy of problem solving, the art of a good algorithm or architecture; they care about shipping a product that works reasonably well, provides value, is worth paying for, and doing so for the cheapest investment possible. Unfortunately, AI fits THAT bill in nearly every single way.
I'm hoping you're right, and that the sheer volume of software being created now will increase demand. I'm worried, though, that it will never be enough to offset the massive capacity gains we get from AI.
I do write initial proof of concept crude prototypes (not commented, hardcoded variables, etc), and AI does the productionizing of them. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining patterns used in the code base or even keeping them to industry best practices.
When using AI you will no longer be writing so much in programming languages; English, or whatever language you talk to the LLM in, will be the main language.
How much of this rote, mundane code do you honestly have in any given project?
I recently started a new job and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much slower than my peers who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack the necessary confidence in my own work, which just feels bad.
Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.
That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology is especially addictive for a certain type of person, and traps them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow in my skills to the point that I can delegate tasks to AI that are rote and easy for me to verify their results. It feels challenging, but it's necessary.
Do you think that in 2026 maybe rapid progress can also come from using the same primitives faster?
I'm still figuring this out but I'm certainly open to the possibility.
I try to be coding on complex issues at all times while offloading a boring, non-architecture, boilerplate-heavy, etc. task to it in the background in a git worktree.
I ask it to work in small iterations and commit every step of the way. After my coding session is done I can go back and review its code.
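The commit-every-step workflow above can be scripted. Here is a minimal sketch driving real git commands from Python's subprocess (the hard-coded "agent" identity is an illustrative assumption, not a convention of any tool):

```python
import subprocess


def checkpoint(repo: str, message: str) -> None:
    """Stage everything and record a checkpoint commit for later review."""
    subprocess.run(["git", "-C", repo, "add", "-A"], check=True)
    subprocess.run(
        ["git", "-C", repo,
         "-c", "user.name=agent", "-c", "user.email=agent@example.com",
         "commit", "-q", "-m", message],
        check=True,
    )


def log_oneline(repo: str) -> list[str]:
    """One line per commit, newest first -- the review trail."""
    out = subprocess.run(["git", "-C", repo, "log", "--oneline"],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip().splitlines()
```

Because each step is its own commit, reviewing the agent's session later is just walking `git log -p` instead of untangling one giant diff.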
Also, I feel like it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, I think our job has evolved to knowing which decisions an AI makes are good and which are bad, whether in code or design; there is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI. In this world you will be forced to communicate with them, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.
The other one is something I want to validate which is for those of us who are mediocre at coding, it might be a gift because it would free up some time and thus mind space to consider what we are actually good at.
I love coding, it always felt like Legos for adults. Not that Legos aren't also Legos for adults.
But there's no fighting the fact that we won't be writing 99% of the code anymore so I take pleasure in crafting the specs and requirements clearly, that's where I put the effort.
And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.
I'll be fully open sourcing that soon also https://engine.build
During the "don't make me think" era of software design, if you wanted to make software you got really good at identifying the use case and using design thinking to optimize the paths to goal. You could make a business around a very narrow set of flows. The only thinking a user had to do was pick The App for That. They never had to think about how they want to approach their task, which is a skill in itself.
AI isn't like that. There's a million ways to use it. That's a big part of what makes it cool, but it requires the user to thoughtfully approach their workflows. Not everyone is used to doing that.
I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.
This is where I'm at. I feel like I need AI to review everything.
I've learned an insane amount in a very short period of time, and have been engaging in much more challenging problems.
Instead of "what's the right syntax for this for loop again?" I'm asking "what's the business critical module in this system and how do I structure the test suite to prove it's working to spec?"
I think, if you're not feeling challenged, you're probably just doing the same work but faster. You should try to tackle harder problems, too!
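A sketch of what "prove it's working to spec" can look like in practice, using a hypothetical toy ledger module (all names here are invented for illustration): test the business invariant across many operation orders, not one happy path.

```python
import itertools


class Ledger:
    """Toy double-entry ledger: every transfer records a matched debit/credit."""

    def __init__(self):
        self.entries = []  # (account, amount); positive = credit in, negative = out

    def transfer(self, src, dst, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.entries.append((src, -amount))
        self.entries.append((dst, amount))

    def balance(self):
        return sum(amount for _, amount in self.entries)


def test_transfers_always_net_to_zero():
    # The invariant the business cares about, checked across every
    # ordering of operations rather than a single scripted scenario.
    transfers = [("a", "b", 5), ("b", "c", 3), ("c", "a", 7)]
    for perm in itertools.permutations(transfers):
        ledger = Ledger()
        for src, dst, amt in perm:
            ledger.transfer(src, dst, amt)
        assert ledger.balance() == 0
```

The point is the shape of the test, not the ledger: one assertion about a system-level invariant, exercised under permutation, catches whole classes of bugs that line-by-line unit tests miss.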
This thing will explode in our faces sooner or later. Also makes me feel like an imposter rather than an engineer.
Maybe that’s actually what I have become.
But why would anyone use AI to write documents or articles? Do you really respect your recipients so little that you can't be bothered to share your own thoughts?
I might as well get an AI to call my own mother on mother's day.
Yes -- now let's talk about the correct form of fighting back.
It is not "I don't want to feel self-doubt so I will suppress that feeling."
It is, "The self-doubt is valuable -- it's pushing me to improve."
The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.
I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.
I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.
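For anyone curious what the hand-written core of such a project looks like, here is a minimal sketch of a Chip-8 fetch/decode loop covering just three opcodes (6XNN, 7XNN, 1NNN from the spec); the remaining thirty-odd are exactly the kind of repetitive stamping an AI handles well.

```python
class Chip8:
    """Minimal Chip-8 core: registers, program counter, and three opcodes."""

    def __init__(self):
        self.V = [0] * 16        # registers V0..VF
        self.pc = 0x200          # programs load at 0x200 per the spec
        self.memory = [0] * 4096

    def step(self):
        # Fetch: each instruction is two big-endian bytes.
        op = (self.memory[self.pc] << 8) | self.memory[self.pc + 1]
        self.pc += 2
        x, nn = (op >> 8) & 0xF, op & 0xFF
        if op >> 12 == 0x6:      # 6XNN: set VX = NN
            self.V[x] = nn
        elif op >> 12 == 0x7:    # 7XNN: add NN to VX (no carry flag)
            self.V[x] = (self.V[x] + nn) & 0xFF
        elif op >> 12 == 0x1:    # 1NNN: jump to NNN
            self.pc = op & 0xFFF
        else:
            raise NotImplementedError(hex(op))
```

Once this skeleton exists and is understood, asking an AI to fill in the remaining opcode branches is a well-bounded, easily reviewed request.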
I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.
I have a pet theory that perhaps the optimal way to use AI will be more like an "exoskeleton" that turns you into a super-human programmer. Something that plugs the deficiencies of the human programmer, rather than replacing you entirely.
This sounds a lot like "You can skip the fundamentals of basketball and just focus on dunking!"
* actually writing more on my own - created a personal blog just to get myself to write more
* upleveling my thinking - think more about problems and framing
* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems
* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI
I wonder lately, doesn't all that new knowledge push out the old knowledge? As in, new things replace old things we know. I don't know of any studies on this, but do we have infinite capacity for knowledge?
What about retaining it? I catch myself asking AI about random things that pop into my head, reading the answer, maybe using that knowledge once, and later no longer remembering what it was. Maybe it would stick if you used that knowledge in practice from the get-go, but projects get so complicated that sometimes it seems like there is not enough space in my brain for the things AI is teaching me.
Another way of looking at what you said is that practicing the new knowledge takes the place of practicing the old knowledge. So it isn't the knowledge that is replaced, but the learning (imprinting).
Retaining (again just speaking for myself) requires actually using / applying the knowledge at some point within some timeframe of learning it. Otherwise yeah it fades to the point of disappearing over time.
I'm still concerned enough about the specifics to worry about background refresh tokens silently failing in OAuth in a mission-critical real-time system.
I'm not coding it, but I'm still thinking it. That's the important part, ain't it? Is it dumb, or just clever delegation?
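The refresh-token concern above is concrete enough to sketch. A minimal illustration (the `fetch_token` callable and all names here are hypothetical, not a real OAuth client library) of making background refresh fail loudly instead of silently serving a stale token:

```python
import time


class RefreshError(RuntimeError):
    """Raised when token refresh exhausts its retries -- page someone."""


class TokenManager:
    def __init__(self, fetch_token, max_attempts=3):
        # fetch_token is a stand-in for the provider call; it returns
        # (token, ttl_seconds) or raises on failure.
        self.fetch_token = fetch_token
        self.max_attempts = max_attempts
        self.token = None
        self.expires_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self.token is None or now >= self.expires_at:
            self._refresh(now)
        return self.token

    def _refresh(self, now):
        last_err = None
        for _ in range(self.max_attempts):
            try:
                token, ttl = self.fetch_token()
                self.token, self.expires_at = token, now + ttl
                return
            except Exception as err:  # surface failures, don't swallow them
                last_err = err
        # Loud failure beats silently handing out an expired credential.
        raise RefreshError(
            f"refresh failed after {self.max_attempts} attempts"
        ) from last_err
```

Whether a human or an agent wrote the surrounding system, the reviewable question stays the same: what happens on the third consecutive refresh failure?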
I like making things myself, I have self-navigating robotics projects I do on my own time, but I'm not gonna use an AI to do it for me, the joy I get is figuring it out myself.
I will use AI if I'm stuck on something or need a specific algo written that I've spent enough time on and couldn't figure out.
You lose some ways of thinking if AI does all the coding, unless you study that code. But that's still different from creating the code yourself.
Same with authors or songwriters. Their brain is used to create stories and songs that‘s why it’s easier for them to come up with ideas.
If you are just a reader or listener your brain doesn’t get wired in the same way.
AI is a lesser problem for already experienced developers, because they just lose some abilities but new developers will never get those abilities in the first place, which will limit their thinking especially for edge cases that need creativity
I do think that what people are being paid might get adjusted to whatever is happening.
First it was offshoring; now tech companies have a convenient excuse to lay people off, and they genuinely believe that companies can be 5-10x smaller with AI and that 90% of code will be written by AI.
They then push it on engineers, and some adopt it, some don't. It becomes Goodhart's law: people start spending tokens to look good and spearhead using AI because, hey, 1) corporate is recommending you do this, and 2) the points you talked about.
The AI bill blows up (Cloudflare spends $5 million per month, probably more, on AI bills, iirc), and with all of this, the company fires people.
The laid-off software engineers all then try to create another AI tool (...using AI) or try to out-compete one another while the job market is at one of its all-time lows. Combine this with the trillion-plus dollars of US stock market value attached to the AI bubble.
I do think that you are paying a price in all of this. Job insecurity feels like it's at an all-time high; from my understanding, people in this career are just scared of losing their jobs. Some are closer to retirement than others, but that's about it.
I think that nobody is that happy, to be honest: the software engineer is worried about his job, the CEO is worried about being replaced or his product being replaced by AI, the AI company is worried about how it would be profitable in the first place, the investors are worried that they got into a bubble, and the government is worried about all these people, while other distractions (think UFO files, for example) and wars are happening and successfully diverting our attention from real issues.
I don't know, but I think that we are all paying a price, and I say this as someone who sometimes feels the most over-empowered by AI (like a young guy in his teens). I just feel like we lost something more critical along the way. We lost some sense of our humanity and peace as we embed this technology and have people who think about it 24/7. To be honest, I sometimes feel like I would've fared okay without the AI thing too, and I don't care about my personal gains so much; I think the world would've probably been net positive if AIs had plateaued or were never created.
To do the thing I hate and use an analogy: It's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot to make the furniture, in English. Yes, the people who were already good at furniture-making will have an advantage in how to direct the robot - but the salient point is that it's a recipe for misery for many people.
Can you point an LLM at a body of code and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural language specifications to code causes the LLM blithering problem of generating too much code.
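A rough sketch of that intermediate-representation idea, using only Python's standard library: derive a compact, text-based structural outline of a module, which is closer to what an LLM consumes well than a rendered UML diagram. The output format here is an invention for illustration.

```python
import ast


def outline(source: str) -> list[str]:
    """One line per class/function: name plus signature, no bodies."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            bases = [ast.unparse(b) for b in node.bases]
            lines.append(f"class {node.name}({', '.join(bases)})")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = [a.arg for a in node.args.args]
            lines.append(f"def {node.name}({', '.join(args)})")
    return lines
```

Feeding an outline like this into a prompt gives the model the module's shape in a few hundred tokens instead of the full source, which is one cheap approximation of the "AI-readable spec" the comment imagines.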
I think they were making a joke about us getting dumber, and I'm confused about its premise.
You seem to be suggesting we are going to fill spreadsheets (which Claude already does pretty well) and that spatial reasoning is an insurmountable problem instead of just something that doesn't emerge naturally from training on text/code corpora.
Assembly is a stretch (albeit a few applications still need it), but otherwise that sentiment (and the people who actually believe it) speaks a lot to me about what makes today's PCs slower, laggier, and less enjoyable to use than the machines of the past. Today's world sucks.
2. we will think of verification loop more - tasks will be chosen that have more ability to be easily verified
3. the concept of the difference between "generation" and "verification" will be more mainstream [1]
4. spec driven development will become more common
5. scenario testing will become mainstream
I have a few more predictions like these.
[1] I wrote a blog post on this explaining why I keep this generation vs verification difference in many parts of life https://simianwords.bearblog.dev/the-generation-vs-verificat...
There are things that I think are very cool; there are lots of projects that I've sort of wanted to do for the last decade that I have pushed off because they're reasonably high effort and I don't want them that much, so being able to have a pretend intern write it for me has been great.
On the other hand, I do think that using Claude/Codex to do all the coding at work has become a little soul sucking. Now instead of being paid to do fun software work, a lot of my work still boils down to babysitting interns.
When I do get to work on projects that are interesting, it's still fun because I can justify writing TLA+, and using that as a guiding spec for my projects. The problem is that most work really isn't that interesting; a lot of it is glorified SQL queries, or CRUD, or "put thing into Kafka in one place, and take it out in another place". Those jobs can be tedious, but they aren't interesting, and now instead of even getting that, I yell at Codex to do it and I awkwardly sit and wait.
I didn't think I'd miss writing stupid CRUD apps, but here we are.
Codex and Claude have been a bit soul sucking. I feel like I'm doing less of the planning and the like. I acknowledge that most code that makes it into production doesn't have to be amazing, but I would still take some level of pride when I would figure out an interesting optimization, even for a simple CRUD app, and now I am somewhat deprived of that kind of stuff.
You need to spend time on coding without agents and writing without AI as practice if nothing else.
You should not get complacent in offloading all detail oriented work to agents.
And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?
I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.
Today I'm forcing myself to learn SwiftUI and type each character with my hands. There is a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibecode full apps, but I'm not gonna pretend that my experience isn't playing a massive role in guiding the models.
Don't let AI take away your joy for building stuff, it's totally fine not being "productive" and taking your time. Just force yourself to have, at least, 2 AI days off every week.
I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.
Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.
Sometimes the coding tools come back with better ideas than I came up with. Sometimes my idea is much better. Most often, with medium- to high-complexity problems, even when the AI comes up with a working solution, it has enough problems that an attentive human reviewer would reject it; at worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools for everything.
This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.
When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?
I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.
If coding a new feature, I do one step at a time and check the code: running git diff, reading the changes, or just asking Codex to show me what changed.
If writing an article, I ask for only one paragraph. I read the paragraph and, if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.
If doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results, so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify the results. If all looks good, I continue to the next step.
I don't like the situation where I ask the AI to do all the code changes, the whole article, or the whole data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking an AI to write a deep article with one prompt: you clearly see that it doesn't reflect your thoughts.
Maybe step-by-step is the approach to use AI and not feel dumber.
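The step-by-step loop described above can be sketched roughly like this. It is a minimal sketch, not real tooling: `ask_llm` and `looks_correct` are hypothetical placeholders for a model API call and the human review gate (reading the diff, checking intermediate results).

```python
# Minimal sketch of the one-step-at-a-time workflow described above.
# ask_llm and looks_correct are hypothetical placeholders; in practice,
# ask_llm would call a real model API and looks_correct would be the
# human (or scripted) verification step.

def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"result for: {prompt}"

def looks_correct(result: str) -> bool:
    # Placeholder for the review gate.
    return bool(result.strip())

def run_in_steps(steps: list[str]) -> list[str]:
    """Run each step through the model, stopping at the first bad
    result so only that one step has to be reworked."""
    accepted: list[str] = []
    for step in steps:
        result = ask_llm(step)
        if not looks_correct(result):
            break  # rework this step instead of regenerating everything
        accepted.append(result)
    return accepted

print(run_in_steps(["add the endpoint", "write the migration", "add tests"]))
```

The point of the loop is the early `break`: a bad intermediate result stops the run before later steps compound the error, which is exactly what the one-prompt-for-everything approach can't do.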
As for the topic at hand: I work with someone who, whenever you ask them a question, says "AI says..." I'm not a big fan of that.
The work rhythm has ballooned, and as every coworker is now pushing work (generally mediocre, but acceptable due to strong codebase fundamentals and them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver enough output to make stakeholders happy.
I am less and less fond of this work.
I'm sure there will be people with different experiences, but I've never worked as much as I did in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.
Every day, I lean more and more toward changing industries.
I love code and programming and solving product problems. But the job has changed dramatically.
If the pay-plus-comfort ratio weren't that good, I would've done it already.
It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.
Based on the MIT and MSFT studies.
1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.
2. another general AI: You have to @-mention it. It reacts to the message with an eyes emoji, but never actually posts a comment?
3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.
I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.
Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which does not let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. Literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature); no, instead I get English prose.
Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
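For reference, the GitHub feature wished for above (a suggested change) is just a fenced `suggestion` block inside a review comment attached to a diff line; the author can then apply it with one click. A bot that can post prose could, in principle, emit something like this instead (the replacement line here is a made-up example):

````
```suggestion
retries = min(retries, MAX_RETRIES)  # made-up replacement line
```
````

The body of the `suggestion` block replaces the commented line(s) verbatim when applied, which is why it only works on lines that are actually part of the diff.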
Most people, given a nail gun, can't build a house; that's where the skill is...
I'm not someone whose validation came from lines of code, but from the resulting working system.
And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.
Entering the workforce happens at an age where people have built (some more rudimentary than others) a level of understanding and self control regarding delayed gratification and Type II fun.
Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?
Do you really think phrasing a question like this will ever induce a productive response?
I think it's pretty normal to be able to reflect on the difference in life skills between myself and those I see in others. There are things I've struggled with throughout adulthood because through some happenstance I was able to avoid the class of challenge as a child.
I didn't learn how to study until my 20s. I didn't have willpower over eating and exercise until my body changed around 30 and I suddenly got fat. Then I talked with friends who teased me for being less skilled at something than a teenage version of themselves.
What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?
So, for example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back, I was able to accomplish it.)
Another example, the one biting me the most: I wanted to create a copy.sh/v86-based thing where you are able to edit the .img files of distros and save them all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.
This is just an optional project; I just thought, hey, it would be fun to edit .img files in the browser, but now I feel disappointed.
I think that disappointment is partly frustration at the thing not working and partly the realization that I might be dropping this idea altogether. I must admit this is a field in which I have absolutely no expertise at all, but it still feels disappointing, and I've kept thinking about it for some time now.
I wonder how many people feel that, when AI is unable to build their project, frustration, disappointment, and even a jolt of panic set in. I think it's just wrong how damn much we are relying on LLMs at this point. It feels like the whole economy is just doing what I am doing, but with billions of dollars.
Another thing I feel is that young and elderly people end up much the same when vibe-coding. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel like we are both forsaking junior developers and forsaking the expertise built up by senior developers as we replace them with these LLMs.
If you don’t do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that’s a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that’s a bit less harsh: you have the LLM flip back and forth between defending and prosecuting your work.
I think this should be the default setting, but it doesn’t encourage engagement, the average customer will think the LLM is a mean jerk if it starts off like that.