Discussion (36 Comments)

jwilliams · about 1 hour ago
> It’s Still Your Code... AI maximalists will read this section and scoff. They’re already vibe coding everything and have little to no idea what the generated code looks like.

This frames the argument as a dichotomy. And to be honest, using the social-media caricature of "vibe-coding" as a strawman risks anchoring against something that's a mirage.

There are plenty of good engineers getting good results whilst accepting code-ownership as a continuum.

> If Claude goes down tomorrow, can you still do your job?

This is a valid counterpoint, but building software already involves a tricky set of dependencies. The answer here isn't automatically "you need to be able to do everything yourself"; it could simply be "also use Codex".

I think the overall point is well made, I just don't agree with the absolute framing. There are things you can hand over to AI safely. Even if you start small and increment, it'll have a decent impact.

djoldman · about 1 hour ago
Is anyone actually at a company that is purposely trying to use a ton of tokens? It gets expensive really fast.
bormaj · 16 minutes ago
Yes, but also this particular company has the means to justify the expense. I think there's enough opportunity at scale in the industry to really change the business landscape.

Aside from that (and assuming a large enough sample size) I think it's a safe experiment to at least bet on finding profitable use cases. In 1-2 years, after this experiment runs its course, not everyone will have "unlimited" usage.

jwilliams · about 1 hour ago
I've personally had people talk about token leaderboards at their work. Amazon and Meta reportedly had them, but I'd take that with a decent grain of salt.

We all know it's such an insanely gameable metric you'd be insane to actually use it...

djoldman · 44 minutes ago
I came across a comedy clip where the employees are fighting over how many billion tokens they were using and assumed it was a joke.
adonovan · 10 minutes ago
It seems quite common for the infrastructure teams to put up a dashboard just to keep a sense of what is going on, but it is then misinterpreted as a “leaderboard” and encourages the most prolific users to find creative ways to squander more to stay the “winner”. Management is slightly disappointed by the waste but also happy that staff are engaging with their future replacements.
denkmoon · about 1 hour ago
For about 2 months, then I assume our fearless leaders saw the bill and wet themselves. Since then, Opus is off limits lol.
mikestorrent · about 2 hours ago
> You must understand what your AI generated code does

Absolutely yes.

> You must be able to do your job if your AI tooling disappears

Absolutely not.

Look, I'm an alright programmer. Not good, far from great. Interpreted languages work for me; add all that strong typing and compilation and it starts to go beyond what I'm interested in. Nonetheless, pre-AI, I have shipped many very functional, production-grade applications for many companies.

Now, I can write stuff in Go, and Rust, and it's fantastic. So much faster. The AI likes the strong typing, the testability, the predictability; it all makes total sense. I'm using this stuff all the time, but I haven't learned any Go; I'm too busy focusing on the parts the AI cannot do for me, like real requirements gathering, architecture, fit and finish, engaging stakeholders, etc., that still require the human touch. Maybe I could have learned some Go in that time, but at the end of the day my employer is paying me for results, not for my edification!

There are now huge parts of my job I cannot do without AI. Sure, it's like 800-1200 bucks a month of extra cost; ok; but with that extra low-5-figs a year of cost I am a much better employee in terms of my capabilities. It's easily delivering ROI for me, and therefore for my employer. Instead of sitting around wishing I had a Go developer to ask for help implementing a simple feature in a Terraform provider, I can just fork it and add what I need, try to submit it upstream for inclusion, etc. and the lack of language specific skills is no longer holding me back.

Take away the tool and I can't do that part of the job anymore, sorry. I can still do a lot, but slower, and honestly it would feel like going from a car back to walking, now; walking's fun, I do it recreationally for the sheer joy, but when there's hundreds of kilometres to cover in a short amount of time, the car is clearly the correct choice. So too is it with AI: we've invented the car for computers and only a fool would pretend he can do everything the same without it now.

ergonaught · about 2 hours ago
If you can't do the job without AI, you can't do the job.

Spoiler alert: if you can't do the job, you're not going to be doing the job much longer.

drodgers · about 1 hour ago
'If you can't build a TODO list app using only punchcards, then you can't do your job...'

Obviously our ambitions expand with better tools. I now commit to and deliver much more work than before LLMs, and, before then, ditto for frontend frameworks, fourth-generation languages, etc.

There are projects I now start without thinking twice that I never would have considered a few years ago.

That's what productivity looks like, and it makes you more valuable, and your job more secure (up until the ASI kills us all...).

16bitvoid · 29 minutes ago
It makes you less valuable and your job less secure. As LLMs improve, the level of knowledge and skill required goes down, putting more people at the level of "good enough", which is generally what companies optimize for in hiring over time: the least cost for good enough.

> There are projects I now start without thinking twice that I never would have considered a few years ago.

I'm sick of seeing this argument because it's not as persuasive as you think. If you were incapable of doing it before, why would I ever trust that you could properly evaluate the result? Even if I did, it's still like saying, "I never would've been able to do this project without a subordinate that knew how to do it, now look at me!" Okay? So why would I choose you when it sounds like I could pick anyone with basic programming knowledge to manage the subordinate since I clearly don't need someone with the know-how to do the thing, just someone capable of wrangling a coding agent? Might as well get the cheapest college CS graduate I can find.

physPop · 2 minutes ago
False dichotomy.
glhaynes · 8 minutes ago
Why? This doesn't follow at all.
daishi55 · about 1 hour ago
How is this different from saying “if you can’t do the job without the compiler, you can’t do the job”?
bigstrat2003 · 26 minutes ago
For one thing, compilers actually work and enable you to do useful things.
esafak · about 1 hour ago
AI allows you to do things you could not do before, so it is fair to say you can't do the new job without AI.
rsoto2 · about 1 hour ago
What if I can do everything the AI can, like read, interpret, and implement code (and not in a likely copyright-breaking way), but also reason about it better?
rsoto2 · about 1 hour ago
in before the mods accuse you of being "too mean"
bigstrat2003 · 25 minutes ago
I agree with the other posters. You can't actually do the job if you can't do it without a half-baked AI doing it for you.
ares623 · about 2 hours ago
A better analogy would be "the trebuchet for computers".

"but when there's hundreds of kilometres to cover in a short amount of time, the trebuchet is clearly the correct choice."

You point it in the rough direction and distance you want to go, pull the lever, see if you hit your mark, adjust, pull the lever again, etc.

And once you have dialed in the variables for that particular piece of rock that one time, you write it down in a "skill.md" file and announce to everyone on the team "this trebuchet has been carefully calibrated. Trust it with your other rocks too."

sublinear · about 2 hours ago
> only a fool would pretend he can do everything the same without it now

Unless you're working in a coding sweatshop, I don't see why you need AI to do what people have been doing for decades just fine without breaking a sweat.

What are you working on?

SeanAnderson · about 1 hour ago
Your competition's behavior necessarily affects you unless your company has an unassailable moat.

If other companies are able to tolerate larger amounts of tech debt while shipping new features faster then you'll be out of a job at some point when your company loses market share.

It's fine if you disagree with the idea that AI lets established companies ship faster. I'm not here to argue that. But I think it's pretty easy to empathize with "why might one need to change their behavior due to this new technology?"

sublinear · about 1 hour ago
> unless your company has an unassailable moat

Is not working in SV enough of a moat?

> If other companies are able to tolerate larger amounts of tech debt while shipping new features faster then you'll be out of a job at some point when your company loses market share.

I'm saying that B2B services are very common outside of SV and more focused on stability, compliance, long-term maintenance, and the operational knowhow that comes with all that rather than just shipping new features. It's not that there isn't some competition, but that the business is built on much more comprehensive partnerships than just being a software vendor. I can't believe I'm saying this, but "synergy" sometimes isn't just a meaningless buzzword.

When you try to jam "AI" into the mix, the disruption harms the business value. Many including myself would like to be enlightened if you think otherwise.

potsandpans · about 2 hours ago
> Unless you're working in a coding sweatshop

You are obviously unaware of what the Silicon Valley companies are asking for and committing to.

applfanboysbgon · 42 minutes ago
The same shit they've always been asking for, judging by what OpenAI and Anthropic are pumping out surrounding their models: bloated, buggy Electron apps that consume gigabytes of memory to display fucking <1kb of text. We are not witnessing better software, even from the people who have unlimited capital and unlimited access to frontier models and are true believers in its potential to replace engineers.
rsoto2 · about 1 hour ago
I can do everything the same without it, because I'm still not using it. Why would I want to be a guinea pig for the world's richest companies and atrophy my brain in the process?
rsoto2 · 40 minutes ago
uh oh you guys didn't realize you were guinea pigs for products that can permanently alter your mental health?
ktallett · about 1 hour ago
Academia is the place with the least coherent policy. In the few institutions whose AI rules I'm aware of, the guide is usually three lines long and amounts to "we don't promote usage of it", which is a meaningless phrase. So you end up with students who are not supposed to use it, unless they are international master's students who require it because of language barriers, in which case they can basically use it however they like, even if that makes a mockery of the rigour of a degree. Lecturers can use it as and when they wish; researchers either use it endlessly or not at all; and upper management uses it instead of their own brains.
ngriffiths · about 1 hour ago
As a fun exercise, replace "AI" with "junior" and "junior" with "mid-level". It holds up pretty well: as a manager you have responsibility for the work your team does, and "make everyone put in more hours for no reason" is dumb. Maybe it comes across as a bit neglectful of the "juniors" (in particular, it doesn't show any desire to figure out ways for AI/"the juniors" to grow their responsibilities in a sustainable way).

Imagine reading that version as someone who doesn't know how big companies work. "But then they'll just fire all the mid-level managers, since they don't do any of the actual work!" Haha, boy would you be wrong.

__MatrixMan__ · about 2 hours ago
For another type of incoherent policy: don't restrict your employees to 2025 models and then accuse them of being sticks in the mud when they say the models are inadequate.
riffic · about 2 hours ago
DORA.dev (DevOps Research and Assessment) also points to having a clearly communicated stance on AI as a foundational capability.

https://dora.dev/capabilities/clear-and-communicated-ai-stan...

lprimeisafk · about 2 hours ago
When I see "in the year of our Lord" I immediately tune out the writer. Almost as bad as "Unreasonable Effectiveness"