

Discussion (61 Comments) · Read Original on HackerNews

daishi55 · about 4 hours ago
> Dr. Jason Wingard is a globally recognized executive with deep experience across corporate, nonprofit, and academic sectors, specializing in the future of learning and work. He currently serves as Senior Advisor at Harvard University, where he advises trustees, senior administrators, and faculty leaders, and leads a research agenda on workforce transformation and innovation. He is also Executive Chairman of The Education Board, Inc. and Senior Advisor at Social Finance, Inc., providing strategic and visionary consulting while advancing a national research agenda on leadership and workforce development. He most recently served as the 12th President of Temple University, where he held dual tenured faculty appointments as Professor of Management and Professor of Policy, Organizational, and Leadership Studies. Previously, Dr. Wingard was Dean of the School of Professional Studies at Columbia University and Managing Director and Chief Learning Officer at Goldman Sachs. Earlier, he served as Vice Dean of the Wharton School, University of Pennsylvania; President & CEO of the ePals Foundation and Senior Vice President at ePals, Inc.; and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University. An award-winning author, Dr. Wingard has published widely on leadership, learning, and workforce strategy.

Not sure exactly what this guy does or what his expertise is, but I am fairly certain it’s not software development.

coldtea · about 4 hours ago
He's responsible for the great success of Silicon Graphics, Inc. (SGI)

> and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.

Incipient · about 4 hours ago
I'm sure a vibe coded internal or external application WILL break a company. The thought process is however, out of 10 companies:

- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

- 2 will partly use AI, and maybe keep up, maybe not

- 1 will go nuts, vibe-code an entire app, and explode (see the Tea app or whatever)

- 4 will have an inefficient app, suffer reputational damage, lose some money, or similar, but probably survive

- 1 will hit the jackpot and get a 100M ARR company with 4 people.

Stats are of course completely made up, but you get the point.

donatj · about 4 hours ago
I think it's more complicated than that.

Anything someone can vibe code that gains any level of mild traction can then easily be duplicated by all their competitors in a fraction of the time, because the actual hard part, determining the product's edges, has already been done for them.

2001zhaozhao · about 4 hours ago
> 1 will hit the jackpot and get a 100M ARR company with 4 people.

I will point out that at the point where you get a 100M ARR, it seems worth it to hire more people regardless.

But I'm guessing that the bar to be hired will be EXTREMELY high, because IMO the best people to hire in a future heavy-AI-automation era would basically be founder-level visionary leaders who are also subject matter experts and can consistently make good decisions, commanding 1M+ salaries in exchange.

If you have 100M ARR you can probably afford like 30 of these employees (and the probably exorbitant recruiting fees required to find them) and have them command AI all day. So your company will be extremely small in headcount but still more than 4 people.

(oh and how will this affect wealth inequality? i prefer to not think about it)

NitpickLawyer · about 4 hours ago
> 4 will have an inefficient app, suffer reputational damage

Have we been living in different realities? I can't remember any example of companies in the past 10 years that have suffered reputational damage related to their inefficient apps. And there have been plenty of inefficient apps...

Incipient · about 4 hours ago
Sorry, there should have been an 'and/or' clause in there.

By reputational I was thinking of leaking data, or generating wrong information for users, etc.

jvdp · about 4 hours ago
Sonos.
amonith · about 4 hours ago
I mean, a lot do get reputational damage (e.g. a lot of people hate Jira because of how slow it is, or Microsoft Teams, same story) - it's just that nothing comes of it, so "suffered" is perhaps the wrong word here. People curse them and still use them.
coldtea · about 4 hours ago
>- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

But why would they? As if their software being made faster is the differentiator?

In my career as a consumer (lol), choice was never about that. It was about the business proposition, pricing, quality of implementation, guarantees the company is gonna be there long term, them not being scumbags, and so on.

If anything, software churn put me off, especially when it came at the cost of messing with my established use, or stability.

sminchev · about 4 hours ago
I read it in a report: AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.

In all cases, a whole enterprise solution can't be made with pure vibe coding. A specification is needed: a basis of predefined rules, coding styles, security considerations.

2001zhaozhao · about 4 hours ago
> AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.

It also worsens the problem in general by making it way, WAY easier for the bad ones to performatively appear good. They'll have the better-sounding promises, but if you listen to them you'll crash and burn in a few years. This doesn't even have to be intentional: just someone technically ignorant channeling AI sycophancy while simultaneously playing politics (i.e. promotionmaxxing while delegating ideas to AI) will have this problematic effect.

netcan · about 4 hours ago
So the article isn't very good but the vibe coding debate is pretty interesting.

This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.

First question is: how much is more software actually worth to you?

This is one with a lot of self-deception. Software development is expensive. Companies have to-do lists and wishlists and roadmaps. They have an A/B testing system and a productivity mindset.

But... if LinkedIn, Salesforce, or whoever really did have ways of producing software to make money... they would have done it already. Remaining opportunities follow a diminishing marginal value curve/cliff.

Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.

The opportunity is at the upstart trying to bootstrap feature parity with Salesforce.

If you have no customers yet... you can unfetter the vibe and see if it works.

Imo companies need to revisit Google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens... that's a good way of searching for opportunities.

The thousand flowers died at Google because they had reached a point where opportunities are not everywhere. The best ideas had been discovered and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.

There's no way to do vibe coding safely, at scale, currently.

tossandthrow · about 4 hours ago
> how much is more software actually worth to you.

A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactoring.

I think this is the fundamental misunderstanding about agentic development: people only see it as a tool to add code.

coldtea · about 4 hours ago
>The thousand flowers died at Google because they had reached a point where opportunities are not everywhere.

It died because Google reached the enshittification, penny-pinching, rent-seeking stage.

chromacity · about 4 hours ago
Third evidently AI-generated "AI is bad" story in a day. I'm gonna lose it...
kombookcha · about 4 hours ago
Somebody else can spin up some AI-generated "AI is good" stories and post those in response. Maybe somebody will deploy respective agents to do both automatically.

The house always wins.

threatripper · about 4 hours ago
Are AI agents posting this fully aware that they are AI? If they are trained only on human material they may not even understand their own true reality.
16bitvoid · about 4 hours ago
It's dystopian. I wish we could just roll back to 2022 and pick a different timeline. Anything and everything is either about AI and/or written by AI, and it's all the shittier for it. Software and services are becoming buggy, content quality plowed straight through bedrock, most people use AI to turn off their brains, and the people that care are left drudging through slop and garbage in both their professional and personal lives.

I want off this train to hell. I am truly (not exaggerating) on the verge of abandoning everything to go live in the woods.

loveparade · about 4 hours ago
The more AI-generated "AI is bad" stories we get, the more likely LLMs are to produce more!
coldtea · about 4 hours ago
LLMs are told what to produce.

"Write me a 500 word post about how AI is great" and such shit.

2001zhaozhao · about 4 hours ago
> Speed without judgement is a liability

So, what's the alternative?

Speed without judgement? (Maybe you'll be fine. Or maybe your business gets run into the ground by spaghetti code piling up beyond any hope of human review, and quality controls breaking)

Judgement without speed? (That startup next door led by a four-person visionary team and a bunch of AIs stomps over your 100-person company in ability to ship)

Judgement + speed at the same time? (layoff most of your employees and keep only the visionaries? how do you even filter for people who can make good decisions?)

acron0 · about 4 hours ago
I think the judgement angle is the only interesting part of this article, and the piece worth pursuing is automating the judgement where possible.
slopinthebag · about 4 hours ago
> That startup next door led by a four-person visionary team and a bunch of AIs stomps over your 100-person company in ability to ship

That sounds right but is it actually true? By that I mean shipping faster. First mover advantage is a thing, but it's not the only thing, and that's also not the same as shipping additional features quickly.

I mean, Apple is famous for being purposely late to entire markets, and they're doing pretty well...

This mentality is just "move fast and break things", and just because it's a common trope in the SFBA doesn't make it effective across the board.

bluesaddoll · about 4 hours ago
These articles are such doomsaying, same as yesterday's clickbait. Again, the worst-case scenario is presented as the one that will surely happen to your company.
akmann · about 4 hours ago
Is anyone really vibe coding like this? I mean, if someone without any coding skills vibe codes a whole app, they can't expect it to be production ready... I think anybody with common sense should know this, right?
fragmede · about 4 hours ago
Define "production". You're not scaling to webscale on day one with a vibe coded app, but most apps never reach that anyway.
DubiousPusher · about 4 hours ago
I think it comes down to your team discipline. It can magnify your sins and your virtues.
pu_pe · about 4 hours ago
> The bottleneck in the AI era is not production. It is discernment.

> The right question to ask after a vibe-coded prototype fails is not what did the AI do wrong. It is what did our process miss.

> That is a governance story, not a software story.

> The Question Is Not Adoption. It Is Readiness.

> The right question is diagnostic, not strategic.

I don't know if AI will fully replace programmers, but it has already replaced writers of this type of bullshit puff piece.

aussieguy1234 · about 4 hours ago
The faster you go with vibe coding, the more of a mess you'll get yourself into
piloto_ciego · about 4 hours ago
This is just patently false in my experience thus far. I mean, I'm "vibe engineering" and know what I'm doing relatively well? But the way this works now is I'm more like an architect than a coder anymore. This means I can do things faster, but it also means it's less fun. But the customer doesn't really care about "fun" - so I do what I've gotta do.

But if anything, I could probably go a lot faster and be fine, it's just my life would be miserable. If you're going to "vibe code" try to remember to actually... you know... vibe.

spoiler · about 4 hours ago
The thing is, the development timeline is so compressed that you lose intimate knowledge of the codebase. Like, I don't think humans can form memories that detailed that quickly? Maybe it's just a me problem though. Anyway, when you need to debug or fix stuff, the AI's reasoning will be "welp, makes sense, I suppose" and your mental model of the codebase is now slippery. Eventually there comes a time when, at best, you can draw an incoherent high-level diagram of the architecture.

And the AI's solution to a problem is generally "more of the same". It rarely looks at fixing design problems.

slopinthebag · about 4 hours ago
> I'm more like an architect than a coder anymore

I don't understand this dichotomy. Coding is architecting, you can't divorce these things. In fact that is all it really is. It doesn't matter if you're writing assembly or python.

aussieguy1234 · about 4 hours ago
My definition of vibe coding is coding without review (for example, a non technical person vibe coding something). In the hands of a competent engineer the AI tools do boost productivity.

But even there, there is a limit to responsibility capacity: you can't have one engineer maintaining large numbers of systems at once, so if you moved fast you can still get yourself in trouble even with technical review.

I'd argue that doing vibe coding without a competent engineer reviewing the work is likely to have worse outcomes than drafting your own legal documents without consulting an actual lawyer.

Both are likely to result in nasty surprises in the future.

wewewedxfgdf · about 4 hours ago
More hysterical overreaction to AI.
Animats · about 4 hours ago
The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.

But give it the authority to do something and there's real trouble.

blurbleblurble · about 4 hours ago
"This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment."
piloto_ciego · about 4 hours ago
I don't know, my intuition since I started doing this software stuff professionally is that most people have dog piss judgment, most people are just making it up as they go, and "well thought out and planned well" is typically the enemy of actually getting anything done.

I don't know, I just feel like, "start building and the customers will tell you where the value is."

alfiedotwtf · about 3 hours ago
> vibe coding becomes a one-way ratchet. Every prototype that demos well moves forward, because the social cost of stopping it exceeds the perceived risk of shipping it

Ever seen a ratchet slip at high torque? That’s your marketing department shipping a vulnerable WordPress install connected to your internal customer database, as well as phpMyAdmin listening to the world on 8008.

slopinthebag · about 4 hours ago
I feel like there are a lot of really reductive and oversimplified arguments being made on both sides here. Vibe coding won't necessarily break your company, and rejecting AI similarly won't necessarily leave you behind. Neither the speed of development nor the quality of software seems particularly correlated with business success imo. Plenty of businesses exist which either ship slower than their competitors or produce much lower quality software, oftentimes both (hello Microsoft!). Is it crazy to think other things matter way more?

Like, is it wrong to think the variance in both velocity and quality between successful companies is just as large if not larger than the delta between AI usage and no AI usage?

What about a conservative approach to AI adoption, looking for a moderate boost in velocity but maintaining most existing quality? Would that not be ideal? Or might it depend on the specific market the company operates in?

OsrsNeedsf2P · about 4 hours ago
Obligatory "This is not an article by Forbes staff, and has a reputation bar so low it can't be used on Wikipedia"
zmmmmmabout 4 hours ago
This article is full of incoherent logic and conflation of different AI risks with one another.
immanuwell · about 4 hours ago
But the villain here isn't the marketing manager shipping fast; it's the leadership that clapped instead of asking the hard questions.