Discussion (61 Comments)
Not sure exactly what this guy does or what his expertise is, but I am fairly certain it’s not software development.
> and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.
But SGI also had quite a lot of software, including their OS (IRIX), imaging and 3D modelling libs and tools, and this little thing called OpenGL.
The whole resume just screams BS.
Err, SGI was one of the pillars of the industry.
You must be trolling? SGI is one of the legendary companies of Silicon Valley.
I'm betting he's never shipped a single line of code into a real production system in his life.
http://linkedin.com/in/wingardjason/
Note: Don't downvote or flag me for linking to his LinkedIn. Clearly he wrote this Forbes article to chase clout and influence like most Forbes writers. This is what he wants, for us to talk about him and his credentials.
- 2 won't use AI at all and will simply be left behind and stagnate (or go bust)
- 2 will partly use AI, and maybe keep up, maybe not
- 1 will go nuts, vibe-code an entire app, and explode (see the Tea app or whatever)
- 4 will have an inefficient app, suffer reputational damage, lose some money, or similar, but probably survive
- 1 will hit the jackpot and get a 100M ARR company with 4 people.
Stats are of course completely made up, but you get the point.
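For what it's worth, here is that made-up distribution as a toy tally (a sketch only; every number below is the commenter's invented figure, not data):

```python
# Toy tally of the made-up outcome distribution above: out of 10 companies,
# most muddle through, a few fail, and one outlier pays for the rest.
outcomes = {
    "won't use AI, stagnate or go bust": 2,
    "partly use AI, maybe keep up": 2,
    "vibe an entire app and explode": 1,
    "inefficient app, damaged but survive": 4,
    "jackpot: $100M ARR with 4 people": 1,
}

assert sum(outcomes.values()) == 10  # sanity check: the guesses cover 10 companies
for outcome, count in outcomes.items():
    print(f"{count}/10  {outcome}")
```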
I will point out that at the point where you get to $100M ARR, it seems worth it to hire more people regardless.
But I'm guessing that the bar to be hired will be EXTREMELY high, because IMO the best people to hire in a future heavy-AI-automation era would be basically founder-level visionary leaders who are also subject matter experts and can consistently make good decisions, and you'd give them $1M+ salaries in exchange.
If you have $100M ARR you can probably afford like 30 of these employees (and the probably exorbitant recruiting fees required to find them) and have them command AI all day. So your company will be extremely small in headcount, but still more than 4 people.
(oh and how will this affect wealth inequality? i prefer to not think about it)
Have we been living in different realities? I can't remember any example of companies in the past 10 years that have suffered reputational damage related to their inefficient apps. And there have been plenty of inefficient apps...
By reputational damage I was thinking of leaking data, generating wrong information for users, etc.
Anything someone can vibe code that gains any level of mild traction can then be easily duplicated by all their competitors, and in a fraction of the time, because the actual hard part, determining the product's edges, has already been done for them.
Even with network effects, it's still a race between you building an ecosystem and your competitors catching up to you.
However, if you DO have some sort of network effect moat and your competitors DON'T (yet), then you have the only advantage that matters in the world, because remember, vibe-copying goes both ways. You can copy your competitors feature-by-feature just like they can copy you. So you'll just always keep up feature parity while everyone only uses you because you're the established player with the biggest ecosystem, and soon enough you'll turn your temporary advantage into a permanent one.
Note: legacy platforms can't really benefit from this because you probably need to rewrite your product from scratch to fit any sort of cutting-edge AI dev workflow. Whoever creates an AI-native platform and scales it first wins.
Chances are that most projects that use vibe coding will fail, and chances are that most projects that succeed will use LLMs.
But why would they? As if their software being made faster is the differentiator?
In my career as a consumer (lol), choice was never about that. It was about the business proposition, pricing, quality of implementation, guarantees the company is gonna be there long term, them not being scumbags, and so on.
If anything, software churn puts me off, especially when it comes at the cost of messing with my established use, or stability.
In any case, a whole enterprise solution can't be made with pure vibe coding. A specification is needed: a basis of predefined rules, coding styles, security considerations.
It also worsens the problem in general by making it way, WAY easier for the bad ones to performatively appear good. They'll have the better-sounding promises, but if you listen to them you'll crash and burn in a few years. This doesn't even have to be intentional: just someone technically ignorant channeling AI sycophancy while simultaneously playing politics (i.e. promotionmaxxing while delegating ideas to AI) will have the same problematic effect.
This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.
The first question is: how much is more software actually worth to you?
This is an area with a lot of self-deception. Software development is expensive. Companies have to-do lists and wishlists and roadmaps. They have an A/B testing system and a productivity mindset.
But... if LinkedIn, Salesforce or whatnot really did have ways of producing software to make money... they would have done it already. Remaining opportunities follow a diminishing-marginal-value curve/cliff.
Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.
The opportunity is with the upstart trying to bootstrap feature parity with Salesforce.
If you have no customers yet... you can unfetter the vibe and see if it works.
Imo companies need to revisit Google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens... that's a good way of searching for opportunities.
The thousand flowers died at Google because they had reached a point where opportunities are not everywhere. The best ideas had been discovered and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.
There's no way to do vibe coding safely, at scale, currently.
A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactoring.
I think this is the fundamental misunderstanding about agentic development: people only see it as a tool to add code.
LLMs are not being used for code removal or refactoring; they're used either to “hopefully unblock” this large project that has been behind deadline for 12 months, or just to speed up development (somewhat).
You are right that they are not. And that is the issue, the misunderstanding.
It died because Google reached the enshittification penny pinching rent-seeking stage.
I want off this train to hell. I am truly (not exaggerating) on the verge of abandoning everything to go live in the woods.
The house always wins.
"Write me a 500 word post about how AI is great" and such shit.
What such stories would do is worsen the training data, so that we get more of that style of writing (rather than that angle).
So, what's the alternative?
Speed without judgement? (Maybe you'll be fine. Or maybe your business gets run into the ground by spaghetti code piling up beyond any hope of human review, with quality controls breaking.)
Judgement without speed? (That startup next door, led by a 4-person visionary team and a bunch of AIs, stomps over your 100-person company in ability to ship.)
Judgement + speed at the same time? (Lay off most of your employees and keep only the visionaries? How do you even filter for people who can make good decisions?)
That sounds right but is it actually true? By that I mean shipping faster. First mover advantage is a thing, but it's not the only thing, and that's also not the same as shipping additional features quickly.
I mean, Apple is famous for being purposely late to entire markets, and they're doing pretty well...
This mentality is just "move fast and break things", and just because it's a common trope in the SFBA doesn't make it effective across the board.
Very rough maths:
If your 100-person team still follows collaborative processes to cancel out errors (let's say it takes 10 people a day to decide on a single deliverable's shape) and then gives the design to the AI to implement (as we assume the AI can do it without supervision), then you can ship 10 deliverables a day.
At the same time, that 4-person team can have all of them bouncing ideas off of AIs all day to help them make decisions in rapid fire. They'll each individually spend an hour working on a decision, then hand it to an AI. Their decisions are on average as good as your 10-member team meetings: your medium-sized company's decisions sometimes end up suboptimal due to politics, while the startup's decisions come from individuals, who make the wrong call more often, and I assume these two effects cancel out. In that case, your competitor with 4 people cranks out 32 deliverables a day, assuming the implementation AIs don't have to be supervised at all.
In summary it's not "move fast and break things", it's just "move fast, focus on making decisions, delegate everything else to the AI". Remember that the decisions are all that matters if the AI can do all the implementation.
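A minimal sketch of that rough maths (the headcounts, hours, and the "decisions are the only bottleneck" premise are all the commenter's assumptions, not data):

```python
# Toy throughput model: if AI handles implementation unsupervised,
# shipping rate is just decision-making capacity.

def deliverables_per_day(people: int, person_hours_per_decision: float,
                         hours_per_day: float = 8.0) -> float:
    """Deliverables shipped per day when decisions are the only bottleneck."""
    return people * hours_per_day / person_hours_per_decision

# 100-person company: 10 people spend a full day (10 * 8 person-hours) on each decision.
big_co = deliverables_per_day(people=100, person_hours_per_decision=10 * 8)

# 4-person startup: each founder spends one hour per decision.
startup = deliverables_per_day(people=4, person_hours_per_decision=1)

print(big_co, startup)  # 10.0 vs 32.0 deliverables per day
```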
But it also makes some more fundamental assumptions that I'm trying to challenge a bit. It assumes delivering 32 "deliverables" per day (the meaning of which is context specific) is better than 10. Is that always true? Is that delta the most relevant factor in the success of a business compared to its competitors? Etc.
> The right question to ask after a vibe-coded prototype fails is not what did the AI do wrong. It is what did our process miss.
> That is a governance story, not a software story.
> The Question Is Not Adoption. It Is Readiness.
> The right question is diagnostic, not strategic.
I don't know if AI will fully replace programmers, but it has already replaced writers of this type of bullshit puff piece.
But if anything, I could probably go a lot faster and be fine; it's just that my life would be miserable. If you're going to "vibe code", try to remember to actually... you know... vibe.
And the AI's solution to a problem is generally "more of the same" to fix it. It rarely looks at fixing design problems.
But even there, there is a responsibility capacity: you can't have one engineer maintaining large numbers of systems at once, so if you move fast you can still get yourself into trouble, even with technical review.
I'd argue that doing vibe coding without a competent engineer reviewing the work is likely to have worse outcomes than drafting your own legal documents without consulting an actual lawyer.
Both are likely to result in nasty surprises in the future.
I don't understand this dichotomy. Coding is architecting; you can't divorce these things. In fact, that is all it really is. It doesn't matter if you're writing assembly or Python.
But give it the authority to do something and there's real trouble.
I don't know, I just feel like, "start building and the customers will tell you where the value is."
Ever seen a ratchet slip at high torque? That's your marketing department shipping a vulnerable WordPress connected to your internal customer database, as well as phpMyAdmin listening to the world on 8008.
Like, is it wrong to think the variance in both velocity and quality between successful companies is just as large as, if not larger than, the delta between AI usage and no AI usage?
What about a conservative approach to AI adoption, looking for a moderate boost in velocity but maintaining most existing quality? Would that not be ideal? Or might it depend on the specific market the company operates in?