
Discussion (184 Comments)
I sent them an email and they threatened to sue me. I was a bit in shock from such a dumb response, but I guess some people only learn the hard way, so I filed a report to the AEPD (the data protection agency in Spain) for starters, known to be brutal.
I've also sent them a burofax demanding the removal of my data on their systems just last friday.
I have a feeling that next year's theme will be security. People have turned off their brain when it comes to tech.
Nice. I wish more countries had something like that. Many of these organizations are lethargic and have to be forced into action by civilian efforts or the press.
I think that having paper documentation will be safer very soon :)
It is just a matter of time before something really, really bad happens.
It kinda falls apart once you get past a few thousand lines of code... and real systems aren't just big, they're actually messy...shit loads of components, services, edge cases, things breaking in weird ways. Getting all of that to work together reliably is a different game altogether.
And you still need solid software engineering fundamentals. Without understanding architecture, debugging, tradeoffs, and failure modes, it's hard to guide or even evaluate what's being generated.
Vibe-coding feels great for prototypes, hobby projects, or just messing around, or even some internal tools in a handful of cases. But for actual production systems, you still need real engineering behind it.
As of now, I'm 100% hesitant to pay for, or put my data on systems that are vibe-coded without the knowledge of what's been built and how it's been built.
The quick fix was a simple, empty index.html file (or setting the -Indexes option in the apache config). The surgeon had no idea what this meant or why it was important. And the AI bots didn't either.
The odd part of this to me was that the AI had made good choices (strong password hashes, reasonable DB schema, etc.) and the app itself worked well. Honestly, it was impressive. But at the same time, they made some very basic deployment/security mistakes that were trivial. They just needed a bit of guidance from an experienced devops security guy to make it Internet worthy, but no one bothered to do that.
Edit: I do not recommend backing up web apps on the web server itself. That's another basic mistake. But they (or the AI) decided to do that and no one with experience was consulted.
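For anyone wanting to spot this class of exposure before an attacker does, here is a minimal Python sketch (the function name and path layout are hypothetical, not from the app in question) that walks a web root and flags directories Apache would auto-index because they lack an index file:

```python
import os

def dirs_without_index(docroot, index_names=("index.html", "index.php")):
    """Walk a web root and report directories that lack an index file.

    With Apache's `Options +Indexes` in effect, such directories get an
    auto-generated listing, which is exactly how backup files sitting
    next to the app become discoverable by anyone browsing the URL.
    """
    exposed = []
    for current, _dirs, files in os.walk(docroot):
        if not any(name in files for name in index_names):
            exposed.append(current)
    return exposed
```

For each directory it reports, either drop in an empty index.html or, better, set `Options -Indexes` in the Apache config - and, per the edit above, move the backups off the web server entirely.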
They got the application right, and began stumbling with the integration - created a developer account, got the API key, but in place of the application's URL, they had input "localhost:5345" and couldn't get that to work, so they gave up. They never asked the tech team what was wrong, never figured out that they needed to host the application. Some fundamental computer literacy is the missing piece here.
I think (maybe hopeful) people will either level up to the point where they understand that stuff, or they will just give up. Also possible that the tools get good enough to explain that stuff, so they don't have to. But tech is wide and deep and not having an understanding of the basic systems is… IMO making it a non-starter for certain things.
A simple "I'm a professional surgeon, but sadly know nothing about making software" would definitely make the conversation play out differently. How? Needs to be seen. But in an idealized scenario (which could easily become real if models are trained for it), the model would coach the (self-stated) non-expert users on the topics it would ordinarily assume the (implicitly self-stated) expert already knows.
Disabling auto-indexing just makes it harder to spot the issue. (To clarify, also not a bad idea in principle, just not _the_ solution.) If the file is still there and can be downloaded, that's strictly something which should not be possible in the first place.
How I see it happening: agents like CC have built-in skills for deployment and use building blocks from either AWS or other simpler providers. Payment through OAuth and seamless checkout.
This should be standardised
Gone should be the days of "I taught myself so now I can [design software in a professional setting / design a bridge in a professional setting]." I'm not advocating gatekeeping - if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you're gonna need professional accreditation. Same should be true for software engineering now.
The solution to the problem you're describing is the very thing you're claiming exists but which doesn't for software engineering!
High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that's not a substitute for managing the industry properly.
We already have laws in place, and certifications that help someone understand if a given organization adheres to given standards. We can argue over their validity, efficacy, or value.
The infrastructure, laws, and framework exist for this. More regulation and bureaucracy don't help when the current state isn't enforced.
In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.
We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.
Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.
The mishmash of laws we have around data and privacy are not professional standards, nor are they accreditation. We don't have the framework or laws around this. And I am not aware of the USA (federal level) or Europe (or member states) or China or Russia or India or etc having this.
For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There are no laws covering any of this stuff. Just good-ol' GDPR and some sector-specific laws here and there trying to keep people mildly safe.
This is the top!
This is a typical example of someone using coding agents without being a developer: using AI without understanding what it's doing is a huge risk.
AI used for professional purposes (not experiments) should NOT be used haphazardly.
And this also opens up a serious liability issue: the developer has the perception of being exempt from responsibility and this also leads to enormous risks for the business.
The timelines mentioned are weird - did he speak to them before they built it, or after? It's not that clear; he mentions they talked about watching a video.
> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline.
This is not my experience of how agents tend to build at all. I often _ask_ them to do that, but their tendency is to use a lot of files and structure.
> They even added a feature to record conversations during appointments
So they have the front-desk laptop in the doctor's room? Or they were recording conversations anyway and now they feed them into the system afterwards?
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
Also definitely not the normal way an agent would build something - security flaws yes, but this sounds more like someone who just learnt coding or the most upvoted post of all time on r/programmerhorror, not really AI.
Overall I'm skeptical of the claims made in this article until I see stronger evidence (not that I'm supporting using slop for a medical system in general).
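The quoted failure mode - access control living only in client-side JavaScript - can be sketched in a few lines. This is a hypothetical Python stand-in (the handler names, token store, and data are all invented for illustration, not from the actual app):

```python
# Hypothetical in-memory "database" of sensitive records.
PATIENTS = {"p1": {"name": "Jane Doe", "notes": "confidential"}}

def vulnerable_handler(patient_id, request_headers):
    # What the app effectively did: the only "access control" was in the
    # browser's JavaScript, so the server hands data to anyone. A curl
    # request never runs that JS, so there is no check at all.
    return PATIENTS.get(patient_id)

VALID_TOKENS = {"s3cret-session-token"}  # stand-in for a real session store

def fixed_handler(patient_id, request_headers):
    # The check must run on the server, against state the client can't forge.
    token = request_headers.get("Authorization", "")
    if token not in VALID_TOKENS:
        return None  # would be a 401 response in a real app
    return PATIENTS.get(patient_id)
```

The first handler returns records to an empty request; the second refuses anything without a server-validated token. That gap is the whole "one curl command away" problem.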
> Welcome to VibeToolX.
> By pressing Confirm you accept all responsibility for user data stewardship as regulated in every country where your users reside.
Would that be scary enough to nudge some risk analysis on the user's part? I am sure that would drop adoption by a lot, so I don't see it happening voluntarily.
My first office job was as an AutoCAD/network admin at a large Civil and Structural engineering firm. I saw how seriously real engineering is taken.
When I brought up your argument to my FAANG employed sibling, he said "well, what would it take to be a real software engineer in your mind?"
My response was, and always will be: "When there is a software Professional Engineer stamp, with the engineer's name on it, which carries legal liability for gross negligence, then I will call them "Software Engineers."
There are going to be a lot of accidents like this because it's just really easy to do. And some people are inevitably going to do silly things.
But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly worded emails with the subject "I love you". Putting CDs/USB drives in work PCs with viruses, worms, etc.
That's what people do when you give them useful tools with sharp edges.
It's reminiscent of the 90s, where every middle manager had dragged and dropped some boxes on some forms, and could get a salesman to sell it, without a care in the world for what was going on behind the scenes.
Until something crashed and recovery was needed, of course.
The piper always needs to be paid.
https://archive.ph/GsLvt
https://web.archive.org/web/20260331184500/https://www.tobru...
Edit: the archive.ph one works for me :)
This is going to get more common (state sponsored hackers are going to have a field day)
Otherwise, you end up simulating production. A lot of the non-technical folks building products with AI vibe coding are basically building product simulations. It looks like a product, functions like a product, but behind the scenes, you can poke holes.
Does the company which willingly sells the polymorphic virus editor bear any responsibility, or is it incumbent on the unaware vibe coder?
AI is a tool. It's not intelligent, and it works at a much bigger scale than bricks and mortar, but it's still just a tool. There's lots we can blame AI companies for, but abuse of the tool isn't a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent - eg doctors) for using it outside their ability. Much like doctors are fed up of patients using AI to try to act like doctors, software engineers are now finding out what it's like when clients try to use AI to act like software engineers.
I might be biased by my experience, because we actually cared about GDPR and the AI Act and proper medical data processing, and I've spent my fair share of time investigating the options that exist. Still, I'm struggling to imagine how one could possibly screw it up anywhere near as badly as the article described. Like, I can't think of a way to do it, to the point I might need to ask an LLM to explain it to me.
--
[0] - Not as a means of developing an actual product, but solely to see if we can, plus it was easier to discuss product ideas while having some prototypes to click around.
unless cybersecurity is such a dynamic practice that we can't create automated processes that are secure
Essentially a question of what can be done to make vibecoding "secure enough"
https://news.ycombinator.com/item?id=47707681
I created a GCP project (my-app-dev) for exploring how to scale up the cloud service. I added several resources to mock production, like compute instances/Cloud SQL/etc, then populated the data and ran several benchmarks.
I changed the specs, number of instances and replicas, and configs through gcloud command.
But for some reason, at one point codex asked to list all projects; I couldn't understand the reason, but it seemed harmless, so I approved the command. And after this, for whatever reason, it changed the target project from the dev one (my-app-dev) to production (my-app) without asking and without me realizing.
Of course I checked every command. I couldn't YOLO while working on cloud resources, even in a dev environment. But I focused on the subcommands and their contents and didn't even think it had changed the project ID along the way.
It continued to suggest more and more aggressive commands for testing, and I approved them brain-dead...
It took a shamefully long time to realize codex was actually operating on production, so I had been DDoSing and SQL-injecting my own production environment... Fortunately, it didn't do anything irreversible. But it was one of the most terrifying moments in my career.
Changes to production should:
1. Only be allowed via CI/CD
2. All infra should be defined as code
3. Any deployment to production should be a delayed process that also has a human-approval step in the workflow (at least one, if not more)
(Exactly where that review step is placed depends on your organisation - culture, size, etc.)
And anyone that does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminal. This last aspect is easy and cheap to set up on AWS. I presume it's also possible in Google Cloud.
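As a rough illustration of that kind of guardrail, here is a hypothetical Python review step (the project IDs are borrowed from the story above; the function and its policy are invented) that an approval wrapper could run before letting an agent execute a gcloud command:

```python
import shlex

PROD_PROJECTS = {"my-app"}        # never touchable by an agent
ALLOWED_PROJECTS = {"my-app-dev"}  # explicit allowlist for dev work

def review_gcloud_command(command, active_project):
    """Return (approved, reason) for an agent-proposed gcloud command.

    Honours an explicit --project flag if present, otherwise falls back
    to the active gcloud config - the exact thing that got silently
    switched in the story above.
    """
    tokens = shlex.split(command)
    target = active_project
    for i, tok in enumerate(tokens):
        if tok == "--project" and i + 1 < len(tokens):
            target = tokens[i + 1]
        elif tok.startswith("--project="):
            target = tok.split("=", 1)[1]
    if target in PROD_PROJECTS:
        return False, f"refusing: targets production project {target!r}"
    if target not in ALLOWED_PROJECTS:
        return False, f"refusing: unknown project {target!r}"
    return True, "ok"
```

The point is that the check is mechanical and runs on every command, so a human approving "one more benchmark" can't accidentally wave through a project switch they didn't notice.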
AI companies realized that all this vibe coding has released a shitstorm of security vulnerabilities into the wild and so unless they release a much better model to fix that mess they'll be found out and nobody will touch AI coding with a 100ft pole for the next 15 years. This article points more towards this narrative.
Several years ago I ran into an American company which consulted with me. They had 4000 paying customers and had rolled out their own billing solution which accepted crypto, PayPal and Stripe.
They had problems with payments going missing; I migrated them to WHMCS with hardening and they never had any issues after.
Now people may laugh at WHMCS, but use the right tool for the job.
If you need a battle-tested billing solution then WHMCS does count: it can support VAT, taxes, reporting/accounting and pretty much everything you'll get wrong while trying to do it all yourself.
Too bad there aren't battle-tested open-source solutions for this.
It sorta should do this anyway given that the user intent probably wasn't to dump everyone's data into Firebase or whatever.
I personally would like this as well since it gets tiring specifying all the guardrails and double-checking myself. Using this stuff feels too much like developing a skill I shouldn't need while not focusing on real user problems.
Vibe coding is just a bad idea, unless you're willing and able to vet the output, which most people doing it are not.
In these cases you donāt necessarily care too much about the code itself, as long as it looks reasonable at a glance.
Let's really hope they learned from their mistakes.
I don't think solving this is all that complicated, at least for now. It isn't like it's currently difficult to tell what is and isn't LLM word salad, though that will likely change in the future, but by then the argument will involve whether it really matters or not. But for now, when 80% of the submissions are LLM garbage and it really is garbage, it's pretty jarring.
Lack of security theater is a good thing for most businesses
I'm not even sure what your last comment means, are you contending that it is a good thing this company violated multiple laws with sensitive patient data?
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
They are not the same thing.