Discussion (191 Comments)

spaniard89277 (about 8 hours ago)
I did something similar with a local company here in Spain. Not medical, but a small insurance company. Believe it or not, yes, they vibecoded their CRM.

I sent them an email and they threatened to sue me. I was a bit in shock at such a dumb response, but I guess some people only learn the hard way, so for starters I filed a report with the AEPD (the data protection agency in Spain), which is known to be brutal.

I also sent them a burofax demanding the removal of my data from their systems, just last Friday.

victornomad (about 6 hours ago)
A similar thing happened to me back in the day when Wi-Fi was still new.

I joined an open network and it turned out to be a law firm. All their computers were on a Samba network with full C: drives shared. I wrote README.txt files on their drives telling them about the issue, but after some time it was still the same.

Then I went directly to the place to talk to them, also with the idea that I could land my first job fixing that mess. But... they got incredibly angry with me: they claimed they had some very good and expensive contractors taking care of their computers and network, and that I had basically broken in.

I left the place quickly...

embedding-shape (about 6 hours ago)
At one point I worked as a customer support agent, outsourced to Apple via another company. Apple forced us to use some very outdated browser UIs, basically for filling in forms, across maybe 4-5 different services in some cases. The machines we were given by this outsourcing company were, of course, Apple computers, fairly locked down.

But one thing they hadn't locked down was installing extensions in Safari, and since I had some development chops from coding a bunch in my free time, I saw the opportunity to write a tiny extension that saved me a ton of time by merely copy-pasting stuff into the right forms and so on. Basically making the whole thing more efficient for me.

Everything was great, until the person next to me saw I had something different. Cautiously eager, I let them try the extension too; they loved it, and without thinking about it, spread it to other people on our team. Eventually, the manager and the IT team picked up on what was going on and said they'd investigate whether I could maybe start doing that kind of thing full-time instead of being a support agent, and just focus on tooling.

Fast forward two weeks: I get called into a meeting. Apparently someone in the company had been "stealing" CC numbers from customers on calls, and since they didn't think they'd found the right person (or something like that), the person known for "doing stuff to the computers" was the next possible suspect, and they fired me right there.

Eventually this firing led me to find my first actual programming job, so I'm not too mad about it, but it really shows how out of touch lots of companies and people are when it comes to how computers actually work.

randomeel (about 6 hours ago)
Hope you are doing better now
fainpul (about 7 hours ago)
> AEPD […] known to be brutal.

Nice. I wish more countries had something like that. Many of these organizations are lethargic and have to be forced into action by civilian efforts or the press.

bjoli (about 7 hours ago)
AEPD are well known, even in the rest of the world. They have a different strategy compared to other countries. Ireland's DPC are also heavy-handed, but focus mostly on large companies.

France's CNIL is also not bad. They are particularly hard on things like "you accidentally sign up for x, y, z services when only wanting to sign up for service A".

GDPR in the EU is also miles ahead of what the US has, or at least of what the US has been enforcing, for a long time.

rsynnott (about 5 hours ago)
> Ireland's DPC are also heavy handed, but focus on large companies mostly.

Also, generally, very, very, VERY slow. The massive fines you hear about are usually for behaviour _years_ ago.

fakedang (about 7 hours ago)
Is the CCPA anywhere near that level?
abc123abc123 (about 7 hours ago)
That's wonderful! Most of Europe's GDPR/data protection authorities are completely worthless and seem to constantly side with big corps.

Only when they start to side with the people, actually fining businesses billions and billions, will things start to change. I hope we'll see this happen in Europe at large, and not only in a few countries.

embedding-shape (about 7 hours ago)
> That's wonderful! Most of Europe's GDPR/data protection authorities are completely worthless and seem to constantly side with big corps.

AFAIK, most of them seem to act at least every now and then, judging by https://www.enforcementtracker.com/. Are there any specific countries you're thinking about here?

In particular, Romania, Italy and Spain seem to have had lots of cases.

darkwater (about 7 hours ago)
Can you keep us updated in this thread on how it evolves?
ramon156 (about 8 hours ago)
You only burn your hand once, unless you're a company, then you never learn.
thisisit (about 4 hours ago)
People building these apps often have no idea about various data privacy rules.

I am part of a forum with many small business owners. One particular owner has been gung-ho about how he built his entire business app using vibe coding. And my first reaction was - All the power to him. It’s his business and he is free to do so.

But then came the question of data privacy rules, and he had no clue. This was concerning because the impact went beyond his business. His response, when the oversight was pointed out to him, was that being ignorant of the law would be enough to save him. Still, he went to one of the vibe coding subreddits to get help, then came back fuming because the devs on Reddit told him to hire real developers. He believes these developers are delusional and a dying breed, and that AI is so far ahead that developers will be dead in a year's time.

ramon156 (about 8 hours ago)
I'm also curious how much effort it would be to set up some OWASP tools with an agent and crawl for company tools. I'm sure I'm not the first one to think of this, but for local businesses it would earn a solid rep, I suppose.

I have a feeling that next year's theme will be security. People have turned off their brains when it comes to tech.

petesergeant (about 8 hours ago)
> [burofax is] a service that allows you to send a document with certified proof of delivery and confirmation of the date of receipt, and this confirmation has legal validity
franktankbank (about 3 hours ago)
You rule.
sixtyj (about 8 hours ago)
They should give you a chocolate at least.

I think that having paper documentation will be safer very soon :)

zzyzxd (23 minutes ago)
Vibe coding is fun, but I can't trust it to make any serious decisions. Like, it knows the best way to do a thing, but when it encounters challenges, it starts making all kinds of excuses to cut corners, just like humans: "but honestly, it's cluster-internal traffic, so unencrypted traffic is fine", "given the urgency and tight timeline, your best option is bypassing the pipeline and deploying manually", "per my research, XXX also did this, so you are fine".

If I didn't have discipline or principles, or if I were simply technically incompetent, its suggestions would sound so reasonable.

delis-thumbs-7e (about 8 hours ago)
Meanwhile on LinkedIn… every sales bozo with zero technical understanding is screaming at the top of their virtual lungs that everything must be done with AI and that it is the solution to every layoff, every economic problem, everything.

It is just a matter of time before something really, really bad happens.

funkyfourier (about 7 hours ago)
The Hindenburg of coding.
m4rtink (about 6 hours ago)
The Hindenburg worked for years and traveled tens of thousands of kilometers before the disaster. They were also quite aware of the risks of a hydrogen airship and put all sorts of mitigations in place to avoid them.

Compared to that, vibe coding has no such qualities.

monkeydust (about 7 hours ago)
Looks like bad stuff is happening already; "really bad" is a bit scary if you qualify that as a threat to life or livelihood. Let's see what the next generation of models brings to this equation.
freakynit (about 7 hours ago)
I think vibe-coding is cool, but it runs into limits pretty fast (at least right now).

It kinda falls apart once you get past a few thousand lines of code... and real systems aren't just big, they're actually messy: shitloads of components, services, edge cases, things breaking in weird ways. Getting all of that to work together reliably is a different game altogether.

And you still need solid software engineering fundamentals. Without understanding architecture, debugging, tradeoffs, and failure modes, it's hard to guide or even evaluate what's being generated.

Vibe-coding feels great for prototypes, hobby projects, or just messing around, or even some internal tools in a handful of cases. But for actual production systems, you still need real engineering behind it.

As of now, I'm 100% hesitant to pay for, or put my data on, systems that were vibe-coded without knowledge of what's been built and how it's been built.

consumer451 (about 6 hours ago)
There are all kinds of memory hacks, tools that index your code, etc.

The thing I have found that makes things work much better is, wait for it... Jira.

Everyone loves to hate on Jira, but it is a mature platform for managing large projects.

First, I use the Jira Rovo MCP (or cli, I don't wanna argue about that) to have Claude Code plan and document my architecture, features, etc. I then manually review and edit all of these items. Then, in a clean session, or many, have it implement, document decisions in comments etc. Everything works so much more reliably for large-ish projects like this.

When I first started doing this in my solo projects it was a major, "well, yeah, duh," moment. You wouldn't ask a human dev to magically have an entire project in their mind, why ask a coding agent to do that? This mental model has really helped me use the tools correctly.

edit: then there is context window management. I use Opus 4.6 1M all the time, but if I get much past 250k usage, that means I have done a poor job in starting new sessions. I never hit the auto-compact state. It is a universal truth that LLMs get dumb the more context you give them.

I think everyone should implement the context status bar config to keep an eye on usage:

https://code.claude.com/docs/en/statusline
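For anyone wanting to try this, the statusline setup the linked docs describe boils down to a small entry in Claude Code's settings file pointing at a script; the exact path and script name below are illustrative, not canonical:

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh"
  }
}
```

Per the linked page, the configured command receives session info as JSON on stdin and its output is rendered as the status line at the bottom of the terminal, so a short script can surface the model name and context usage.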

orwin (about 4 hours ago)
But even spec-first, using Opus 4.6 with plan mode, the output is merely good, not great. It isn't bad, though, and the fixes are often minor, but you _have_ to read the output to keep the quality decent. Notably, I found that LLMs dislike removing code that doesn't serve an active purpose. Completely dead code they remove, but if the dead code has tests that still call it, it stays.

And small quality stuff. Just yesterday it used a static method where a class method was optimal. A lot of very small stuff I used to call my juniors on during reviews.

On the other hand, it used an elegant trick to make the code more readable, but failed to use the same trick elsewhere for no reason. I'm not saying it's bad: I probably wouldn't have thought of it myself, and would have kept the worse solution. But even when Claude is smarter than I am, I still have to review its work.

(All the discourse around AI did wonders for my imposter syndrome, though.)

EdNutting (about 5 hours ago)
Doesn't require Jira but yes, specification-first is the way to get better (albeit still not reliably good) results out of AI tools. Some people may call this "design-first" or "architecture-first". The point is really to think through what is being built before asking AI to write the implementation (i.e. code), and to review the code to make sure it matches the intended design.

Most people run into problems (with or without AI) when they write code without knowing what they're trying to create. Sometimes that's useful and fun and even necessary, to explore a problem space or toy with ideas. But eventually you have to settle on a design and implement it - or just end up with an unmaintainable mess of code (whether it's pure-human or AI-assisted mess doesn't matter lol).

consumer451 (about 5 hours ago)
I used to manually curate a whole set of .md files for specs, implementation logs, docs, etc. I operated like this for a year. In the end, I realized that I was rolling my own crappy version of Jira.

One of the key improvements for me when using Jira was that it has well defined patterns for all of these things, and Claude knows all about the various types of Jira tickets, and the patterns to use them.

Also, the spec driven approach is not enough in itself. The specs need sub-items, linked bug reports and fixes. I need comments on all of these tickets as we go with implementation decisions, commit SHAs, etc.

When I come back to some particular feature later, giving Claude the appropriate context in a way it knows how to use is super easy, and is a huge leap ahead in consistency.

I know I sound like some caveman talking about Jira here, but having Claude write and read from it really helped me out a lot.

It turns out that dumb ole Jira is an excellent "project memory" storage system for agentic coding tools.

Shorel (about 7 hours ago)
It absolutely falls apart more often than not. And it requires even better engineering practices than before, because people are just accepting the code changes without understanding the technical debt they create. On this I agree.

There are models that can be run locally. This morning I tested Gemma 4 running on 128 GB of RAM. It was very slow (like 20 minutes to refactor something instead of 20 seconds), but it seems to be as capable as the paid models that run on an expensive cloud subscription in one of these hated data centers. And no data is uploaded to them.
simianwords (about 7 hours ago)
I suggest actually using Claude Code and making a sample app with it. It absolutely can make apps even if you don't know any fundamentals. From my experience, it can work up to about 20k LOC. You do need a human to give feedback, but not necessarily someone who understands software principles.
seethishat (about 7 hours ago)
I saw something very similar a few months ago. It was a web app vibe-coded by a surgeon. It worked, but they did not have an index.html file in the root web directory, and they would routinely zip up all of the source code (which contained all the database connection strings, API credentials, AWS credentials, etc.) and place the backup in the root web directory. They would also dump the database to that folder (for backup). So web browsers that went to https://example.com/ could see and download all the backups.

The quick fix was a simple, empty index.html file (or setting the -Indexes option in the Apache config). The surgeon had no idea what this meant or why it was important. And the AI bots didn't either.

The odd part to me was that the AI had made good choices (strong password hashes, reasonable DB schema, etc.) and the app itself worked well. Honestly, it was impressive. But at the same time, they made some very basic deployment/security mistakes that were trivially avoidable. They just needed a bit of guidance from an experienced devops security person to make it Internet-worthy, but no one bothered to do that.

Edit: I do not recommend backing up web apps on the web server itself. That's another basic mistake. But they (or the AI) decided to do that, and no one with experience was consulted.
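For reference, the -Indexes fix mentioned above is a one-line Apache directive; the directory path below is illustrative:

```apache
# Stop Apache from generating a directory listing when no index file
# exists, so stray backups in the web root can't be enumerated by
# browsing to "/"
<Directory "/var/www/html">
    Options -Indexes
</Directory>
```

Note this only suppresses the listing: files left in the web root can still be downloaded by anyone who guesses their names, so removing the backups from the document root is the real fix.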

shivaniShimpi_ (about 7 hours ago)
interesting, so the ai got the hard stuff right. password hashing, schema design, fine. it fumbled the stuff that isn't really "coding" knowledge, feels more like an operational intuition? backup folder sitting in web root isn't a security question, it's a "have you ever been burned before" question, and surgeon hadn't. so they didn't ask and the model didn't cover it, imo that's the actual pattern. the model secures exactly what you ask about and has no way of knowing what you didn't think to ask. an experienced dev brings a whole graveyard of past mistakes into every project. vibe coders bring the prompt
NoGravitas (about 3 hours ago)
The competence profile of any LLM-based AI is extremely spiky - whether it does a particular task well or not is pretty independent of the (subjective) difficulty of the task. This is very different from our experience with humans.
nerptastic (about 6 hours ago)
This is what I'm noticing. At my workplace, we have 3 or 4 non-devs "writing" code. One was trying to integrate their application with the UPS API.

They got the application right and began stumbling on the integration: they created a developer account and got the API key, but in place of the application's URL, they had input "localhost:5345" and couldn't get that to work, so they gave up. They never asked the tech team what was wrong, and never figured out that they needed to host the application. Some fundamental computer literacy is the missing piece here.

I think (maybe hopefully) people will either level up to the point where they understand that stuff, or they will just give up. It's also possible that the tools get good enough to explain that stuff, so they don't have to. But tech is wide and deep, and not having an understanding of the basic systems is... IMO, a non-starter for certain things.

TeMPOraL (about 6 hours ago)
Maybe this is what's missing in the prompt? We learned years ago to tell the AI they're the expert principal 100x software developer ninja, but maybe we should also honestly disclose our own level of expertise in the task.

A simple "I'm a professional surgeon, but sadly know nothing about making software" would definitely make the conversation play out differently. How? That remains to be seen. But in an idealized scenario (which could easily become real if models were trained for it), the model would coach the (self-stated) non-expert user on the topics it would ordinarily assume the (implicitly self-stated) expert already knows.

Arch-TK (about 7 hours ago)
The fix is to not let users download the credentials. In fact, ideally the web server wouldn't have access to files containing credentials at all; it would handle serving and caching static content and offload requests for dynamic content to the web application's code.

Disabling auto-indexing just makes the issue harder to spot. (To clarify, it's not a bad idea in principle, just not _the_ solution.) If the file is still there and can be downloaded, that's strictly something which should not be possible in the first place.
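As a stopgap while the deployment is being restructured, Apache can also refuse to serve sensitive file types outright; the extension list here is illustrative:

```apache
# Refuse to serve backup archives, database dumps, and credential files
# even if they are accidentally placed under the document root
<FilesMatch "\.(zip|sql|bak|env)$">
    Require all denied
</FilesMatch>
```

This still treats the symptom; keeping credentials and backups out of the document root entirely, as the comment above argues, is the actual fix.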

simianwords (about 7 hours ago)
Agent-native DevOps tools are probably necessary. There should be no reason for people to do this manually.

How I see it happening: agents like Claude Code get built-in skills for deployment and use building blocks from either AWS or other, simpler providers. Payment through OAuth and seamless checkout.

This should be standardised.

aledevv (about 8 hours ago)
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one command away from anyone who looked

This takes the cake!

This is a typical example of someone using coding agents without being a developer: AI that isn't used knowingly can be a huge risk if you don't know what you're doing.

AI used for professional purposes (not experiments) should NOT be used haphazardly.

And this also opens up a serious liability issue: the developer has the perception of being exempt from responsibility, and that creates enormous risks for the business.
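The quoted flaw is easy to demonstrate in miniature. A hedged sketch (hypothetical function and names, with Python standing in for the backend): when the authorization check lives only in client-side JavaScript, the server happily returns data to any caller that bypasses the UI, e.g. with curl:

```python
def handle_record_request(requesting_user: str, record_owner: str,
                          server_checks_auth: bool) -> dict:
    """Toy endpoint: client-side JS can hide buttons and pages, but
    only a server-side check actually protects the data."""
    if server_checks_auth and requesting_user != record_owner:
        return {"status": 403, "body": None}
    return {"status": 200, "body": "patient record"}

# Client-side-only "access control": the server never checks, so a raw
# HTTP request (which ignores the client JS entirely) gets the data.
leak = handle_record_request("attacker", "patient-42", server_checks_auth=False)
print(leak["status"])  # 200

# The same request against a server that enforces authorization itself:
safe = handle_record_request("attacker", "patient-42", server_checks_auth=True)
print(safe["status"])  # 403
```

The point of the sketch is that the UI is just a convenience layer; every request must be re-authorized on the server, because nothing forces an attacker to use the client at all.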

dgb23 (about 6 hours ago)
Also, it's the wrong tool for this kind of work.

Claude, opencode, etc. are brute-force coding harnesses that literally use bash tools plus a whole bunch of vague prompting (skills, AGENT.md, MCP and all that stuff) to nudge them probabilistically into desirable behavior.

Without engineering specialized harnesses that control workflows and validate output, this issue won't go away.

We're in the wild west phase of LLM usage now, where problems emerge that shouldn't exist in the first place and are being solved at the entirely wrong layer (outside the harness) or with the entirely wrong tools (prompts).

anal_reactor (about 7 hours ago)
The problem isn't AI; the problem is the lack of an intelligent person somewhere in this whole situation. Way before AI, I saw a medical company create a service where the frontend would tell the backend what SQL queries to execute.
bootsmann (about 5 hours ago)
“You’re just holding it wrong”
EdNutting (about 8 hours ago)
Software engineering is looking more and more like it needs a professional body in each country, plus accreditation and standards. I.e. it needs to grow up and become like every other strand of engineering.

Gone should be the days of “I taught myself so now I can [design software in a professional setting / design a bridge in a professional setting].” I’m not advocating gatekeeping - if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you’re gonna need professional accreditation. Same should be true for software engineering now.

cik (about 8 hours ago)
Professional bodies act as nothing more than gatekeepers and rent-seekers for things of this nature. Anyone can write software, but not everyone writes security-minded software.

We already have laws in place, and certifications that help someone understand whether a given organization adheres to given standards. We can argue over their validity, efficacy, or value.

The infrastructure, laws, and framework for this already exist. More regulation and bureaucracy doesn't help when the current state isn't enforced.

EdNutting (about 7 hours ago)
There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.

In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.

We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.

Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.

The mishmash of laws we have around data and privacy are not professional standards, nor are they accreditation. We don’t have the framework or laws around this. And I am not aware of the USA (federal level) or Europe (or member states) or China or Russia or India or etc having this.

For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There’s no laws covering any of this stuff. Just good-ol’ GDPR and some sector-specific laws here and there trying to keep people mildly safe.

x-complexity (about 6 hours ago)
> There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.

Professional bodies = gatekeeping. The existence of the body means that the thing it surrounds will be barred to outsiders.

It means financial barriers & "X years of experience required" that actual programmers rightfully decry.

Caveat: When it comes to anything that will affect physical reality, & therefore the physical safety of others, the standards & accreditations then become necessary.

NOTE ON CAVEAT: Whilst *most* software will fall under this caveat, NOT ALL WILL. (See single-player offline video games)

To create a blanket judgement for this domain is to invite the death of the hobbyist. And you, EdNutting, may get your wish, since Google's locking down Android sideloading because they're using your desires for such safety as a scapegoat for further control.

https://keepandroidopen.org/

> We DO NOT have this in software engineering.

THIS IS A GOOD THING. FULLSTOP.

The ability to build your own tools & apps is one of the rightfully-lauded reasons why people should be able to learn about building software, WITHOUT being mandated to go to a physical building to learn.

To wall off the ability for people to learn how computers work is a major part of modern computer illiteracy that people cry & complain about, yet seem to love doing the exact actions that lead to the death of computer competency.

erelong (about 6 hours ago)
> There’s a reason why many professions have professional bodies and consolidated standards

imo this is sold as "keeping people safe" but in practice it's really a gatekeeping grift that increases friction and prevents growth

rubzah (about 7 hours ago)
As the sibling comment pointed out, there are already plenty of laws about, for example, the handling of personally identifiable data. Somehow there is a lack of awareness; perhaps what is needed is a couple of high-profile convictions (which can't be too far off).
EdNutting (about 7 hours ago)
One of the key functions of a professional body is to ensure all members are aware of existing and new laws, standards and codes of practice. And to ensure different grades of engineer are aware of different levels of the standards. And that sector-specific laws and standards are accredited accordingly.

High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that’s not a substitute for managing the industry properly.

kjksf (about 5 hours ago)
Nothing would be more effective at killing open source and the commercial software business than requiring everyone who writes and ships software to users, directly or indirectly (e.g. via an open-source library), to have a License To Program from a Software Licensing Organization.

> aware of existing and new laws, standards and codes of practice

Yeah, because software business is not at all ruled by fads.

1997: you have to follow Extreme Programming (XP) or you don't get your license

2000: you now have to use XML for everything in XML or you don't get your license

2002: you now have to follow Agile or you don't get your license

2025: you now have to write everything in Rust or you don't get your license

etc., etc.

voxleone (about 6 hours ago)
I agree with that and stand by these words. If people want to call it gatekeeping, so be it. Programming (software engineering, if you will) is a serious discipline, and this craze needs to stop. Software building should be regulated and properly accredited like any serious activity.
Orygin (about 6 hours ago)
In some countries you can't call yourself a software engineer if you don't have an engineering degree and a license to practice.

It should be the same everywhere. Anyone can be a coder, but not everyone is an engineer.

flowerbreeze (about 6 hours ago)
I think the problem is that the person described had no idea what they were doing even in their own professional capacity. They needed to know about patient data management, but they didn't.

The way I see it, if they didn't even realize they were doing something they shouldn't, they wouldn't have known they needed accreditation, even if it were required. Unless we restricted access to the gazillions of tools without it, of course.

I think it'll work itself out over time as what AI is/isn't and what data privacy means is discussed more. I'd leave accreditation entirely out of it, because we cannot even agree on what are the actual best practices or if they matter.

sajithdilshan (about 6 hours ago)
There are already laws and standards in almost every country. In this particular example, the people completely ignored all the privacy and data protection laws.
erelong (about 6 hours ago)
> Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards.

I mean, people could voluntarily try to create rules of thumb they think are valuable and could try to popularize them

I don't think that requires further restrictive actions

bschwarz (about 6 hours ago)
Regulations are written in blood and it will take a bit for vibe coding to cause enough problems.
EdNutting (about 6 hours ago)
Sadly, probably true.
MichaelRo (about 6 hours ago)
>> Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards.

That doesn't help much. Accounting requires accreditation and standards, but that doesn't prevent competition at the level of some 100 accountants per job. The only way to prevent that is by limiting numbers, like lawyers do, in which case connections and nepotism matter and you basically get a hereditary aristocratic caste.

I guess we'd better get used to going back to being peasants working shit jobs barely above starvation, since that's what the future of capitalism seems to bring: https://realityraiders.com/fringewalker/irreverent-humor/mon...

BrissyCoder (about 8 hours ago)
This reads like internet fiction to me. Very vague and short.
yawniek (about 8 hours ago)
FWIW, I know Tobias and it's very, very unlikely he made this up. My guess is it's intentionally vague so as not to leak any information about the culprit, which I guess is fair.
rausr (about 7 hours ago)
heh, I know that username. I came to the same conclusion. (I hope all is well with you, Yannick)
BrissyCoder (about 7 hours ago)
Okay. If it's real, I apologize.

But in any case, it's so lacking in detail and so brief that it's uninteresting enough that it might as well be fake.

> Somebody "vibecodes" medical app/system. The app was insecure. Personal info leaked.

Okay cool.

kuboble (about 7 hours ago)
It's really weird to me that this is your reaction.

It's a rarely updated personal blog, not a daily tabloid story.

spacebacon (about 7 hours ago)
It's unlikely that any LLM tasked with a prompt involving medical records would not automatically address separation of concerns. The type of data involved is the worst-case scenario. One JS file is also a worst-case scenario. This is why it may feel manufactured. If it is true, they truly deserve to be put on blast.
moooo99 (about 7 hours ago)
I can 100% imagine prompts that would feel perfectly natural and never hint at any medical background for the data being processed. It could be as simple as writing "customer" instead of "patient".
abrookewood (about 8 hours ago)
Given the subject matter, it would be highly unethical to reveal the name of the company before verifying it was indeed fixed. I'd be wary of getting sued.
mijoharas (about 6 hours ago)
The first time I stumbled onto a big security vulnerability (exposed Stripe/AWS/Play Store keys: I was poking around an API a web app was using, and instead of hitting /api/v1, if you just hit /api it served them; I wasn't trying to do anything malicious), the very first thing I did was contact a security researcher friend to ask about covering my ass while performing responsible disclosure.

You hear too much about people being prosecuted for trying to point out security vulnerabilities. (Guess they haven't heard of "don't shoot the messenger".)

(It turned out fine after I finally managed to speak with someone. I had to ring up customer service and say "look, here are the last digits of your Stripe private key. Please let me speak with an engineer". Figuring out how to talk to someone was the difficult part.)

croemerabout 7 hours ago
Company should just take down the whole thing. One vuln might be fixed but how many others might be there.
samuelabout 7 hours ago
I don't think it's the case, but it would be very funny if this ended up being AI-generated clickbait.
sixhobbitsabout 8 hours ago
Yeah, keeping it vague makes sense to protect the place if it's still online, but the whole thing doesn't really make sense?

The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.

> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline.

This is not my experience of how agents tend to build at all. I often _ask_ them to do that, but their tendency is to use a lot of files and structure

> They even added a feature to record conversations during appointments

So they have the front-desk laptop in the doctor's room? Or were they recording conversations anyway, and now they feed them into the system afterwards?

> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.

Also definitely not the normal way an agent would build something - security flaws yes, but this sounds more like someone who just learnt coding or the most upvoted post of all time on r/programmerhorror, not really AI.

Overall I'm skeptical of the claims made in this article until I see stronger evidence (not that I'm supporting using slop for a medical system in general).
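For what it's worth, the flaw class quoted above ("access control" only in client-side JavaScript) can be sketched in a few lines. This is a hypothetical illustration with made-up names, not the actual app: if the server never checks anything, any direct request bypasses whatever the UI hides.

```python
# Minimal sketch of the flaw class described above (hypothetical names,
# not the actual app): access control that lives only in client-side JS
# is no access control at all, because the server answers anyone.

PATIENTS = {"123": {"name": "Jane Doe", "notes": "confidential"}}
VALID_TOKENS = {"s3cret-token"}

def handle_request_broken(patient_id, token=None):
    # The client-side JS decides what to *display*, but the server
    # never verifies anything: one curl command away from anyone.
    return PATIENTS.get(patient_id)

def handle_request_fixed(patient_id, token=None):
    # Server-side check: no valid token, no data (a 401 in real HTTP).
    if token not in VALID_TOKENS:
        return None
    return PATIENTS.get(patient_id)
```

The point is that the check has to live on the server; whether the agent also generates pretty login screens is irrelevant.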

mewpmewp2about 7 hours ago
I don't know what to make of the article. At first I thought it seemed like a made-up LinkedIn story; it seems too crazy to be discussed in such a casual manner. Ultimately I don't know, maybe it was vague for a specific reason. One thing I'd find odd is that whoever developed it didn't run into and get stuck on CORS issues, if everything was done client-side against those services, and that they managed to set up API keys and subscriptions everywhere while still making mistakes like this. And no mention of leaked API keys and creds, which must have been on the UI side, right?

> Everything that could go wrong, did go wrong.

Then this claim seems a bit too much, since what could have gone more wrong is malicious actors discovering it, right? Did they?

Maybe I have trouble believing that a medical professional could be that careless and naive in such a way, but anything could happen.

I guess another thought is... If they built it, why would they share the URL with the author? Was the author like "Ooh cool, let me check that out", and they just gave the URL without auth? Because if it worked as it was supposed to, it should have just shown a login screen, right? That's the weirdest part to me, I suppose.

fzzzyabout 7 hours ago
The single-file thing makes perfect sense if it was built as an artifact in one of the big providers' web UIs.
furyofantaresabout 6 hours ago
> The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.

I took all that to mean she had explained the history of it to the author, but it had already been written and deployed. It's worded a little weirdly. It's also translated from German; I don't know if that's a factor or not.

lesostepabout 6 hours ago
I could bet 5 dollars that they used a chat and not an agent, and that's also the reason why it's a single HTML file.

Copy-pasted and then dropped into a hosting folder, sweet Web 1.0 style.

BrissyCoderabout 7 hours ago
> The timelines mentioned are weird - he spoke to them before they built it? Or after? It's not that clear, he mentions they mentioned watching a video.

Yeah, although I didn't comment, I found this weird as well. The chronology was vague and ill-defined. He went to a doctor's office and the receptionist mentioned vibe coding their patient records system unprompted?

> A few days later, I started poking around the application.

What!? How... was there even a web-facing component to this system? Did the medical practice grant you access for some reason?

Yeah I'm back to calling bullshit. What a load of crap. Whole post probably written by an LLM.

viraptorabout 5 hours ago
Having experience working with medical software, I call BS on this article as presented, unless it was some minimal support app. When you deal with patient records, there's so much of local law, communication, billing rules and other things baked in that you CANNOT vibe code an app to handle even 1% of that. Your staff would rebel and your records would completely fall apart. Even basic things like appointment bookings have a HISTORY and it's a full blown room scheduling system that multiple people with different roles have to deal with (reception and providers). It takes serious time to even reverse engineer the database of existing apps, and you first have to know how to access the database itself. Then you'll see many magic IDs and will have to reverse engineer what they mean. (yes, LLMs are good at reverse engineering too, but you need some reference data and you can't easily automate that)

I have decompiled database updaters to get the root password for the local SQL Server instance with extremely restricted access rules. (can't tell you which one...) I have also written many applications auto-clicking through medical apps, because there's no other way to achieve some batch changes in reasonable time. I have a lot of collateral knowledge in this area.

Now for the "unless it was some minimal support app" - you'll see lots of them and they existed before LLMs as well. They're definitely not protecting patient data as much as other systems. If the story is true in any way, it's probably this kind of helper that solves one specific usecase that other systems cannot. For example I'm working on an app which handles some large vaccination events and runs on a side of the main clinic management application. But accidentally putting that online, accessible to everyone, and having actual patient data imported would be hard-to-impossible to achieve for a non-dev.

For the recording and transcription, there are many companies doing that at the moment and it would be so much easier to go with any of them. They're really good quality these days.

suddenlybananasabout 7 hours ago
I don't think you read the article very carefully. The timeline is that he met a person, and that person told him that they vibe-coded an app after having seen a video. He then investigated the app.
BrissyCoderabout 7 hours ago
Yeah because every medical practice I go to, I'm always able to investigate all of their systems.
drkiz75about 7 hours ago
Agreed. It sits right at plausibly deniable, just short of falsifying facts you could look up.
watwutabout 7 hours ago
Short writing is just good writing. Also, it wasn't vague; it just omitted identifying information.
rubzahabout 7 hours ago
I assure you that these kinds of things are happening right now.
camillomillerabout 6 hours ago
It’s Germany, you want to be as generic as you can because libel, privacy and similar laws are pretty strong here
shivaniShimpi_about 6 hours ago
Every other field that's figured out high-stakes failure modes eventually landed on the same solution: make sure two people who understand the details are looking at it. Pilots have copilots, surgeons have checklists, and nuclear plants have independent verification. Software was always the exception, because when it broke it mostly just broke for you. Vibe coding is not going to change that equation; it merely removes the one check that existed before, that the people who wrote the code understood what was going on. Now that's gone too.
Ekarosabout 5 hours ago
We do have code reviews for pull requests, but on average I would guess there is a great amount of complacency there. I suppose the old, proper QA phase was the best answer we had. But that is expensive and slow.
NoGravitasabout 3 hours ago
Maybe expensive and slow is actually an improvement.
Garlefabout 1 hour ago
If this happened in Germany, this is most likely not only a breach of some contract but actually a criminal offense.

(I'm not a lawyer, so I might be mistaken about this; especially the level of intent might be a factor.)

rubzahabout 8 hours ago
I know, through personal acquaintance, of at least one boutique accounting firm that is currently vibe-building their own CRM with Lovable. They have no technical staff. I can't begin to comprehend the disasters that are in store.
antupisabout 7 hours ago
Generally, why build your own CRM? ERP and other resource-planning systems I get, because you can tailor those to your back office. But from a CRM you mostly need reliability.
nslsmabout 7 hours ago
Because CRMs are very expensive, and they get much more expensive if you need custom development (which you usually need)
consumer451about 8 hours ago
What would a responsible on-boarding flow for all of these tools look like?

> Welcome to VibeToolX.

> By pressing Confirm you accept all responsibility for user data stewardship as regulated in every country where your users reside.

Would that be scary enough to nudge some risk analysis on the user's part? I am sure that would drop adoption by a lot, so I don't see it happening voluntarily.

sigseg1vabout 8 hours ago
We require someone with a professional engineering designation from an accredited engineering body to sign off and approve before a building can be built. If it is found to have structural issues later, that person can be held directly liable and can lose their license to operate. Why this is not the case with health software I cannot explain. Every time I propose this, the only argument I receive against it is from people who are mad that their field might dare to apply the same regulation every other field has.
consumer451about 7 hours ago
Oh man, I have gone off on rants about software "engineering" here in the past.

My first office job was as an AutoCAD/network admin at a large Civil and Structural engineering firm. I saw how seriously real engineering is taken.

When I brought up your argument to my FAANG employed sibling, he said "well, what would it take to be a real software engineer in your mind!??"

My response was, and always will be: "When there is a path to a software Professional Engineer stamp, with the engineer's name on it, which carries legal liability for gross negligence, then I will call them Software Engineers."

ZephyrBluabout 5 hours ago
People like to make this point, but traditional engineering has the opposite problem: insanely overwrought processes and box-checking that exist for no reason and slow everything down to a snail's pace. Yes, there are safety-critical parts, but they are surrounded by a ton of bullshit.

It's also absurd to think that there is no company which does genuine software "engineering". If you break ads at Google/Meta, streaming at Netflix, etc there are massive consequences. They are heavily incentivized to properly engineer their systems.

The main thing that governs whether time is spent to well-engineer something is if there is incentive to do it. In traditional engineering that incentive is the law (Getting council approval, not getting sued, etc). In software engineering that incentive is revenue.

EdNuttingabout 7 hours ago
Totally agree - not just medical software either. See replies to my other comment threads. Software engineers really don’t like the idea that they might have to show they can perform at a certain standard to be able to work as a software engineer.

Typically arguments come up:

“that’s gatekeeping” - yes, for good reason!

“Laws already exist” - yeah, and that’s not the same as professional accreditation, standards and codes of practice! Different thing, different purpose. Also the laws are a mishmash and not fit for purpose in most sectors.

jgrizouabout 8 hours ago
Would it? Feels a bit like when you use Facebook and handover all your data.
consumer451about 6 hours ago
Yeah, fair. I am just thinking out loud here. What is a decent solution to this problem? Is there one?
jillesvangurpabout 8 hours ago
I think the issue here is less about AI misbehaving and more about people doing things they should not be doing without thinking too hard about the consequences.

There are going to be a lot of accidents like this because it's just really easy to do. And some people are inevitably going to do silly things.

But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly worded emails with the subject "I love you". Putting CDs/USB drives in work PCs with viruses, worms, etc.

That's what people do when you give them useful tools with sharp edges.

sersiabout 7 hours ago
I'd argue that back in the Visual Basic/Delphi days, there was a minimum level of competence needed AND, more importantly, apps didn't have as much attack surface because they weren't exposed to the internet.
EdNuttingabout 7 hours ago
Particularly ironic for a doctor to have done this, given all the complaints about patients using Google (even pre-AI)!
aitchnyuabout 7 hours ago
Is there anybody making a framework where you declare the security intentions as code (for each CRUD action), which agents can correctly implement and unit-test? I have seen a Lovable competitor's system prompt with 24 lines of "please consider security when generating select statements, please consider security when generating update statements...", since it expects to dump queries here and there.
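One hedged sketch of what such a framework could look like (purely illustrative, not an existing library): declare the allowed roles per table and per CRUD action once, then route every generated query through a single enforcement point, instead of repeating "please consider security" per statement type.

```python
# Illustrative sketch only (not a real framework): security intentions
# declared as data, per table and per CRUD action, with one central
# check that every generated query must pass through.

POLICY = {
    "patients": {
        "read":   {"doctor", "reception"},
        "create": {"reception"},
        "update": {"doctor"},
        "delete": set(),  # nobody may hard-delete patient rows
    },
}

def authorize(role: str, table: str, action: str) -> bool:
    """Return True only if the declared policy allows this action."""
    return role in POLICY.get(table, {}).get(action, set())
```

An agent could then be asked to generate unit tests mechanically from the POLICY table, one per (table, action, role) combination, rather than relying on prompt reminders.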
ramon156about 8 hours ago
502, and the site is getting hugged to death it seems

Edit: the archive.ph one works for me :)

CrzyLngPwdabout 7 hours ago
I think it is wonderful.

It's reminiscent of the 90s, where every middle manager had dragged and dropped some boxes on some forms, and could get a salesman to sell it, without a care in the world for what was going on behind the scenes.

Until something crashed and recovery was needed, of course.

The piper always needs to be paid.

Steve16384about 7 hours ago
Or someone starts with an Excel spreadsheet just to "keep track of a few things". Then before they know it, it has become a critical part of the business but too monolithic and unorganised to be usable.
debarshriabout 7 hours ago
I believe there are various dimensions to vibe coding. If you work with an existing codebase, it is a tool to increase productivity. If you have domain specific knowledge, in this case - patient management system, you can build better systems.

Otherwise, you end up simulating a product. A lot of the non-technical folks building products with AI vibe coding are basically building product simulations. It looks like a product and functions like a product, but behind the scenes you can poke holes in it.

dubeyeabout 2 hours ago
The person at the desk told the author this?

Interesting how unquestioningly the responses accept that this isn't engagement bait.

keysersoze33about 5 hours ago
The takeaway is to vet new companies one is dealing with - even just calling them up and asking if they've AI generated any system which deals with customer/patient data.

This is going to get more common (state sponsored hackers are going to have a field day)

mnlsabout 7 hours ago
Damn!!! And I keep hardening my RSS app which was partly vibe coded and not exposed to the WAN while "professionals" give data away.
sjamaanabout 8 hours ago
So much is missing from this story. Did they report it to the relevant data authority? Did the fix they said they applied actually fix anything? Etc.
GistNoesisabout 8 hours ago
Who should get jailed ?

Does the company which willingly sells the polymorphic virus editor bear any responsibility, or does it all fall on the unaware vibe coder?

EdNuttingabout 8 hours ago
We don’t blame companies selling 3D Design software or 3D printers or mortar and cement, or graph paper and pencils. When people abuse those tools and build huts or houses or bridges that fall down, we usually blame the user for not having appropriate professional qualifications, accreditation, and experience. (Very occasionally we blame bugs in simulation software tools).

AI is a tool. It’s not intelligent, and it works at a much bigger scale than bricks and mortar, but it’s still just a tool. There’s lots we can blame AI companies for, but abuse of the tool isn’t a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent - eg doctors) for using it outside their ability. Much like doctors are fed up of patients using AI to try to act like doctors, software engineers are now finding out what it’s like when clients try to use AI to act like software engineers.

Hendriktoabout 7 hours ago
I largely agree, but if a company sold cement explicitly claiming that they will replace every job in the entire construction industry, that the cement is able to plan, verify, and build on its own, without supervision, and that any layperson can now create PhD level bridges with that cement without any input from or verification by professionals, some liability would definitely fall on the company selling that cement under these pretenses.
EdNuttingabout 6 hours ago
> We should blame them for misleading marketing.
0-bad-sectorsabout 4 hours ago
I think AI will soon be too expensive for normal/non-technical people to tinker with, and these kinds of vibe coding stories will disappear.
coopykinsabout 6 hours ago
I interviewed some years ago for an AI-related startup. After looking at the live product, the first thing I saw was their prod DB credentials and OpenAI API key publicly sent in some requests... Bad actors will be having a lot of fun these days.
agosabout 8 hours ago
I really hope OP also contacted their relevant national privacy authority, this is a giant violation
TeMPOraLabout 6 hours ago
I have my doubts about the story. I consulted on a medtech project in a similar space in the recent past, and at various points different individuals vibe-coded[0] not one but three distinct, independent prototypes of a system like the article describes, and none of them was anywhere near that bad. On the frontend, you'd have to work pretty hard to force SOTA LLMs to give you what is being reported here. Backend-side, there's plenty of proper turn-key systems to get you started, including OSS servers you can just run locally, and even a year ago, SOTA LLMs knew about them and could find them (and would suggest some of them).

I might be biased by my experience, because we actually cared about GDPR and AI act and proper medical data processing, and I've spent my fair share of time investigating the options that exist. Still, I'm struggling to imagine how one could possibly screw it up anywhere near as what the article described. Like, I can't think of a way to do it, to the point I might need to ask an LLM to explain it to me.

--

[0] - Not as a means of developing an actual product, but solely to see if we can, plus it was easier to discuss product ideas while having some prototypes to click around.

erelongabout 6 hours ago
To me it just sounds like eventually someone will figure out how to make vibecoding reasonably secure (with prompts to have apps reviewed for security practices?)

unless cybersecurity is such a dynamic practice that we can't create automated processes that are secured

Essentially a question of what can be done to make vibecoding "secure enough"

zkmonabout 6 hours ago
Technology for greed vs technology for need. Greed has its cost.
high_byteabout 7 hours ago
this is exactly the kind of vibe coding horror story I asked for just a few days ago :)

https://news.ycombinator.com/item?id=47707681

ionwakeabout 8 hours ago
Anyone else read the title on HN and shudder not wanting to actually click it?
Hendriktoabout 7 hours ago
Every time I see “AI”, “LLM”, or “vibe coding” in the title. And then half the submissions not having it in the title are about that anyway.
hamashoabout 5 hours ago
The worst blunder I made was when I explored cloud resources to improve the product's performance.

I created a GCP project (my-app-dev) for exploring how to scale up the cloud service. I added several resources to mock production, like compute instances/Cloud SQL/etc., then populated the data and ran several benchmarks.

I changed the specs, number of instances and replicas, and configs through gcloud command.

  $ gcloud compute instances stop instance-1 --project=my-app-dev
  $ gcloud compute instances set-machine-type instance-1 --machine-type=c3-highcpu-176 --project=my-app-dev
  $ gcloud sql instances patch db-1 --tier=db-custom-32-131072 --project=my-app-dev
But for some reason, at one point codex asked to list all projects; I couldn't understand the reason, but it seemed harmless so I approved the command.

  $ gcloud projects list
  PROJECT_ID     NAME       PROJECT_NUMBER
  my-app-test    my app     123456789012
  my-app-dev     my app     234567890123     <- the dev project I was working on
  my-app         my app     345678901234     <- the production (I know it's a bad name)
And after this, for whatever reason it changed the target project from the dev (my-app-dev) to the production (my-app) without asking or me realizing.

Of course I checked every command. I couldn't YOLO while working on cloud resources, even in a dev environment. But I focused on the subcommands and their content and didn't even think about it having changed the project ID along the way.

It continued to suggest more and more aggressive commands for testing, and I approved them mindlessly...

  $ gcloud sql instances patch db-1 --database-flags=max_connections=500 --project=my-app
  $ gcloud compute instances delete instance-1 --project=my-app
  $ echo 'DELETE FROM users WHERE username="test";' \
      | gcloud sql connect my-db \
      --user=user --database=my-db --project=my-app
  $ wrk -t4 -c200 -d30s \
      "http://$(gcloud compute instances describe instance-1 \
      --project=my-app \
      --format='get(networkInterfaces[0].accessConfigs[0].natIP)')"
It took a shamefully long time to realize codex was actually operating on production, so I had DDoSed and SQL-injected the production...

Fortunately, it didn't do anything irreversible. But it was one of the most terrifying moments in my career.

EdNuttingabout 5 hours ago
This is part of the reason deployments to production cloud environments should:

1. Only be allowed via CI/CD

2. All infra should be defined as code

3. Any deployment to production should be a delayed process that also has a human-approval step in the workflow (at least one, if not more)

(Exactly where that review step is placed depends on your organisation - culture, size, etc.)

And anyone that does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminal. This last aspect is easy and cheap to set up on AWS. I presume it's also possible in Google Cloud.
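As a last line of defense for stories like the one upthread, a pre-execution guard can refuse any command targeting the production project unless an explicit approval flag is set. This is a hypothetical sketch (the project names are taken from the parent story, not a real environment), not a substitute for the CI/CD and credential isolation above:

```python
# Hypothetical sketch: inspect a gcloud-style argument list before
# execution and refuse to touch production without explicit approval.
# Project names are borrowed from the story above for illustration.

PROD_PROJECTS = {"my-app"}

def check_command(argv, approved=False):
    """Raise PermissionError if argv targets production unapproved."""
    for arg in argv:
        if arg.startswith("--project="):
            project = arg.split("=", 1)[1]
            if project in PROD_PROJECTS and not approved:
                raise PermissionError(
                    f"refusing to run against {project} without approval")
    return True
```

An agent harness could call a check like this on every proposed shell command, which would have caught the silent dev-to-prod switch in the story.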

repeekadabout 8 hours ago
A perfect example of why a product like Medplum exists, as opposed to completely reinventing the wheel from scratch
cmiles8about 7 hours ago
There’s another version of the mythos that reads like:

AI companies realized that all this vibe coding has released a shitstorm of security vulnerabilities into the wild and so unless they release a much better model to fix that mess they’ll be found out and nobody will touch AI coding with a 100ft pole for the next 15 years. This article points more towards this narrative.

krater23about 8 hours ago
The only thing that helps is deleting the database. Every day. Until the thing goes down because the 'developer' thinks he has a bug that he can't find.
rubzahabout 7 hours ago

  https://www.myvibesite.com/?id=10; DROP TABLE customer;--
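For anyone following along, the standard fix for the payload joked about above is parameterized queries, where user input is bound as a value rather than spliced into the SQL string. A minimal sqlite3 sketch with a hypothetical table mirroring the joke:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customer VALUES (10, 'Alice')")

hostile = "10; DROP TABLE customer;--"  # the payload from the URL above

# The ? placeholder binds the payload as a plain value, so it can never
# terminate the statement and execute the trailing DROP TABLE.
rows = conn.execute("SELECT name FROM customer WHERE id = ?",
                    (hostile,)).fetchall()

# The table survives, and the hostile string matches no integer id.
survivors = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
```

String-concatenated queries are exactly the kind of thing vibe-coded backends tend to emit when nobody reviews the output.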
zoobababout 8 hours ago
Avoid JavaScript like the plague; it can be overwritten on the client side.
faangguyindiaabout 8 hours ago
It's nothing new; Dunning-Kruger existed long before AI entered the coding realm.

Several years ago I ran into an American company which consulted with me. They had 4,000 paying customers and had rolled out their own billing solution accepting crypto, PayPal, and Stripe.

They had problems with payments going missing. I migrated them to WHMCS with hardening and they never had any issues after.

Now people may laugh at WHMCS, but use the right tool for the job.

You need a battle-tested billing solution, and WHMCS counts as one: it supports VAT, taxes, reporting/accounting, and pretty much everything you'd get wrong trying to do it all yourself.

Too bad there isn't a battle-tested open-source solution for this.

t43562about 7 hours ago
AI empowers bullshitters but for sure they existed before. The guys who do something quickly and are gone before it starts to fall over. It often works because everyone is impressed with them and the problems that arise are seen as the fault of whoever is left to clean up the mess. You can probably detect my bitterness :-D
fakedangabout 7 hours ago
Report them - that right there is 5+ different violations. Only then will they realize their stupidity.
peytonabout 8 hours ago
Kinda crazy but hopefully the future holds a Clippy-esque thing for people who don’t know to set up CI, checkpoints, reviews, environments, etc. that just takes care of all that.

It sorta should do this anyway given that the user intent probably wasn’t to dump everyone’s data into Firebase or whatever.

I personally would like this as well since it gets tiring specifying all the guardrails and double-checking myself. Using this stuff feels too much like developing a skill I shouldn’t need while not focusing on real user problems.

grey-areaabout 8 hours ago
This problem is unrelated to CI and dev practices etc, this is about trusting the output of generative AI without reading it, then using it to handle patient data.

Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.

lonelyasacloudabout 6 hours ago
> Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.

It says quite a lot about where we are with ai tooling that none of the big players have “no need to review, certified for market X” offerings yet.

dgb23about 8 hours ago
Fully agentic development is neat for scripts and utilities that you wouldn't have the time to build otherwise, where you can treat them as input/output and check both.

In these cases you don’t necessarily care too much about the code itself, as long as it looks reasonable at a glance.

edwinjmabout 8 hours ago
It is related to CI and dev practices etc. An experienced developer using AI would add security/data protection, even when vibe coding.
Shorelabout 6 hours ago
CI doesn't magically take care of security; that's a naïve understanding of vulnerabilities.

Someone with the right mindset needs to be there providing guidance and architectural input.

And even then that's not enough. Something like the super-extensive test suite SQLite has is the best we can do.

mikojanabout 8 hours ago
Hard to believe... This activity should certainly land you in a German prison?!
VanTodiabout 8 hours ago
Since it's a .ch domain, I believe it's in Switzerland. In Germany we have the DSGVO (GDPR), and you can report it there too. If a breach happens, you have to inform all your customers. If it's a first offense and you tried your best, the punishment is not that harsh, but since this is medical info they should have known better.

Let's really hope they learned from their mistakes.

piokochabout 7 hours ago
Switzerland is very liberal in terms of business-oriented regulations, to the point that you could throw a New Year's party in a closed cellar without emergency exits, not to mention fire-suppression systems, and burn people alive there.
crvstabout 6 hours ago
Cool story, bro. Of course it's true if it made it to HN. Who needs proof.
avazhiabout 6 hours ago
You guys realise this is AI slop on AI slop, right?
krappabout 6 hours ago
This is reality now, what do you want?
avazhiabout 1 hour ago
HN should just have a rule that all content should be human-generated. This post is literally an LLM writing about something an LLM did; it's a bot botting about a bot. Aside from how funny and dystopian that is - just ban it? Just make a rule that submissions require a human author.

I don't think solving this is all that complicated, at least for now. It isn't like it's currently difficult to tell what is and isn't LLM word salad, though that will likely change in the future, but by then the argument will involve whether it really matters or not. But for now, when 80% of the submissions are LLM garbage and it really is garbage, it's pretty jarring.

direwolf20about 8 hours ago
Some people only care about actual consequences. Download all the data and send it, in the post on a flash drive, to the GDPR regulator's office and another copy to the medical licensing board because why not.
krater23about 8 hours ago
Hopefully. And I hope he wasn't dumb enough to remove himself.
sajithdilshanabout 7 hours ago
Don't blame the AI for what is clearly gross human negligence. It's like renovating your entire house and then acting surprised when the pipes burst because you used duct tape as a permanent fix.
t43562about 7 hours ago
At least part of the negligence is about the people who knowingly promote AI without also promoting knowledge of the limitations. Those who post stories about vibe-coding XXX in a week and don't bother to point out that they have no idea if it's not a piece of crap, waiting to explode, because there's no way they could have tested it properly in a week let alone read the mountains of code produced.

There's a hype machine working and lots of people riding on it.

sajithdilshanabout 7 hours ago
That's what is meant by human negligence. There will always be hype about something, and that is not an excuse to have a devil-may-care attitude toward any work being done.
t43562about 6 hours ago
Negligence depends on what you believe to be true. If you're being told "this is possible and the AI will do it properly, you don't have to worry", then it's not really negligence on the part of the person who believes what they are told.

For the rest of us it is about being put under pressure by managers who don't understand whether to believe what you say or what they read about vibe coding on some linked-in post. As far as they are concerned you're not the authority and some hype-ster is.

Mordisquitosabout 7 hours ago
Are duct tape manufacturers and their investors constantly hyping about how duct tape is the future, and how it is making professional plumbing obsolete?
sajithdilshanabout 6 hours ago
I assume you haven't seen those advertisements where they put duct tape on everything and present it as a universal solution. Also, there will always be hype about something in this world, and that is not an excuse to jump on the bandwagon unless you're braindead.
Mordisquitosabout 6 hours ago
You assume correctly. I have never seen such advertisements.
websapabout 8 hours ago
Do you think if the agency hired a consultant to build this, the consultant couldn't have made the same mistakes?

Lack of security theater is a good thing for most businesses

grey-areaabout 8 hours ago
Usually they would just use an off the shelf product and extend it, so they wouldn’t produce the absolute horror story described in the article, no.

I’m not even sure what your last comment means, are you contending that it is a good thing this company violated multiple laws with sensitive patient data?

trick-or-treatabout 7 hours ago
> Usually they would just use an off the shelf product and extend it

AI does the same thing an agency or dev would do. Those vibe coding platforms have a template for these things, usually Vite + React with Supabase for the backend, the same stack a dev might use, because, surprise, the LLM trained on devs' work.

OP's point is that you're not guaranteed a good outcome hiring an agency or solo dev either, in fact I would say you're almost guaranteed a bad outcome either way.

grey-areaabout 7 hours ago
Apparently your assumptions about AI are completely wrong, if you read the article it produced terrible code.
miningapeabout 7 hours ago
If a consultant made the same mistakes I'd expect the consultant to be held accountable, not the client business that hired the consultancy - they knew they didn't have the requisite skills and so outsourced to an "expert" (and therefore can't be judged for not knowing how to secure their software since they did everything possible)

In this case the "client" is fully liable for the security issues.

rightofcourseabout 8 hours ago
It is possible. If you pick a consultancy that you know nothing about, and they know nothing about programming and vibe-coded it for you... and maybe you don't even have a contract to hold them responsible, and maybe they don't really have a company either... then I can imagine something like this.
voidUpdateabout 8 hours ago
It is physically possible for a consultant to write bad code. But you'd hope that a consultant could understand that medical data is extremely important to keep secure, and actually write it to have some level of security
trick-or-treatabout 7 hours ago
Sure, but you'd hope that the LLM could understand that too.
voidUpdateabout 5 hours ago
And yet it seems it didn't
ginkoabout 8 hours ago
There's lack of security theater and there's:

> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.

They are not the same thing.

chrisjjabout 8 hours ago
You've got to wonder from where the "AI" parroted that.
chrisjjabout 4 hours ago
A Stackoverflow wrong answer?