Discussion (184 Comments)

spaniard89277•about 8 hours ago
I did something similar with a local company here in Spain. Not medical, but a small insurance company. Believe it or not, yes, they vibe-coded their CRM.

I sent them an email and they threatened to sue me. I was a bit in shock at such a dumb response, but I guess some people only learn the hard way, so for starters I filed a report with the AEPD (the data protection agency in Spain), which is known to be brutal.

I also sent them a burofax demanding the removal of my data from their systems just last Friday.

ramon156•about 7 hours ago
I'm also curious how much effort it would be to set up some OWASP tools with an agent and crawl company tools. I'm sure I'm not the first one to think of this, but for local businesses it would give you a solid rep, I suppose.

I have a feeling that next year's theme will be security. People have turned off their brain when it comes to tech.

fainpul•about 7 hours ago
> AEPD […] known to be brutal.

Nice. I wish more countries had something like that. Many of these organizations are lethargic and have to be forced into action by civilian efforts or the press.

darkwater•about 7 hours ago
Can you keep us updated in this thread on how it evolves?
ramon156•about 8 hours ago
You only burn your hand once, unless you're a company, then you never learn.
sixtyj•about 8 hours ago
They should give you a chocolate at least.

I think that having paper documentation will be safer very soon :)

petesergeant•about 8 hours ago
> [burofax is] a service that allows you to send a document with certified proof of delivery and confirmation of the date of receipt, and this confirmation has legal validity
delis-thumbs-7e•about 8 hours ago
Meanwhile on LinkedIn… every sales bozo with zero technical understanding is screaming at the top of their virtual lungs that everything must be done with AI and that it is the solution to every layoff, every economic problem, everything.

It is just a matter of time before something really, really bad happens.

funkyfourier•about 7 hours ago
The Hindenburg of coding.
freakynit•about 7 hours ago
I think vibe-coding is cool, but it runs into limits pretty fast (at least right now).

It kinda falls apart once you get past a few thousand lines of code... and real systems aren't just big, they're actually messy: shitloads of components, services, edge cases, things breaking in weird ways. Getting all of that to work together reliably is a different game altogether.

And you still need solid software engineering fundamentals. Without understanding architecture, debugging, tradeoffs, and failure modes, it's hard to guide or even evaluate what's being generated.

Vibe-coding feels great for prototypes, hobby projects, or just messing around, or even some internal tools in a handful of cases. But for actual production systems, you still need real engineering behind it.

As of now, I'm 100% hesitant to pay for, or put my data on systems that are vibe-coded without the knowledge of what's been built and how it's been built.

seethishat•about 7 hours ago
I saw something very similar a few months ago. It was a web app vibe-coded by a surgeon. It worked, but they did not have an index.html file in the root web directory, and they would routinely zip up all of the source code (which contained all the database connection strings, API credentials, AWS credentials, etc.) and place the backup in the root web directory. They would also dump the database to that folder (for backup). So web browsers that went to https://example.com/ could see and download all the backups.

The quick fix was a simple, empty index.html file (or setting the -Indexes option in the Apache config). The surgeon had no idea what this meant or why it was important. And the AI bots didn't either.
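Concretely (directory path illustrative), the listing fix is a one-liner in the Apache config; note it only stops the files being enumerable, it does not protect a file whose name an attacker already knows:

```apache
# Disable auto-generated directory listings for the web root, so stray
# backups are no longer browsable. This hides the files; it does not
# stop anyone who guesses a filename from downloading it.
<Directory "/var/www/html">
    Options -Indexes
</Directory>
```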

The odd part of this to me was that the AI had made good choices (strong password hashes, reasonable DB schema, etc.) and the app itself worked well. Honestly, it was impressive. But at the same time, they made some very basic, trivially avoidable deployment/security mistakes. They just needed a bit of guidance from an experienced DevOps security guy to make it Internet-worthy, but no one bothered to do that.

Edit: I do not recommend backing up web apps on the web server itself. That's another basic mistake. But they (or the AI) decided to do that, and no one with experience was consulted.

shivaniShimpi_•about 6 hours ago
Interesting, so the AI got the hard stuff right: password hashing, schema design, fine. It fumbled the stuff that isn't really "coding" knowledge; it feels more like operational intuition. A backup folder sitting in the web root isn't a security question, it's a "have you ever been burned before" question, and the surgeon hadn't. So they didn't ask and the model didn't cover it. IMO that's the actual pattern: the model secures exactly what you ask about and has no way of knowing what you didn't think to ask. An experienced dev brings a whole graveyard of past mistakes into every project. Vibe coders bring the prompt.
NoGravitas•about 3 hours ago
The competence profile of any LLM-based AI is extremely spiky - whether it does a particular task well or not is pretty independent of the (subjective) difficulty of the task. This is very different from our experience with humans.
nerptastic•about 6 hours ago
This is what I'm noticing. At my workplace, we have 3 or 4 non-devs "writing" code. One was trying to integrate their application with the UPS API.

They got the application right, and began stumbling with the integration: they created a developer account and got the API key, but in place of the application's URL they had input "localhost:5345" and couldn't get that to work, so they gave up. They never asked the tech team what was wrong, never figured out that they needed to host the application. Some fundamental computer literacy is the missing piece here.

I think (maybe hopeful) people will either level up to the point where they understand that stuff, or they will just give up. Also possible that the tools get good enough to explain that stuff, so they don’t have to. But tech is wide and deep and not having an understanding of the basic systems is… IMO making it a non-starter for certain things.

TeMPOraL•about 5 hours ago
Maybe this is what's missing in the prompt? We learned years ago to tell the AI they're an expert principal 100x software developer ninja, but maybe we should also honestly disclose our own level of expertise in the task.

A simple "I'm a professional surgeon, but sadly know nothing about making software" would definitely make the conversation play out differently. How? Needs to be seen. But in an idealized scenario (which could easily become real if models are trained for it), the model would coach the (self-stated) non-expert users on the topics it would ordinarily assume the (implicitly self-stated) expert already knows.

Arch-TK•about 6 hours ago
The fix is to not let users download the credentials. In fact, ideally the web server wouldn't have access to files containing credentials, it would handle serving and caching static content and offloading requests for dynamic content to the web application's code.

Disabling auto-indexing just makes it harder to spot the issue. (To clarify, also not a bad idea in principle, just not _the_ solution.) If the file is still there and can be downloaded, that's strictly something which should not be possible in the first place.
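One belt-and-braces sketch of that principle (nginx syntax; the extension list is illustrative): refuse outright to serve dump- and archive-type files, on top of keeping them out of the web root in the first place:

```nginx
# Never serve database dumps, archives, or env files from this site,
# even if someone drops them into the document root again by mistake.
location ~* \.(zip|tar|gz|sql|bak|env)$ {
    return 404;
}
```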

simianwords•about 6 hours ago
Agent-native DevOps tools are probably necessary. There's no reason this should have to be done manually.

How I see it happening: agents like CC have built-in skills for deployment and use building blocks from either AWS or other, simpler providers. Payment through OAuth and seamless checkout.

This should be standardised.

EdNutting•about 8 hours ago
Software engineering is looking more and more like it needs a professional body in each country, with accreditation and standards. I.e., it needs to grow up and become like every other strand of engineering.

Gone should be the days of "I taught myself, so now I can [design software in a professional setting / design a bridge in a professional setting]." I'm not advocating gatekeeping: if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you're gonna need professional accreditation. The same should be true for software engineering now.

rubzah•about 7 hours ago
As the sibling pointed out, there are already plenty of laws about, for example, handling of personally identifiable data. Somehow there is a lack of awareness, perhaps what is needed is a couple of high-profile convictions (which can't be too far off).
EdNutting•about 7 hours ago
One of the key functions of a professional body is to ensure all members are aware of existing and new laws, standards and codes of practice. And to ensure different grades of engineer are aware of different levels of the standards. And that sector-specific laws and standards are accredited accordingly.

The solution to the problem you’re describing is the very thing you’re claiming exists but which doesn’t for software engineering!

High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that’s not a substitute for managing the industry properly.

cik•about 7 hours ago
Professional bodies act as nothing more than gatekeepers and rent-seekers for things of this nature. Anyone can write software, but not everyone writes security-minded software.

We already have laws in place, and certifications that help someone understand if a given organization adheres to given standards. We can argue over their validity, efficacy, or value.

The infrastructure, laws, and framework exist for this. More regulation and bureaucracy doesn't help when the current state isn't enforced.

EdNutting•about 7 hours ago
There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.

In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.

We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.

Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.

The mishmash of laws we have around data and privacy are not professional standards, nor are they accreditation. We don’t have the framework or laws around this. And I am not aware of the USA (federal level) or Europe (or member states) or China or Russia or India or etc having this.

For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There’s no laws covering any of this stuff. Just good-ol’ GDPR and some sector-specific laws here and there trying to keep people mildly safe.

aledevv•about 7 hours ago
> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one command away from anyone who looked

This takes the cake!

This is a typical example of someone using coding agents without being a developer: AI used without understanding can be a huge risk if you don't know what you're doing.

AI used for professional purposes (not experiments) should NOT be used haphazardly.

And this also opens up a serious liability issue: the developer has the perception of being exempt from responsibility, and this leads to enormous risks for the business.

shivaniShimpi_•about 6 hours ago
Every other field that's figured out high-stakes failure eventually landed on the same solution: make sure two people who understand the details are looking at it. Pilots have copilots, surgeons have checklists, and nuclear plants have independent verification. Software was always the exception, because when it broke it mostly just broke for you. Vibe coding doesn't change that equation; it merely removes the one check that existed before, which was that the person who wrote the code understood what was going on. Now that's gone too.
Ekaros•about 5 hours ago
We do have code reviews for pull requests, but on average I would guess there is a great amount of complacency there. I suppose the old-school proper QA phase was the best answer we had. But that is expensive and slow.
NoGravitas•about 3 hours ago
Maybe expensive and slow is actually an improvement.
BrissyCoder•about 8 hours ago
This reads like internet fiction to me. Very vague and short.
yawniek•about 8 hours ago
FWIW I know Tobias and it's very, very unlikely he made this up. My guess is it's intentionally vague so as not to leak any information about the culprit, which I guess is fair.
rubzah•about 7 hours ago
I assure you that these kinds of things are happening right now.
watwut•about 7 hours ago
Short writing is just good writing. Also, it was not vague; it just omitted identifying information.
abrookewood•about 8 hours ago
Given the subject matter, it would be highly unethical to reveal the name of the company before verifying it was indeed fixed. I'd be wary of getting sued.
sixhobbits•about 7 hours ago
Yeah, keeping it vague makes sense to protect the place if it's still online, but the whole thing doesn't really make sense?

The timelines mentioned are weird: did he speak to them before they built it, or after? It's not that clear; he mentions they mentioned watching a video.

> The entire application was a single HTML file with all JavaScript, CSS, and structure written inline.

This is not my experience of how agents tend to build at all. I often _ask_ them to do that, but their tendency is to use a lot of files and structure

> They even added a feature to record conversations during appointments

So do they have the front-desk laptop in the doctor's room? Or were they recording conversations anyway, and now they feed them into the system afterwards?

> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.

Also definitely not the normal way an agent would build something. Security flaws, yes, but this sounds more like someone who just learnt coding, or the most upvoted post of all time on r/programmerhorror, not really AI.

Overall I'm skeptical of the claims made in this article until I see stronger evidence (not that I'm supporting using slop for a medical system in general).

fzzzy•about 7 hours ago
The single-file thing makes perfect sense if it was built as an artifact in one of the big providers' web UIs.
rubzah•about 7 hours ago
I know, through personal acquaintance, of at least one boutique accounting firm that is currently vibe-building their own CRM with Lovable. They have no technical staff. I can't begin to comprehend the disasters that are in store.
antupis•about 7 hours ago
Generally, why build your own CRM? ERPs and other resource-planning systems I get, because you can tailor those to your back office. But for a CRM you mostly need reliability.
consumer451•about 8 hours ago
What would a responsible on-boarding flow for all of these tools look like?

> Welcome to VibeToolX.

> By pressing Confirm you accept all responsibility for user data stewardship as regulated in every country where your users reside.

Would that be scary enough to nudge some risk analysis on the user's part? I am sure that would drop adoption by a lot, so I don't see it happening voluntarily.

sigseg1v•about 7 hours ago
We require someone with a professional engineering designation from an accredited engineering body to sign off and approve before a building can be built. If it is found to have structural issues later, that person can be directly liable and can lose their license to operate. Why this is not the case with health software I cannot explain. Every time I propose this, the only argument I receive against it is from people who are mad that their field might dare to apply the same regulation every other field has.
consumer451•about 7 hours ago
Oh man, I have gone off on rants about software "engineering" here in the past.

My first office job was as an AutoCAD/network admin at a large Civil and Structural engineering firm. I saw how seriously real engineering is taken.

When I brought up your argument to my FAANG employed sibling, he said "well, what would it take to be a real software engineer in your mind?"

My response was, and always will be: "When there is a software Professional Engineer stamp, with the engineer's name on it, which carries legal liability for gross negligence, then I will call them "Software Engineers."

jgrizou•about 7 hours ago
Would it? Feels a bit like when you use Facebook and handover all your data.
jillesvangurp•about 7 hours ago
I think the issue here is less about AI misbehaving and more about people doing things they should not be doing without thinking too hard about the consequences.

There are going to be a lot of accidents like this because it's just really easy to do. And some people are inevitably going to do silly things.

But it's not that different from people doing stupid things with Visual Basic back in the day. Or responding to friendly worded emails with the subject "I love you". Putting CDs/USB drives in work PCs with viruses, worms, etc.

That's what people do when you give them useful tools with sharp edges.

aitchnyu•about 7 hours ago
Is anybody making a framework where you declare your security intentions as code (for each CRUD action), which agents can correctly implement and unit-test? I have seen a Lovable competitor's system prompt with 24 lines of "please consider security when generating select statements, please consider security when generating update statements...", since it expects to dump queries here and there.
CrzyLngPwd•about 7 hours ago
I think it is wonderful.

It's reminiscent of the 90s, where every middle manager had dragged and dropped some boxes on some forms, and could get a salesman to sell it, without a care in the world for what was going on behind the scenes.

Until something crashed and recovery was needed, of course.

The piper always needs to be paid.

Steve16384•about 7 hours ago
Or someone starts with an Excel spreadsheet just to "keep track of a few things". Then before they know it, it has become a critical part of the business but too monolithic and unorganised to be usable.
ramon156•about 8 hours ago
502, and the site is getting hugged to death it seems

Edit: the archive.ph one works for me :)

0-bad-sectors•about 4 hours ago
I think AI will soon be too expensive for normal/non-technical people to tinker with, and these kinds of vibe-coding stories will disappear.
keysersoze33•about 5 hours ago
The takeaway is to vet new companies one is dealing with - even just calling them up and asking if they've AI generated any system which deals with customer/patient data.

This is going to get more common (state sponsored hackers are going to have a field day)

debarshri•about 7 hours ago
I believe there are various dimensions to vibe coding. If you work with an existing codebase, it is a tool to increase productivity. If you have domain-specific knowledge, in this case a patient management system, you can build better systems.

Otherwise, you end up simulating a product. A lot of the non-technical folks building products with AI vibe coding are basically building product simulations. It looks like a product and functions like a product, but behind the scenes you can poke holes in it.

mnls•about 7 hours ago
Damn!!! And I keep hardening my RSS app which was partly vibe coded and not exposed to the WAN while "professionals" give data away.
sjamaan•about 7 hours ago
So much is missing from this story. Did they report it to the relevant data authority? Did the fix they said they applied actually fix anything? Etc.
GistNoesis•about 8 hours ago
Who should get jailed?

Does the company which willingly sells the polymorphic virus editor bear any responsibility, or should it fall on the unaware vibe coder?

EdNutting•about 7 hours ago
We don’t blame companies selling 3D Design software or 3D printers or mortar and cement, or graph paper and pencils. When people abuse those tools and build huts or houses or bridges that fall down, we usually blame the user for not having appropriate professional qualifications, accreditation, and experience. (Very occasionally we blame bugs in simulation software tools).

AI is a tool. It's not intelligent, and it works at a much bigger scale than bricks and mortar, but it's still just a tool. There's lots we can blame AI companies for, but abuse of the tool isn't a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent, e.g. doctors) for using it outside their ability. Much like doctors are fed up with patients using AI to try to act like doctors, software engineers are now finding out what it's like when clients use AI to act like software engineers.

coopykins•about 6 hours ago
I interviewed some years ago at an AI-related startup. After looking at the live product, the first thing I saw was their prod DB credentials and OpenAI API key publicly sent in some requests... Bad actors will be having a lot of fun these days.
agos•about 8 hours ago
I really hope OP also contacted their relevant national privacy authority, this is a giant violation
TeMPOraL•about 6 hours ago
I have my doubts about this story. I consulted on a medtech project in the recent past in a similar space, and at various points different individuals vibe-coded[0] not one but three distinct, independent prototypes of a system like the article describes, and none of them was anywhere near that bad. On the frontend, you'd have to work pretty hard to force SOTA LLMs to give you what is being reported here. Backend-side, there's plenty of proper turn-key systems to get you started, including OSS servers you can just run locally, and even a year ago SOTA LLMs knew about them and could find them (and would suggest some of them).

I might be biased by my experience, because we actually cared about GDPR, the AI Act, and proper medical data processing, and I've spent my fair share of time investigating the options that exist. Still, I'm struggling to imagine how one could possibly screw it up anywhere near as badly as the article described. Like, I can't think of a way to do it, to the point I might need to ask an LLM to explain it to me.

--

[0] - Not as a means of developing an actual product, but solely to see if we can, plus it was easier to discuss product ideas while having some prototypes to click around.

erelong•about 6 hours ago
To me it just sounds like eventually someone will figure out how to make vibecoding more reasonably secure (with prompts to have apps be looked at for security practices?)

unless cybersecurity is such a dynamic practice that we can't create automated processes that are secured

Essentially a question of what can be done to make vibecoding "secure enough"

zkmon•about 6 hours ago
Technology for greed vs technology for need. Greed has its cost.
high_byte•about 7 hours ago
This is exactly the kind of vibe-coding horror story I asked for just a few days ago :)

https://news.ycombinator.com/item?id=47707681

ionwake•about 8 hours ago
Anyone else read the title on HN and shudder not wanting to actually click it?
hamasho•about 5 hours ago
The worst blunder I made was when I explored cloud resources to improve the product's performance.

I created a GCP project (my-app-dev) for exploring how to scale up the cloud service. I added several resources to mock production, like compute instances, Cloud SQL, etc., then populated the data and ran several benchmarks.

I changed the specs, number of instances and replicas, and configs through gcloud command.

  $ gcloud compute instances stop instance-1 --project=my-app-dev
  $ gcloud compute instances set-machine-type instance-1 --machine-type=c3-highcpu-176 --project=my-app-dev
  $ gcloud sql instances patch db-1 --tier=db-custom-32-131072 --project=my-app-dev
But for some reason, at one point codex asked to list all projects; I couldn't understand the reason, but it seemed harmless so I approved the command.

  $ gcloud projects list
  PROJECT_ID     NAME       PROJECT_NUMBER
  my-app-test    my app     123456789012
  my-app-dev     my app     234567890123     <- the dev project I was working on
  my-app         my app     345678901234     <- the production (I know it's a bad name)
And after this, for whatever reason it changed the target project from the dev (my-app-dev) to the production (my-app) without asking or me realizing.

Of course I checked every command. I couldn't YOLO while working on cloud resources, even in the dev environment. But I focused on the subcommands and their content and didn't even think it had changed the project ID along the way.

It continued to suggest more and more aggressive commands for testing, and I approved them brainlessly...

  $ gcloud sql instances patch db-1 --database-flags=max_connections=500 --project=my-app
  $ gcloud compute instances delete instance-1 --project=my-app
  $ echo 'DELETE FROM users WHERE username="test";' \
      | gcloud sql connect my-db \
      --user=user --database=my-db --project=my-app
  $ wrk -t4 -c200 -d30s \
      "http://$(gcloud compute instances describe instance-1 \
      --project=my-app \
      --format='get(networkInterfaces[0].accessConfigs[0].natIP)')"
It took a shamefully long time to realize codex was actually operating on production, so I DDoSed and SQL-injected the production...

Fortunately, it didn't do anything irreversible. But it was one of the most terrifying moments in my career.
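One cheap safeguard against this failure mode, sketched below (the wrapper name is invented; my-app is the production project ID from the story): route every agent-issued gcloud call through a wrapper that refuses the production project outright.

```shell
#!/bin/sh
# Hypothetical wrapper: scan the arguments of every gcloud invocation
# and refuse any that target the production project, so a silent
# project switch by the agent fails loudly instead of succeeding.
PROD_PROJECT="my-app"

guarded_gcloud() {
    for arg in "$@"; do
        case "$arg" in
            --project="$PROD_PROJECT"|"$PROD_PROJECT")
                echo "refused: command targets production ($PROD_PROJECT)" >&2
                return 1
                ;;
        esac
    done
    gcloud "$@"    # only non-production invocations reach the real CLI
}
```

An agent that emits gcloud ... --project=my-app then gets an immediate refusal instead of mutating production; the tradeoff is a false positive on any resource literally named my-app.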

EdNutting•about 5 hours ago
This is part of the reason deployments to production cloud environments should:

1. Only be allowed via CI/CD

2. All infra should be defined as code

3. Any deployment to production should be a delayed process that also has a human-approval step in the workflow (at least one, if not more)

(Exactly where that review step is placed depends on your organisation - culture, size, etc.)

And anyone that does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminal. This last aspect is easy and cheap to set up on AWS. I presume it's also possible in Google Cloud.
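As a sketch of point 3 (GitHub Actions syntax; job and script names illustrative), the delayed, human-approved step maps onto a protected deployment environment:

```yaml
# The "production" environment is configured in the repo settings to
# require at least one human reviewer's approval, and optionally a
# wait timer, before this job is allowed to run.
deploy:
  runs-on: ubuntu-latest
  environment: production   # approval gate lives here, not in the YAML
  steps:
    - uses: actions/checkout@v4
    - run: ./scripts/deploy.sh   # infra defined as code, applied by CI only
```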

cmiles8•about 6 hours ago
There's another version of the mythos/narrative that reads like this:

AI companies realized that all this vibe coding has released a shitstorm of security vulnerabilities into the wild, and unless they release a much better model to fix that mess, they'll be found out and nobody will touch AI coding with a 100-foot pole for the next 15 years. This article points more towards this narrative.

repeekad•about 8 hours ago
A perfect example of why a product like Medplum exists, as opposed to reinventing the wheel from scratch.
krater23•about 8 hours ago
The only thing that helps is deleting the database. Every day. Until the thing goes down because the 'developer' thinks he has a bug he can't find.
zoobab•about 8 hours ago
Avoid JavaScript like the plague; it can be overwritten on the client side.
fakedang•about 7 hours ago
Report them - that right there is 5+ different violations. Only then will they realize their stupidity.
faangguyindia•about 8 hours ago
It's nothing new; Dunning-Kruger existed long before AI entered the coding realm.

Several years ago I ran into an American company which consulted with me. They had 4,000 paying customers and had rolled out their own billing solution which accepted crypto, PayPal and Stripe.

They had problems with payments going missing; I migrated them to WHMCS with hardening and they never had any issues after.

Now people may laugh at WHMCS, but use the right tool for the job.

If you need a battle-tested billing solution, then WHMCS does count: it supports VAT, taxes, reporting/accounting and pretty much everything you'll get wrong trying to do it all yourself.

Too bad there aren't battle-tested open-source solutions for this.

peyton•about 8 hours ago
Kinda crazy but hopefully the future holds a Clippy-esque thing for people who don’t know to set up CI, checkpoints, reviews, environments, etc. that just takes care of all that.

It sorta should do this anyway given that the user intent probably wasn’t to dump everyone’s data into Firebase or whatever.

I personally would like this as well since it gets tiring specifying all the guardrails and double-checking myself. Using this stuff feels too much like developing a skill I shouldn’t need while not focusing on real user problems.

grey-area•about 8 hours ago
This problem is unrelated to CI and dev practices etc, this is about trusting the output of generative AI without reading it, then using it to handle patient data.

Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.

dgb23•about 8 hours ago
Fully agentic development is neat for scripts and utilities that you wouldn't have the time to do otherwise, where you can treat it as input/output and check both.

In these cases you don’t necessarily care too much about the code itself, as long as it looks reasonable at a glance.

edwinjm•about 8 hours ago
It is related to CI and dev practices etc. An experienced developer using AI would add security/data protection, even when vibe coding.
mikojan•about 8 hours ago
Hard to believe... This activity should certainly land you in a German prison?!
VanTodi•about 8 hours ago
Since it's a .ch domain, I believe it's in Switzerland. In Germany we have the DSGVO (GDPR), and you can report it there too. If a breach happens, you have to inform all your customers. If it's a first offense and you tried your best, the punishment is not that hard, but since this is medical info they should have known better.

Let's really hope they learned from their mistakes.

crvst•about 6 hours ago
Cool story, bro. Of course it's true if it made it to HN. Who needs proof?
avazhi•about 6 hours ago
You guys realise this is AI slop on AI slop, right?
krapp•about 6 hours ago
This is reality now, what do you want?
avazhi•about 1 hour ago
HN should just have a rule that all content should be human-generated. This post is literally an LLM writing about something an LLM did; it's a bot botting about a bot. Aside from how funny and dystopian that is - just ban it? Just make a rule that submissions require a human author.

I don't think solving this is all that complicated, at least for now. It isn't like it's currently difficult to tell what is and isn't LLM word salad, though that will likely change in the future, but by then the argument will involve whether it really matters or not. But for now, when 80% of the submissions are LLM garbage and it really is garbage, it's pretty jarring.

direwolf20•about 8 hours ago
Some people only care about actual consequences. Download all the data and send it, in the post on a flash drive, to the GDPR regulator's office and another copy to the medical licensing board because why not.
krater23•about 7 hours ago
Hopefully. And I hope he wasn't dumb enough to remove himself.
sajithdilshan•about 7 hours ago
Don't blame the AI for what is clearly gross human negligence. It's like renovating your entire house and then acting surprised when the pipes burst because you used duct tape as a permanent fix.
websap•about 8 hours ago
Do you think that if the agency had hired a consultant to build this, the consultant couldn't have made the same mistakes?

A lack of security theater is a good thing for most businesses.

grey-area•about 8 hours ago
Usually they would just use an off the shelf product and extend it, so they wouldn’t produce the absolute horror story described in the article, no.

I’m not even sure what your last comment means, are you contending that it is a good thing this company violated multiple laws with sensitive patient data?

rightofcourse•about 7 hours ago
It is possible. If you select a consultancy that you know nothing about, and they know nothing about programming and vibe-coded it for you... and maybe you don't even have a contract to hold them responsible, and maybe they don't really have a company either... then I can imagine something like this.
voidUpdate•about 8 hours ago
It is physically possible for a consultant to write bad code. But you'd hope that a consultant could understand that medical data is extremely important to keep secure, and actually write it to have some level of security
ginko•about 8 hours ago
There's lack of security theater and there's:

> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.

They are not the same thing.
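For anyone wondering what "one curl command away" looks like, here is a self-contained toy version (all names, the port, and the data are invented; a static JSON file stands in for an API whose only "access control" is in browser JavaScript):

```shell
# Stand up a toy "API" with no server-side auth, then fetch it anonymously.
mkdir -p /tmp/noauth-demo
echo '{"patient":"Jane Doe","dob":"1980-01-01"}' > /tmp/noauth-demo/patients.json
python3 -m http.server 8099 --directory /tmp/noauth-demo >/dev/null 2>&1 &
SRV=$!
sleep 1
# No token, no cookie, no login: the server answers anyone who asks.
RESP=$(curl -s http://localhost:8099/patients.json)
echo "$RESP"
kill "$SRV" 2>/dev/null
```

Whatever the page's JavaScript does or doesn't show, the HTTP endpoint underneath hands the record to any client.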

chrisjj•about 7 hours ago
You've got to wonder where the "AI" parroted that from.