Discussion (97 Comments)
They were using AWS, so I logged in the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.
The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I had to have the two tables open, cross check the specs and price.
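The manual cross-check can be scripted away; a minimal sketch with hypothetical instance data (the specs and prices below are illustrative, not real AWS quotes):

```python
# Hypothetical excerpts of the two tables the console makes you cross-check.
SPECS = {
    "t3.medium": {"vcpu": 2, "ram_gb": 4},
    "m5.large": {"vcpu": 2, "ram_gb": 8},
}
HOURLY_PRICE_USD = {  # illustrative numbers, not current list prices
    "t3.medium": 0.0416,
    "m5.large": 0.096,
}

def combined_view():
    """Merge specs and prices into the single table the UI never shows."""
    return {
        itype: {**spec, "usd_per_hour": HOURLY_PRICE_USD[itype]}
        for itype, spec in SPECS.items()
        if itype in HOURLY_PRICE_USD
    }
```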
If I had learned one thing from my past life, it was this: if you see the signs of an abusive relationship, have the option to walk out, and don't take it, then all that follows is your own fault.
Created a DigitalOcean account, moved everything over. Set up our CI/CDs to deploy there, and spent the next two months on the product, launching one month earlier than promised.
Some years before that I saw a video online where a person digs a hole near a river and lays a pipe connecting the river to the hole. The fish push themselves hard through the pipe, straight into the trap. Choosing the path of least resistance and never backing off from a mistake: a recipe for ending up like those fish. The video left a big impression on me.
I'm not a big fan of a lot of the antipatterns I notice with AWS. But I moved my business over there about 12 years ago after many years of dedicated hosting. I'm extremely judicious with managing my own and my clients' use of services. I get extremely good performance at very low cost on AWS by allocating things cleverly, but it's a lot like hunting for the best airline miles program, the best credit card perks, or the cheapest flights. If you enjoy that sort of thing.
My one caveat is that I only deploy stuff on AWS that would be fairly trivial to move elsewhere. RDS, EC2, storage; snapshot it and bail if necessary.
> The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I don’t want to be the one defending AWS, but I don’t think this is a valid reason to dislike them. Pricing depends on so many factors: reserved, dedicated, spot, and on-demand instances all have different prices.
I don’t even think that using the UI to spin up the machine is the right way to do that in an enterprise setting, you should always do that through Infrastructure as Code, to know exactly what you have up and running, just by looking at that as you would with any program. I’d suggest to use the UI for simple testing, for which the costs are often (but not always) negligible.
Jeff Bezos if you see this please send me some cash.
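The infrastructure-as-code suggestion above can be illustrated with a toy example: the desired state is a declarative structure living in version control, and a plan step diffs it against what is actually running (all names below are made up):

```python
# Desired state, readable like any other program in the repo.
DESIRED = {
    "web-1": {"type": "t3.medium"},
    "web-2": {"type": "t3.medium"},
    "db-1": {"type": "m5.large"},
}

def plan(desired, actual):
    """A `terraform plan` in miniature: what to create and what to destroy."""
    to_create = sorted(set(desired) - set(actual))
    to_destroy = sorted(set(actual) - set(desired))
    return to_create, to_destroy
```

Running `plan(DESIRED, live_inventory)` tells you exactly what you have up and running just by looking at the code, which is the point being made.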
But that's the problem: The complexity of doing that properly is pretty much the same as just doing your own hardware (which is what I'm working with most of the time - handling stuff on physical servers). And at that point the question should be why you're paying AWS so much money and pay your people to automate AWS workflows when you could just pay them to automate workflows on physical hardware, which would be way cheaper to run than the AWS instances.
It should really be a read-only layer for metadata and logs.
If it bothers you that you need to open two tabs for cross-checking the costs, you may want to avoid every cloud provider, not just AWS.
Once you have NAT gateways, CloudFront, S3, auto scaling, load balancers, etc., calculating the cost becomes an art rather than an exact science. And if you don't use these, there is no point in using AWS; there are plenty of "cheap" VPS providers.
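The "art rather than exact science" part is that every component meters a different unit; a back-of-envelope sketch (all unit prices are placeholders, not current AWS rates):

```python
# Placeholder unit prices; real AWS rates vary by region and change over time.
PRICES = {
    "nat_gateway_hours": 0.045,
    "nat_gb_processed": 0.045,
    "lb_hours": 0.0225,
    "s3_gb_months": 0.023,
    "egress_gb": 0.09,
}

def monthly_estimate(usage):
    """Sum component costs; the hard part in practice is predicting `usage`."""
    return sum(PRICES[k] * v for k, v in usage.items())

usage = {
    "nat_gateway_hours": 730,  # one NAT gateway running the whole month
    "nat_gb_processed": 500,
    "lb_hours": 730,
    "s3_gb_months": 200,
    "egress_gb": 300,
}
```

Even this toy version needs five usage inputs you can only guess at in advance, which is why the bill surprises people.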
This is completely backwards, at least with OpenSearch and Valkey. AWS didn't create the forks until after the upstream projects changed their license, so it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes. With Valkey in particular it was members of the former Redis core development team that created Valkey.
Why do we apply this standard to MongoDB but not to Apache, Linux, Postgres, or MariaDB? One purpose of an open source license is to allow many providers to provide the service. As I've talked about here previously, Elasticsearch wasn't able to provide the service I needed, so I had to move to AWS.
It's weird to me that the Hacker News community doesn't think that sort of competition is good. The narrative seems to be that all these businesses are somehow victims of AWS, when it seems the truth is much more straightforward: they provided open source software and people used it. The fact that their business had no working plan to actually monetize that foundation should not be taken out on the community.
Many support breaking up Amazon so others could compete, not killing small entities while growing Amazon.
Just try a little bit of understanding.
But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.
Or, seen from the other side, these projects chose initial licenses that didn't fit with how they wanted others to use their project.
If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.
What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.
1. Vercel Phase
My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even for projects under 100 users, I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.
2. Self-host Phase (Hetzner + Coolify)
Later, I started setting up my own server with Hetzner and deploying with Coolify. Since Coolify is open-source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it. But later I discovered that even this way, I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome. I needed to pass various system and environment variables between services, which was very tedious.
3. Cloudflare Phase
So later I switched to Cloudflare. With Cloudflare Workers, I can deploy fullstack applications and use D1 Database and Cloudflare KV to replace Redis. These features can be called directly within the Worker without needing to pass environment variables.
Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.
Use AWS at your own risk, Paul Vixie is not there to save you.
This is always the weird thing in those rants. He's complaining that after 4 days his mail is offline.
Now I'm doing a mix of physical servers in rented rackspace, and rented servers - but even there I can have billing mixups where they deactivate servers for no good reason. And to get email working again the limiting factor would be the DNS TTL - new servers would be online somewhere else within hours of it going down. (And yes, I tested that just last year - one hoster threatened cutoff due to non-payment on a paid invoice, which prompted me to move the mail server just in case while getting this resolved).
That he is complaining about his email being down or that he trusted AWS at all with email?
Yeah, no that's not how it works with email. You have to build reputation for weeks or receivers throttle you.
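The usual workaround is a warm-up schedule: start with a small daily volume and roughly double it until you reach your target. A sketch (the numbers are illustrative; real throttling behaviour varies per receiving provider):

```python
def warmup_schedule(start=50, factor=2, target=20_000, days=14):
    """Daily send caps while a new IP builds reputation with receivers."""
    volume, schedule = start, []
    for _ in range(days):
        schedule.append(min(volume, target))
        volume *= factor
    return schedule
```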
And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.
Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure that one of these days I might actually be forced to learn IAM and VPNs and God only knows what else. Meantime, every time I touch it my eyes bug out.
It's a ghost of its former self, but I'd probably still rather use Heroku today than being forced to use Lightsail even once again.
I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.
But Elasticache is exploitatively priced with almost no value add.
It is slower, less optimized, less stable, and only supports one DB compared to a vanilla redis install with zero configuration.
There are some scalability improvements, but it’s extremely rare they’re even required because vanilla redis so wildly outperforms elasticache on a similar instance.
their dashboards are trash & don't work - Google Cloud, AWS Console, Google Ads, Meta Ad manager
I won't even mention the hyped up LLM vendors.
but here we are - people being laid off due to AI - money being funneled into gigawatt datacenters
AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.
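The core of that design can be sketched in a few lines: statements with wildcard actions and resources, where an explicit Deny always beats an Allow (the policy below is a made-up example, not a real AWS policy document):

```python
import fnmatch

POLICY = [
    {"Effect": "Allow", "Action": "s3:Get*", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Deny", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/secret/*"},
]

def is_allowed(action, resource, policy=POLICY):
    """Toy AWS-style evaluation: an explicit Deny wins, else any matching Allow."""
    def matches(stmt):
        return (fnmatch.fnmatchcase(action, stmt["Action"])
                and fnmatch.fnmatchcase(resource, stmt["Resource"]))
    if any(matches(s) for s in policy if s["Effect"] == "Deny"):
        return False
    return any(matches(s) for s in policy if s["Effect"] == "Allow")
```

The real evaluator handles conditions, principals, and cross-account logic on top of this, but the deny-overrides-allow core is the same.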
Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.
With that said, the remaining 90% of AWS services, like WorkMail, Cognito, and API Gateway, are absolute hot garbage that no well-meaning AWS expert will touch with a 10 meter stick.
DX is simple, the integrations between the two are good, and the stack is well understood by the LLM.
Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.
Well, besides the fact that the author's account got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option. Another option is to run your own server but have it use something like AWS SES to send externally, avoiding the IP reputation issue.
Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.
We had super bursty traffic, and had to go with Google Cloud (very early days! [0]) because on AWS you'd need to contact them and pre-warm the ELB capacity for your expected bursts.
We did a dead launch to 60 million customers (0 to 60 million, no organic growth phase) this way. I wouldn't want to do that on a VPS.
[0] https://cloudplatform.googleblog.com/2013/11/?m=1
The “in minutes” is doing a lot of the work in that sentence above.
I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.
AWS changed that, and the rest of the industry eventually followed.
The virtualised server was not an AWS invention; what was new were their other services. For example, instead of renting a virtual server and installing a database on it, you could rent the database itself; that was the sort of thing AWS turned into a thing.
It was never cheaper; what you paid for was a promise of fire-and-forget. You would no longer need to worry about any responsibility to update the server or the database, because the AWS crew took care of that.
VPSes and non-custom configs for dedicated servers were pretty instant as far as I know, I think the advantage of AWS was more that you could scale up and down much more easily since you weren’t locked down in a monthly contract, and that you could automate server provisioning through an API.
I miss the Media Temple days.
I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?
I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.
The ancient forgotten art of Vertical Scaling.
I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.
People are unnecessarily complicating stuff, and these clouds can go very expensive very quickly.
Recently, I came across a company that was spending $20k a month on GCP. I was like, are you kidding me, $20k for the kind of stuff you do??? It seemed they did not understand how CPU, RAM, and disk work, plastering on "autoscaling hyper solutions" and burning money in the cloud.
I moved their stuff out of the GCP managed solution and ended up with a $200-400 per month bill. The CEO can still not believe how it's even possible.
I suggested they move to dedicated servers, but they didn't want that; they said they must show they are on a hyperscaler cloud.
OK, fine: we'll stay on the hyperscaler but not use any of their services other than VMs.
They racked up a ton of bills by using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.
I replaced all of it with Prometheus, Grafana, and Loki, and moved most stuff from Datastore to Postgres and Mongo with replicas. I added Redis.
I implemented a custom scaler where you can scale off of app metrics, not by just using a random peg on CPU.
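A scaler driven by an application metric is only a few lines; a sketch using queue depth (the worker capacity and bounds are hypothetical tuning values, not the commenter's actual numbers):

```python
import math

def desired_replicas(queue_depth, jobs_per_worker=100, floor=2, ceiling=50):
    """Scale on an app metric (queue depth) instead of a random CPU peg."""
    wanted = math.ceil(queue_depth / jobs_per_worker)
    return max(floor, min(ceiling, wanted))
```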
I implemented hot data reload by packing the data updates in a gzip file, uploading it to GCS, and pulling it from the autoscaled units. I also moved the stuff to Spot VMs.
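The hot-reload trick reduces to a pack/unpack helper on each side; a sketch with the GCS upload and download steps omitted (a real version would add the google-cloud-storage client calls around these):

```python
import gzip
import json

def pack_update(records):
    """Producer side: serialize a data update and gzip it for upload."""
    return gzip.compress(json.dumps(records).encode("utf-8"))

def load_update(blob):
    """Consumer side: each autoscaled unit pulls the object and unpacks it."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))
```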
The complexity of stuff in cloud is high for nothing.
At a previous bigger company, getting procurement to sign up to a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up to a new service takes _months_ even for $10/mo, as they’ll negotiate for bulk discounts and the best possible terms for something that will literally cost less per year than one of the meetings they hold to discuss the “value”. Meanwhile on AWS I can click a button in the marketplace and it gets thrown in the AWS account, which is pre-approved spending.
We use it because we don’t want to deal with slow procurement process. It kills all the momentum.
Watched one company end up with a $250k AWS bill when their credits expired (which they could not pay).
I agree that it's overcomplicated. Although having the self-service portal also for assigning IPs is useful. But most of it seems overkill. Although, being able to detach storage from VMs and such is also quite flexible. But still.
Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we’d just had a single bare-metal machine from Hetzner, we could have saved money _and_ reduced our iteration times.
You removed all of their logging and all of their redundancy and reliability, and replaced it with shitters that will all explode if the small provider’s one data centre goes down.
And if someone penetrates this mega server, they’ll be able to wipe all your logs or tamper with them, to hide the attack.
If your storage servers go down, everything they have is gone. And these providers don’t offer the finest hardware. How do you know all of those drives aren’t from the same batch? They will be, because they’re a bulk buyer with a single data centre.
By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.
It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.
They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.
Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.
To be fair, if they had an AWS Solutions Architect involved: they heavily push you down this road, and if they manage to get in management's ear they'll push the idea that serverless AWS features are vastly cheaper.
If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel-and-dimed" for everything: API Gateway requests, Lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.
I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions is worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
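The break-even intuition can be put in numbers; a sketch with placeholder rates (not current AWS prices) comparing per-request metering against a flat VM bill:

```python
# Placeholder rates, not current AWS list prices.
APIGW_PER_MILLION = 1.00       # $ per million API Gateway requests
LAMBDA_PER_MILLION = 0.20      # $ per million Lambda invocations
LAMBDA_PER_GB_SECOND = 0.0000166667

def serverless_monthly(millions_of_requests, avg_ms, mem_gb):
    """Metered cost: every request pays the gateway, the invocation, and GB-seconds."""
    gb_seconds = millions_of_requests * 1_000_000 * (avg_ms / 1000) * mem_gb
    per_request = millions_of_requests * (APIGW_PER_MILLION + LAMBDA_PER_MILLION)
    return per_request + gb_seconds * LAMBDA_PER_GB_SECOND

FLAT_VM_MONTHLY = 80.0  # hypothetical pair of small VMs behind a load balancer
```

At a handful of requests the metered bill is near zero, but at 100M requests/month of 50 ms at 512 MB it climbs well past the flat VM cost, which is the ramp-up effect described here.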
The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.
Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.
“Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.
Innovation has ground to a halt, with mostly just meh “hey, us too” launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me internally they talk a lot about making sure things are “sticky” with customers. The best engineering talent no longer wants to work there and it shows, especially in places like AI where AWS has just released wave after wave of discombobulated nonsense.
As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.
The chips business in Annapurna is probably the most interesting thing and that plays to its strength of the boring low level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.
AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.
Lambda is incredibly simple to use, it just runs a function for you.
Not sure how you could burn so much with DynamoDB. It’s serverless and incredibly cheap. Must have been doing something insane, like scanning through a huge dataset over and over.
Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.
AWS CLI??? Holy guacamole, what a mess. Using the AWS CLI feels like going through digital identity verification just to get the basics done.
While GCP CLI is like "sure, here"!
You're also putting your business at risk with Google randomly banning accounts and not providing timely appeals. [1]
[1]: https://news.ycombinator.com/item?id=45798827
Hey good lookin'