Discussion (244 Comments). Read Original on HackerNews
> © 2024 CoLaptop. All rights reserved.
Website copyright is out of date by two years... And the website has been online since then. https://crt.sh/?q=colaptop.pages.dev
> Thank you for your interest. Please submit the form below and we'll get back to you within 2 working days.
> - Team @ CoLaptop.com
Also colaptop.com is not even registered anymore. If I had to guess, the pages.dev site stayed up but the domain and email are gone.
> Website copyright is out of date by two years... And the website has been online since then. https://crt.sh/?q=colaptop.pages.dev
That's exactly what it should be, then. A copyright notice lists the year of publication, not the current year.
> A proper copyright notice consists of three elements: a © symbol, the year of publication, and the copyright owner’s name.
https://copyrightalliance.org/faqs/what-is-copyright-notice/
```php
© <?=$currentYear?> Your Name
```
Doing what many sites do may actually weaken your copyright notice. You're supposed to list the years in which you made copyrightable edits to the page; a range like 2010-2025 is only valid if you made such edits in every single year of that range.
In the US, you get copyright on your work automatically, with or without a label.
The only thing a label does in the US is defend against "innocent infringement" defenses. But even that defense doesn't absolve the other party from liability; you just can't recover as much.
There is no reason you can't have `(C) 200X-$currentYear Acme Inc` or whatever.
And honestly, not a terrible idea, I have old laptops that would work as a VPS. $7/month for somebody to host a public server for me, and not on my crappy residential isp? All I have to lose is an old laptop I haven't touched in 5 years? Sign me up
(they do need a real domain before i'll give them money tho, lol)
That gets you, what, 1 "vCPU" with maybe a gig of ram and a couple of dozen gig of disk.
If you (or a friend) work for a company of any size, there's probably a cupboard full of laptops that won't upgrade to Win11 sitting there doing nothing that you could get for free just by asking the right person. It'll have 4 or 8 cores, each of which is more powerful than the "vCPU" in that droplet. It'll have 8 or maybe 16 gig of RAM, at least half a TB of disk, and depending on the laptop it can quite likely be configured with half a TB of fast NVMe storage and a few TB of slower spinning-rust storage.
If you want 8vCPUs/cores, 16GB of ram, and 500GB of SSD, all of a sudden Digital Ocean looks more like $250/month.
If you are somewhere in that grey area where you need more than 1 vCPU and 1GB of memory, grabbing the laptop out of the cupboard that your PM or one of the admin staff upgraded from last year and shipping it off to a datacenter with your flavour of Linux installed seems like it's worth considering.
Hell, get together with a friend and have two laptops hosted for €14/month between you, and be each other's "failing hardware" backup plan...
I bet colos will plug a KVM into your hardware and give you remote access to that KVM. I also bet rachelbythebay has at least one article that talks about the topic.
> ...can't scale if you suddenly had a surge of traffic.
1) If your public server serves entirely or nearly-entirely static data, you're going to saturate your network before you saturate the CPU resources on that laptop.
2) Even if it isn't, computers are way faster than folks give them credit for when you're not weighing them down with Kubernetes and/or running swarms of VMs. [0]
3) <https://www.usenix.org/system/files/conference/hotos15/hotos...> (2015)
[0] These are useful tools. But if you're going to be tossing a laptop in a colo (or buying a "tiny linode or [DO] droplet"), YAGNI.
There's no way to read this without hearing a Scottish accent. It's like a sleeper agent activation phrase.
https://www.youtube.com/watch?v=hKfAjlW6E30
Can you explain how a copyright can be "out of date by two years"?
I always thought the copyright notice should reflect the year of creation, and that it's actually bad (from a legal POV) to always show the current year through scripting.
[1] The motor behind it is cost reduction; once that stops, it stops, because we can't afford it anymore!
A laptop isn’t the way to do that though. And your typical VC-fueled startup isn’t going to know how to do it either. It takes a very narrow slice of competence to be able to do that correctly.
> We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.
It's fixed now.
And someone bought the .com domain: https://crt.sh/?id=25447880244
Colocating itself, though, isn't new at all. There are lots of different ways to host, including servers and Mac minis; laptops are conceivable too because they share the same kinds of parts a Mac mini might have.
My off-site backup is a ThinkPad X230 with a 1 TB HDD. It's currently at my friend's house, and I access it with Tailscale. €7/month to colocate this in a datacenter with stable (and fast) Internet + power seems like a pretty good deal.
I can understand some of the concerns with user-provided hardware. Maybe a better model would be for CoLaptop to offer hardware themselves. This would allow them to standardize on a few models, which opens up many possible improvements such as central DC power, power-efficient BIOS settings, enclosures with cooling ducts, etc. They can still follow the "old laptop as a server" model by buying off-lease laptops from the corporate world.
If all the machines were running Windows, the difference would've been even more drastic.
What I don't get is that we have these autoscaling technologies that make software fault-tolerant to hardware failure, yet companies still insist on buying expensive server-grade HW for everything.
The downside is that if some piece of firmware or hardware has a vulnerability you have a larger attack surface.
But that's still a lot easier than managing laptops, which are unwieldy in a DC for a lot of other reasons.
We have some in-house software which runs in k8s. Total throughput peaks at about 1 Mbit/s of control traffic; it's controlling some other devices which are on dedicated hardware. Total of 24GB of RAM.
The software team say it needs to run across 3 different servers for resilience purposes.
The VM team want to use neutronix as their VM platform, so they can live-migrate VMs from one host to another.
They insist on 25Gbit networking, and for resilience purposes that needs to be MLAGed.
The network team also have to have multiple switches and routers, again for resilience.
So rather than having 3 $1000 laptops running bare metal kubes hanging off a pair of $500 1G switches eating maybe 200W, we have a $140k BOM sucking up 2kW.
When something goes wrong, all those layers of resilience will no doubt fight each other. The hardware drops, so the VM freezes as it's restored onto another host, so k8s moves the workloads; then the VM comes back and k8s gets confused (maybe? I don't know how k8s works).
It's all needlessly overspecced costing 30 times as much as it should.
But from each individual team it makes sense. They don't want to be blamed if it doesn't work, they don't have to find the money. It's different departments.
Even today, a UPS that turns itself back on after power goes out long enough to drain the battery and is then restored is somewhat exotic. Amusingly, even the new UniFi UPSes, which are clearly meant to be shoved in a closet somewhere, supposedly turn off and stay off when the battery drains according to forum posts. There are no official docs, of course.
I hope the words 'web server hosted in Excel VBA' illustrate the magnitude of horrors that can emerge in these situations.
one infra team - provides the entire platform
any other approach and you’re dicking around
Simple: the cost of managing the hardware scales with its heterogeneity and unreliability. Even just dealing with the dozens of different form factors (air vent placement!) and power units of laptops would be a big headache.
This is basically the same price as the cheapest options on Hetzner: https://snipboard.io/C9epWo.jpg. Sure my old laptop does have more RAM and a bigger SSD, but I bet it's also less reliable than Hetzner's servers, and is likely to suddenly die some day. So is the tradeoff really worth it? It's hard for me to believe that this is a genuine improvement for most things. The only definite winning case I can think of is if I have a process I want to run, but I don't care if it just suddenly stops working. But when would that ever be the case? and to save a couple dollars per month?
Edit: Maybe this is what github is doing :P
I’m a happy Hetzner customer but I have had servers that I rented from them die a couple of times.
I rent physical servers from them that have been previously rented to other customers. At some point hard drives fail.
However, I have solid backup setup in place (ZFS send and recv to other physical hosts in different physical locations) with that in mind, so I haven’t lost data with Hetzner. But if I naively did not have any backup then data would have gotten lost a couple of times.
Just monitor them so you can act proactively.
The comparison in this case is to Hetzner's VPS offerings, which are probably less powerful than the average "old laptop" but have a significant advantage in terms of hardware reliability. It's still possible for the host running the VPS to have problems which result in a crash or the VM equivalent of a hard power off but the VM hosts and their underlying storage should be redundant such that the virtual hardware never fails.
That's not to say rebooting from a crash-consistent state will always work, you should always keep backups even with a high-quality VPS host, but the odds of recovering cleanly from a hardware problem are orders of magnitude better than an old laptop. For the sort of hobby project or personal tinker box that would be reasonable to host on a random laptop shoved in a rack you probably wouldn't even notice the downtime until you saw the event notification email your provider sends you.
You use so many big words for nothing. All you need are backups. When it dies you restore. Nobody will care.
The linked one is VPS, so all trouble fixing is easier.
Announcing the new "mobile" tier on azure.
To cover lots of ISPs, universities, and other organizations, we have been asking them if they have an old laptop lying around that they can host our software on. The goal is to reach 70,000 probes within the next couple of years.
It is simple probe software, and we share some data or we can pay 20-30 bucks a month for it. We have a couple of NUCs in remote regions but no laptops yet. Basically, we are even happy if an ISP (or anyone) hosts our software from a laptop dangling by a charging cable from a socket in some random corner.
We can send over an RPi or NUC, but with remote hands, setup, and all that, it can get quite expensive. So we always first ask if they have an old laptop lying around where they can install our software.
For us, at least, we are not interested in the hardware aspect. We are interested in the network. The old laptop approach only acts as a last resort. We will be more than happy to go with the predictability of a traditional VPS hosted in a traditional data center. Colocation, no matter what form it takes, involves a lot of moving parts.
Managing 70k probes is not going to be super hard.
Managing 1,400 servers is just a normal business operation, not a technical challenge. Each probe has a standard OS-level configuration. Automation and configuration are deployed from a central system. Each probe is actively monitored and troubleshot. Data is dumped to a data warehouse. We make incremental improvements to our network. When servers go down, we talk to vendors.
Our infrastructure, data, and research teams do a lot of novel engineering. Having an identical set of servers really allows us to focus on product and performance engineering, not troubleshooting engineering. With application-based probing, I assume things would get quite a bit more complicated, as there are different operating systems, different devices, etc.
For us, lately the challenge is not technical. It has been exclusively procurement. This quarter (https://ipinfo.io/blog/probenet-q1-2026-expansion), we exclusively focused on regional diversity which involved outreach to national ISPs or telecoms. Securing servers from telecoms is an extremely bureaucratic and expensive process. So, we are hoping to partner up with eyeball networks and the larger NOG community.
It's a page hosted on Cloudflare's "pages.dev" service. Their method of contact is a Google Form, which does have an email address on the domain "CoLaptop [dot] com", but that as a web address does not work.
I'm not sure they have their act together.
- Mount something in a rack not firmly attached to brackets or a shelf
- Install anything with a battery larger than you'd find in a RAID card
Not to mention all the other ways this is sub-par in terms of airflow, density, serviceability, out-of-band management, etc.
I get the allure of it, but I wouldn't really want my gear anywhere near a bunch of laptops stuck in a cabinet.
> We might modify your laptop to remove or power down the battery
Ah, and obviously you put a claude/codex on it, so your actual work is just ... installing claude, and maybe a linux. The rest is done by the AI - setup, scripting etc.
As a colocated option... I see it work for some people. But it'd be a niche offering, when the whole value proposition is "make my own, with blackjack and hookers".
Otherwise, the effect of memory errors depends on the use case.
If the laptop or mini-PC is used as a router/firewall/Internet gateway, then memory errors are usually not important, because they would result in corrupted network packets that are likely to be detected at the endpoints of a network connection.
If the laptop or mini-PC is used as an e-mail server or a Web server, then a fraction of the memory errors may result in a stored file that becomes corrupted.
With the small amounts of memory typical for a laptop or mini-PC, unless the PC is many years old there should be no more than a few memory errors per year at most. The majority of those errors might not result in file corruption, but sometimes they may cause weird behavior requiring a reboot.
Anecdotally, over the years I have seen on the Internet a non-negligible number of big files, e.g. movies, which appear to have bit flips that were likely caused by their hosting on servers without ECC memory. Fortunately, in movies a small number of bit flips will not cause severe quality degradation.
With more valuable data, one must use ECC memory to avoid such problems.
But colocation?
Strip away the learning component and add production uptime requirements - why would you even consider using crusty old laptops for this? If you have production grade needs, look to a standard cloud provider or, at the very least, a colo facility where you can put production-grade equipment.
There's no middle ground where you try to run a real business on old laptops. That's insane. You either keep things small/hobby and stay simple, or graduate to production-grade equipment once you have real requirements.
The middle ground, taking on production colocation problems plus the unreliability of random hardware, sounds like the worst of both worlds. There are both simpler and more robust options.
In Australia, for example, we're capping out at 100Mbit/s upload speeds on plans that cost ~US$70/mo and regularly go down for maintenance.
In other countries with cheap symmetrical plans this may make more sense.
Just do the math: for a measly €2000 a month, the salary of a cashier in Amsterdam, you already need 285 clients, and that's before taxes and other costs.
Put an OpenWrt router in front of your fat server; for each query it sends a WOL packet to the box, and add some delay to the ethernet bridge.
That way the fat server is mostly sleeping, and is only woken up when needed.
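The wake-on-LAN half of that setup is easy to sketch. Here is a minimal Python version of a magic-packet sender (the MAC address is a placeholder; on an actual OpenWrt box you would more likely use a package such as `etherwake` than Python):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet so the sleeping server's NIC powers the box on."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of the fat server
```

The NIC has to be left powered in sleep with WOL enabled in firmware for this to work; the "delay on the bridge" part is what buys the box time to wake before the first real packet arrives.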
Some ecommerce software stacks really need gargantuan amounts of RAM and CPU, which gets expensive on the cloud. However, it is possible with some software to have everything massively cached, with the cloud doing that, with the origin server in my basement, only accessible from the allowed cache arrangement, therefore having the setup reasonably secure and cheap.
Downsides to this, having customer details in the basement rather than a secure facility, but how many developers have huge customer databases just casually lying around on USB sticks and whatnot? It happens.
Do you mean a setup like:
Or something else?

So this is probably a joke site or a scam.
It's OK-ish as a starting point into the self-hosted world, but overall not ideal. The battery is a fire risk and the entire thermal design isn't really geared towards 24/7 operation.
Not really something I'd colocate unless it was a DC physically near me, so that stopping by is easy.
I remember having this old Dell Latitude, where you could easily swap out the battery pack with a button/tab thing on the back, without having to open anything else up - I even got a spare bigger capacity battery, but it would work without one altogether when connected to the power brick.
I unironically think that all laptops should be built like that.
But overall, without aggressive throttling, these devices run for a maximum of half an hour before the components get saturated with heat and performance tanks.
So the thermals are spec'd for both running at the same time, but you only need the CPU for a home server, so it shouldn't throttle.
You don't need any of those.
But powering down the battery is not enough against the fire risk. Servers run hot 24/7 and might still overheat the battery.
I wonder if Hetzner knows their aim.
> We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.
Yeah, just use the DC's UPS.
Laptops aren't designed to be servers. Peg your laptop's CPU and GPU at 100% and see how long it lasts; I've done this before and the answer is about "2 months". Sure, this effort isn't targeting that workload, but how many bad apples does it take to start a fire? On their page they say "kubernetes server - no problem", but Kubernetes DOES keep the CPUs busy: not pegged, but busy enough that they won't step down their frequency.
I admire the effort to reuse old tech, but boy oh boy would I not want to be a sysadmin here!
No fires, no hardware problems. No special cooling other than the mini-split that was in the closet to cool the server rack. They just kept trucking. But modern hardware is much more high strung and I don't doubt you'd have weird failures.
Edit: Back then VMs were how things were done and RAM was seemingly always the bottleneck by a mile, so the cluster did add up to a meaningful amount of extra performance compared to not having it.
Hetzner, DigitalOcean, OVH, Vultr are some of the better-known ones. Personally, I’m very happy with SSD Nodes. Paying $90/yr for a 4 vCPU¹ / 16 GB / 320 GB SSD, had some downtime exactly once in two years (they’ve had to switch their IPv4 space in Tokyo). Affiliate link: https://ale.sh/r/ssdnodes
[1]: Intel Xeon E5-2650 v4 (4) @ 2.199GHz – not great, I know, but to reiterate: that’s for $90 a year.
Right now the closest I can see is that $121/mo gets you 4 Xeon Platinum 8370C cores and 16GiB of RAM [0] (storage not included!).
Somebody Geekbenched that config here [1] 1274 single core 4256 multi core.
That's kinda terrible, ngl. A mini PC with last-gen mobile parts like the Ryzen 5 7640HS gets 2610 single and 10768 multi core [2].
[0] https://azure.microsoft.com/en-us/pricing/details/virtual-ma...
[1] https://browser.geekbench.com/v6/cpu/17547159
[2] https://browser.geekbench.com/v6/cpu/17541586
My biggest complaint used to be that it would occasionally restart after a system update and I’d have to unlock FileVault in person, but macOS 26 now allows unlocks over ssh.
How would this work when the old hardware inevitably needs to be serviced (mechanical hard drive failure, memory errors, dust buildup, etc)?
Would they have technicians on-site available to service whatever random laptop you send them? If your laptop dies do they ship it back to you so you can fix it and send it back?
Or what if you bork the OS by accident? Will their KVM solution allow you to upload an ISO and plug it in over some USB drive emulation?
+ The usual limiting factor in data centers is power, so laptops could be better optimized for compute per watt than comparable old servers.
+ Laptops are generally compact and so achieve greater rack densities than individual co-lo servers. I'm thinking about 34 or 51 laptops could be stored in 9 or 10U either 2 or 3 rows deep by 17 wide.
+ Shipping a laptop to a co-lo data center is cheaper than a 1U server.
~ Reusing electronics saves e-waste and reduces unnecessary consumption, either old servers or old laptops.
- Laptops lack ECC RAM.
- Laptops typically don't use nearly as fast CPUs or RAM as contemporaneous servers.
- Laptops are limited in their storage options.
- Laptops lack the remote, lights-out management of real servers.
- Repairing old failed laptop components is more difficult than old servers.
~ Old laptops tend not to have usable batteries, so there's unlikely to be much of an inherently distributed battery-backup capability.
- Old laptop batteries of various origins could be a li-ion NMC fire hazard at scale.
~ Reusing old stuff at any sort of scale would prefer standardization, and it's sometimes difficult to amass many of the same discontinued model.
Conclusion: Do it if it works for you. It's kinda cool.
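The density estimate in the list above is easy to sanity-check; the 17-wide and 2-or-3-deep figures are the commenter's own assumptions, not measured rack dimensions:

```python
def laptops_per_bay(rows_deep: int, laptops_wide: int = 17) -> int:
    """Laptops stored flat in a 9-10U bay, stacked rows_deep behind each other."""
    return rows_deep * laptops_wide

# 2 rows deep -> 34 laptops; 3 rows deep -> 51 laptops, matching the estimate.
```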
I think normal virtualization approaches are far more power efficient, at a fleet level, than any kind of cluster of laptop scenarios. You can pile in the cores and amortize the costs of memory controllers etc. over a large set of guests.
It is a funny way to get features of both worlds. One reason to want colo (rather than VMs) is for predictability, but laptops still give you the funny throughput problems, because of thermal throttling instead of competing guests.
> - Repairing old failed laptop components is more difficult than old servers.
I think "run it until it's dead" kind of thing.
The typical enterprise server lifecycle of 4-6 years purposefully throws away uncertain remaining value: budgets need to be spent, there's risk aversion to repairing what's considered "outdated", and there's the pull of faster, more energy-efficient equipment. I would guess the lifecycle length is about the same for enterprise and personal laptops too.
There is no way they are partnering with Hetzner, or charging just €7/month flat rate... they specifically want to know the model of the laptop, and offer to send a courier to your door...
Given that the "sign up" link goes to a survey form, my guess is this is just some idea someone had and they made this page to see if anyone actually wants it before they put any effort into making it happen.
It is not viable to colo old laptops; it's a regulatory nightmare. Hetzner would NEVER accept those in their datacenters. It is also absurd to think they are partnering with Hetzner to begin with.
It makes no sense to believe they will even EXPORT laptops from Europe to the US if you choose the US location. It just makes no sense, so I don't get why I am getting downvoted.
As I said, it is most likely just collecting interest for a potential business idea.
You're being downvoted because it pretty clearly isn't a scam. It has none of the telltale signs of a scam.
I lose ownership of my laptop, you install whatever software you want on it (with the security risks that entails), and in turn "you let me connect to my computer"?
Or look at it another way: you wanted to buy/rent a server you intended to put in datacenter anyway. Now you can do the same with laptop.
Redundancy, I hear you saying! What if you'd have no electricity for an hour? OK then. I'd have another laptop at someone else's place, and have two powerful servers for still like one fifth of the price. Can you beat that?
One question I have, in case someone from CoLaptop is reading:
So, one time I had this white "chiclet" MacBook and I kept it powered on all the time. I didn't know at the time, but that destroyed the battery, and when I unplugged it and plugged it in again (because I moved it to a different place after like 4 years) it just didn't power up. Fortunately I was able to extract the HDD and its contents.
Now I have one old MacBook Air and one old MacBook Pro as "servers", and I regularly disconnect them from power (but keep them on). I do this for like a day every 3-4 weeks, and haven't had that issue; battery health is still good, etc...
So, what do you recommend for this? Or is this something you'll do as part of the CoLo service?
If I was a hetzner customer I'd be pissed if my server burned because someone's 2 minute battery life 10 year old school pc was hosted in the neighbouring rack.
Doesn't seem like a great business idea.
[1]: anecdotally, it seems everyone has a laptop lying around with a cursed battery
I have never colo'd my laptop, but I do work off my Windows laptop from my Mac via Parsec (remote viewing software for gaming) and by flipping system settings so my Windows machine never turns off when connected to the power bank and lid is closed. There are obviously hiccups (if internet goes out, if Windows decides to restart from an update, etc.), but it mostly just works and I think I've only had 2 instances in the past 3 months where it's gone offline. I use Tailscale on top to provide a universal mouse server for my 3d mouse, and I'm able to magically CAD from my Mac.
Highly recommend if you need to use one OS/machine for some specific software (especially if it's beefy/heavy) but prefer using another as your daily driver.
The trouble is a lot of laptops won't power on with the screen closed, and have heavy sleep/suspend behaviors in general. Not to mention general airflow in whatever shelving system is used, assuming 2-4 laptops per shelf per 1U. Not to mention, one would probably want/need some means of ensuring appropriate driver support, or an appropriate Linux or other setup for said hardware.
While I can see it working, depending on shipping costs can definitely see some problematic bits.
ISTR one was basically just industrial office space running a lower-tier colo, and another was some guys in a metro area who got a rack in a data center and were spreading the cost around with other like-minded folks. At my work I have machines in an Iron Mountain facility, but for personal projects I don't need anything like that; I'd just like something more capable than the AWS setup I'm paying $80/mo for (a couple of VMs).
Vultr, DigitalOcean, Linode, ... are long established VPS players.
I'm cheap and buy VPSes off deals on lowendtalk.com. e.g. my backups are on a VPS with 3TB disk, 2GB RAM, 1 vCPU, USD7/mo. I suspect your USD80/mo budget would stretch to something amazing, by comparison.
The call-out for colo is largely to save me from having to engineer a setup at my house for getting my Dell R720 with 256GB RAM online (switching to bridging, setting up a firewall/load balancer with backup, segmenting the networks). That does become easier if I decide I'm ok with 1gig rather than bumping it up to 10gig.
Advantage is reasonably secure location and quality Internet connection. Target market is nerds who don't want all that crap in their closets.
So they're going to open the laptop up and make hardware modifications to random laptops sent in? May as well have a VPS at that point.
A far better business offering would have been to offer pre-selected physical devices where such things are well known.
A ton of old batteries in one place. The batteries themselves are probably not a concern, but if something happens to the facility, then you have a ton of problems.
Security of the facility is a concern if someone can get in and walk out with an armful of laptops.
Laptops don't scale from a stacking standpoint. Sure, close the lids and line them up; then you'll have a lot of failures. Older laptops are intended to cool through the keyboard and the top vents by the screen.
Even then, you’re probably better off with Cloudflare tunnel and using it as a home server.
I have an old Lenovo laptop that works fine with the battery completely removed--but I have to disconnect the power and reconnect it before the soft power-on switch will work. I wonder how they handle powering on finicky laptops with those "soft" power buttons.
But getting some closet case computer with unknown hardware and turning it into a server, at scale, is an impossible scheme.
The only way to make it work would be to buy hundreds of laptops at once, refurb them, fit new storage, and standardize with custom power delivery. Because who wants hundreds of laptop PSUs plugged into power strips? And those do in fact die.
And then there's the horror of manually removing wifi hardware and batteries. Battery disposal is an issue. And having worked on hundreds of laptops, some of them are major pains in the neck to get to the battery. Consumer HP's come to mind. The bottom cover can be difficult to remove without breaking any of the clips.
Point of Reference: 27 years in web hosting
If you do not use their platform-specific features, it's better to run on bare metal with redundancy.
Is finally possible!
The advantages I found with a laptop:
1. Only two cables, power and ethernet. Installation and removal is quick.
2. Comes bundled with keyboard and screen, so no need for a separate monitor.
3. Usually very low power consumption and low heat.
4. Light, so it can be stored anywhere.
So to answer your question, I'm guessing all of the above?
Or run the laptop at home and tunnel
Also sounds like a recipe for fire.
I asked ChatGPT to estimate how much drawing 15W continuously in Amsterdam would cost you per month, and it came up with a range of €2.58-3.41 per month. So that's potentially more than half of their fee.
If your laptop is particularly power efficient, you'd also be subsidizing higher-powered laptops. As far as I see there's nothing preventing you from sending a 400W gaming laptop and mining crypto or running an LLM agent 24/7.
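That estimate is easy to check directly; a quick sketch, where the €/kWh tariff range is an assumption (Dutch household electricity prices vary, roughly €0.24-0.31/kWh here):

```python
def monthly_kwh(watts: float, hours: float = 730.0) -> float:
    """Energy drawn by a constant load over an average month (~730 h)."""
    return watts / 1000.0 * hours

def monthly_cost_eur(watts: float, eur_per_kwh: float) -> float:
    """Monthly electricity cost for a constant load at a given tariff."""
    return monthly_kwh(watts) * eur_per_kwh

# A 15 W laptop draws ~10.95 kWh/month; at the assumed tariff range:
low = monthly_cost_eur(15, 0.24)   # ~2.63 EUR/month
high = monthly_cost_eur(15, 0.31)  # ~3.39 EUR/month
```

That lands in the same ballpark as the €2.58-3.41 range quoted above, so a meaningful chunk of the €7/month fee really would go to power alone.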
The cheapest USB KVM-over-IP costs about €50 - that's 8 months of colo fees gone.
Colo 'remote hands' in western countries can cost €120/hour once all expenses and overheads are taken into account. Admittedly, that's for someone to drop what they're doing and rush to your server. But getting that laptop unpacked, checked over, labelled, installed in a rack, associated with a customer account, powered up and working is going to cost 3 months of fees at least.
One laptop gets lost or damaged during shipping, or shows up mysteriously broken when the customer claims it worked when they sent it? That's a €200 device gone, 28 months of colo fees. You can argue your way out of it, but the guy doing the arguing is the €120/hour remote hands guy.
It'd be easy to lose money on this.
Why must we all spend money on a domain to show off our projects?
If your 'project' can't allocate $15 for a domain name then you have a bigger problem with your project. Especially if your project involves taking money from customers.
"Great. Put your laptop in this box, and we'll send it to the DC."
"Done!"
And, if you are lucky, it can compute PI to the fifth decimal, before the thermals kick in. But the battery life is wonderful. /s