Discussion (18 Comments)
Have been hacking on a wasm+webtransport stack for distributed simulation workers and hit the ceiling of one connection/worker per machine pretty quickly. Had to pin adapters/workers to cores to get the latency I was expecting, then needed dedicated tx/rx adapters to eliminate jitter caused by interrupt-scheduling bullshit.
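For anyone curious, a minimal sketch of the core-pinning half of that on Linux, assuming `golang.org/x/sys/unix` (the adapter/IRQ-affinity side has to be configured outside the process, e.g. via `/proc/irq/*/smp_affinity`):

```go
// Linux-only sketch: pin the calling goroutine's OS thread to one core.
package sketch

import (
	"runtime"

	"golang.org/x/sys/unix"
)

// pinToCPU locks the goroutine to its OS thread, then binds that thread
// to a single core so the scheduler can't migrate it and reintroduce jitter.
func pinToCPU(cpu int) error {
	runtime.LockOSThread()
	var set unix.CPUSet
	set.Zero()
	set.Set(cpu)
	// pid 0 means "the calling thread".
	return unix.SchedSetaffinity(0, &set)
}
```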
The real challenge is gating and reserving “slots” for downstream calls. If seed A on one node calls seed B on another, Pollen as it stands holds that seed A instance up and waiting (with the attendant memory overhead) until the response finds its way back across the cluster.
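Not Pollen's actual mechanism, but a rough sketch of what that slot gating can look like in Go, using `golang.org/x/sync/semaphore` (the `maxInFlight` value and `callDownstream` helper are illustrative stand-ins):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

// maxInFlight caps how many seed instances may sit parked awaiting a
// cross-cluster response at once (illustrative value).
const maxInFlight = 256

var slots = semaphore.NewWeighted(maxInFlight)

// callDownstream reserves a slot before dispatching, so a slow remote seed
// applies back pressure upstream instead of exhausting local memory.
func callDownstream(ctx context.Context, target string, payload []byte) ([]byte, error) {
	// Block (or fail on ctx cancellation/timeout) until a slot frees up.
	if err := slots.Acquire(ctx, 1); err != nil {
		return nil, fmt.Errorf("no slot for %s: %w", target, err)
	}
	defer slots.Release(1)

	// Stand-in for the real cross-node RPC round trip.
	select {
	case <-time.After(10 * time.Millisecond):
		return []byte("ok"), nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```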
You can probably imagine how latencies then start impacting this (especially when a node in US-West is generating traffic that needs to ultimately land on my laptop in the UK), not to mention all of the contention from other nodes elsewhere generating load too.
In the demo, I see about 2,500 rps land on my laptop with 4k-5k generated across 4-5 nodes globally, but this is a multi-hop scenario. If a call is only invoking a single, light WASM function, I see much higher throughput.
The project is in its infancy, no doubt I’ll have lots of fun figuring out how to optimise as it progresses!
In the first scenario above, memory seems to be the ceiling; in the latter, CPU.
What would I use Pollen for?
I'm not sure I understand the "seed" metaphor.
I use it in place of Tailscale for some homelab applications, and I’ve started to deploy other experiments on a “prod” cluster. The demo shows how Pollen handles a multi-step pipeline-style application: two WASM seeds and a single egress communicating over the provided RPC mechanism (`pln://seed…` etc.) whilst the mesh handles routing, back pressure and the like.
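Purely to illustrate the shape of that pipeline, a hypothetical sketch; the handler signatures, the `Call` helper and the full `pln://` addresses here are stand-ins, not Pollen's actual seed ABI:

```go
package sketch

import "context"

// Call stands in for the host-provided RPC primitive a seed would use to
// reach another seed or an egress by its pln:// address (hypothetical).
func Call(ctx context.Context, addr string, msg []byte) ([]byte, error) {
	panic("provided by the Pollen host at runtime")
}

// Seed A: ingest, transform, forward to seed B across the mesh.
func HandleA(ctx context.Context, msg []byte) ([]byte, error) {
	out := append([]byte("a:"), msg...) // placeholder transform
	return Call(ctx, "pln://seed/b", out)
}

// Seed B: enrich, then hand off to the egress, which leaves the mesh.
func HandleB(ctx context.Context, msg []byte) ([]byte, error) {
	out := append([]byte("b:"), msg...) // placeholder transform
	return Call(ctx, "pln://egress/out", out)
}
```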
Right now, the workloads need to be stateless. I’m coming up with a story for state at the moment, which’ll likely start as some WAL-like convergent structure with thin (KV store etc) abstractions layered over it. Probably not dissimilar from the pattern underpinning the current CRDT gossip state.
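To make that concrete, a loose sketch of a WAL-like convergent log with a thin KV view replayed over it: entries are totally ordered by a (timestamp, node) pair and merged by union, so replicas converge regardless of delivery order. Names are hypothetical, not Pollen's actual state API:

```go
package sketch

import "sort"

// Entry is an append-only record; (TS, Node) gives a total order so
// replicas converge no matter what order gossip delivers entries in.
type Entry struct {
	Key, Value string
	TS         uint64 // e.g. a hybrid logical clock tick
	Node       string // tie-breaker between concurrent writes
}

func less(a, b Entry) bool {
	if a.TS != b.TS {
		return a.TS < b.TS
	}
	return a.Node < b.Node
}

// Merge unions two logs. Applied anywhere, in any grouping, it yields
// the same log, which is what makes gossip-style replication converge.
func Merge(a, b []Entry) []Entry {
	out := append(append([]Entry{}, a...), b...)
	sort.Slice(out, func(i, j int) bool { return less(out[i], out[j]) })
	// Drop exact duplicates picked up from overlapping gossip rounds.
	dedup := out[:0]
	for _, e := range out {
		if len(dedup) == 0 || dedup[len(dedup)-1] != e {
			dedup = append(dedup, e)
		}
	}
	return dedup
}

// KV replays the log; the last write (by the total order) wins per key.
func KV(log []Entry) map[string]string {
	m := make(map[string]string, len(log))
	for _, e := range log {
		m[e.Key] = e.Value
	}
	return m
}
```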
It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
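One way to picture “topology derived deterministically from gossiped state” is weighted rendezvous hashing over each node's advertised capacity; the sketch below assumes every peer scores placements from the same gossiped view, and is illustrative rather than Pollen's exact algorithm:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math"
)

// Node is a peer as seen in the gossiped view; Capacity would be one of
// its signed properties.
type Node struct {
	ID       string
	Capacity float64
}

// frac hashes (node, workload) into (0,1) strictly, so the log below is
// never zero; every peer computes the identical value locally.
func frac(nodeID, workload string) float64 {
	h := fnv.New64a()
	h.Write([]byte(nodeID + "|" + workload))
	return (float64(h.Sum64()%1_000_000) + 0.5) / 1_000_000
}

// Place is weighted rendezvous hashing: all peers agree on the winner
// with no coordinator, and higher capacity wins proportionally more often.
func Place(nodes []Node, workload string) Node {
	best, bestW := nodes[0], math.Inf(-1)
	for _, n := range nodes {
		// -w/ln(u) with u uniform in (0,1) gives capacity-proportional wins.
		if w := -n.Capacity / math.Log(frac(n.ID, workload)); w > bestW {
			best, bestW = n, w
		}
	}
	return best
}

func main() {
	view := []Node{{"alpha", 4}, {"beta", 8}, {"gamma", 2}}
	fmt.Println(Place(view, "seed:transform").ID) // same answer on every node
}
```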
I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see if said systems could be applied to flip the traditional workload orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand. So I've attempted to make Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).
It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!
Very happy to answer anything in the thread!
Cheers.
Docs: https://docs.pln.sh
We’re building an AWS-analogue catalogue of services (Databases, Compute, Auth, etc.) for distributed systems.
Want a job doing Pollen-like dev full time?
william.blankenship@webai.com
Either way, would be great to compare notes!
What are the workloads in the runtime capable of?
I'm seeing some functionality that seems like it could replace some personal services I currently host via my tailscale network. Am I understanding this correctly? If so, do you have a feel for what the performance implications would be?
In a potential modern cloud, having globally named primitives (compute, store, messaging) could unlock much wider applications. Have you come across any such applications?
If so, I have loose ideas around how I might introduce shared state; it’s an interesting problem that’ll require a lot of thought. Early days yet, though.