Discussion (27 Comments) on HackerNews
RDBMSs have always been multi-user concurrent systems. DuckDB is a very fast local engine that has a multitude of use cases because it is embeddable in other systems.
It's like asking what SQLite wants to be. It's in your phone, your browser, your desktop apps, and IoT devices, and people have extended it in different directions. The only difference here is that this is first-party, not third-party. But to me it's a very legible move.
The engine is often not the painful part anymore. The pain is the stuff around it: live DBs, S3 paths, Parquet files, credentials, repeatable runs, exports, validation, and the moment a one-off script quietly becomes infrastructure.
Quack makes the remote/server part cleaner, but the bigger trend seems to be DuckDB becoming the SQL layer inside tools, not necessarily the final user-facing tool.
I can't think of many use cases for this and Arrow Flight, other than moving data around.
Commercially, this is not a terrible idea. Why keep paying Snowflake for a bog-standard SQL query workload when Snowflake makes it easy to migrate to Iceberg and commodity engines like MotherDuck?
https://duckdb.org/quack/faq#what-is-the-relationship-betwee...
Of course, in the future MotherDuck can also support Quack, but this is not the only interesting use case for Quack.
> Not yet, but we are working on it!
Seems like a niche use case, but it's the one I'm most interested in.
Our lakehouse uses ducklake with postgres as the catalog. Seems like a DuckDB / Quack catalog would be an excellent alternative.
So you'll be able to test it in a few days.
Because right now, even with Postgres as the catalog, my client needs access to the underlying storage to use DuckLake.
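For context, attaching a DuckLake with a Postgres catalog looks roughly like the sketch below (the connection string, alias, and `DATA_PATH` are placeholders): the catalog lives in Postgres, but the data files live in object storage, so the client needs credentials for both.

```sql
INSTALL ducklake;
LOAD ducklake;

-- Catalog metadata in Postgres; Parquet data files in object storage.
-- The client must be able to reach both.
ATTACH 'ducklake:postgres:dbname=lake_catalog host=pg.internal' AS lake
    (DATA_PATH 's3://my-bucket/lake/');
USE lake;
```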
1. No type mismatches for inlining. If you use a non-DuckDB catalog, many types do not have a 1:1 mapping, which introduces additional overhead when operating on those data types.
2. You get the raw performance of DuckDB analytics (and now transactions) over the catalog. DuckDB reading DuckDB is simply faster than any of our Postgres/SQLite scanners.
3. No round-trip for retries. We can easily(tm) run the full retry logic on the DuckDB server side. Right now, these retries trigger multiple round trips for Postgres, making it a performance bottleneck for high-contention workloads.
Disclaimer: I'm a duckdb/ducklake developer.
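The round-trip point in (3) can be sketched generically. In the hypothetical loop below, every failed optimistic commit re-runs the attempt: driven from the client, each attempt costs a full network round trip to the catalog; run inside the server, the same retries stay local. All names here are illustrative, not DuckLake internals.

```python
import random
import time

class ConflictError(Exception):
    """Raised when an optimistic commit loses a race (illustrative)."""

def commit_with_retries(do_commit, max_attempts=5):
    # Generic optimistic-concurrency retry loop. From a client, each
    # failed attempt is another network round trip; inside the server,
    # the whole loop runs locally against the catalog.
    for attempt in range(max_attempts):
        try:
            return do_commit()
        except ConflictError:
            # Jittered exponential backoff before retrying.
            time.sleep((2 ** attempt) * 0.001 * random.random())
    raise RuntimeError(f"commit failed after {max_attempts} attempts")

# Simulate a commit that conflicts twice, then succeeds.
attempts = {"n": 0}
def flaky_commit():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConflictError()
    return "committed"

result = commit_with_retries(flaky_commit)
print(result, attempts["n"])  # committed 3
```

Under high contention, moving this loop server-side turns N round trips into one.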
I can definitely see exploring this for some homelab use.
This is wrong: HTTP is bad for transferring large amounts of data, and it is also bad for streaming.
It is bad for large amounts of data because you hit timeout issues on some clients, request/response size limits, etc.
It is obviously bad for streaming, as there is no concept of streaming in it.
It is comical to take the path of least resistance so that people can put a reverse proxy on top of it, and then say HTTP is the only relevant way to do it in 2026.
The benchmark doesn't seem to mean much, as TCP can max out 50 GB/s on a single thread; pretty sure it can do even more than that. So you could be using anything that isn't terrible and you should get maximum performance out of this.
Also, the protocol is separate from the format. If you transfer the same MP4 over FTP and over HTTP, you can compare the two.
If you are transferring different things over different protocols, the comparison means nothing.
The benchmark graph for bulk transfer should show more granularity, so it is possible to see what percentage of the hardware limit it is reaching, similar to how BLAS GEMM routines are benchmarked as a percentage of the hardware's theoretical peak FLOPS.
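The percent-of-peak framing the commenter asks for is a one-line calculation; the numbers below are made up for illustration:

```python
def percent_of_peak(measured_gbps: float, peak_gbps: float) -> float:
    """Express measured throughput as a percentage of the hardware limit,
    the same way BLAS GEMM results are reported against peak FLOPS."""
    return 100.0 * measured_gbps / peak_gbps

# Hypothetical: 12.5 GB/s measured against a 50 GB/s single-stream TCP ceiling.
print(percent_of_peak(12.5, 50.0))  # 25.0
```

Reporting this alongside raw GB/s would make it obvious how much headroom the protocol leaves on the table.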
> 60 million rows (76 GB in CSV format!)
This reads as a bit disingenuous.
It is disappointing to see this instead of something like the PostgreSQL wire protocol with support for a columnar format.
> HTTP also allows the DuckDB-Wasm distribution to speak Quack natively! So DuckDB running in a browser can e.g., directly connect to a DuckDB instance running in an EC2 server using Quack.