Discussion (19 Comments)

atombender · about 1 hour ago
How does this (or DuckLake, for that matter) handle sparseness and fragmentation of the differential storage? My experience with B+trees, at least, is that pages get spread all over the place: if you run a normal query, page 537 may be in layer 1, page 8374 in layer 2, and so on, and a single query might need hundreds or thousands of pages, too scattered to read efficiently in large sequential reads. That means a lot of random reads, which in turn means very poor latency unless you cache aggressively. Neon deals with this through compaction and prewarming, I believe. Maybe DuckDB avoids this because column data tends to be more sequential, and something batches up bigger layers? Or maybe aggressive layer compaction?
pepperoni_pizza · 8 minutes ago
I think the answer is "all of the above".

Columnar storage compresses very effectively, so one "page" actually contains a lot of data (Parquet row groups default to 100k records, IIRC). Writing usually means replacing the whole table once a day or appending a large block, not making many small updates. And reading is usually a full scan with smart skipping based on predicate pushdown, not following indexes around.

So the same two million row table that in a traditional db would be scattered across many pages might be four files on S3, each with data for one month or whatnot.

But people in this space are also more tolerant of latency. The design goal is not "make operations over thousands of rows fast" but "make operations over billions of rows possible", with speed as a second priority.
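[Editor's note: a minimal sketch of the access pattern described above, assuming a hypothetical bucket with one Parquet file per month under month=YYYY-MM/ directories. The point is that DuckDB prunes whole files via partition values and skips row groups via Parquet min/max statistics, rather than chasing an index.]

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")

# hive_partitioning lets DuckDB drop whole files from the month=* layout
# before reading anything; within the files it does read, row groups whose
# min/max statistics exclude the predicate are skipped too.
rows = con.execute("""
    SELECT count(*) AS events, sum(amount) AS total
    FROM read_parquet('s3://example-bucket/events/month=*/*.parquet',
                      hive_partitioning = true)
    WHERE month = '2024-06'
""").fetchall()
print(rows)
```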

herpderperator · about 6 hours ago
Does this help with DuckDB concurrency? My main gripe with DuckDB is that you can't write to it from multiple processes at the same time. If you open the database in write mode with one process, you cannot modify it at all from another process without the first process completely releasing it. In fact, you cannot even read from it from another process in this scenario.

So if you typically use a file-backed DuckDB database in one process and want to quickly modify something in it using the DuckDB CLI (the way you might connect SequelPro or DBeaver to make changes to a DB while your main application is 'using' it), it complains that the file is locked by another process and doesn't let you connect at all.

This is unlike SQLite, which supports and handles this in a thread-safe manner out of the box. I know it's DuckDB's explicit design decision[0], but it would be amazing if DuckDB could behave more like SQLite when it comes to this sort of thing. DuckDB has incredible quality-of-life improvements with many extra types and functions supported, not to mention all the SQL dialect enhancements allowing you to type much more concise SQL (they call it "Friendly SQL"), which executes super efficiently too.

[0] https://duckdb.org/docs/current/connect/concurrency
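[Editor's note: a small repro sketch of the lock behavior described above, using the duckdb Python package; the database file name is arbitrary. The parent process holds a read-write connection, and a child process then fails to open the same file, even read-only.]

```python
import subprocess
import sys
import duckdb

con = duckdb.connect("demo.duckdb")  # this process now holds the write lock
con.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

probe = """
import duckdb
for read_only in (False, True):
    try:
        duckdb.connect("demo.duckdb", read_only=read_only)
        print("connect(read_only=%s): ok" % read_only)
    except duckdb.Error:
        print("connect(read_only=%s): refused" % read_only)
"""
result = subprocess.run([sys.executable, "-c", probe],
                        capture_output=True, text=True)
print(result.stdout)  # both attempts are refused while the lock is held
```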

szarnyasg · about 5 hours ago
Hi, DuckDB DevRel here. To get concurrent read-write access to a database, you can use our DuckLake lakehouse format and coordinate access through a shared Postgres catalog. We released v1.0 yesterday: https://ducklake.select/2026/04/13/ducklake-10/

I updated your reference [0] with this information.
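[Editor's note: a rough sketch of what that setup looks like; the Postgres DSN, S3 data path, and table names below are placeholders, not a tested configuration.]

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake;")
con.execute("INSTALL postgres;")  # the catalog lives in Postgres

# Several DuckDB processes can attach the same catalog; DuckLake
# coordinates concurrent snapshots through the shared Postgres metadata.
con.execute("""
    ATTACH 'ducklake:postgres:dbname=ducklake_catalog host=pg.internal'
        AS lake (DATA_PATH 's3://example-bucket/lake/');
""")
con.execute("CREATE TABLE IF NOT EXISTS lake.events (id INTEGER, payload VARCHAR)")
con.execute("INSERT INTO lake.events VALUES (1, 'hello')")
print(con.execute("SELECT count(*) FROM lake.events").fetchone())
```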

citguru · about 7 hours ago
This is an attempt to replicate MotherDuck's differential storage and implement hybrid query execution on DuckDB.
zurfer · about 6 hours ago
As someone working in the field, I have to admit that I'm not familiar with the term "differential storage", nor do I really understand what hybrid execution means. Could you describe both from a simple technical point of view and explain what benefits they offer me as a user?
decide1000 · about 4 hours ago
I built a distributed DuckDB setup using OpenRaft for state replication. Every node holds a full copy of the database. Writes go through Raft consensus, reads are local. It's more like etcd-with-DuckDB than MotherDuck-lite.

OpenDuck takes a different approach: query federation, with a gateway that splits execution across local and remote workers. My use case requires every node to serve reads independently with zero network latency, and to keep running if other nodes go down.

The PostgreSQL dependency for metadata feels heavy. Now you're operating two database systems instead of one. In my setup DuckDB stores both the Raft log and the application data, so there's a single storage engine to reason about.

Not saying my approach is universally better. If you need to query across datasets that don't fit on a single machine, OpenDuck's architecture makes more sense. But if you want replicated state with strong consistency, Raft + DuckDB works very well.
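[Editor's note: a schematic of the replicated-state-machine shape being described. The real setup uses OpenRaft, a Rust library; this Python stand-in only shows the apply/read split, with a hypothetical committed log in place of the consensus layer.]

```python
import duckdb

class Node:
    def __init__(self, path: str):
        self.con = duckdb.connect(path)  # full local copy of the database
        self.last_applied = 0

    def apply(self, index: int, sql: str) -> None:
        # Called once consensus commits an entry; idempotent via the index.
        if index > self.last_applied:
            self.con.execute(sql)
            self.last_applied = index

    def read(self, sql: str):
        # Local read: no consensus round trip, no network hop.
        return self.con.execute(sql).fetchall()

# A committed log, as the consensus layer would deliver it to every node:
log = [
    (1, "CREATE OR REPLACE TABLE kv (k VARCHAR, v VARCHAR)"),
    (2, "INSERT INTO kv VALUES ('a', '1')"),
]
nodes = [Node(f"node{i}.duckdb") for i in range(3)]
for node in nodes:
    for index, sql in log:
        node.apply(index, sql)

print(nodes[2].read("SELECT * FROM kv"))  # served entirely from node2's file
```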

nehalem · about 6 hours ago
I have a deep appreciation for DuckDB, but I am afraid the confluence of brilliant ideas makes it ever more complicated to adopt, and DuckLake is another example of this trend.

When I look at SQLite I see a clear message: a database in a file. I think DuckDB is that, too. But it's also an analytics engine like Polars, works with other DB engines, supports Parquet, comes with a UI, and has two separate warehouse ideas which both deviate from DuckDB's core ideas.

Yes, DuckLake and MotherDuck are separate entities, but they are still part of the ecosystem.

arpinum · about 4 hours ago
I read the code. It's a good case study of one-shot output from AI when you ask it to replicate a SaaS product. This is probably better than most because MotherDuck has been open about their techniques to build the product.

Obviously not a production implementation.

Lucasoato · about 6 hours ago
Last week I sent my first PR to duckdb to support Iceberg views in catalogs like Polaris! Let's hope for the best :)
oulipo2 · about 5 hours ago
Seems cool! But it would be nice to have some "real-world" use cases to see actual usage patterns...

In my case, my systems produce "warnings" when there are small system errors, which I want to aggregate and review (drill down into) from time to time.

I was hesitating between using something like OpenTelemetry to send logs/metrics for those, or just adding a "warnings" table to my TimescaleDB and using some aggregates to drill into them, possibly displaying some chunks to review...

But another possibility, to avoid TimescaleDB/ClickHouse and rely on S3 alone, would be to upload those as Parquet files to a bucket through DuckDB, and then query them from time to time to get stats.
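[Editor's note: a minimal sketch of that S3-only option, with a hypothetical bucket and schema; S3 credentials (e.g. via CREATE SECRET) are omitted. Each batch of warnings is appended as its own Parquet file, then aggregated across all batches with a glob.]

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")

# Stand-in for the warnings accumulated locally since the last flush.
con.execute("CREATE TABLE warnings (ts TIMESTAMP, code VARCHAR, msg VARCHAR)")
con.execute("INSERT INTO warnings VALUES (now(), 'W042', 'voltage dip')")

# Append this batch as its own Parquet file on the bucket.
con.execute("""
    COPY warnings
    TO 's3://example-bucket/warnings/batch_2024_06_01.parquet'
    (FORMAT PARQUET);
""")

# Later, drill down across every batch with a glob; no database server needed.
stats = con.execute("""
    SELECT date_trunc('day', ts) AS day, code, count(*) AS n
    FROM read_parquet('s3://example-bucket/warnings/*.parquet')
    GROUP BY ALL
    ORDER BY day, n DESC
""").fetchall()
print(stats)
```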

Would you have a recommendation?