
Discussion (30 Comments) — original thread on HackerNews

sgarland, about 3 hours ago
> Imagine you need to add an index to a table with a few million rows. On a seeded database with 200 rows, the migration runs in milliseconds. Obviously. But on a branch with realistic data, it takes 40 seconds and needs CREATE INDEX CONCURRENTLY to avoid locking the table. The branch is isolated, so locking there isn't the issue — the point is that the rehearsal shows the production migration would need CONCURRENTLY.

A few million rows should take at most, on the most awful networked storage available, maybe 10 seconds. I just built an index locally on 10,000,000 rows in 4 seconds. Moreover, though, there are vanishingly few cases where you wouldn't want to use CONCURRENTLY in prod - you shouldn't need to run a test to tell you that.

IMO branching can be a cool feature, but the use I keep seeing touted (indexes) doesn't seem like a good one for it. You should have a pretty good idea how an index is going to behave before you build it, just from understanding the RDBMS. There are also tools like hypopg [0], which is available on cloud providers as well.

A better example would be showing testing a large schema change, like normalizing a JSON blob into proper columns or something, where you need to validate performance before committing to it.

0: https://github.com/HypoPG/hypopg
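For instance, hypopg lets the planner evaluate a hypothetical index without building it at all. A minimal sketch (table and column names are invented for illustration):

```sql
-- Register a hypothetical index: no build cost, no locks.
CREATE EXTENSION IF NOT EXISTS hypopg;
SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (user_id)');

-- Plain EXPLAIN (not EXPLAIN ANALYZE) will consider the hypothetical index:
EXPLAIN SELECT * FROM orders WHERE user_id = 42;

-- If the plan looks good, the real index is then built in production
-- without blocking writes:
-- CREATE INDEX CONCURRENTLY orders_user_id_idx ON orders (user_id);
```

Note that hypothetical indexes are only visible to `EXPLAIN`; `EXPLAIN ANALYZE` executes the query and therefore cannot use them.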

sastraxi, about 4 hours ago
I’ve done experiments using BTRFS and ZFS for local Postgres copy-on-write. You don’t need anything but vanilla pg and a supported file system to do it anymore; just clone the database using a template and a newish version of Postgres.

Looking at Xata’s technical deep dive, the site claims that we need an additional Postgres instance per replica and proposes a network file system to work around that. But I don’t really understand why that’s needed. Can someone explain what I’m misunderstanding here?

tee-es-gee, 44 minutes ago
> You don’t need anything but vanilla pg and a supported file system to do it anymore; just clone the database using a template and a newish version of Postgres.

Are you referring to `file_copy_method = clone` from Postgres 18? For example: https://boringsql.com/posts/instant-database-clones/

I think the key limitation is:

> The source database can't have any active connections during cloning. This is a PostgreSQL limitation, not a filesystem one.
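A minimal sketch of that clone path, assuming PostgreSQL 18+ on a filesystem with reflink/clone support (XFS, Btrfs, APFS); database names here are illustrative:

```sql
-- Session-level setting; the clone fails if the filesystem
-- doesn't support file cloning.
SET file_copy_method = clone;

-- The FILE_COPY strategy copies (here: clones) the template's files
-- directly, which is why the template must have no active connections.
CREATE DATABASE branch_pr_123
    TEMPLATE app_db
    STRATEGY file_copy;
```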

tee-es-gee, about 1 hour ago
For context for the others, I think you are referring to this blog post: https://xata.io/blog/open-source-postgres-branching-copy-on-... (in particular the "The key is in the storage system" section) right?

What I'm saying there is that if you run Postgres on top of a local ZFS volume, the child branches' Postgres instances need to be on the same server. So you are limited in how many branches you can have. One or two are fine, but if you want a branch per PR, that will likely not work.

If you separate the compute from storage via the network, this problem goes away.

zbentley, 24 minutes ago
ZFS snapshots can be transmitted over the network, with some diff-only and deduplication gains if the remote destination has an older instance of the same ZFS filesystem. It’s not perfect, and the worst case is still a full copy, but the tooling and efficiency wins for the ordinary case are battle-tested and capable.
wadefletch, about 1 hour ago
You can't have any other connections while a Copy-on-Write is happening, not even a logical replication slot. So you keep a read replica that then gets all connections briefly cut for the COW to avoid locking the master instance. Then you re-enable the logical replication slots on both the new, copied instance and the "copyable" read replica to get both back up to date w/ master.
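Assuming the read replica is a logical-replication subscriber, the cut-and-resume around the clone might look roughly like this (the subscription name is hypothetical):

```sql
-- On the "copyable" read replica, pause logical replication so the
-- database has no active connections during the clone:
ALTER SUBSCRIPTION branch_sub DISABLE;

-- ... perform the copy-on-write clone here ...

-- Resume replication so the replica catches back up with the primary;
-- the new clone would get its own subscription to do the same.
ALTER SUBSCRIPTION branch_sub ENABLE;
```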
eatonphil, about 3 hours ago
I also don't really understand how being correct under physical branching with ZFS, or physical backups of a filesystem, is different from crash safety in general. As long as you replay the WAL up to the point where you branch (or take a physical backup of the filesystem), you should not lose data?

At the same time Postgres people don't seem comfortable with the idea in practice so I'm not sure if this is actually ok to do.

hilariously, about 3 hours ago
Crash safety does mean rolling back everything in progress, but yes: if your database cannot do this safely (even if it is yucky), then you do not have a database that is safe in any crash situation.
bhouston, about 2 hours ago
I have tried this before:

https://www.dolthub.com/

It was a lot of work and had poor performance with a lot of complications. I am not using it in my latest projects as a result.

tee-es-gee, about 1 hour ago
Xata is open source btw (open core): https://github.com/xataio/xata
gulcin, 39 minutes ago
Came here to say this :) Anyone using Xata here?
comrade1234, about 4 hours ago
I was on a big team working on a giant Oracle database over 25 years ago. I don't remember the term, but each developer had their own playground of the giant database that wasn't affected by anyone else. The DB admin would set it up for each developer in just a few minutes, so it definitely wasn't a copy. Then when a developer needed to reset and go back to the original DB, it again just took a few minutes. I just don't remember what it's called, but I think Postgres has had it now for a few years.
tremon, about 3 hours ago
You don't actually need to physically copy data, just create a view for every table that does a replacing merge between the original read-only data and the developer's own copy. And you can put a trigger on the view to redirect writes to the same private-copy table, making the whole thing transparent to the user.

Not disputing that Oracle might have had something like this built-in, but it sounds like something that I could have whipped up in a day or so as a custom solution. I actually proposed a similar system to create anonymized datasets for researchers when I worked at a national archive institute.
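A sketch of that overlay for a single table — schemas, names, and the "deleted" flag are all invented for illustration:

```sql
-- Read-only shared data plus a per-developer delta table.
CREATE SCHEMA base;
CREATE SCHEMA dev;
CREATE TABLE base.items (id int PRIMARY KEY, name text);
CREATE TABLE dev.items  (id int PRIMARY KEY, name text,
                         deleted bool DEFAULT false);

-- The view does a replacing merge: the developer's rows win,
-- and a deleted flag in the delta can hide base rows.
CREATE VIEW dev.items_v AS
SELECT id, COALESCE(d.name, b.name) AS name
FROM base.items b
FULL JOIN dev.items d USING (id)
WHERE d.deleted IS NOT TRUE;

-- Redirect writes into the private copy, keeping the view transparent.
CREATE FUNCTION dev.items_write() RETURNS trigger AS $$
BEGIN
  INSERT INTO dev.items (id, name) VALUES (NEW.id, NEW.name)
  ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER items_redirect
  INSTEAD OF INSERT OR UPDATE ON dev.items_v
  FOR EACH ROW EXECUTE FUNCTION dev.items_write();
```

An `INSTEAD OF DELETE` trigger setting `deleted = true` in the delta table would complete the picture.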

TheMrZZ, about 3 hours ago
Snowflake uses a similar system with their 0-copy cloning. It starts with the original table's partition, and keeps track of the delta created by subsequent operations. Always found that builtin mechanism pretty neat!
hilariously, about 3 hours ago
Sounds like a snapshot - a file based diff of the pages changed since the last full backup - easy to revert to for the same reasons.
mwexler, about 3 hours ago
I thought this was a repost, but I was thinking of Dolt, which has similar capabilities but a different approach. As in https://news.ycombinator.com/item?id=38109880
mininao, about 2 hours ago
Using neon for this and it's an absolute game changer, would recommend implementing database branching whatever solution you pick
gulcin, 38 minutes ago
May I ask your use case? I am curious.
efficax, about 1 hour ago
what if writing a blog post without ai was easy?
theaniketmaurya, about 4 hours ago
I was using Neon and they had a similar feature, but now I'm using PlanetScale. Would be curious to know how you all are doing it?
tee-es-gee, 37 minutes ago
We have an overview on how it works here: https://xata.io/blog/open-source-postgres-branching-copy-on-...
miketery, about 3 hours ago
We used neon at last job. It seemed pretty cool. What made you switch to planetscale?
zwily, about 1 hour ago
Planetscale has better performance and uptime. But the branching keeps me going back to neon…

Yes, PlanetScale can branch too, but it takes longer and you pay individually for each branch.

gulcin, 37 minutes ago
Xata is open-source now, maybe you can give it a try as an alternative to Neon.
Nihilartikel, about 4 hours ago
This kind of magic is the reason that I'm very itchy to be able to line up real work on Datomic or XTDB someday.
xeubie, about 3 hours ago
Surprisingly, neither Datomic nor XTDB support branching. See: https://blog.danieljanus.pl/datomic-forking-the-past/

I actually built my own immutable database which does support branching (see profile), so it seems like a huge miss that these ones don't. It's pretty much the main reason I would want an immutable database.

camdez, about 2 hours ago
Wait, this statement seems way too strong.

The linked article points out that Datomic doesn't support branching from the past. It absolutely does support branching, and I've built entire test suites that way.

From a cursory glance, I'd say Datomic does exactly what the original parent article is discussing. It works great and it's super convenient.

xeubie, about 1 hour ago
If each "branch" is read only, it's not a branch at all. The entire idea of branching implies that you can make changes on one branch, then switch to another branch and make changes to it. They start from the same point and grow in different directions, as the metaphor of branches on a tree depicts.
nathell, about 2 hours ago
It appears that Datahike [0] is a Datomic workalike that supports branching. I haven’t tried it out myself (yet), but the documentation suggests it’s possible [1].

That said, I’m adding xitdb to the list of tech to try out. Thank you for building it!

Oh, and thanks for linking to my article :-)

[0]: https://github.com/replikativ/datahike

[1]: https://datahike.io/notes/the-git-model-for-databases/