Immutable
TL;DR: Mutable systems can do both immutability and mutability, so they're a more powerful and versatile foundation to build on.
This page is an excerpt from a chat about the difficulties we experienced trying to use immutability as our base data layer. If you're looking for an API to actually use, hack, and build stuff on, please see this page instead.
Scalability of immutable/append-only logs: yeah, to maintain performance, algorithmically, you either need to do compaction - which generates roughly B1 + (B1 + B2) + (B1 + B2 + B3) + ... bytes of excess storage, because for every B-byte delta to a record you need to regenerate a full snapshot of the final head/state if you want users to have a fast render time. If you're allowed to delete old snapshots, this saves a lot of space/money, but... if the underlying system is immutable, then you either have to use an external system - proving immutability is not a universal foundation - or you use your same system and swallow the overhead cost.
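To make that overhead concrete, here is a tiny illustrative sketch (not GUN code) that tallies the bytes a naive snapshot-per-delta scheme accumulates, assuming equal-sized deltas:

```js
// Illustrative only: total bytes stored if every delta triggers a full
// new snapshot of the head state, and old snapshots are never deleted.
function excessSnapshotBytes(deltaSizes) {
  let head = 0;   // size of the current head/state
  let total = 0;  // cumulative bytes of all snapshots ever written
  for (const b of deltaSizes) {
    head += b;       // the head grows by this delta
    total += head;   // a fresh snapshot of the whole head is written
  }
  return total;
}

// 1,000 deltas of 100 bytes each: the log itself is only 100KB,
// but the retained snapshots add up to ~50MB.
console.log(excessSnapshotBytes(new Array(1000).fill(100)));
```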
If you don't do compaction ahead of time to get O(1) render, then you're forced to do O(N) lookups on client reads - yes, you can optimize this a little for machines that already have history on them (you only need N - Nx, the items since last seen), but that is still O(N), not O(1).
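A minimal sketch of the read-path difference (illustrative, not GUN's actual read path): replaying a log touches every entry before first paint, while reading a compacted snapshot is a single lookup.

```js
// Illustrative: O(N) render by replaying an append-only log.
function renderFromLog(log) {
  let state = {};
  for (const delta of log) {
    state = { ...state, ...delta }; // stand-in for apply(): shallow merge
  }
  return state; // every entry since genesis was touched
}

// Illustrative: O(1) render from a compacted snapshot.
function renderFromSnapshot(store, key) {
  return store.get(key); // the head is read directly, no replay
}
```

A client that already holds Nx entries only needs the tail, but the work is still proportional to N, not constant.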
These simple Big O complexity adjustments compound into multiplied performance improvements at scale. If you have a large peer population P doing p2p reads, you get O(P*N): for every peer that gives you a read, each has to send you N items. However, if you compact, you can do even better than O(P*1) = O(P), because the compacted snapshot will have the same hash on every peer - so if you support network-level hash-based read-deduplication (which DAM does, and I assume everybody else does as well), you can reduce your gossip to O(1), or O(R) where R is your "replication number", which for many is just a constant (like 3); in complexity theory any constant is still O(1), since it does not vary with the size of the data itself (unlike an append-only log, whose size constantly changes/grows).
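The dedup idea itself is simple; below is an illustrative sketch (not DAM's actual wire protocol) where an incoming snapshot is skipped if its content hash has already been seen:

```js
// Illustrative dedup cache keyed by content hash (not DAM's real code).
const seen = new Set();

async function contentHash(snapshotText) {
  // Any stable hash works; SHA-256 via WebCrypto is just an example.
  const buf = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(snapshotText));
  return Array.from(new Uint8Array(buf)).map(b => b.toString(16).padStart(2, '0')).join('');
}

async function onIncomingSnapshot(snapshotText, render) {
  const hash = await contentHash(snapshotText);
  if (seen.has(hash)) return;   // P peers may all push the same compacted head;
  seen.add(hash);               // identical hashes collapse the work to ~O(1).
  render(JSON.parse(snapshotText));
}
```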
Yes, I was on the event sourcing bandwagon in 2010, doing Operational Transform (Google Docs) style stuff for website editing (and rich text editing). Loading the website from a fresh browser required replaying the whole log, which is dumb, so I used PhantomJS to cache a snapshot every once in a while (I was using a mutable database, so I didn't need to worry about old snapshots wasting space). That sped up performance enormously (a fresh browser got the site instantly) and reduced my server expenses, but it also let me do cool incremental stuff - in the editor I'd load the snapshot and then subscribe to the tail end of the log so I wouldn't miss any new changes, and they'd render live, "hydrating" the site. However, if I wanted to reclaim storage space from the logs older than the snapshot, I'd have to scale everything centrally/vertically (and there was only so far a single machine/RAM/CPU could go back in 2010), but if I distributed it horizontally then I couldn't delete old logs - just like git - unless I had an extremely controlled infrastructure, which I didn't have the money for.
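That "load a snapshot, then subscribe to the tail of the log" hydration pattern looks roughly like this (a generic sketch; loadSnapshot and subscribeLog are hypothetical helpers, not the 2010 system or GUN's API):

```js
// Generic sketch of snapshot + live log-tail hydration.
// loadSnapshot() and subscribeLog() are hypothetical helpers.
async function hydrate(siteId, render) {
  const snapshot = await loadSnapshot(siteId);        // O(1) read of the compacted head
  let state = snapshot.state;
  render(state);                                       // instant first paint

  // Only the deltas written after the snapshot are streamed in.
  subscribeLog(siteId, { since: snapshot.version }, delta => {
    state = { ...state, ...delta };                    // apply the change
    render(state);                                     // live "hydration"
  });
}
```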
The simpler solution is to just use mutability. Then all you have to do is replicate the snapshot, not the log. However, it all updated in realtime, so I didn't want to resend the whole snapshot each time either, just deltas - so I needed a patch synchronization algorithm:
- (A) The ability to take any 2 logs and merge them to output a snapshot.
- (B) The ability to take any 2 snapshots and get a deterministic snapshot.
- (C) The ability to take a log + a snapshot and get a consistent snapshot.
Intuitively, this reminded me of matrix transformations, but the problem is that (C) produces mismatching dimensions, preventing the operation from completing - yet without (C) you simply have 2 different systems and gain nothing; (C) is what allows you to have the best of both worlds (read scalability & write efficiency). The trick winds up being to store a vector that represents the compaction - yes, that metadata does take up space as a single number, but it lets you save an indefinite amount of space by discarding old snapshots & logs, so it's well worth it. I wish I had coined a phrase for it, cause "CRDTs" weren't named until 2011!
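Here is a rough sketch of what the three operations plus a compaction vector could look like - a generic per-field, version-based last-write-wins merge, not GUN's actual HAM/CRDT algorithm:

```js
// Generic sketch, not GUN's HAM/CRDT. A snapshot is { state, vector },
// where vector[field] is the highest version already folded into state.
// A log is an array of deltas { field, value, version }; each (field, version)
// pair is assumed unique across writers.

function applyDelta(snapshot, d) {                    // helper for (A) and (C)
  const seen = snapshot.vector[d.field] || 0;
  if (d.version <= seen) return snapshot;             // already compacted in: discard
  return {
    state:  { ...snapshot.state,  [d.field]: d.value },
    vector: { ...snapshot.vector, [d.field]: d.version },
  };
}

// (A) two logs -> one snapshot
function mergeLogs(logA, logB) {
  return [...logA, ...logB].reduce(applyDelta, { state: {}, vector: {} });
}

// (B) two snapshots -> one deterministic snapshot
// (higher version wins per field; ties broken by lexical value, so order doesn't matter)
function mergeSnapshots(a, b) {
  const out = { state: { ...a.state }, vector: { ...a.vector } };
  for (const field of Object.keys(b.vector)) {
    const av = out.vector[field] || 0;
    const bv = b.vector[field] || 0;
    const bWins = bv > av ||
      (bv === av && String(b.state[field]) > String(out.state[field]));
    if (bWins) {
      out.state[field]  = b.state[field];
      out.vector[field] = b.vector[field];
    }
  }
  return out;
}

// (C) log + snapshot -> consistent snapshot
function mergeLogIntoSnapshot(log, snapshot) {
  return log.reduce(applyDelta, snapshot);
}
```

The `vector` is the compaction metadata: once a field's version is folded into a snapshot, any log entry at or below that version can be thrown away without changing the result.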
Anyways, that is the backstory (2010 ~ 2013) to what I then put together into an open source micro-library in April 2014 as GUN. Because it is so small (~9KB core; compare: a rasterized emoji is ~4KB at 32x32 px) and works in high-churn, horizontally distributed settings, it's actually lightweight enough to run in a browser and p2p! (Tho browser WebRTC/storage/etc. adapters are another +3KB.)
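For a feel of what "runs in a browser and p2p" means in practice, here is a minimal sketch using GUN's public get/put/on calls; the relay URL is a placeholder, and the API page mentioned at the top has the real docs:

```js
// Assumes GUN has been loaded in the page (e.g. via a <script> tag or bundler).
// The relay URL is a placeholder; any GUN relay peer works.
const gun = Gun(['https://your-relay.example/gun']);

// Mutable write: later puts to the same key update the record in place.
gun.get('profile').put({ name: 'alice' });

// Realtime read: fires now, and again whenever any peer changes the data.
gun.get('profile').on(data => {
  console.log('profile is now', data);
});
```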