Using Redis for Event Sourcing and much more
Over the last week I've been thinking about high-scale production setups for event-centric architectures. Something that can handle retail networks in real time while providing a cost-effective solution to business amnesia. Obviously there is Greg's Event Store (to be released tomorrow), but having multiple deployment options is even better.
Here's a quick overview of implementing an event store with Redis. Redis is a key-value store written in C, with configurable reliability guarantees, master-slave replication and a diverse set of server-side storage primitives.
ServiceStack developers use Redis extensively for caching. They have even developed ServiceStack.Redis, a Redis client for C#.
With immediate persistence (fsync after each change) and eventual replication you can easily get thousands of commits per second on a single machine. This is far less than specialized event store implementations, but it could be good enough for a low-cost production deployment. Besides, you can speed things up by fsyncing once per second instead. See more benchmarks or check out the series of articles on ES with Redis and Scala.
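For reference, this is roughly how those persistence options map onto redis.conf (appendonly and appendfsync are standard directives; which one you pick is exactly the trade-off described above):

```
# redis.conf (persistence-related directives)
appendonly yes          # enable the append-only file (AOF)
appendfsync always      # fsync after every write - safest, slowest
# appendfsync everysec  # fsync once per second - much faster, may lose up to ~1s of writes
```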
Event Storage Primitives
We can use the following Redis primitives for event persistence:
- Hash - provides fast O(1) get/set operations for individual events
- List - stores associations of events with individual streams (fast to append)
Store individual events in a hash structure (allowing O(1) operations):
> HSET EventStore e1 Event1
Where:
- EventStore - name of the hash used for storing events (might as well be one store per Redis DB)
- e1 - sequentially incrementing commit id
- Event1 - event data
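Through a client library such as redis-py, the same commit could look like this (the key and event names are just the ones from the example above):

```python
import redis

r = redis.Redis()  # assumes a local Redis instance on the default port

# store event data under a sequentially incrementing commit id
r.hset("EventStore", "e1", "Event1")
```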
You can get the number of events in the store with:
> HLEN EventStore
(integer) 8
To enumerate all events in a store, simply ask Redis to return all events given their commit IDs, for example:
> HMGET EventStore e1 e2 e3 e4
1) "Event1"
2) "Event2"
3) "Event3"
4) "Event4"
Individual event streams are just lists containing references to commit IDs. You can add events to a stream with RPUSH. For instance, here we add events e2, e4 and e7 to the list customer-42:
> RPUSH customer-42 e2 e4 e7
The version of an individual event stream is the length of the corresponding list:
> LLEN customer-42
(integer) 3
To get the list of commits associated with a given stream:
> LRANGE customer-42 0 3
1) "e2"
2) "e4"
3) "e7"
To achieve both fast performance and transactional guarantees, we can run each commit operation as a server-side Lua script (a sketch follows the list below), which will:
- Provide concurrency conflict detection
- Push event data to hash
- Associate event with a stream
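Here is a rough sketch of such a script, registered through redis-py. It is an illustration of the idea rather than a production implementation: the expected-version check, the key layout and the commit:seq counter are assumptions of this sketch.

```python
import redis

r = redis.Redis(decode_responses=True)

# KEYS[1] - hash with event data        (e.g. EventStore)
# KEYS[2] - list representing a stream  (e.g. customer-42)
# KEYS[3] - counter used to generate commit ids (an assumption of this sketch)
# ARGV[1] - expected stream version (optimistic concurrency check)
# ARGV[2] - event data
COMMIT_SCRIPT = """
local actual = redis.call('LLEN', KEYS[2])
if tonumber(ARGV[1]) ~= actual then
  return redis.error_reply('concurrency conflict: expected ' .. ARGV[1] .. ', got ' .. actual)
end
local commit_id = 'e' .. redis.call('INCR', KEYS[3])
redis.call('HSET', KEYS[1], commit_id, ARGV[2])
redis.call('RPUSH', KEYS[2], commit_id)
redis.call('PUBLISH', KEYS[1], commit_id)
return commit_id
"""

commit = r.register_script(COMMIT_SCRIPT)

# if continuing from the earlier examples (e1..e8 already stored), seed the counter first
r.set("commit:seq", 8)

# append a new event to customer-42, expecting the stream to be at version 3
commit(keys=["EventStore", "customer-42", "commit:seq"], args=[3, "Event8"])
```

Publishing the commit id from inside the script (the PUBLISH call above) also gives us the "notify subscribers in the same transaction" behaviour discussed in the next section.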
Publishing and replays
Redis provides a basic PUB/SUB primitive. This means that we can push an event notification to zero or more subscribers immediately (in the same transaction) or eventually:
> PUBLISH EventStore e1
This means that in order for the projection host (or any event listener) to catch up with the latest events, we (see the sketch after this list):
- Get the current version of the event store with HLEN
- Enumerate all events from 0 to that version with HMGET
- Subscribe to new events with SUBSCRIBE; if more events arrived while we were replaying, read that batch first
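A sketch of that catch-up logic with redis-py pub/sub. The channel name and the e1..eN convention are carried over from the examples above, and project() is a placeholder for real projection logic; a production listener would also handle reconnects and deduplicate commits that show up both in the replay and in the subscription. Here the subscription is opened before the replay, so nothing published in between is lost.

```python
import redis

r = redis.Redis(decode_responses=True)

def project(commit_id, event):
    # placeholder for real projection logic
    print("applying", commit_id, event)

# 1. subscribe first, so events published during the replay are not missed
pubsub = r.pubsub(ignore_subscribe_messages=True)
pubsub.subscribe("EventStore")

# 2. replay everything committed so far
count = r.hlen("EventStore")
commit_ids = [f"e{i}" for i in range(1, count + 1)]
for commit_id, event in zip(commit_ids, r.hmget("EventStore", commit_ids)):
    project(commit_id, event)

# 3. keep applying commits as they are published
for message in pubsub.listen():
    commit_id = message["data"]
    event = r.hget("EventStore", commit_id)
    project(commit_id, event)
```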
Additional side effects
First, since Redis is a key-value store, we can also persist the following within the same setup (see the sketch after this list):
- Aggregate snapshots
- Projected views
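For example, a snapshot can live next to the events as a plain key. The key naming and the snapshot payload here are just an illustration:

```python
import json
import redis

r = redis.Redis(decode_responses=True)

# persist a snapshot of customer-42 taken at stream version 3
snapshot = {"version": 3, "state": {"name": "John", "balance": 120}}
r.set("snapshot:customer-42", json.dumps(snapshot))

# on load: take the snapshot, then replay only the commits made after it
stored = json.loads(r.get("snapshot:customer-42"))
newer_commit_ids = r.lrange("customer-42", stored["version"], -1)
```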
Second, Redis's message queue capability can be handy for load-balancing commands between multiple worker servers.
Third, the server-side capability of associating events with event streams (an individual event stream is just a collection of pointers to event IDs) can be handy for event-sourced business processes.
Published: September 16, 2012.