Most applications use a single data model for both reading and writing. You query the same tables you insert into, update the same entities you display on dashboards, and shape your domain objects around the constraints of your relational schema. This works fine for simple CRUD applications. But as domain complexity grows -- as business rules multiply, audit requirements tighten, and read patterns diverge sharply from write patterns -- a unified model starts to crack under its own weight.
Command Query Responsibility Segregation (CQRS) and event sourcing are two architectural patterns that address this tension. They are distinct concepts that can be used independently, but they complement each other so well that they are frequently discussed together. In this guide, we walk through what each pattern is, why they pair naturally, and how to implement them with concrete code examples.
What CQRS Is and Why It Matters
CQRS is the principle of separating your application's write operations (commands) from its read operations (queries) into distinct models. Instead of a single service that handles both inserts and selects against the same database, you build two sides:
- The command side accepts commands, validates them against business rules, and produces state changes. It is optimized for consistency and domain logic correctness.
- The query side serves read requests. It is optimized for the specific shapes of data that your UI, reports, or API consumers need.
This separation matters for two reasons in particular. First, read and write workloads have fundamentally different scaling characteristics. A typical e-commerce system might process 100 writes per second but serve 10,000 reads per second. With a single model, you cannot scale reads and writes independently. Second, the shape of data you write is rarely the shape of data you read. An order aggregate might contain line items, shipping addresses, payment details, and discount codes -- but your order list page only needs order ID, total, status, and date. With CQRS, the read model can be a denormalized projection that exactly matches the query, eliminating complex joins at read time.
Here is a minimal example of the command side in TypeScript:
// Command definition
interface PlaceOrderCommand {
orderId: string;
customerId: string;
items: Array<{ productId: string; quantity: number; price: number }>;
}
// Command handler
class PlaceOrderHandler {
constructor(
private readonly repository: OrderRepository,
private readonly eventBus: EventBus
) {}
async handle(command: PlaceOrderCommand): Promise<void> {
// Validate business rules
if (command.items.length === 0) {
throw new InvalidOrderError("Order must contain at least one item");
}
const total = command.items.reduce(
(sum, item) => sum + item.price * item.quantity,
0
);
// Create the aggregate and apply domain logic
const order = Order.create(
command.orderId,
command.customerId,
command.items,
total
);
// Persist and publish domain events
await this.repository.save(order);
await this.eventBus.publishAll(order.uncommittedEvents);
}
}
And the corresponding query side:
// Read model -- a flat, denormalized structure
interface OrderSummary {
orderId: string;
customerName: string;
totalAmount: number;
status: string;
itemCount: number;
placedAt: Date;
}
// Query handler
class GetOrderSummariesHandler {
constructor(private readonly readStore: ReadStore) {}
async handle(customerId: string): Promise<OrderSummary[]> {
// Direct query against a denormalized read store
// No joins, no complex mapping -- the data is already shaped for this query
return this.readStore.query(
"SELECT * FROM order_summaries WHERE customer_id = $1 ORDER BY placed_at DESC",
[customerId]
);
}
}
Notice that the command handler deals with aggregates, business rules, and domain events. The query handler does a straightforward lookup against a pre-built read model. Neither side needs to compromise for the other.
What Event Sourcing Is
Event sourcing changes how you persist state. Instead of storing the current state of an entity in a row that gets updated in place, you store the sequence of domain events that led to the current state. The order's state is not a row with status = 'shipped'. It is a stream of events: OrderPlaced, PaymentConfirmed, OrderShipped.
To reconstruct the current state of an entity, you replay its event stream from the beginning. The aggregate "rehydrates" by applying each event in order:
class Order {
// Definite-assignment assertions: these fields are set in apply()
private id!: string;
private status!: string;
private items: OrderItem[] = [];
private total: number = 0;
private events: DomainEvent[] = [];
// Reconstruct state from event history
static fromEvents(events: DomainEvent[]): Order {
const order = new Order();
for (const event of events) {
order.apply(event);
}
return order;
}
// Apply a domain event to mutate state
private apply(event: DomainEvent): void {
switch (event.type) {
case "OrderPlaced":
this.id = event.data.orderId;
this.status = "placed";
this.items = event.data.items;
this.total = event.data.total;
break;
case "PaymentConfirmed":
this.status = "paid";
break;
case "OrderShipped":
this.status = "shipped";
break;
case "OrderCancelled":
this.status = "cancelled";
break;
}
}
// Domain operation that produces new events
ship(trackingNumber: string): void {
if (this.status !== "paid") {
throw new InvalidStateError("Can only ship paid orders");
}
const event: DomainEvent = {
type: "OrderShipped",
data: { orderId: this.id, trackingNumber },
timestamp: new Date(),
};
this.apply(event);
this.events.push(event);
}
get uncommittedEvents(): DomainEvent[] {
return [...this.events];
}
}
This approach gives you a complete audit log by default. You never lose information because you never overwrite data. If a bug corrupts a read model, you can rebuild it by replaying the event stream. If a new business requirement emerges that needs historical data you previously discarded, the events still contain it.
Why CQRS and Event Sourcing Pair Well Together
CQRS and event sourcing are independent patterns, but they solve each other's weaknesses. Event sourcing makes writes simple and audit-friendly, but reading from an event stream is expensive -- you must replay potentially thousands of events to answer a query. CQRS solves this by maintaining separate read models (projections) that subscribe to domain events and maintain denormalized views of the data.
Conversely, CQRS introduces the challenge of keeping read models synchronized with the write model. Event sourcing provides the synchronization mechanism: the event stream is the single source of truth, and projections consume that stream to stay current.
The data flow looks like this:
- A command arrives and is handled by the command side.
- The aggregate produces domain events.
- Events are appended to the event store (the write side's persistence).
- Event handlers (projections) consume the events and update read models.
- Queries are served directly from the read models.
Here is a projection that maintains the order_summaries read model:
class OrderSummaryProjection {
constructor(private readonly readStore: ReadStore) {}
async handle(event: DomainEvent): Promise<void> {
switch (event.type) {
case "OrderPlaced":
// customer_name for the OrderSummary view would be denormalized here as well,
// e.g. from an enriched event or a customer lookup at projection time
await this.readStore.execute(
`INSERT INTO order_summaries (order_id, customer_id, total_amount, status, item_count, placed_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[
event.data.orderId,
event.data.customerId,
event.data.total,
"placed",
event.data.items.length,
event.timestamp,
]
);
break;
case "OrderShipped":
await this.readStore.execute(
`UPDATE order_summaries SET status = $1 WHERE order_id = $2`,
["shipped", event.data.orderId]
);
break;
case "OrderCancelled":
await this.readStore.execute(
`UPDATE order_summaries SET status = $1 WHERE order_id = $2`,
["cancelled", event.data.orderId]
);
break;
}
}
}
Each projection is a simple, focused event handler. If you need a new read model -- say, a dashboard showing revenue by region -- you create a new projection, replay the existing event stream through it, and the new read model is fully populated from historical data. The replay is the backfill; no hand-written migration scripts against production data.
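That replay can be sketched as a loop that feeds historical events to a fresh projection. This is an illustrative sketch, not a specific library API: the `Projection` interface and the in-memory revenue projection are assumptions modeled on the examples above.

```typescript
// Event shape assumed from the earlier examples
interface DomainEvent {
  type: string;
  data: Record<string, any>;
  timestamp: Date;
}

interface Projection {
  handle(event: DomainEvent): Promise<void>;
}

// Replaying an existing stream through a brand-new projection
// populates its read model entirely from historical data.
async function rebuildProjection(
  allEvents: DomainEvent[], // in practice: a paged read from the event store
  projection: Projection
): Promise<void> {
  for (const event of allEvents) {
    await projection.handle(event); // events are applied strictly in order
  }
}

// A toy revenue-by-region projection kept in memory for illustration.
class RevenueByRegionProjection implements Projection {
  readonly totals = new Map<string, number>();

  async handle(event: DomainEvent): Promise<void> {
    if (event.type === "OrderPlaced") {
      const region = event.data.region as string;
      const prev = this.totals.get(region) ?? 0;
      this.totals.set(region, prev + (event.data.total as number));
    }
  }
}
```

A real implementation would read the event store in pages and checkpoint its position, but the core mechanic is exactly this loop.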
Event Store Design and Infrastructure
The event store is the heart of an event-sourced system. At its core, it is an append-only log where each entry contains a stream ID (typically the aggregate ID), an event type, the event payload, a sequence number, and a timestamp. The critical invariant is that appending to a stream must be atomic and respect optimistic concurrency -- if two processes try to append to the same stream simultaneously, one must fail.
A simple PostgreSQL-based event store works well for many applications:
CREATE TABLE event_store (
id BIGSERIAL PRIMARY KEY,
stream_id VARCHAR(255) NOT NULL,
event_type VARCHAR(255) NOT NULL,
payload JSONB NOT NULL,
metadata JSONB DEFAULT '{}',
sequence_number INT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE (stream_id, sequence_number)
);
-- The UNIQUE constraint already creates the index needed for stream lookups,
-- so no separate index on (stream_id, sequence_number) is required.
The unique constraint on (stream_id, sequence_number) provides optimistic concurrency control. When saving events, your repository loads the current stream, calculates the expected next sequence number, and attempts an insert. If another process has appended an event in the meantime, the unique constraint violation tells you to reload and retry.
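The load-append-retry loop can be sketched in TypeScript. This is an in-memory stand-in for the `event_store` table, with a `ConcurrencyError` playing the role of the unique-constraint violation; all names here are illustrative.

```typescript
interface StoredEvent {
  streamId: string;
  type: string;
  payload: unknown;
  sequenceNumber: number;
}

class ConcurrencyError extends Error {}

// In-memory stand-in for the event_store table, keyed by stream ID.
class InMemoryEventStore {
  private streams = new Map<string, StoredEvent[]>();

  load(streamId: string): StoredEvent[] {
    return this.streams.get(streamId) ?? [];
  }

  // Append fails if another writer advanced the stream first,
  // mirroring the UNIQUE (stream_id, sequence_number) violation.
  append(
    streamId: string,
    expectedVersion: number,
    events: { type: string; payload: unknown }[]
  ): void {
    const stream = this.streams.get(streamId) ?? [];
    if (stream.length !== expectedVersion) {
      throw new ConcurrencyError(
        `expected version ${expectedVersion}, found ${stream.length}`
      );
    }
    events.forEach((e, i) => {
      stream.push({ ...e, streamId, sequenceNumber: expectedVersion + i + 1 });
    });
    this.streams.set(streamId, stream);
  }
}

// Load the stream, attempt the append, and on a conflict reload and retry.
function saveWithRetry(
  store: InMemoryEventStore,
  streamId: string,
  newEvents: { type: string; payload: unknown }[],
  maxAttempts = 3
): void {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const current = store.load(streamId); // reload on every attempt
    try {
      store.append(streamId, current.length, newEvents);
      return;
    } catch (err) {
      if (!(err instanceof ConcurrencyError) || attempt === maxAttempts) throw err;
      // another writer won the race; loop back, reload, and retry
    }
  }
}
```

In the PostgreSQL version, the `catch` branch would match the unique-violation error code (`23505`) rather than a custom error class.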
For production systems with high event throughput, purpose-built event stores offer significant advantages. EventStoreDB is one of the most mature options: it provides native event stream semantics, built-in projections, subscription support, and optimistic concurrency out of the box. Marten is an excellent choice for .NET teams -- it uses PostgreSQL as its backing store but provides a full event sourcing API on top, including inline and async projections, stream aggregation, and event archiving.
// Marten event sourcing example in C#
public class OrderAggregate
{
public Guid Id { get; private set; }
public string Status { get; private set; }
public decimal Total { get; private set; }
public void Apply(OrderPlaced e)
{
Id = e.OrderId;
Status = "placed";
Total = e.Total;
}
public void Apply(OrderShipped e) => Status = "shipped";
public void Apply(OrderCancelled e) => Status = "cancelled";
}
// Usage with Marten's document session
await using var session = store.LightweightSession();
// Append events to a stream
var orderId = Guid.NewGuid();
session.Events.StartStream<OrderAggregate>(orderId,
new OrderPlaced(orderId, customerId, items, total),
new PaymentConfirmed(orderId, paymentId)
);
await session.SaveChangesAsync();
// Rehydrate aggregate from events
var order = await session.Events.AggregateStreamAsync<OrderAggregate>(orderId);
Handling Eventual Consistency
CQRS with event sourcing introduces eventual consistency between the write side and read side. After a command is processed and events are stored, there is a delay -- typically milliseconds, but potentially longer under load -- before projections update the read models. This means a user who places an order and immediately navigates to their order list might not see the new order yet.
Several strategies address this:
Read-your-writes consistency. After processing a command, return the expected state directly from the command handler rather than redirecting to a query. The command response includes enough data for the UI to display the result without querying the read model.
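A minimal sketch of read-your-writes, assuming a hypothetical `PlaceOrderResult` shape: the command side already knows everything the confirmation view needs, so it echoes that state back instead of redirecting the client to a query.

```typescript
interface PlaceOrderCommand {
  orderId: string;
  customerId: string;
  items: Array<{ productId: string; quantity: number; price: number }>;
}

// Hypothetical response type: enough data for the UI to render the
// result immediately, before any projection has caught up.
interface PlaceOrderResult {
  orderId: string;
  status: "placed";
  totalAmount: number;
  itemCount: number;
}

function buildPlaceOrderResult(command: PlaceOrderCommand): PlaceOrderResult {
  const totalAmount = command.items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  // Return the expected state directly from the command handler,
  // so the UI never has to race the projection.
  return {
    orderId: command.orderId,
    status: "placed",
    totalAmount,
    itemCount: command.items.length,
  };
}
```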
Causal consistency tokens. The command side returns a position token (the sequence number of the last written event). The query side accepts this token and waits until the read model has caught up to at least that position before returning results.
// Command handler returns a consistency token
async handle(command: PlaceOrderCommand): Promise<CommandResult> {
// ...create the aggregate and apply domain logic as in the earlier handler...
const events = order.uncommittedEvents;
const position = await this.eventStore.append(order.id, events);
return {
success: true,
orderId: command.orderId,
consistencyToken: position,
};
}
// Query handler respects the token
async handle(customerId: string, afterPosition?: number): Promise<OrderSummary[]> {
if (afterPosition) {
await this.readStore.waitForPosition(afterPosition, { timeout: 5000 });
}
return this.readStore.query("SELECT * FROM order_summaries WHERE customer_id = $1", [customerId]);
}
Polling with optimistic UI. The UI optimistically shows the expected result immediately and then polls or subscribes to updates to confirm. This is the approach most modern single-page applications use -- it provides an instant user experience while the backend converges.
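The polling half of that strategy can be sketched with a hypothetical `fetchOrder` callback standing in for a query-side call: the UI shows the optimistic row immediately, then confirms it in the background once the read model returns the order or a deadline passes.

```typescript
// Poll a read-model lookup until it returns a value or the deadline passes.
// fetchOrder is a hypothetical callback standing in for a query-side call.
async function pollUntilVisible<T>(
  fetchOrder: () => Promise<T | null>,
  { intervalMs = 200, timeoutMs = 5000 } = {}
): Promise<T | null> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await fetchOrder();
    if (result !== null) return result; // projection has caught up
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return null; // caller keeps the optimistic state and surfaces a warning
}
```

In practice a WebSocket or server-sent-events subscription replaces the polling loop, but the contract is the same: optimistic state first, confirmation when the backend converges.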
When CQRS and Event Sourcing Are Appropriate
CQRS and event sourcing are powerful but not universally applicable. They introduce complexity -- separate read and write models, eventual consistency, event schema evolution, and more infrastructure to maintain. Apply them when:
- Audit requirements are strict. Financial systems, healthcare, and compliance-heavy domains benefit enormously from event sourcing's inherent audit trail.
- Read and write patterns diverge significantly. If you have many different views of the same data, maintaining separate read models is cleaner than building complex queries against a normalized write model.
- The domain is genuinely complex. If your business logic involves workflows, state machines, or multi-step processes, modeling them as event streams is more natural than tracking state in mutable columns.
- You need temporal queries. Event sourcing lets you reconstruct the state of an entity at any point in time, which is invaluable for debugging, compliance, and analytics.
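The temporal-query point can be sketched as replaying a stream only up to a cutoff timestamp; the event shape is assumed from the earlier examples, and a real system would more often replay to a sequence number than a wall-clock time.

```typescript
interface DomainEvent {
  type: string;
  data: Record<string, any>;
  timestamp: Date;
}

// Rebuild an order's status as of a given point in time by replaying
// only the events recorded at or before the cutoff.
function statusAsOf(events: DomainEvent[], cutoff: Date): string | undefined {
  let status: string | undefined;
  for (const event of events) {
    if (event.timestamp > cutoff) break; // events are stored in order
    switch (event.type) {
      case "OrderPlaced": status = "placed"; break;
      case "PaymentConfirmed": status = "paid"; break;
      case "OrderShipped": status = "shipped"; break;
      case "OrderCancelled": status = "cancelled"; break;
    }
  }
  return status;
}
```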
Avoid them when your domain is primarily CRUD. If your application is a content management system where users create, edit, and delete records with minimal business logic, the overhead of CQRS and event sourcing will slow you down without providing meaningful benefits. A well-structured monolith with a clean service layer will serve you better.
Also avoid premature adoption. You can introduce CQRS for a single bounded context within a larger system without committing the entire architecture to it. Start with the domain where the complexity justifies the investment.
Moving Forward With CQRS and Event Sourcing
CQRS and event sourcing represent a significant shift in how you think about data persistence and application architecture. The command side focuses on business rule enforcement and produces an immutable stream of domain events. The query side consumes those events to maintain read-optimized projections. Together, they provide a system that is auditable, scalable, and aligned with complex domain logic.
The key is to start small. Pick a bounded context with genuine complexity -- one where you are already fighting your ORM or struggling with competing read and write requirements. Implement CQRS there, validate the approach, and expand only when the pattern proves its value.
If your team is evaluating CQRS and event sourcing for a new system or considering refactoring an existing one, Maranatha Technologies can help you assess the fit, design the event model, and implement a production-ready solution. Visit our software architecture services or get in touch to discuss your project.