Previously we uncovered some limitations of certain data modeling techniques which scatter state, behaviour and logic in layers. I introduced the Actor Model as a potential approach to better encapsulate our domain. Now, we'll dive deeper into the fundamental pillars that make actors a powerful paradigm and explore techniques to bring your domain models to life.
The Three Pillars of Actor Modeling
1 - Ownership
In some systems (especially anemic ones) you find nobody really owns the state. Any part of the application can just modify something. For example, the OrderService and the OverdueOrderService might both be allowed to modify the expiryDate, creating a complex web of potential conflicts and race conditions.
The Actor Model introduces complete state ownership. No external component can directly manipulate an actor's internal state. Protected from concurrency, the state becomes a fully controlled resource - the complexity is localized within each actor's boundaries.
Example: An OrderActor that completely controls the order lifecycle. External systems and APIs don't modify the order directly; instead, they invoke methods like Create, AddItem, and Cancel. The actor decides how to respond, maintaining its internal consistency.
Ownership ensures that actors operate independently, which simplifies reasoning about the system’s behavior and enhances modularity.
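To make the ownership idea concrete, here is a minimal sketch in Python. The class name, method names, and status values are illustrative assumptions, not the API of any particular actor framework; the point is that all mutation goes through the actor's own methods.

```python
# Hypothetical sketch: an actor that exclusively owns its state.
# External code can only invoke operations; it never touches the
# fields directly, so every transition is validated in one place.

class OrderActor:
    def __init__(self, order_id):
        self.order_id = order_id
        self._status = "New"    # private: the actor alone mutates this
        self._items = []

    def create(self):
        if self._status != "New":
            raise ValueError("order already created")
        self._status = "Created"

    def add_item(self, sku, qty):
        if self._status != "Created":
            raise ValueError("can only add items to a created order")
        self._items.append((sku, qty))

    def cancel(self):
        if self._status not in ("New", "Created"):
            raise ValueError("cannot cancel in state " + self._status)
        self._status = "Cancelled"

    @property
    def status(self):
        # Read-only view; only the actor decides transitions.
        return self._status
```

Because every path into the state is a named operation, invalid transitions surface as errors at the boundary instead of as silent corruption.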
2 - Lifecycle
Actors aren’t just static objects. They have lifecycle hooks managed by the runtime. The virtual actor pattern has hooks such as:
Activation — Actors come to life on demand based on their identity. When an actor is reactivated it comes back with its previous state.
Deactivation — When idle (or prompted) they gracefully deactivate, freeing up resources. Systems with potentially millions of actors can exist without exhausting memory or processing power.
State Persistence — State can be automatically saved and can be restored across different nodes or after system restarts. It can be stored in simple blob storage or in a database of your choice.
Timers — Timers allow actors to schedule repeated or one-time internal actions. They are useful for things like sessions, timeouts, polling, retries and delays.
Reminders — A persisted scheduling mechanism. They survive actor deactivations and system restarts, ensuring critical time-based actions are never missed. They are useful for notifications, grace periods, renewals and expiration.
Example: A UserSessionActor activates when a user logs in or starts a session. When the session expires or the user logs out, the actor is deactivated. A timer fires to clean up the session if the user did not log out. Events like session.created, session.completed and session.timeout are emitted.
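The lifecycle hooks above can be sketched as follows. The hook names (on_activate, on_deactivate, on_timer_expired) and the dictionary standing in for persisted storage are assumptions; a real runtime would invoke these hooks and manage persistence for you.

```python
# Hypothetical lifecycle sketch: activation restores state, a timer
# cleans up abandoned sessions, deactivation persists state.

class UserSessionActor:
    def __init__(self, session_id, store):
        self.session_id = session_id
        self.store = store        # stands in for blob storage / a database
        self.state = {}
        self.events = []

    def on_activate(self):
        # Come back to life with previous state, or start fresh.
        self.state = self.store.get(self.session_id, {"status": "created"})
        if self.state["status"] == "created":
            self.events.append("session.created")

    def on_timer_expired(self):
        # Fired by a (simulated) timer if the user never logged out.
        if self.state["status"] != "completed":
            self.state["status"] = "timed_out"
            self.events.append("session.timeout")

    def logout(self):
        self.state["status"] = "completed"
        self.events.append("session.completed")

    def on_deactivate(self):
        # Persist state so a later activation can restore it.
        self.store[self.session_id] = self.state
```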
3 - Transactions
The Actor Model reimagines transactions through message processing:
Sequential Message Processing: Each actor processes messages one at a time, eliminating race conditions and out-of-order processing.
Atomic State Changes: State modifications within an actor are inherently atomic.
Idempotent Design: Messages can be safely retried without unintended side effects.
Some actor frameworks, such as Microsoft Orleans, include ACID transaction features. This is useful for creating inter-actor transactions.
Example: When processing a payment, a PaymentActor ensures that the transaction is completed successfully before updating the order status, maintaining consistency even if multiple payment attempts occur. The transaction itself could also be an actor, allowing for more granular control and isolation of payment processes.
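A minimal sketch of sequential, idempotent message processing. The mailbox, the message shape ({"id", "amount"}), and the dedup-by-message-id approach are illustrative assumptions; real frameworks provide the mailbox, but idempotency is still your design responsibility.

```python
from collections import deque

# Hypothetical sketch: a mailbox drained one message at a time, with
# message-id deduplication so retried deliveries have no side effects.

class PaymentActor:
    def __init__(self):
        self.mailbox = deque()
        self.processed_ids = set()   # enables safe retries (idempotency)
        self.balance = 0

    def send(self, message):
        self.mailbox.append(message)

    def run(self):
        # One message at a time: no interleaving, no race conditions.
        while self.mailbox:
            msg = self.mailbox.popleft()
            if msg["id"] in self.processed_ids:
                continue                     # duplicate delivery: ignore
            self.balance += msg["amount"]    # atomic w.r.t. other messages
            self.processed_ids.add(msg["id"])
```

Because the duplicate carries the same message id, retrying it changes nothing: the balance reflects each logical payment exactly once.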
Everything is an Operation: Workflow or Process
Look over your codebase, look at the requests hitting your API. Almost everything can be written as an operation - either as a workflow or a process. Patterns like Command Query Separation (CQS) and Command Query Responsibility Segregation (CQRS) reinforce this by clearly delineating between the actions that change state (commands) and those that retrieve state (queries), often adopting an RPC (Remote Procedure Call) style for communication. Some codebases clearly separate the DTOs for commands, queries and responses, so that they become, for example, CreateOrderCommandDto. This perspective shifts the focus from data manipulation to managing sequences of actions that achieve specific business objectives.
Think of actors not as database rows, but as independent, self-contained processes.
Adopting an RPC-style interface for actors aligns with treating interactions as operations. Instead of modelling against entity/data mutations, each API request can correspond to a specific operation that an actor performs. For example, an API endpoint like POST /orders can translate to calling the Create method on an OrderActor, which handles the entire creation process, including validation, state changes and even database updates.
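A sketch of that mapping, under stated assumptions: the routing table, the in-memory actor registry, and the OrderActor method are all hypothetical stand-ins for a real HTTP framework and actor runtime.

```python
# Hypothetical sketch: an RPC-style endpoint translated into a named
# operation (Order.Create) on an actor resolved by identity.

class OrderActor:
    def __init__(self):
        self.status = None

    def create(self, payload):
        # Validation, state changes and persistence would live here.
        if not payload.get("customer_id"):
            raise ValueError("customer_id is required")
        self.status = "Created"
        return {"operation": "Order.Create", "status": self.status}

actors = {}  # actor registry keyed by identity

def handle_request(method, path, payload):
    # Each (method, path) pair maps to one named actor operation.
    if (method, path) == ("POST", "/orders"):
        actor = actors.setdefault(payload["order_id"], OrderActor())
        return actor.create(payload)
    raise LookupError("no operation for " + method + " " + path)
```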
If the actors are decoupled from external dependencies and operations are clearly defined and encapsulated within them, it's easier to write tests that cover specific workflows or processes. Additionally, it becomes easy to track the flow of named operations such as Order.Create.
Example: In a billing system, viewing invoice generation as an operation allows you to encapsulate the entire workflow within an InvoiceGenerationActor. Now the OrderActor isn't concerned with invoices. By treating each step as part of the operation and creating actors for them, you find that everything is handled cohesively and reliably - remember the three pillars of Ownership, Lifecycle and Transactions - the invoice generation has its own lifecycle that is separate from the order.
State Representation
Properly modeling state transitions is critical for creating reliable and predictable actor-based systems. Imagine your software system as a living, breathing collection of things, with a clear set of states, precise rules for transitions, and an intrinsic understanding of its own lifecycle.
Don’t treat your state as some passive, mutable collection of properties. Flip the paradigm: make it an active, intelligent construct with its own rules and behaviors.
The core principles of state representation are:
Explicitly defined — Events produce transitions. Entry and exit conditions. Associated behaviors and constraints.
Controlled transitions — No arbitrary transitions, only valid transitions allowed. Transitions are logged, traceable and predictable.
Owned by one thing — Only one gatekeeper.
Actors can host multiple interconnected state machines, creating complex yet manageable system behaviors. They are the protected enclave of order. You don’t need to bake the state directly into the actor, states can be their own defined objects - completely unit testable. You can even model an actor to encapsulate a single state machine, like a controller of sorts.
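The "states as their own defined objects" idea can be sketched like this. The transition table and event names are illustrative assumptions; the point is that the machine is completely unit testable without any actor runtime, and an actor would simply host it.

```python
# Hypothetical sketch: a state machine as a standalone object with
# explicitly defined, logged transitions - no arbitrary state changes.

class OrderStateMachine:
    TRANSITIONS = {
        ("Created", "submit"): "Submitted",
        ("Submitted", "approve"): "Approved",
        ("Submitted", "reject"): "Rejected",
    }

    def __init__(self):
        self.state = "Created"
        self.history = []   # transitions are logged and traceable

    def apply(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:   # only valid transitions allowed
            raise ValueError("invalid transition: " + event + " from " + self.state)
        self.history.append((self.state, event, self.TRANSITIONS[key]))
        self.state = self.TRANSITIONS[key]
```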
Modelling Techniques
Effective actor modelling requires employing skills derived from classic Object-Oriented Programming (OOP), Domain-Driven Design (DDD), Functional and Event Driven styles. Here are five essential modelling techniques to help you design robust and scalable actor-based systems:
1 - Jobs to be done
The “jobs to be done” framework is an approach where you seek to understand the customer’s specific goal (or operation), with the premise that the customer will go and “hire” a product to complete this job. Think of it as writing the job description for a role you need to fill: "What specific job is this actor being 'hired' to do?"
Each actor becomes an employee with:
A clear purpose (with a raison d'etre)
Specific responsibilities (doesn’t need to worry about anything else)
Defined boundaries (won’t overstep)
Unique capabilities (skills, traits and passions)
Actors modelled in this manner tend to have suffixes that are nouns such as Manager, Coordinator, Controller and Warden.
Example: Inventory Management.
Job Title: Inventory Control Specialist
Job Description:
Maintain accurate real-time inventory tracking
Manage stock levels across multiple product categories and warehouses
Prevent stock-outs
Assist with stocktake where required
Key Responsibilities:
Track product quantities
Process inventory updates (when stock leaves)
Generate restocking recommendations to ensure no understock occurs
Handle inventory transfers between warehouses
Interactions:
Receive updates from sales
Notify purchasing for restock
Provide inventory status for order fulfilment
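The job description above can be sketched as an actor. The method names, the restock threshold, and the warehouse key are illustrative assumptions mapping the listed responsibilities onto operations.

```python
# Hypothetical sketch of the "Inventory Control Specialist" job as an
# actor keyed by warehouse: track quantities, process sales updates,
# and notify purchasing before stock runs out.

class InventoryActor:
    def __init__(self, warehouse_id, restock_threshold=10):
        self.warehouse_id = warehouse_id
        self.restock_threshold = restock_threshold
        self.stock = {}
        self.restock_requests = []

    def receive_stock(self, sku, qty):
        # Handle inbound transfers / deliveries.
        self.stock[sku] = self.stock.get(sku, 0) + qty

    def process_sale(self, sku, qty):
        # "Receive updates from sales" - stock leaving the warehouse.
        if self.stock.get(sku, 0) < qty:
            raise ValueError("insufficient stock for " + sku)
        self.stock[sku] -= qty
        if self.stock[sku] < self.restock_threshold:
            # "Notify purchasing for restock" before a stock-out occurs.
            self.restock_requests.append(sku)
```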
2 - Digital Twins
Digital Twins are virtual replicas of real-world things, enabling more lifelike and accurate simulations. You begin by realistically modelling your actors and their interactions to reflect their physical counterparts. Establish more natural and detailed relationships between actors, instead of thinking of things as just “related” or a “join”.
A digital twin becomes a self-contained entity that:
Mirrors the real-world object's characteristics
Understands its own lifecycle and how it is fed new information
Can predict and respond to potential scenarios
Maintains its own internal state and rules of interaction
Example: The ThermostatActor, identified by its serial number, tracks the temperature. It stores the last 5 minutes of data, as well as the average temperature. It chooses to persist only the current value, and keeps an in-memory buffer of previous values to detect anomalies using statistical analysis. It also holds the alarm set points and tells the AlarmManager actor when an alarm state has been entered. The ThermostatActor is owned by the RoomActor, which receives the temperature reading only once a minute via a timer.
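A sketch of the twin's in-memory buffer and anomaly check. The window size, the 3-sigma rule, and the notion of returning an anomaly flag (rather than messaging an AlarmManager directly) are assumptions made for a self-contained example.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical digital-twin sketch keyed by serial number: persist only
# the current value, keep a small in-memory buffer, and flag readings
# that deviate more than 3 standard deviations from the recent mean.

class ThermostatActor:
    def __init__(self, serial_number, window=5):
        self.serial_number = serial_number
        self.readings = deque(maxlen=window)  # in-memory buffer only
        self.current = None                   # the value we'd persist

    def record(self, temperature):
        anomaly = False
        if len(self.readings) >= 3:           # need a few samples first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(temperature - mu) > 3 * sigma:
                anomaly = True   # here we'd notify the AlarmManager actor
        self.readings.append(temperature)
        self.current = temperature
        return anomaly
```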
3 - Personification
Personification involves modeling processes, procedures, or tasks as if they were individual personas with distinct behaviors and characteristics. It is similar to “jobs to be done” but it differs slightly in that you can use it to reflect concepts like importance, value and criticality.
Different personas can encapsulate diverse behaviors and responses, adding flexibility and richness to the system's operations. The persona can be a transient wrapper around the management of a task or process, where it can be given traits that work to achieve the desired outcome.
Example: The DataAnalyzerActor personifies a data analyst’s capability to inspect and analyze a stream of data. The actor is fed events, and a timer runs as a watchdog to ensure events are still coming in.
4 - Workflow and Process Orchestration
Here we’re mapping the orchestration from the user's or the business's perspective, which ensures that the system aligns closely with real-world operations. It means mapping out the necessary steps, interactions and transactions that constitute a complete workflow.
Because we’re using a stateful Actor for this orchestration we can hold the initial request, act idempotently and of course retry any failed steps. It might also be useful to model the actor in such a way that it drives the presentation layer - making it easy to change the flow without also editing the UI.
Example: DeliveryRequestActor incorporates the several steps involved in preparing a delivery, validating it, and sending it off to an external system to be accepted and processed. This actor models the wizard-like flow we’ve presented to the user in the mobile application. The actor collects all of the form data and provides the next available steps back to the front end. If the user closes the mobile application, they can resume where they left off, as the actor can be brought back to life with its previous state. When the user is ready to submit their request, the actor can provide feedback to say it was successful. Note: consider how useful this can be in real-time multi-user systems as well, where two people are working on the same workflow.
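The resume-where-you-left-off behaviour can be sketched like this. The step names and the dictionary standing in for persisted storage are assumptions; a real runtime would persist and restore the state around activation.

```python
# Hypothetical orchestration sketch: a wizard-style actor that collects
# form data step by step, persists after each step, and resumes from
# persisted state when reactivated.

class DeliveryRequestActor:
    STEPS = ["address", "package", "confirm"]

    def __init__(self, request_id, store):
        self.request_id = request_id
        self.store = store
        # Resume where the user left off if state was persisted earlier.
        saved = store.get(request_id, {"form": {}, "step": 0})
        self.form, self.step = saved["form"], saved["step"]

    def submit_step(self, data):
        self.form[self.STEPS[self.step]] = data
        self.step += 1
        self.store[self.request_id] = {"form": self.form, "step": self.step}
        if self.step < len(self.STEPS):
            # Drive the presentation layer: tell the UI what comes next.
            return {"next_step": self.STEPS[self.step]}
        return {"status": "submitted", "form": self.form}
```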
5 - Aggregations
This is probably the hardest technique to grasp, but once you get it, you’ll love it. Aggregation actors draw from DDD and statistical concepts.
Aggregation involves thinking of your actors as part of a graph, where smaller, downstream actors notify upstream actors about important events, metrics, or states. This information flows into an aggregation-style actor, which can store the information, relay it, or perform statistical analysis. These aggregation actors can also aggregate information further up the actor graph (aggregateception), creating layers of data consolidation and processing. They're like the executive dashboards of a complex organizational system, constantly synthesizing information from various sources.
Their core characteristics are:
Downstream actors (more specific, granular actors) generate events and metrics
Upstream aggregation actors collect, process, and redistribute this information
Data flows like a hierarchical communication network
These actors have hierarchical intelligence. Unlike traditional data collection methods, aggregation actors aren't passive repositories. They don’t need to be queried; they have all the data hot and live. They’re active and intelligent nodes that:
Understand the exact context of their data
Can apply statistical reasoning and analysis to detect patterns and anomalies
Can make adaptive decisions, emit events or call other actors immediately
Example: The CustomerOrderAggregatorActor manages the order-specific metrics for each customer; its key is the customer ID. Each individual metric stores the last 10 values to provide simple trend information (up or down). When new orders for this customer change state, these metrics are tracked so that the customer dashboard loads in 1ms. Active orders, overdue orders, shipped orders, orders in the last 30 days; every metric is tracked. The RegionOrderAggregatorActor has the same role, except it tracks only the metrics for a specific region (in its ID), and this drives the internal regional sales dashboard. Finally, we have the OrderAggregatorActor, whose identity also includes the year and month, for example 2024-01. This aggregator ensures the values are also persisted into the ClickHouse database for future analytical queries.
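The per-customer aggregator can be sketched like this. The method names and snapshot shape are assumptions; the last-10-values-per-metric trend follows the example above.

```python
from collections import deque

# Hypothetical aggregation sketch keyed by customer ID: downstream order
# actors push metric updates, and the dashboard reads hot, pre-aggregated
# data without running any query.

class CustomerOrderAggregatorActor:
    def __init__(self, customer_id):
        self.customer_id = customer_id
        self.metrics = {}   # metric name -> last 10 values

    def record(self, metric, value):
        # Called by downstream actors when an order changes state.
        self.metrics.setdefault(metric, deque(maxlen=10)).append(value)

    def trend(self, metric):
        values = self.metrics.get(metric)
        if not values or len(values) < 2:
            return "flat"
        if values[-1] > values[0]:
            return "up"
        if values[-1] < values[0]:
            return "down"
        return "flat"

    def snapshot(self):
        # Everything the dashboard needs, already hot and live.
        return {m: {"current": v[-1], "trend": self.trend(m)}
                for m, v in self.metrics.items()}
```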
Thoughts
Several trends influence how we model systems. Some of them are technical and fairly logical, others more philosophical. The techniques I’ve just described are derived from:
Domain Driven Design — alignment between software design and business domains.
Event-Driven Systems — asynchronous communication and decoupled components
Distributed Systems — data streaming, in-memory data grids and large-scale analytics.
Not all actors need to persist their state to an external storage provider (e.g. database or blob storage). You can choose to create ephemeral actors, who rely solely on memory - e.g. routers, buffers, reducers, or worker processors.
In the next post we will use some of these techniques to map out an actor system for real-time aircraft tracking.
TLDR
Ownership: Actors exclusively manage their internal state, preventing external modifications. They’re not supposed to map to your entities, but rather your business processes and customer interactions.
Lifecycle Management: Actors have dynamic lifecycles with activation, deactivation, state persistence, timers, and reminders, enabling scalable systems that efficiently use resources and maintain state in a fault tolerant manner. No need for CRON jobs or queries to lookup who needs a reminder notification email - let the actor wake up on demand and run its own show!
Transactional Integrity: Through sequential message processing, atomic state changes, and idempotent designs, actors ensure consistent and reliable operations, eliminating race conditions and handling retries gracefully.
State Representation: Actors should actively manage state transitions with explicitly defined events and controlled transitions, ensuring predictable behaviors and encapsulated state machines for enhanced reliability. Embed finite state machines into your actor’s state for ultimate control.
Modeling Techniques: Employ strategies like “Jobs to be Done”, Digital Twins, Personification, Workflow Orchestration, and Aggregations to create robust, scalable, and fully domain aligned actor systems that mirror real-world business processes and workflows.