Why Commands
Every state-changing operation is modeled as a command — a named, typed object with a UUID and a paper trail. This applies to HTTP endpoints, background jobs, message handlers, scheduled tasks — anything that changes data. The tracing framework supports both HTTP and non-HTTP execution paths.
This gives you a consistent foundation for tracing, auditing, and understanding what your system is doing.
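As a minimal sketch, a command can be modeled as an immutable, named, typed object that carries its own UUID. The names here (`CreateOrderCommand`, `cmd_uuid`, `cmd_name`) are illustrative assumptions, not the library's actual API:

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CreateOrderCommand:
    """One state-changing operation, identified by its own UUID."""
    customer_id: str
    amount_cents: int
    cmd_uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    cmd_name: str = "CreateOrder"

cmd = CreateOrderCommand(customer_id="c-42", amount_cents=1999)
```

Because the object is named and typed, the same shape can be logged, queried, and replayed regardless of whether it arrived via HTTP, a queue, or a scheduler.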
Every change is recorded
Every command execution is persisted with its full context: who triggered it, which tenant it belongs to, when it happened, whether it succeeded or failed, and how long it took.
Combined with cmdSourceRef chains, you can trace how one command propagates changes across services.
Unlike tracing setups that sample (a common configuration with OpenTelemetry), every command is recorded — the log is a complete history rather than a sample.
The trade-off is storage.
The importance and retention fields let you manage this: low-importance commands can be marked as short-lived and cleaned up automatically, while high-importance commands are kept permanently.
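A sketch of what a log entry and its retention rule might look like; the field names and the `"low"`/`"high"` importance values are assumptions for illustration, not the actual `command_log` schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class CommandLogEntry:
    cmd_uuid: str
    cmd_name: str
    tenant: str
    triggered_by: str
    executed_at: datetime
    duration_ms: int
    succeeded: bool
    importance: str                 # e.g. "low" | "normal" | "high"
    retention_days: Optional[int]   # None means keep permanently

def expired(entry: CommandLogEntry, now: datetime) -> bool:
    """Entries past their retention window are candidates for cleanup."""
    if entry.retention_days is None:
        return False
    return now - entry.executed_at > timedelta(days=entry.retention_days)

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
old_low = CommandLogEntry("u1", "PingCommand", "acme", "cron",
                          datetime(2024, 1, 1, tzinfo=timezone.utc),
                          12, True, "low", 30)
kept_high = CommandLogEntry("u2", "RefundCommand", "acme", "user-7",
                            datetime(2024, 1, 1, tzinfo=timezone.utc),
                            85, True, "high", None)
```

A periodic cleanup job can then purge expired low-importance entries while permanent entries survive untouched.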
A standard across applications
When multiple applications use the same command model and the same command_log schema, you get insights that span the entire platform — not just one service.
You don’t need to correlate logs from different systems with different formats.
The same table structure, the same state model, the same query patterns work everywhere.
A clean contract between frontend and backend
A command defines exactly the data needed to perform a change — nothing more. The structure can be flat and form-friendly, adding nesting only where it serves the command. It doesn’t need to mirror the domain model or the API read representation.
This separation is powerful: the read representation of a resource can change freely — different shapes for different contexts, different domains, different API versions. The command to edit that resource stays the same. Writes and reads evolve independently.
In classic CRUD, the same representation is used for both reading and writing — change the shape, and every client that creates or updates must change too. Commands break that coupling.
And because commands are stored with their full payload, the frontend can retrieve a previous command and use it to re-populate a form — for editing, copying, or retrying a failed submission.
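The separation can be sketched like this: the read representation nests freely, while the command stays flat, and a stored payload maps straight back to form fields. The shapes below are hypothetical examples, not types from the library:

```python
from dataclasses import dataclass, asdict

# The read representation is free to nest and reshape per context...
customer_read = {
    "id": "c-42",
    "contact": {"email": "ada@example.com", "phone": "555-0100"},
}

# ...while the command stays flat and form-friendly.
@dataclass
class EditCustomerCommand:
    customer_id: str
    email: str
    phone: str

cmd = EditCustomerCommand(customer_id="c-42",
                          email="ada@example.com",
                          phone="555-0100")

# A stored payload can re-populate a form for editing, copying, or retrying.
form_fields = asdict(cmd)
```

The read model can grow a third nesting level tomorrow without touching `EditCustomerCommand` or any client that submits it.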
Debugging
When something breaks, the command log tells you exactly what happened: the request payload, the HTTP status, the problem code, and the duration. The full request and outcome live in one row instead of being stitched together from log lines.
Performance monitoring
Each command execution is timestamped and records its duration in milliseconds. Spot regressions by comparing durations over time — a command that used to take 200 ms now takes 2 seconds.
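The comparison itself is simple; as a sketch, compare the median duration of a past window against a recent one (the helper name and windows are illustrative):

```python
from statistics import median

def regression_factor(past_ms, recent_ms):
    """How much slower the recent window is compared with the past one."""
    return median(recent_ms) / median(past_ms)

# A command that used to take ~200 ms now takes ~2 s:
factor = regression_factor([190, 200, 210], [1900, 2000, 2100])
```

Medians resist the occasional outlier better than averages, which matters when a single slow retry would otherwise mask or fake a regression.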
Failure analysis
Combined with problem+json responses, every failure is categorized by problem code. Group by problem code and sort by count to surface new error codes, codes that have started spiking, and long-standing codes that have gone unnoticed. The data lets you prioritize fixes by frequency and impact rather than by what happens to be visible in recent logs.
Every error code can be mapped to user-facing messages with i18n support. The command’s attributes and problem context provide the placeholders for meaningful, specific feedback rather than generic error messages.
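The group-and-sort step amounts to counting failures per problem code; a minimal sketch with made-up codes and field names:

```python
from collections import Counter

failed = [
    {"cmd_name": "PayInvoiceCommand", "problem_code": "payment.card_declined"},
    {"cmd_name": "PayInvoiceCommand", "problem_code": "payment.card_declined"},
    {"cmd_name": "PlaceOrderCommand", "problem_code": "order.out_of_stock"},
]

# Group by problem code, sorted by count descending.
by_code = Counter(e["problem_code"] for e in failed).most_common()
```

The same counts, bucketed per day, surface codes that are new or spiking rather than merely frequent.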
Audit trail
The command log is a system of record for all data changes. It can power user-facing features like activity journals — showing users what changed, when, and by whom.
Cross-tenant insights
In multi-tenant applications with tenant-isolated schemas, running queries across all tenants is slow and complex. The command log lives in a shared schema with enough data to answer platform-level questions with a single query: how many orders were placed yesterday, how many payments failed last month, how many new tenants were onboarded this week.
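Because the log is shared, a platform-level question reduces to one pass over one table. In-memory stand-in for such a query, with hypothetical column names:

```python
from collections import Counter

command_log = [
    {"tenant": "acme",   "cmd_name": "PlaceOrderCommand", "succeeded": True},
    {"tenant": "globex", "cmd_name": "PlaceOrderCommand", "succeeded": True},
    {"tenant": "acme",   "cmd_name": "PayInvoiceCommand", "succeeded": False},
]

# "How many orders were placed, per tenant?" in a single pass.
orders_by_tenant = Counter(
    e["tenant"]
    for e in command_log
    if e["cmd_name"] == "PlaceOrderCommand" and e["succeeded"]
)
```

The equivalent SQL is a single `GROUP BY tenant` over the shared table, with no per-tenant schema hopping.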
Dry-run support
Commands that implement DryRunEnabled can be executed without side effects.
The tracing framework records dry-run executions separately, so they don’t pollute your production metrics.
This enables preview and validation flows where users can see what a command would do before committing.
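A sketch of how dry-run dispatch might look. `DryRunEnabled` is the interface named above, but this Python shape and the `dispatch` helper are illustrative assumptions, not the library's API:

```python
class DryRunEnabled:
    """Marker for commands that can preview their effect without side effects."""
    def dry_run(self) -> str:
        raise NotImplementedError

class DeleteAccountCommand(DryRunEnabled):
    def __init__(self, account_id: str):
        self.account_id = account_id

    def dry_run(self) -> str:
        # Validate and describe the effect, without touching data.
        return f"would delete account {self.account_id}"

    def execute(self) -> str:
        return f"deleted account {self.account_id}"

def dispatch(cmd, dry_run: bool = False) -> dict:
    """Flag dry runs so they can be recorded apart from production metrics."""
    if dry_run and isinstance(cmd, DryRunEnabled):
        return {"dry_run": True, "outcome": cmd.dry_run()}
    return {"dry_run": False, "outcome": cmd.execute()}
```

A preview flow calls `dispatch(cmd, dry_run=True)`, shows the outcome to the user, and only then submits the real execution.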
Building on the command log
The library traces and persists commands — it does not execute, queue, or schedule them. But because the command log stores the full payload alongside the outcome, it becomes a foundation you can build on:
- Retry — a failed command’s payload is in the log. Your application can pick it up and resubmit without the client having to resend.
- Replay — re-execute a command from its stored payload to reproduce or verify behavior.
- Drafts — persist a command without executing it, apply it in-memory to preview the result, and commit on approval. Replay’s mirror twin.
- Activity feeds — query the log to show users what happened to their data, when, and by whom.
- Alerting — monitor the log for failure spikes or performance regressions.
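Retry, for instance, needs nothing beyond the log itself. A sketch, assuming hypothetical field names and an application-supplied `execute` function:

```python
def retry_failed(command_log, execute):
    """Resubmit the stored payload of each failed command."""
    outcomes = []
    for entry in command_log:
        if not entry["succeeded"]:
            outcomes.append(execute(entry["cmd_name"], entry["payload"]))
    return outcomes

log = [
    {"cmd_name": "SendInvoiceCommand", "payload": {"invoice_id": "i-9"},
     "succeeded": False},
    {"cmd_name": "SendInvoiceCommand", "payload": {"invoice_id": "i-8"},
     "succeeded": True},
]

# Only the failed entry is resubmitted; the client never resends anything.
retried = retry_failed(log, lambda name, payload: (name, payload["invoice_id"]))
```

Each retry runs through normal dispatch, so it produces its own new log entry rather than mutating the failed one.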
Companion tables
The command log captures the generic action; a companion table captures the domain-specific detail.
For example, an API that issues access tokens can pair each RequestAccessTokenCommand in the command log with a row in a token_log table, linked by cmd_uuid.
The command log records who requested a token, when, and whether it succeeded.
The token log records which scopes were granted, which integration was used, and when the token expires.
Together they form a complete audit trail: the generic part is queryable across all command types, while the domain part carries the detail that only makes sense for this specific operation.
This pattern works for any command where the outcome has domain-specific data worth keeping — payment receipts, submission confirmations, generated documents.
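The join between the two tables is a plain lookup on `cmd_uuid`; sketched in memory with illustrative column names:

```python
command_log = [
    {"cmd_uuid": "u1", "cmd_name": "RequestAccessTokenCommand",
     "succeeded": True},
]
token_log = [
    {"cmd_uuid": "u1", "scopes": ["orders:read"], "integration": "crm"},
]

# Join the generic command record with its domain-specific companion row.
detail_by_uuid = {row["cmd_uuid"]: row for row in token_log}
audit = [
    {**entry, **detail_by_uuid.get(entry["cmd_uuid"], {})}
    for entry in command_log
]
```

In SQL this is a `LEFT JOIN token_log USING (cmd_uuid)`: generic queries ignore the companion table, domain queries pull it in.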
What a command is not
A command is a concrete, atomic change — one mutation, one outcome, one log entry.
It is not a workflow, a task, or an action plan. It does not orchestrate multi-step processes, manage approval chains, or track progress toward a goal. And the command log is not an event sourcing log — the database stays authoritative.
But commands play well with things that do.
Commands and tasks
A task might say "generate the monthly report." That task involves validation, calculation, and delivery — each of which is a command. The task tracks progress. The commands track what actually happened.
Commands and approval workflows
An approval workflow might require a manager to sign off before a payment is executed. The workflow tracks who needs to approve, who has approved, and whether the threshold is met. But each step is a command: creating the request is a command, approving it is a command, and executing it is a command. The workflow owns the process. The commands own the individual changes.
Commands and scheduled jobs
A scheduled job might say "sync all accounts nightly." The job decides when and what to sync. Each sync operation is a command. The job knows the plan. The commands know the results.
Commands and events
An event might say "the import file was processed." Downstream listeners react by creating commands — one per record, one per reconciliation. The event describes what triggered the work. The commands describe the work itself.
Commands and event sourcing
Event sourcing treats the event log as the source of truth: current state is rebuilt by replaying events from the beginning.
The command log works differently. The database is the authoritative state, and the command log is a parallel record of what was requested and what happened. Commands do tell the story of how the system reached its current state, but the log does not carry the implementation that ran each one — business rules, schema, and dependent data all change over time. Replaying the log from zero will not reconstruct the database, because the implementation behind each command has changed.
Replay, where offered, re-executes a single stored command through the current implementation and records a new entry.