Database Per Service: Pros, Cons, and Implementation
In a monolithic application, all modules share one database. When you move to microservices, the instinct is to keep that shared database. After all, it's simple — services can join across tables, maintain referential integrity, and share schemas. But a shared database creates tight coupling that defeats the purpose of microservices.
The database-per-service pattern gives each service its own private data store. No other service can access it directly — only through the owning service's API. This is the foundational data pattern for microservices, and it comes with real trade-offs.
Why Each Service Needs Its Own Database
1. Independent Deployability
With a shared database, a schema change to the Orders table can break the Users service. With database-per-service, the Orders team can change their schema without coordinating with anyone else.
2. Right Database for the Job (Polyglot Persistence)
Not every service has the same data access pattern. The User service might use PostgreSQL for relational data, the Product Catalog might use DynamoDB for key-value lookups, and the Search service might use Elasticsearch. A shared database forces everyone onto one technology.
3. Independent Scaling
Suppose the Orders service handles 10x more traffic than the Users service. With separate databases, you scale (and pay for) each independently. A shared database must be sized for the combined load.
4. Fault Isolation
A runaway query in the Orders service won't exhaust connections for the Users service. Each database failure is contained to one service.
Implementation Strategies
Private Tables in a Shared Server
The simplest starting point: use one database server (e.g., one RDS instance) but give each service its own schema or database within it. Services must not access each other's tables.
- Pros — lower cost, simpler operations, easy to start
- Cons — shared server resources, temptation to cross-access schemas
- Best for — early-stage microservices, teams migrating from monolith
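One way to enforce "private tables in a shared server" is at the connection level: each service connects with its own credentials and is pinned to its own schema, and the database user is granted privileges only on that schema. The sketch below shows the idea for PostgreSQL; the host name, database name, and schema names are hypothetical, and the privilege grants themselves would be done in the database, not in application code.

```java
// Hypothetical per-service connection setup on one shared PostgreSQL host.
// Each service gets its own schema and its own DB user; that user is granted
// privileges only on the service's schema (enforced via GRANTs in the DB).
public class ServiceDataSources {

    // Build a JDBC URL pinned to the service's own schema.
    // currentSchema makes unqualified table names resolve there only.
    static String jdbcUrl(String host, String schema) {
        return "jdbc:postgresql://" + host + ":5432/app?currentSchema=" + schema;
    }

    public static void main(String[] args) {
        // Same shared server, different schemas and (in practice) credentials
        String ordersUrl = jdbcUrl("shared-db.internal", "orders");
        String usersUrl  = jdbcUrl("shared-db.internal", "users");
        System.out.println(ordersUrl);
        System.out.println(usersUrl);
    }
}
```

Pinning the schema in the URL plus schema-scoped GRANTs removes the "temptation to cross-access" by making it impossible at the database level, not just a team convention.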
Separate Database Instances
Each service gets its own RDS instance, DynamoDB table, or cluster. Full isolation at the infrastructure level.
- Pros — complete isolation, independent scaling, polyglot persistence
- Cons — higher cost, more operational overhead
- Best for — production microservices with distinct workloads
The Hard Part: Cross-Service Queries
In a monolith, getting "orders with customer names" is a simple SQL JOIN. With database-per-service, that data lives in two different databases owned by two different services. How do you handle this?
1. API Composition
The simplest approach: the caller queries multiple services and joins the results in memory. The API gateway or a BFF layer fetches user data from the User Service and order data from the Order Service, then combines them.
// API composition in the gateway: fetch from each owning service,
// then join the results in memory
public OrderWithCustomer getOrderDetails(String orderId) {
    Order order = orderService.getOrder(orderId);
    // This call depends on the first, so the two run sequentially;
    // independent fetches can be issued in parallel to cut latency
    Customer customer = userService.getUser(order.getCustomerId());
    return new OrderWithCustomer(order, customer);
}
2. CQRS with Materialized Views
Maintain a pre-joined read model that's updated asynchronously via events. When a user is updated, the User Service publishes a UserUpdated event. The Order Read Service consumes it and updates its local denormalized view.
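The read-model update can be sketched in a few lines. Everything here is hypothetical naming: the in-memory map stands in for the read store, and the event handler stands in for a broker subscription (Kafka, SNS/SQS) that a real system would use.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal CQRS read-model sketch: the Order Read Service keeps a
// denormalized "order + customer name" view and rewrites affected rows
// when a UserUpdated event arrives. Names and shapes are illustrative.
public class OrderReadService {
    record UserUpdated(String userId, String name) {}

    // orderId -> denormalized row: [orderId, customerId, customerName]
    private final Map<String, String[]> view = new HashMap<>();

    void addOrder(String orderId, String customerId, String customerName) {
        view.put(orderId, new String[] {orderId, customerId, customerName});
    }

    // Event handler: update the customer name in every row for that user
    Consumer<UserUpdated> onUserUpdated() {
        return ev -> view.values().stream()
                .filter(row -> row[1].equals(ev.userId()))
                .forEach(row -> row[2] = ev.name());
    }

    String customerNameFor(String orderId) {
        return view.get(orderId)[2];
    }

    public static void main(String[] args) {
        OrderReadService reads = new OrderReadService();
        reads.addOrder("o1", "u1", "Alice");
        reads.onUserUpdated().accept(new UserUpdated("u1", "Alice Smith"));
        System.out.println(reads.customerNameFor("o1")); // prints "Alice Smith"
    }
}
```

Queries against this view never touch the User Service, which is the point: reads stay fast and local, at the cost of eventual consistency between the write side and the view.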
3. Data Replication via Events
Services subscribe to events from other services and maintain local copies of the data they need. The Order Service keeps a local copy of customer names (not the full customer record) updated via events.
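A local replica of just the needed fields can be as small as a map keyed by customer ID. The sketch below is illustrative (event and method names are assumptions); in production the handler would sit behind a broker subscription with retries and idempotent processing.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of event-based replication: the Order Service keeps only the
// customer field it needs (the display name), refreshed on each
// CustomerUpdated event, rather than a copy of the full customer record.
public class CustomerNameCache {
    private final Map<String, String> namesById = new HashMap<>();

    // Invoked by the event consumer when a CustomerUpdated event arrives
    void onCustomerUpdated(String customerId, String displayName) {
        namesById.put(customerId, displayName);
    }

    // Used when rendering an order locally; no cross-service call needed
    String displayName(String customerId) {
        return namesById.getOrDefault(customerId, "(unknown customer)");
    }

    public static void main(String[] args) {
        CustomerNameCache cache = new CustomerNameCache();
        cache.onCustomerUpdated("c42", "Bob");
        System.out.println(cache.displayName("c42")); // prints "Bob"
    }
}
```

Keeping only the fields you actually read limits the blast radius of upstream schema changes: the Order Service depends on the event contract, not on the User Service's internal model.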
When Shared Database Is Acceptable
Despite the strong recommendation for database-per-service, there are situations where a shared database is pragmatic:
- Early-stage startups — when you have 2-3 services and one team. The overhead of distributed data isn't worth it yet.
- Tightly coupled domains — services that always change together may share a database temporarily during migration.
- Reporting/analytics — it's common to replicate data into a shared data warehouse (Redshift, BigQuery) for cross-domain analytics.
Migration Strategy: Monolith to DB-Per-Service
- Identify data ownership — map each table to the service that should own it
- Create schema boundaries — move tables into separate schemas within the same database
- Enforce access through APIs — stop cross-service direct DB queries. Route through service APIs.
- Separate database instances — when ready, move each schema to its own RDS instance or DynamoDB table
- Implement event-based sync — replace cross-service joins with events and local read models
Conclusion
Database-per-service is a fundamental microservices pattern that enables independent scaling, deployment, and technology choice. The trade-off is increased complexity for cross-service queries. Start with API composition for simple cases, introduce CQRS for read-heavy dashboards, and use event-based replication for reference data. Don't try to implement all patterns at once — evolve your data architecture as your system grows.
At TechTrailCamp, data patterns for microservices are a core part of our Architecture and AWS tracks. You'll implement database-per-service with real event-driven sync through hands-on, 1:1 mentoring.
Want to design microservices data architecture?
Join TechTrailCamp's 1:1 training and master distributed data patterns on AWS.
Start Your Learning Journey