Monolith vs Microservices: The Honest Answer
Architecture Patterns — Part 22 of 30
In March 2023, Amazon's Prime Video team published a quiet little blog post that detonated a grenade in the software architecture world. The headline: they had migrated a critical video monitoring service from microservices back to a monolith, and cut infrastructure costs by 90%.
Let that sink in. Amazon. The company that literally invented modern service-oriented architecture at scale. That Amazon published a case study admitting their distributed system was over-engineered.
The distributed setup had components talking to each other over AWS Step Functions, passing video frames through S3 as temporary storage. It hit scaling limits at 5% of expected load. Step Functions charged per state transition and the system was doing multiple transitions per second per stream. They rearchitected into a single process with in-memory communication, deployed on EC2 and ECS — and the whole thing got faster, cheaper, and easier to debug.
This wasn't a fluke or a cautionary tale about a rogue team. It's a data point in a pattern that the CNCF 2025 Annual Survey confirmed at industry scale: 42% of organizations that adopted microservices are actively consolidating services back into larger deployable units. Service mesh adoption dropped from 18% in Q3 2023 to 8% in Q3 2025.
The pendulum is swinging. Not because microservices are bad — but because they were massively misapplied.
What a Monolith Actually Is
"Monolith" became a dirty word sometime around 2015. If you deployed a single process, you were unsophisticated. You were "not web-scale." You weren't Netflix.
Here's the thing: Shopify runs on a modular Ruby on Rails monolith — 2.8 million lines of code — and it processes billions in Black Friday GMV without breaking a sweat. Stack Overflow serves millions of requests daily from what is, architecturally, a very large monolith running on a surprisingly small number of servers. GitHub ran on a Rails monolith for years.
A monolith is not a ball of mud. A monolith means a single deployable unit. What lives inside that unit can be disciplined or chaotic — that's an implementation choice, not an architectural sentence.
The well-structured monolith has real advantages that get dismissed:
- Transactional integrity: You get ACID transactions across the whole domain for free
- Zero network tax: In-process function calls, not HTTP round-trips
- Unified debugging: One log stream, one stack trace, one process to attach a debugger to
- Fast onboarding: A new engineer can clone, run, and understand the whole system
- Simple deployment: One artifact, one pipeline, one rollback target
What Microservices Actually Are (and What They Cost)
Microservices are not a free lunch. They are a distributed systems problem you have chosen to take on voluntarily in exchange for specific benefits.
The real costs are rarely on the conference slide:
- Operational overhead: Every service needs its own CI/CD pipeline, deployment config, logging setup, health checks, and monitoring dashboards
- Network tax: Inter-service calls add latency and introduce failure modes (timeouts, retries, partial failures, cascading failures)
- Distributed tracing: Tracking down a bug that spans 4 services requires tooling (Jaeger, Zipkin, Datadog) and expertise that a monolith never needs
- Data consistency: You lose cross-service ACID transactions. Welcome to eventual consistency, sagas, and distributed locks
- Conway's Law coordination: Every service boundary is a team boundary is a communication overhead
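The saga pattern mentioned above can be sketched in miniature (all names here are hypothetical, not from any particular library): each step pairs a forward action with a compensating action, and a failure triggers rollback of every step already committed, in reverse order.

```typescript
// A saga step: a forward action plus a compensating action that undoes it.
type Step = { name: string; run: () => void; compensate: () => void };

// Run steps in order; on failure, compensate completed steps in reverse.
function runSaga(steps: Step[]): { ok: boolean; compensated: string[] } {
  const done: Step[] = [];
  const compensated: string[] = [];
  for (const step of steps) {
    try {
      step.run();
      done.push(step);
    } catch {
      // No cross-service transaction to abort, so roll back manually.
      for (const prev of done.reverse()) {
        prev.compensate();
        compensated.push(prev.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated };
}

// Example: the charge succeeds, shipping fails, so the charge is refunded.
const result = runSaga([
  { name: "charge", run: () => {}, compensate: () => {} },
  { name: "ship", run: () => { throw new Error("carrier down"); }, compensate: () => {} },
]);
// result: { ok: false, compensated: ["charge"] }
```

Inside a monolith, the same two operations would sit in one ACID transaction and the rollback logic above would simply not exist.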
One team I know described the debugging experience after microservices adoption: "Check service A logs (15 minutes). Realize error originated in service B (20 minutes). Discover service B was responding to service C (25 minutes). Find service C had stale cache from service D (45 minutes). Fix requires coordinating 4 teams (2 days)." What used to be a 2-hour bug fix became a 2-day distributed detective story.
The Honest Framework: 5 Questions Before You Choose
Stop asking "should we use microservices?" and start asking these five questions. They determine the answer.
1. How many engineers do you have?
This is Conway's Law in practice. Your architecture will mirror your organization. Microservices require teams that can own services end-to-end. As a rough rule: you need 3–5 engineers per service to sustain it — to own on-call, do the deployments, maintain the API contracts.
With 10 developers, you can sustainably operate 2–3 services. With 50+, microservices start making organizational sense.
| Team Size | Recommended Approach |
|---|---|
| 1–10 devs | Monolith (possibly modular) |
| 10–30 devs | Modular monolith |
| 30–50 devs | Modular monolith + 1–2 extracted services where there's genuine need |
| 50+ devs | Microservices make organizational sense |
2. Do you have multiple teams deploying independently?
The primary benefit of microservices is independent deployability — Team A ships their service without coordinating with Team B. If you don't have multiple teams with genuinely separate release cadences, you're paying the microservices tax without collecting the benefit.
Ask yourself: do our teams actually need to deploy independently right now, or is that a future state we're optimistically planning for?
3. Are different parts of your system scaling differently?
Microservices shine when one component needs 100x the resources of another. If your image processing queue needs GPU instances but your auth service needs tiny containers, separation makes sense.
If everything scales roughly uniformly — or you're not at the scale where that matters yet — you're adding architectural complexity to solve a resource problem you don't have.
4. Do you have the operational maturity for distributed systems?
This one kills the most teams. Distributed systems require: container orchestration (Kubernetes), service discovery, distributed tracing, circuit breakers, API gateways, secrets management, distributed logging, and engineers who understand all of it deeply enough to debug it at 2am.
Gartner data from 2025 showed 90% of organizations that adopted microservices prematurely failed with the architecture — not because microservices are bad, but because the teams didn't have the platform engineering foundation to run them.
Do you have dedicated SRE capacity? A mature internal developer platform? If not, you're not ready.
5. Is your domain complexity genuinely high enough to justify it?
Microservices make sense when you have a large, complex domain with genuinely separate bounded contexts that different teams own. If your domain is "a SaaS app that manages projects," you probably don't have 30 bounded contexts that need separate deployment pipelines.
The temptation is to invent service boundaries before your domain is well-understood. That's backwards. Service boundaries should be discovered through domain understanding, not imposed through architectural fashion.
The Default for Early-Stage Builders: Start Monolith
If you're an early-stage team — especially if you're using AI coding tools to build fast — start with a monolith. Not reluctantly. Confidently.
Here's why AI coding tools have actually strengthened the case for monoliths: tools like GitHub Copilot, Cursor, and Claude can navigate large codebases remarkably well. What used to be the "unmaintainable spaghetti" fear of the large monolith is now much more manageable when AI can refactor across files, find usages across the whole codebase, and suggest structural improvements at scale.
The dirty secret of microservices is that they were partly a social solution to a human problem: large teams couldn't coordinate changes across a shared codebase, so they drew hard lines. AI tooling reduces that friction significantly.
The Modular Monolith: The Sweet Spot
The answer that experienced architects increasingly land on: build a modular monolith from day one.
A modular monolith means you enforce strong internal boundaries within a single deployable unit:
```
my-app/
├── modules/
│   ├── billing/
│   │   ├── api/      # Public interface only
│   │   ├── domain/   # Internal business logic
│   │   └── infra/    # DB, external calls
│   ├── catalog/
│   │   ├── api/
│   │   ├── domain/
│   │   └── infra/
│   └── orders/
│       ├── api/
│       ├── domain/
│       └── infra/
├── shared/           # Cross-cutting: auth, logging, events
└── app.ts            # Single entry point
```
Rules: modules communicate only through their api/ layer. No direct DB access across modules. No importing domain internals from another module. The boundary discipline is the same as with microservices — but without the network tax, distributed-tracing complexity, or deployment overhead.
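As an illustrative sketch (module and type names are hypothetical, shown in one file for self-containment), the api/ rule can be expressed in TypeScript by exporting only a contract from the boundary while the implementation stays internal:

```typescript
// modules/billing/api.ts — the only surface other modules may import.
export interface Invoice {
  id: string;
  orderId: string;
  amountCents: number;
}

export interface BillingApi {
  createInvoice(orderId: string, amountCents: number): Invoice;
}

// modules/billing/domain.ts — internal; never imported outside billing/.
class BillingService implements BillingApi {
  private seq = 0;

  createInvoice(orderId: string, amountCents: number): Invoice {
    // Real logic (tax rules, ledger writes) lives behind the boundary.
    this.seq += 1;
    return { id: `inv-${this.seq}`, orderId, amountCents };
  }
}

// modules/orders/domain.ts — depends only on the BillingApi contract.
function checkout(billing: BillingApi, orderId: string, totalCents: number): Invoice {
  return billing.createInvoice(orderId, totalCents);
}

const invoice = checkout(new BillingService(), "order-42", 1999);
console.log(invoice.id); // prints inv-1
```

In a real repo each of these lives in its own file, and the "only import from another module's api/" rule is usually enforced mechanically (for example with ESLint's no-restricted-imports rule or a tool like dependency-cruiser) rather than by convention alone.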
When you genuinely need to extract a service, the boundary is already clean. Extraction becomes a refactor, not a rewrite.
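To make that concrete, here is a hedged sketch (all names hypothetical): callers are written once against a contract, and extraction swaps the in-process implementation for one that speaks HTTP, leaving every call site untouched.

```typescript
// The contract both variants implement; callers only know this interface.
export interface CatalogApi {
  getPriceCents(sku: string): Promise<number>;
}

// Before extraction: in-process implementation, a plain method call.
export class LocalCatalog implements CatalogApi {
  private prices = new Map<string, number>([["widget", 499]]);

  async getPriceCents(sku: string): Promise<number> {
    return this.prices.get(sku) ?? 0;
  }
}

// After extraction: same contract, now backed by an HTTP call to the
// extracted service. Only the wiring at the entry point changes.
export class RemoteCatalog implements CatalogApi {
  constructor(private baseUrl: string) {}

  async getPriceCents(sku: string): Promise<number> {
    const res = await fetch(`${this.baseUrl}/prices/${sku}`);
    const body = (await res.json()) as { priceCents: number };
    return body.priceCents;
  }
}

// A caller, written once against the interface.
async function quote(catalog: CatalogApi, sku: string): Promise<number> {
  return catalog.getPriceCents(sku);
}
```

Because the contract was already async and api/-shaped inside the monolith, swapping `new LocalCatalog()` for `new RemoteCatalog("https://catalog.internal")` at the entry point is the whole migration from the caller's perspective.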
Signs It's Time to Extract a Service
Don't extract preemptively. Extract when you have evidence:
- A specific component needs radically different scaling than the rest
- A team of 8+ engineers owns a bounded context and is blocked by shared deployments
- You need a different language/runtime for a specific workload (ML inference in Python, data pipeline in Go)
- Compliance or security isolation requirements demand physical separation
- You have actual performance data showing a specific bottleneck that distribution would solve
None of these are: "we want to be like Netflix" or "the CTO saw a conference talk."
The Decision Checklist
Before committing to microservices, answer these honestly:
- We have 30+ engineers and multiple independent teams
- Different parts of our system have measurably different scaling needs
- We have 2+ dedicated DevOps/SRE engineers with distributed systems experience
- We have mature CI/CD pipelines, container orchestration, and observability tooling already in place
- Our domain boundaries are well-understood after 12+ months of product development
- We have actual performance data that a monolith can't address
If you can't check at least 4 of these, start with a modular monolith and revisit in 6 months.
Ask The Guild
What architecture are you running today — and what drove the decision? Have you experienced the "microservices tax" firsthand, or made the opposite mistake of letting a monolith turn into a big ball of mud?
Drop your story in the Guild Discord #architecture channel. The most honest architectural lessons come from the ones that went sideways — share what you learned.
Next up in Architecture Patterns: Part 23 — Event-Driven Architecture: When to Publish, When to Request