Why Is Docker Cache Insufficient for a Monorepo?

Posted on March 18, 2026 • 5 min read • 930 words
Cache   Docker   Devops   Ci-Cd   Helene  

Docker cache is powerful for packaging and reproducibility, but it quickly reaches its limits in a monorepo. This article explains why its linear model cannot prevent unnecessary work, and why more granular caching becomes essential.

Photo by Helene Hemmerter

I. Why Is Docker Cache Insufficient for a Monorepo?  

Docker is everywhere.
And with it, a widely shared idea:

“If we structure our Dockerfiles well, Docker cache will speed up our CI.”

That is true… but only up to a certain point.

As soon as you work in a monorepo — with multiple projects, multiple languages, and multiple logical pipelines — Docker cache quickly shows its limits.

This article explains why, without dismissing Docker, and above all what needs to be understood to avoid using it in the wrong place.


II. What Docker Cache Does Very Well  

Let’s start by being fair:
Docker cache is excellent in its own domain.

Docker caches Dockerfile layers:

  • each instruction (RUN, COPY, etc.) creates a layer
  • if an instruction and its build context have not changed → the layer is reused

This works very well for:

  • installing system dependencies
  • building an image reproducibly
  • packaging an application
  • ensuring that “what works in CI will work in production”

Docker excels as a packaging and distribution tool.
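As a minimal sketch of where this shines (the base image and packages are illustrative, not from the original article): a layer that installs system dependencies sits before any source code is copied, so it is reused on virtually every build.

```dockerfile
FROM debian:bookworm-slim

# As long as this instruction itself does not change, the layer is
# reused on every subsequent build, regardless of what changes in
# the repository, because no source code has been copied in yet.
RUN apt-get update && apt-get install -y --no-install-recommends \
      ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
```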


III. The Fundamental Problem: Docker Cache Is Linear  

Docker cache is based on a linear chain of layers.

Layer 1 → Layer 2 → Layer 3 → Layer 4

If one layer changes:

  • all subsequent layers are invalidated
  • even if they have no logical relationship with the change

Typical example:

COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile

COPY src/ ./src
RUN pnpm build

A change in package.json invalidates everything after it, which is normal.

But a minor change in src/ also invalidates the entire end of the chain, even if:

  • only part of the code is affected
  • only one project in the monorepo is touched

Docker cannot reason any differently.


IV. Why This Approach Breaks Down in a Monorepo  

A monorepo is not:

  • one application
  • one single build

It is:

  • multiple projects
  • multiple dependency graphs
  • multiple independent units of work

Examples:

  • web frontend
  • mobile application
  • backend API
  • GraphQL code generation
  • Solidity contracts
  • shared libraries

Docker does not understand these boundaries.

For Docker:

  • everything copied into the build context is seen as one single mass
  • everything built produces one single result

The result:

  • a small change in one project invalidates unrelated builds
  • “the world” gets rebuilt far too often
  • CI times explode as the repository grows
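Concretely, a root-level Dockerfile in a monorepo often ends up looking like this (paths and the project name are illustrative):

```dockerfile
FROM node:20
WORKDIR /app
RUN corepack enable

# The whole repository is one build context: a change anywhere
# (apps/mobile, contracts/, docs/) invalidates this layer and
# everything after it, even when only the web frontend is targeted.
COPY . .
RUN pnpm install --frozen-lockfile
RUN pnpm --filter web-frontend build
```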

V. Multi-Stage Builds: A Real Improvement, but Still Insufficient  

Multi-stage builds are often presented as the solution.

They make it possible to:

  • separate dependencies / build / runtime stages
  • reduce the size of final images
  • avoid recompiling certain parts unnecessarily

Example:

FROM node:20 AS deps
WORKDIR /app
RUN corepack enable
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile

FROM deps AS build
COPY . .
RUN pnpm build

FROM nginx:alpine AS runtime
COPY --from=build /app/dist /usr/share/nginx/html

This is a very good practice.

But be careful:

  • each stage is still linear
  • inside a stage, the exact same problem remains
  • Docker still does not know what a “project” or a “task” is

Multi-stage builds optimize packaging, not orchestration.


VI. What About BuildKit? Parallelism Does Not Change the Model  

Docker BuildKit brings real improvements:

  • parallel execution of some stages
  • smarter caching
  • better overall performance

But:

  • the mental model remains the same
  • BuildKit optimizes how layers are executed
  • not what should be executed

BuildKit speeds up a linear model.
It does not turn it into a granular one.
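For example, BuildKit cache mounts keep a package store warm across builds (the store path shown is pnpm's default on Linux and may vary with your setup), but they only make a step faster when it runs; they do not tell Docker which projects were actually affected by a change:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
RUN corepack enable
COPY package.json pnpm-lock.yaml ./

# The cache mount persists pnpm's store between builds, speeding up
# installs. But whether this step runs at all is still decided by
# layer invalidation, not by what actually changed in the monorepo.
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile
```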


VII. The Real Mismatch: Docker Caches Layers, Not Work  

This is the key point.

Docker caches:

  • instructions
  • intermediate states
  • files inside an image

But in a monorepo, what we want to cache is:

  • the result of a task
  • conditioned by explicit inputs
  • producing identified outputs

Examples of truly useful cache units:

  • web frontend build
  • backend tests
  • GraphQL generation
  • Solidity contracts build

These units do not exist in Docker.
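This is precisely the vocabulary of task-level caches. As a sketch, here is what such a unit of work can look like in an Nx-style project.json (the exact field names follow Nx's documented target options, but treat the values as illustrative assumptions, not a recommendation from this article):

```json
{
  "targets": {
    "build": {
      "cache": true,
      "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/package.json"],
      "outputs": ["{projectRoot}/dist"]
    }
  }
}
```

The task is rerun only when its declared inputs change, and its declared outputs can be restored from cache otherwise — exactly the granularity Docker layers cannot express.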


VIII. The Typical Symptom: A Cache That Becomes Fragile  

When Docker is used as the main cache mechanism in a monorepo, you often observe:

  • increasingly complex Dockerfiles
  • COPY instructions fragmented to the extreme
  • implicit rules
    (“don’t touch this file or everything rebuilds”)
  • unpredictable CI times
  • developers disabling the cache “temporarily”
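The "fragmented COPY" pattern typically looks like this (paths are illustrative): each line exists only to protect the install layer below it, and the dependency rules live nowhere but in the Dockerfile's ordering.

```dockerfile
# Each COPY is split out solely so that a source change elsewhere
# does not invalidate the install layer. The rules about what
# invalidates what are implicit in this ordering and easy to break.
COPY package.json pnpm-lock.yaml ./
COPY libs/shared/package.json libs/shared/
COPY apps/web/package.json apps/web/
COPY apps/api/package.json apps/api/
RUN pnpm install --frozen-lockfile
```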

This is not a skills problem.
It is the wrong tool for this level of abstraction.


IX. The Right Place for Docker Cache  

Docker absolutely has its place in a monorepo pipeline, but not as the main orchestration engine. Its role is to ensure reproducibility and portability of environments.

It excels at installing system dependencies, building reliable images, packaging final artifacts, and guaranteeing strict consistency between CI and production.

However, Docker is not designed to decide what should be rebuilt after a change. It does not understand the boundaries between projects, nor the internal dependency graphs of a monorepo. Its model is based on successive layers, not on independent units of work.

In other words, Docker is an excellent packaging and execution tool, but a poor orchestration tool. In a monorepo, it should remain complementary: it executes what has been decided elsewhere, but it should never be responsible for determining what deserves to be rebuilt.


X. Conclusion  

Docker cache is not bad.
It is insufficient.

The simple rule to remember:

Docker knows “how to build an image.”
It does not know “which work deserves to be redone.”

In a monorepo, that decision is central.

Docker falls short here not because it is slow,
but because it operates at the wrong level of abstraction for a monorepo.

If your pipeline is trying to:

  • avoid redoing unnecessary work
  • understand the real impact of a change
  • keep CI times stable as the repository grows

Then Docker cannot be your main cache.

It must be complementary, not central.


🔗 Useful Links  

  • Docker Documentation — Use cache effectively

  • Docker BuildKit — Overview and cache behavior
