Why Is Docker Cache Insufficient for a Monorepo?
Posted on March 18, 2026 • 5 min read • 930 words

Docker cache is powerful for packaging and reproducibility, but it quickly reaches its limits in a monorepo. This article explains why its linear model cannot prevent unnecessary work, and why more granular caching becomes essential.

Docker is everywhere.
And with it, a widely shared idea:
“If we structure our Dockerfiles well, Docker cache will speed up our CI.”
That is true… but only up to a certain point.
As soon as you work in a monorepo — with multiple projects, multiple languages, and multiple logical pipelines — Docker cache quickly shows its limits.
This article explains why, without dismissing Docker, and above all what needs to be understood to avoid using it in the wrong place.
Let’s start by being fair:
Docker cache is excellent in its own domain.
Docker caches layer by layer: each instruction (RUN, COPY, etc.) creates a layer, and a layer whose inputs have not changed is reused on the next build.
This works very well for building, shipping, and reproducing images.
Docker excels as a packaging and distribution tool.
Docker cache is based on a linear chain of layers.
Layer 1 → Layer 2 → Layer 3 → Layer 4
If one layer changes, that layer and every layer after it are rebuilt.
Typical example:
COPY package.json .
RUN pnpm install
COPY src/ ./src
RUN pnpm build

A change in package.json invalidates everything after it, which is normal.
But a minor change in src/ also invalidates the entire end of the chain, even if the dependencies have not changed and the rest of the repository is untouched.
Docker cannot reason any differently.
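This chained invalidation can be sketched in a few lines of shell: each layer's cache key folds in the previous key, so changing one instruction's inputs changes every key after it. The key derivation and layer contents below are simplified stand-ins, not Docker's actual algorithm.

```shell
# Sketch of Docker's linear cache model: each layer's cache key
# chains the previous key, so one change cascades to all later layers.
layer_keys() {
  key=""
  for layer in "$@"; do
    # Simplified key derivation: hash(previous key + instruction)
    key=$(printf '%s|%s' "$key" "$layer" | sha256sum | cut -d' ' -f1)
    printf '%s\n' "$key"
  done
}

# Two builds: only the third "layer" (src/) differs.
build1=$(layer_keys "COPY package.json" "RUN pnpm install" "COPY src v1" "RUN pnpm build")
build2=$(layer_keys "COPY package.json" "RUN pnpm install" "COPY src v2" "RUN pnpm build")

# The first two keys match (cache hit); the last two differ (rebuilt).
echo "$build1"
echo "$build2"
```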
A monorepo is not one application with one build.
It is a set of independent projects sharing a single repository: applications, libraries, and tooling, each with its own build, test, and deploy lifecycle.
Examples: a web app, an API, and a shared UI library living side by side, with distinct pipelines.
Docker does not understand these boundaries.
For Docker, the repository is a single build context feeding a single chain of layers.
The result: a change in one project invalidates cache for work that has nothing to do with it.
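A tool that does understand those boundaries only needs a change list and a mapping to projects. A minimal sketch, assuming a hypothetical layout where the first two path components identify a project (real tools also follow the dependency graph between projects):

```shell
# Hypothetical monorepo layout with top-level dirs apps/ and packages/.
# Map changed file paths (e.g. the output of `git diff --name-only`)
# to the projects that contain them.
changed_files="apps/web/src/index.ts
packages/ui/button.tsx
packages/ui/theme.css"

# Layout assumption: project = first two path components.
affected=$(printf '%s\n' "$changed_files" | cut -d/ -f1,2 | sort -u)
echo "$affected"   # apps/web and packages/ui; apps/api is untouched
```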
Multi-stage builds are often presented as the solution.
They make it possible to separate dependencies, build, and runtime into distinct stages.
Example:
FROM node:20 AS deps
RUN corepack enable
COPY package.json pnpm-lock.yaml ./
RUN pnpm install
FROM deps AS build
COPY src/ ./src
RUN pnpm build
FROM nginx AS runtime
COPY --from=build /dist /usr/share/nginx/html

This is a very good practice.
But be careful:
Multi-stage builds optimize packaging, not orchestration.
Docker BuildKit brings real improvements: parallel stage execution, cache mounts, remote cache import and export.
But the underlying model does not change:
BuildKit speeds up a linear model.
It does not turn it into a granular one.
This is the key point.
Docker caches image layers.
But in a monorepo, what we want to cache is units of work: the build of one project, the tests of one package, a step that depends only on a known set of files.
Examples of truly useful cache units: the build output of a single library, the test results of a single app, an install keyed on the lockfile alone.
These units do not exist in Docker.
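Such a unit is easy to express outside Docker: hash a project's input files, and skip the work when the hash is unchanged. A minimal sketch, assuming a hypothetical directory layout and a local `.cache` directory as the cache store:

```shell
# Content-addressed cache key for one project: hash every input file;
# identical inputs => identical key => skip the build entirely.
mkdir -p demo/packages/ui .cache
echo 'export const ui = 1' > demo/packages/ui/index.ts

project_key() {
  # Hash each file, then hash the sorted list of per-file hashes.
  find "$1" -type f | sort | xargs sha256sum | sha256sum | cut -d' ' -f1
}

key=$(project_key demo/packages/ui)
if [ -f ".cache/$key" ]; then
  echo "cache hit: skip build"
else
  echo "cache miss: build packages/ui"
  touch ".cache/$key"
fi
```

Unlike a layer chain, this key depends only on the project's own files: a change elsewhere in the repository leaves it, and the cached work, untouched.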
When Docker is used as the main cache mechanism in a monorepo, you often observe giant Dockerfiles, fragile layer ordering, and COPY instructions fragmented to the extreme.
This is not a skills problem.
It is the wrong tool for this level of abstraction.
Docker absolutely has its place in a monorepo pipeline, but not as the main orchestration engine.
Its role is to ensure reproducibility and portability of environments.
It excels when it comes to installing system dependencies,
building reliable images, packaging final artifacts, and guaranteeing strict consistency
between CI and production.
However, Docker is not designed to decide what should be rebuilt after a change.
It does not understand the boundaries between projects, nor the internal dependency graphs of a monorepo.
Its model is based on successive layers, not on independent units of work.
In other words, Docker is an excellent packaging and execution tool, but a poor orchestration tool.
In a monorepo, it should remain complementary: it executes what has been decided elsewhere,
but it should never be responsible for determining what deserves to be rebuilt.
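That division of labor can be sketched in a few lines: an upstream step decides which projects were affected, and Docker is only invoked for those. The project names are hypothetical, and `docker build` is echoed rather than run:

```shell
# Upstream decision (change detection, dependency graph) has already
# picked the affected projects; the names here are hypothetical.
affected="apps/api packages/ui"

# Docker only executes what was decided elsewhere: one image build per
# affected project, nothing for the untouched ones.
cmds=$(for project in $affected; do
  echo "docker build -f $project/Dockerfile -t registry.example.com/$project ."
done)
echo "$cmds"
```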
Docker cache is not bad.
It is insufficient.
The simple rule to remember:
Docker knows “how to build an image.”
It does not know “which work deserves to be redone.”
In a monorepo, that decision is central.
Docker's cache falls short not because Docker is slow,
but because it operates at the wrong level of abstraction for a monorepo.
If your pipeline is trying to rebuild only affected projects, skip unchanged tests, and reason about a dependency graph,
Then Docker cannot be your main cache.
It must be complementary, not central.