
Karpenter: The Intelligent Autoscaler for EKS

Posted on November 24, 2025 • 6 min read • 1,146 words
AWS   Kubernetes   Autoscaling   DevOps   Helene

Discover Karpenter, the modern autoscaler designed for EKS — faster, more flexible, and more cost-efficient than the traditional Cluster Autoscaler.

Photo by Helene Hemmerter

I. What is Karpenter?  

Karpenter is an open-source autoscaler for Kubernetes, created by AWS.
Its purpose is to automatically adjust the EC2 node capacity of an EKS cluster based on real workload demand.

In short:

  • When your cluster is short on resources, Karpenter adds new nodes.
  • When nodes become unnecessary, it removes them.

But more importantly:

It does this faster, more intelligently, and more efficiently than Kubernetes’ traditional autoscaler (Cluster Autoscaler).

In this article, we explore:

  • why Karpenter exists
  • what it does better than Cluster Autoscaler
  • how it works
  • the pitfalls to avoid in production
  • and why it has become a must-have component for EKS

II. Why does Karpenter exist?  

The native Kubernetes Cluster Autoscaler (CA):

  • can only manage Auto Scaling Groups (ASG)
  • launches instances from a fixed, predefined list
  • often takes several minutes to react
  • does not evaluate all EC2 instance options
  • performs poorly with Spot capacity

Karpenter fixes all of these issues.


III. What Karpenter does better  

1. It dynamically chooses the best instance type  

No need to manually list dozens of instance types inside an ASG.

Karpenter can decide automatically:

  • t3 or c6i?
  • a large instance or multiple small ones?
  • Spot or On-Demand?
  • which AZ still has available capacity?

It selects the instance at the moment of need, based on:

  • Spot availability
  • price
  • CPU/RAM/GPU requests from pods
  • constraints (architecture, disk size, labels, taints…)
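These signals come straight from the pod spec. As a hypothetical illustration (the pod name and image are placeholders), here is the kind of manifest Karpenter reads when deciding what to launch:

```yaml
# Hypothetical pod spec — the requests and node selectors below are
# the signals Karpenter uses when picking an instance type.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                           # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64              # architecture constraint
    karpenter.sh/capacity-type: spot       # prefer Spot capacity
  containers:
    - name: app
      image: nginx                         # placeholder image
      resources:
        requests:
          cpu: "2"                         # Karpenter bin-packs on requests
          memory: 4Gi
```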

2. It launches nodes in 10 to 30 seconds  

Much faster than:

  • Cluster Autoscaler (acts through ASGs)
  • the ASG engine itself (which has its own delays)

Karpenter creates nodes directly via the EC2 API — extremely fast.


3. It automatically handles unused nodes  

If a node becomes empty (no useful pods), Karpenter:

  • cordons it
  • drains it
  • then terminates it

Result: lower costs and a cleaner cluster


4. It handles Spot interruptions gracefully  

When AWS announces a Spot interruption:

  • Karpenter receives the signal
  • evicts pods and reschedules them
  • spins up a replacement node immediately

Very helpful during difficult periods… like December (see the article on Spot availability in December).
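In practice, interruption handling is enabled by pointing Karpenter at an SQS queue that receives EC2 Spot interruption notices (usually via EventBridge). A rough sketch of the Helm values involved — key names vary across chart versions and the queue name is an assumption:

```yaml
# Sketch of Karpenter Helm values enabling interruption handling.
# Key names differ between chart versions; verify against your chart.
settings:
  clusterName: my-cluster
  interruptionQueue: my-cluster-karpenter   # SQS queue fed by EventBridge
```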


IV. How does it work?  

Karpenter relies on a Custom Resource:

Provisioner  

This object defines:

  • allowed instance types
  • usable availability zones
  • maximum resource quotas
  • Spot / On-Demand preferences
  • minimum / maximum node sizes

Then Karpenter watches Pending pods and creates the perfect node to host them.


V. Minimal Provisioner Example  

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: 500
  provider:
    subnetSelector:
      kubernetes.io/cluster/my-cluster: owned
    securityGroupSelector:
      kubernetes.io/cluster/my-cluster: owned
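Note that the `karpenter.sh/v1alpha5` `Provisioner` API shown above was later replaced by `NodePool` (with the AWS-specific settings moving to an `EC2NodeClass`). A rough equivalent on newer releases, as a sketch — field names may differ slightly between versions:

```yaml
# Approximate NodePool equivalent of the Provisioner above,
# for newer Karpenter releases.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default   # subnet/SG selectors live on the EC2NodeClass
  limits:
    cpu: 500
```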

Karpenter vs Cluster Autoscaler  

Feature                      | Cluster Autoscaler | Karpenter
Intelligent instance choice  | limited            | dynamic
Speed                        | slow               | very fast
Spot handling                | average            | excellent
ASG dependency               | required           | none
Removal of unused nodes      | standard           | optimized
Simplicity                   | medium             | high

VI. Common issues with Karpenter  

1. Over-provisioning risks (scaling too aggressively)  

Karpenter is much faster than Cluster Autoscaler — sometimes too fast.

Symptoms:

  • more nodes created than necessary
  • nodes becoming unused shortly after creation
  • AWS costs temporarily spiking
  • nodes removed later — but still billed

Cause:
Karpenter reacts instantly to pods in Pending, even when those pods should not actually be scheduled.

How to avoid:

  • enable consolidation
  • configure maxPods and realistic CPU/RAM requests
  • set strict limits in the Provisioner
limits:
  resources:
    cpu: 200

2. Deprovisioning that is “too” effective  

Karpenter removes nodes that appear unused.

Possible issues:

  • nodes running only DaemonSets or system pods can be treated as “empty” and removed, even though something is still running
  • temporary workloads cause oscillations
  • scale-in may be too aggressive, causing immediate recreation → ping-pong effect

Solutions:

  • increase ttlSecondsAfterEmpty
  • use taints to protect certain nodes
  • enable progressive consolidation
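In the v1alpha5 API used in this article, these knobs live on the Provisioner itself. Note that `ttlSecondsAfterEmpty` and `consolidation.enabled` are mutually exclusive, so pick one:

```yaml
# v1alpha5 Provisioner snippet — choose ONE of the two mechanisms.
spec:
  # Option A: wait 5 minutes before removing an empty node
  ttlSecondsAfterEmpty: 300
  # Option B: let Karpenter actively repack and remove nodes
  # consolidation:
  #   enabled: true
```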

3. Required integration with VPC, IAM, and AWS tags  

Karpenter cannot operate without:

  • a proper IAM Role
  • correctly tagged Subnets
  • tagged Security Groups
  • advanced EC2 permissions

Mandatory tags:

kubernetes.io/cluster/<cluster-name>: owned
karpenter.sh/discovery: <cluster-name>

Without these tags, no node can be created.


4. Unexpected behavior with heavy DaemonSets  

Examples:

  • Prometheus Node Exporter
  • Falco
  • Cilium
  • Heavy logging agents (Datadog, New Relic…)
  • CSI EBS/EFS (depending on the version)

Why?

Karpenter chooses node sizes based on pod resource requests — including those of DaemonSets.

If DaemonSets consume too much:

  • small nodes → crashloops
  • large nodes → unnecessary cost

Solutions:

  • define realistic resources.requests
  • exclude some DaemonSets using taints
  • use dedicated Provisioners (e.g., system-nodes)
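A dedicated Provisioner for system workloads might look like this sketch (the `system-nodes` name and `dedicated` taint key are illustrative):

```yaml
# Illustrative system-only Provisioner: tainted so that only pods
# tolerating dedicated=system land on these nodes.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: system-nodes
spec:
  taints:
    - key: dedicated
      value: system
      effect: NoSchedule
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]   # keep system pods off Spot
```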

5. Spot + Karpenter = excellent… but sensitive to shortages  

During high-demand periods (e.g., December), Karpenter may:

  • test multiple instance families
  • reschedule repeatedly
  • try different AZs
  • loop until it finds capacity

This may cause:

  • instability
  • Spot node flapping
  • increased latency

Key advice: Never rely on 100% Spot. Always maintain an On-Demand baseline, e.g.:

on-demand: 20–40%
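One way to approximate such a baseline is two Provisioners, using `weight` so Spot is preferred while On-Demand remains available as a fallback. This gives a preference order, not an exact percentage split; the names below are illustrative:

```yaml
# Preferred Provisioner: Spot, higher weight
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spot
spec:
  weight: 50   # evaluated before lower-weight Provisioners
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
---
# Fallback Provisioner: On-Demand
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: on-demand-fallback
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
```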

6. It can break an existing ASG strategy  

Many infrastructures use:

  • an On-Demand ASG + a Spot ASG
  • autoscaling policies
  • lifecycle hooks
  • warm pools

Karpenter ignores ASGs entirely.
It creates instances directly through the EC2 API.

If your architecture relies on:

  • EC2-based scaling
  • lifecycle hooks
  • warm pools

You will lose these mechanisms when migrating to Karpenter.


7. Unexpected costs due to oversized instances  

Karpenter optimizes for:

  • pod scheduling
  • availability
  • speed

…but not necessarily for the lowest cost.

Classic example:

“I requested 2 CPU and 4 GB…
Why did Karpenter launch an m6i.2xlarge?”

Possible reasons:

  • small Spot sizes unavailable
  • larger instance temporarily cheaper
  • DaemonSets forcing a minimum size

Solution:
restrict instance families/types in your requirements.
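Using the labels Karpenter exposes for AWS, a requirements block like this sketch keeps instance choice in check (the families and sizes listed are examples):

```yaml
# Restrict Karpenter's instance choice — values are examples.
requirements:
  - key: karpenter.k8s.aws/instance-family
    operator: In
    values: ["t3", "m6i", "c6i"]
  - key: karpenter.k8s.aws/instance-size
    operator: NotIn
    values: ["4xlarge", "8xlarge"]   # cap the largest sizes
```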


8. Dangerously permissive Provisioners  

Example:

requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ["spot"]

If Spot capacity disappears:

  • no nodes can be created
  • cluster down

Always:

  • include On-Demand
  • limit instance families
  • set fallback AZs

9. More frequent pod interruptions  

Example:

  • Karpenter consolidates
  • removes an “empty” node
  • a DaemonSet schedules right after
  • the node was actually needed

Result:

  • unnecessary pod rescheduling → micro-interruptions
  • temporary instability → degraded performance

Solution: increase TTLs or disable consolidation at certain hours.


The 9 Karpenter Pitfalls to Watch  

# | Risk                        | Explanation                                | How to Avoid / Fix
1 | Over-provisioning           | Scaling too fast → too many nodes          | Limits, consolidation, maxPods
2 | Aggressive scale-in         | Nodes removed too soon → ping-pong         | Longer TTL, taints, progressive consolidation
3 | VPC / IAM / Tag misconfig   | No nodes created                           | Correct tags, IAM roles, subnet tagging
4 | Heavy DaemonSets            | Poor node size selection                   | Requests, taints, dedicated Provisioners
5 | Spot instability            | Shortages → flapping nodes                 | On-Demand baseline (20–40%), fallback AZs
6 | Loss of ASG features        | No hooks, warm pools, EC2 scaling          | Architecture adaptation
7 | Oversized instances         | Aggressive optimization → unexpected costs | Restrict instance families/types
8 | Too-permissive Provisioners | Spot-only → cluster down                   | Include On-Demand, restrict types
9 | Unpredictable consolidation | Too many pod relocations                   | Adjust TTLs, disable consolidation

Conclusion  

Karpenter is an outstanding tool, especially for:

  • optimizing costs
  • accelerating scaling
  • using Spot instances effectively
  • simplifying node lifecycle management

But because it is extremely powerful,
it must be configured carefully.

Well-managed, it is one of the best tools available for EKS today.


🔗 Useful Links  

  • Official Karpenter Documentation
  • Getting Started Guide
  • Key Concepts: Provisioners, Consolidation, Spot Handling
  • Karpenter vs Cluster Autoscaler (FAQ)
  • Spot Best Practices on EKS (AWS)