How to Set Resource Limits on Docker Containers in Linux
Docker containers provide excellent isolation and portability for applications, but without proper resource management, they can consume unlimited system resources, potentially affecting the host system and other containers. Setting resource limits is crucial for maintaining system stability, ensuring fair resource distribution, and optimizing performance in production environments.
This comprehensive guide will teach you how to effectively set and manage resource limits on Docker containers in Linux, covering CPU, memory, storage, and network constraints. Whether you're a beginner learning Docker fundamentals or an experienced administrator optimizing production workloads, this article provides the knowledge and practical examples you need.
Table of Contents
- [Prerequisites and Requirements](#prerequisites-and-requirements)
- [Understanding Docker Resource Management](#understanding-docker-resource-management)
- [Setting Memory Limits](#setting-memory-limits)
- [Configuring CPU Limits](#configuring-cpu-limits)
- [Managing Storage Limits](#managing-storage-limits)
- [Network Resource Control](#network-resource-control)
- [Advanced Resource Management](#advanced-resource-management)
- [Monitoring Resource Usage](#monitoring-resource-usage)
- [Practical Examples and Use Cases](#practical-examples-and-use-cases)
- [Common Issues and Troubleshooting](#common-issues-and-troubleshooting)
- [Best Practices](#best-practices)
- [Conclusion](#conclusion)
Prerequisites and Requirements
Before diving into Docker resource limits, ensure you have the following prerequisites:
System Requirements
- Linux Distribution: Ubuntu 18.04+, CentOS 7+, RHEL 7+, or similar
- Docker Engine: Version 20.10+ recommended
- Kernel Version: Linux kernel 3.10+ with cgroup support
- Root or Sudo Access: Required for Docker operations
- Minimum RAM: 2GB (4GB+ recommended for testing)
Required Knowledge
- Basic Linux command-line operations
- Fundamental Docker concepts (containers, images, commands)
- Understanding of system resources (CPU, memory, storage)
- Basic knowledge of cgroups (helpful but not mandatory)
Installation Verification
Verify your Docker installation and system capabilities:
```bash
# Check Docker version
docker --version

# Verify the Docker daemon is running
sudo systemctl status docker

# Check cgroup support
ls /sys/fs/cgroup/

# Verify the memory cgroup controller is available
cat /proc/cgroups | grep memory

# Check CPU cgroup support
cat /proc/cgroups | grep cpu
```
Understanding Docker Resource Management
Docker uses Linux control groups (cgroups) to limit and monitor resource usage. Cgroups provide a mechanism to allocate resources such as CPU time, system memory, network bandwidth, and storage I/O among user-defined groups of processes.
Key Concepts
Control Groups (cgroups): Linux kernel feature that limits and isolates resource usage of process collections.
Resource Controllers: Subsystems that manage specific resource types:
- Memory Controller: Manages RAM and swap usage
- CPU Controller: Controls CPU time allocation
- Block I/O Controller: Manages storage bandwidth
- Network Controller (net_cls/net_prio): Tags and prioritizes traffic; actual bandwidth shaping is done with tools such as tc
Container Runtime: Docker Engine passes resource settings to the container runtime (containerd and runc), which applies them as cgroup constraints.
Default Behavior
By default, Docker containers have no resource constraints and can use:
- All available CPU cores
- All available system memory
- Unlimited storage I/O
- Full network bandwidth
This unrestricted access can lead to resource contention and system instability in multi-container environments.
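A quick way to confirm this is to inspect a container started without any resource flags; both limit fields report 0, which Docker treats as unlimited (the container name below is just for illustration):
```bash
# Start a container with no resource flags and read back its (absent) limits
docker run -d --name unlimited-demo nginx
docker inspect unlimited-demo --format 'Memory={{.HostConfig.Memory}} NanoCpus={{.HostConfig.NanoCpus}}'
# Prints: Memory=0 NanoCpus=0  (0 means "no limit")
docker rm -f unlimited-demo
```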
Setting Memory Limits
Memory limits prevent containers from consuming excessive RAM and protect the host system from out-of-memory conditions.
Basic Memory Limit Syntax
```bash
# Set a memory limit when creating a container
docker run -m <limit> <image>
docker run --memory=<limit> <image>

# Accepted memory limit formats
docker run -m 512m nginx          # 512 megabytes
docker run -m 1g nginx            # 1 gigabyte
docker run -m 1024m nginx         # 1024 megabytes
docker run -m 2147483648 nginx    # 2GB expressed in bytes
```
Memory Limit Examples
```bash
# Run nginx with a 512MB memory limit
docker run -d --name web-server -m 512m nginx

# Run MySQL with a 1GB memory limit
docker run -d --name database -m 1g \
  -e MYSQL_ROOT_PASSWORD=password mysql:8.0

# Run an application with a 2GB limit and a custom name
docker run -d --name app-container -m 2g \
  --restart unless-stopped my-application:latest
```
Memory Swap Configuration
Control swap usage alongside memory limits:
```bash
# Memory limit with swap disabled (memory-swap equals memory)
docker run -d --memory=1g --memory-swap=1g nginx

# Memory limit with 1GB of additional swap (2GB total)
docker run -d --memory=1g --memory-swap=2g nginx

# Another example of disabling swap entirely
docker run -d --memory=512m --memory-swap=512m nginx

# Allow unlimited swap (-1); if --memory-swap is left unset, the container
# may use swap equal to the memory limit
docker run -d --memory=1g --memory-swap=-1 nginx
```
Memory Reservation
Set soft memory limits that allow bursting:
```bash
# Set a memory reservation (soft limit) alongside a hard limit
docker run -d --memory-reservation=512m --memory=1g nginx

# Reservation only (no hard limit)
docker run -d --memory-reservation=256m nginx
```
Out-of-Memory (OOM) Kill Behavior
Configure how containers handle memory exhaustion:
```bash
# Disable the OOM killer for this container (use with caution)
docker run -d --memory=512m --oom-kill-disable nginx

# Default behavior: OOM killer enabled
docker run -d --memory=512m nginx
```
Warning: Disabling the OOM killer without setting a memory limit can cause system-wide memory exhaustion.
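To see the default OOM behavior in action, the following minimal sketch deliberately over-allocates memory inside a tightly limited container; the image choice and sizes are illustrative, and the process should be killed with exit code 137:
```bash
# Allocate ~256MB inside a 64MB container; the kernel OOM killer terminates it
docker run --name oom-demo -m 64m --memory-swap=64m python:3-alpine \
  python -c "x = b'x' * (256 * 1024 * 1024)"

# Confirm the outcome (expect OOMKilled=true, ExitCode=137)
docker inspect oom-demo --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
docker rm oom-demo
```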
Configuring CPU Limits
CPU limits control processor usage and ensure fair CPU time distribution among containers.
CPU Limit Methods
Docker provides several approaches to limit CPU usage:
1. CPU Shares: Relative weight-based allocation
2. CPU Period/Quota: Absolute time-based limits
3. CPU Sets: Specific CPU core assignment
4. CPU Count: Fractional CPU allocation
CPU Shares
CPU shares assign a relative weight (default: 1024) that only takes effect when containers compete for CPU time:
```bash
# High-priority container (2x the default weight)
docker run -d --cpu-shares=2048 --name high-priority nginx

# Low-priority container (0.5x the default weight)
docker run -d --cpu-shares=512 --name low-priority nginx

# Normal priority (default of 1024)
docker run -d --name normal-priority nginx
```
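A rough way to see this weighting in action (assuming the host has a core you can pin to) is to run two busy loops on the same core and compare their usage in docker stats:
```bash
# Two busy loops pinned to core 0; the 2048-share container should get
# roughly four times the CPU of the 512-share one while they compete
docker run -d --name share-high --cpuset-cpus=0 --cpu-shares=2048 alpine sh -c 'while :; do :; done'
docker run -d --name share-low  --cpuset-cpus=0 --cpu-shares=512  alpine sh -c 'while :; do :; done'
docker stats --no-stream share-high share-low

# Clean up
docker rm -f share-high share-low
```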
CPU Period and Quota
Set absolute CPU limits using period and quota:
```bash
# Limit to 50% of one CPU core
docker run -d --cpu-period=100000 --cpu-quota=50000 nginx

# Limit to 1.5 CPU cores
docker run -d --cpu-period=100000 --cpu-quota=150000 nginx

# Limit to 25% of one CPU core
docker run -d --cpu-period=100000 --cpu-quota=25000 nginx
```
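The values Docker applied can be read back from the container's configuration; the container name here is just an example:
```bash
# Verify the CFS period and quota stored in the container's HostConfig
docker run -d --name quota-demo --cpu-period=100000 --cpu-quota=50000 nginx
docker inspect quota-demo --format 'period={{.HostConfig.CpuPeriod}} quota={{.HostConfig.CpuQuota}}'
docker rm -f quota-demo
```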
Simplified CPU Limits
Use the `--cpus` flag for easier CPU allocation:
```bash
# Limit to half of one CPU core
docker run -d --cpus="0.5" nginx

# Limit to 1.5 CPU cores
docker run -d --cpus="1.5" nginx

# Limit to 2 CPU cores
docker run -d --cpus="2" nginx
```
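Under the hood, `--cpus` is stored as NanoCpus (1 CPU = 1,000,000,000) and translated by the runtime into an equivalent period/quota pair, so `--cpus="1.5"` behaves like the 150000/100000 example above. A quick check (the container name is illustrative):
```bash
# --cpus="1.5" is recorded as 1500000000 NanoCpus
docker run -d --name cpus-demo --cpus="1.5" nginx
docker inspect cpus-demo --format '{{.HostConfig.NanoCpus}}'
docker rm -f cpus-demo
```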
CPU Set Assignment
Assign containers to specific CPU cores:
```bash
# Use only CPU cores 0 and 1
docker run -d --cpuset-cpus="0,1" nginx

# Use CPU cores 0 through 3
docker run -d --cpuset-cpus="0-3" nginx

# Combine a CPU set with a specific memory (NUMA) node
docker run -d --cpuset-cpus="0,1" --cpuset-mems="0" nginx
```
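Processes inside the container only see the cores they were given, which can be verified from the kernel's view of the container's main process (this assumes the host has at least two cores):
```bash
# The allowed core list is visible from inside the container
docker run --rm --cpuset-cpus="0,1" alpine grep Cpus_allowed_list /proc/self/status
# Expected output: Cpus_allowed_list:   0-1
```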
Combined CPU and Memory Example
```bash
# Web server with combined CPU and memory limits
docker run -d --name production-web \
  --memory=1g \
  --cpus="1.0" \
  --restart unless-stopped \
  nginx:alpine

# Database with a specific resource allocation
docker run -d --name production-db \
  --memory=2g \
  --memory-swap=2g \
  --cpuset-cpus="2,3" \
  --restart unless-stopped \
  -e POSTGRES_PASSWORD=secretpassword \
  postgres:13
```
Managing Storage Limits
Storage limits control disk space usage and I/O bandwidth for containers.
Storage Driver Configuration
Different storage drivers support various limiting mechanisms:
```bash
# Check the current storage driver
docker info | grep "Storage Driver"

# Common drivers: overlay2, aufs, devicemapper, btrfs, zfs
```
Device Mapper Storage Limits
For the devicemapper storage driver (mostly found on older installations; overlay2 is the preferred driver on current releases):
```bash
# Set a per-container base size in the daemon configuration
# (edit /etc/docker/daemon.json)
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=10G"
  ]
}

# Restart the Docker daemon to apply the change
sudo systemctl restart docker
```
Block I/O Limits
Control storage bandwidth and IOPS:
```bash
# Limit read/write bandwidth (bytes per second)
docker run -d --device-read-bps /dev/sda:1mb nginx
docker run -d --device-write-bps /dev/sda:1mb nginx

# Limit read/write IOPS (operations per second)
docker run -d --device-read-iops /dev/sda:100 nginx
docker run -d --device-write-iops /dev/sda:100 nginx

# Combined I/O limits
docker run -d --name limited-io \
  --device-read-bps /dev/sda:10mb \
  --device-write-bps /dev/sda:5mb \
  --device-read-iops /dev/sda:1000 \
  --device-write-iops /dev/sda:500 \
  nginx
```
Temporary Filesystem Limits
Limit tmpfs usage:
```bash
# Mount a tmpfs with a size limit
docker run -d --tmpfs /tmp:rw,size=100m nginx

# Multiple tmpfs mounts with individual limits
docker run -d \
  --tmpfs /app/temp:rw,size=50m \
  --tmpfs /app/cache:rw,size=200m \
  my-application
```
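A rough way to confirm the limit is enforced is to try writing more data than the tmpfs allows; the mount point and sizes below are arbitrary:
```bash
# Writing 20MB into a 10MB tmpfs fails with "No space left on device"
docker run --rm --tmpfs /scratch:rw,size=10m alpine \
  sh -c 'dd if=/dev/zero of=/scratch/fill bs=1M count=20 || echo "tmpfs limit enforced"'
```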
Network Resource Control
Docker does not cap network bandwidth on its own; limiting it requires additional tools and configuration.
Traffic Control Setup
Install traffic control utilities:
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install iproute2

# CentOS/RHEL
sudo yum install iproute
```
Container Network Bandwidth
Use `tc` (traffic control) for network limiting:
```bash
# Create a custom bridge network
docker network create --driver bridge limited-network

# Run a container on the custom network
docker run -d --name bandwidth-limited \
  --network limited-network \
  nginx

# Bandwidth limits are then applied to the container's network interface with tc
# (requires additional scripting; a minimal sketch follows below)
```
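As a minimal sketch of that additional scripting, the commands below enter the network namespace of the bandwidth-limited container from the previous block and attach a token-bucket filter to its interface. This assumes nsenter (util-linux) and tc are available on the host and that the interface inside the container is named eth0; the 1 Mbit/s rate is just an example:
```bash
# Find the container's init PID, then shape egress traffic inside its netns
PID=$(docker inspect -f '{{.State.Pid}}' bandwidth-limited)
sudo nsenter -t "$PID" -n tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# Confirm the qdisc is in place
sudo nsenter -t "$PID" -n tc qdisc show dev eth0
```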
Docker Compose Network Limits
```yaml
# docker-compose.yml example
version: '3.8'

services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    networks:
      - limited-net

networks:
  limited-net:
    driver: bridge
```
Advanced Resource Management
Using Docker Compose for Resource Limits
Create comprehensive resource management with Docker Compose. Note that the deploy.resources keys are honored by Docker Compose v2 and in Swarm mode; the legacy docker-compose v1 binary requires the --compatibility flag.
```yaml
# docker-compose.yml
version: '3.8'

services:
  web:
    image: nginx:alpine
    container_name: web-server
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    restart: unless-stopped

  database:
    image: postgres:13
    container_name: database
    environment:
      POSTGRES_PASSWORD: secretpassword
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '1.0'
          memory: 1G
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:alpine
    container_name: cache
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
        reservations:
          cpus: '0.1'
          memory: 128M
    restart: unless-stopped

volumes:
  db_data:
```
Runtime Resource Updates
Modify resource limits for running containers:
```bash
# Update the memory limit of a running container
docker update --memory=2g container-name

# Update CPU limits
docker update --cpus="1.5" container-name

# Update multiple containers at once
docker update --memory=1g --cpus="1.0" web-server database

# Update CPU shares
docker update --cpu-shares=512 low-priority-container
```
Systemd Integration
Create systemd service files with resource limits:
```ini
# /etc/systemd/system/my-app.service
[Unit]
Description=My Application Container
After=docker.service
Requires=docker.service

[Service]
Type=forking
# Let systemd manage restarts; a Docker restart policy (--restart) would
# conflict with Restart=on-failure below, so it is omitted here
ExecStartPre=-/usr/bin/docker rm -f my-app
ExecStart=/usr/bin/docker run -d \
  --name=my-app \
  --memory=1g \
  --cpus="1.0" \
  my-application:latest
ExecStop=/usr/bin/docker stop my-app
ExecStopPost=/usr/bin/docker rm my-app
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60s
StartLimitBurst=3

[Install]
WantedBy=multi-user.target
```
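After saving the unit file, a typical workflow is to reload systemd, enable the service, and confirm that the limits were applied to the resulting container:
```bash
# Enable and start the unit, then read back the applied limits
sudo systemctl daemon-reload
sudo systemctl enable --now my-app.service
docker inspect my-app --format 'Memory={{.HostConfig.Memory}} NanoCpus={{.HostConfig.NanoCpus}}'
```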
Monitoring Resource Usage
Real-time Monitoring
Monitor container resource usage:
```bash
# View resource usage for all running containers (streaming)
docker stats

# Monitor specific containers
docker stats web-server database

# One-time snapshot (no streaming)
docker stats --no-stream

# Custom output format
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```
Detailed Resource Information
```bash
# Inspect a container's resource configuration
docker inspect container-name | grep -A 20 "Resources"

# Check memory usage details (cgroup v1 paths)
docker exec container-name cat /sys/fs/cgroup/memory/memory.usage_in_bytes

# Check CPU usage and throttling statistics (cgroup v1 paths)
docker exec container-name cat /sys/fs/cgroup/cpu/cpu.stat
```
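On hosts that use the unified cgroup v2 hierarchy (the default on most recent distributions), the equivalent files have different names:
```bash
# cgroup v2 equivalents of the files shown above
docker exec container-name cat /sys/fs/cgroup/memory.current
docker exec container-name cat /sys/fs/cgroup/memory.max
docker exec container-name cat /sys/fs/cgroup/cpu.stat
```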
Logging and Alerting
Create monitoring scripts:
```bash
#!/bin/bash
# monitor-resources.sh - alert when container CPU or memory usage crosses a threshold
# (requires the bc utility for the floating-point comparisons)
THRESHOLD_CPU=80
THRESHOLD_MEM=90

while true; do
  docker stats --no-stream --format "{{.Container}} {{.CPUPerc}} {{.MemPerc}}" | \
  while read -r container cpu mem; do
    cpu_val=$(echo "$cpu" | sed 's/%//')
    mem_val=$(echo "$mem" | sed 's/%//')
    if (( $(echo "$cpu_val > $THRESHOLD_CPU" | bc -l) )); then
      echo "ALERT: Container $container CPU usage: $cpu"
    fi
    if (( $(echo "$mem_val > $THRESHOLD_MEM" | bc -l) )); then
      echo "ALERT: Container $container Memory usage: $mem"
    fi
  done
  sleep 30
done
```
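One simple way to keep the monitor running after the shell exits is to background it with nohup; the log path is only an example:
```bash
# Run the monitor in the background and append alerts to a log file
chmod +x monitor-resources.sh
nohup ./monitor-resources.sh >> /var/log/container-alerts.log 2>&1 &
```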
Practical Examples and Use Cases
Web Application Stack
Complete example of a web application with proper resource limits:
```bash
# Frontend web server
docker run -d --name frontend \
  --memory=512m \
  --cpus="0.5" \
  --restart unless-stopped \
  -p 80:80 \
  nginx:alpine

# Database server (started before the API server, which links to it)
docker run -d --name database \
  --memory=2g \
  --memory-swap=2g \
  --cpus="1.5" \
  --restart unless-stopped \
  -e POSTGRES_PASSWORD=secretpassword \
  -v db_data:/var/lib/postgresql/data \
  postgres:13

# Backend API server (uses the legacy --link flag to reach the database)
docker run -d --name api-server \
  --memory=1g \
  --cpus="1.0" \
  --restart unless-stopped \
  -p 3000:3000 \
  --link database:db \
  node:16-alpine

# Redis cache (container memory limit plus Redis's own maxmemory setting)
docker run -d --name cache \
  --memory=256m \
  --cpus="0.25" \
  --restart unless-stopped \
  redis:alpine redis-server --maxmemory 200mb
```
Development Environment
Resource-limited development setup:
```yaml
# docker-compose.dev.yml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
    environment:
      - NODE_ENV=development

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_PASSWORD: devpassword
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
    volumes:
      - dev_db:/var/lib/postgresql/data

volumes:
  dev_db:
```
Microservices Architecture
Resource allocation for microservices:
```bash
# User service
docker run -d --name user-service \
  --memory=300m \
  --cpus="0.3" \
  --restart unless-stopped \
  user-service:latest

# Order service
docker run -d --name order-service \
  --memory=500m \
  --cpus="0.5" \
  --restart unless-stopped \
  order-service:latest

# Payment service
docker run -d --name payment-service \
  --memory=400m \
  --cpus="0.4" \
  --restart unless-stopped \
  payment-service:latest

# API gateway
docker run -d --name api-gateway \
  --memory=200m \
  --cpus="0.2" \
  --restart unless-stopped \
  -p 8080:8080 \
  api-gateway:latest
```
Common Issues and Troubleshooting
Memory-Related Issues
Issue: Container killed due to OOM (Out of Memory)
```bash
# Check container exit status
docker ps -a
# Look for exit code 137 (the process was killed, typically by the OOM killer)

# Check system logs for OOM events
sudo dmesg | grep -i "killed process"
sudo journalctl -u docker.service | grep OOM
```
Solution:
```bash
# Increase the memory limit
docker update --memory=2g container-name

# Check the application's memory usage patterns
docker exec container-name free -h
docker exec container-name ps aux --sort=-%mem | head
```
Issue: Container using more memory than expected
```bash
# Monitor memory usage over time
docker stats container-name

# Inspect memory details from inside the container
# (note: /proc/meminfo is not cgroup-aware and reports host-wide values)
docker exec container-name cat /proc/meminfo
```
CPU-Related Issues
Issue: Container consuming too much CPU
```bash
# Check CPU usage
docker stats --format "{{.Container}}: {{.CPUPerc}}"

# Identify high-CPU processes inside the container
docker exec -it container-name top
```
Solution:
```bash
# Apply a CPU limit
docker update --cpus="1.0" container-name

# Or use CPU shares for relative priority
docker update --cpu-shares=512 container-name
```
Issue: Application performance degradation
```bash
# Check whether the CPU limits are too restrictive
docker stats container-name

# Monitor CPU throttling (cgroup v1 path)
docker exec container-name cat /sys/fs/cgroup/cpu/cpu.stat | grep throttled
```
Storage-Related Issues
Issue: Container running out of disk space
```bash
# Check disk usage inside the container
docker exec container-name df -h

# Check Docker's own disk usage on the host
docker system df
```
Solution:
```bash
# Clean up unused images, containers, networks, and build cache
docker system prune -a

# Increase storage limits (if using the devicemapper driver):
# edit /etc/docker/daemon.json and restart Docker
```
Network-Related Issues
Issue: Network connectivity problems with resource limits
```bash
# Check the container's network configuration
docker inspect container-name | grep -A 10 "NetworkSettings"

# Test network connectivity from inside the container
docker exec container-name ping -c 3 google.com
```
General Troubleshooting Commands
```bash
# Check cgroup controller support
cat /proc/cgroups

# Verify that cgroup filesystems are mounted
mount | grep cgroup

# Check Docker daemon configuration
docker info

# View a container's resource constraints
docker inspect container-name | grep -A 20 "Resources"

# Check available host resources
free -h
nproc
df -h
```
Best Practices
Resource Planning
1. Baseline Measurement: Always measure resource usage before setting limits (a minimal sampling loop is sketched after this list)
2. Gradual Implementation: Start with generous limits and gradually tighten
3. Environment-Specific: Use different limits for development, staging, and production
4. Documentation: Document resource requirements for each service
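A minimal baseline-measurement sketch for point 1, sampling docker stats once a minute into a CSV file (the file name and interval are arbitrary):
```bash
# Append one CSV line per container every 60 seconds; stop with Ctrl+C
while true; do
  docker stats --no-stream --format "{{.Name}},{{.CPUPerc}},{{.MemUsage}}" >> baseline.csv
  sleep 60
done
```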
Memory Management
```bash
# Best practices for memory limits

# Set memory limits slightly above measured peak usage
docker run -d --memory=1.2g my-app    # if peak usage is about 1GB

# Set the swap limit equal to the memory limit to prevent swap thrashing
docker run -d --memory=1g --memory-swap=1g my-app

# Use memory reservations (soft limits) for critical services
docker run -d --memory=2g --memory-reservation=1g critical-service
```
CPU Optimization
```bash
# Use fractional CPU limits for better resource utilization
docker run -d --cpus="0.5" lightweight-service

# Pin CPU-intensive applications to dedicated cores
docker run -d --cpuset-cpus="0,1" cpu-intensive-app

# Use CPU shares for relative prioritization under contention
docker run -d --cpu-shares=2048 high-priority-service
docker run -d --cpu-shares=512 low-priority-service
```
Monitoring and Alerting
1. Continuous Monitoring: Implement automated monitoring of resource usage
2. Threshold Alerts: Set up alerts for resource usage thresholds
3. Historical Analysis: Keep historical data for capacity planning
4. Regular Reviews: Periodically review and adjust resource limits
Production Deployment
```bash
# Production-ready container with comprehensive limits, logging, and health checks
docker run -d --name production-app \
  --memory=2g \
  --memory-swap=2g \
  --memory-reservation=1g \
  --cpus="2.0" \
  --cpu-shares=1024 \
  --restart unless-stopped \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  --health-cmd="curl -f http://localhost:8080/health || exit 1" \
  --health-interval=30s \
  --health-timeout=10s \
  --health-retries=3 \
  my-application:latest
```
Security Considerations
1. Principle of Least Privilege: Allocate minimum required resources
2. Resource Isolation: Prevent resource exhaustion attacks
3. Monitoring: Log resource usage for security analysis
4. Regular Updates: Keep Docker and host system updated
Automation and Infrastructure as Code
```yaml
# Ansible playbook task example
- name: Deploy application with resource limits
  docker_container:
    name: "{{ app_name }}"
    image: "{{ app_image }}"
    memory: "{{ app_memory | default('1g') }}"
    cpus: "{{ app_cpus | default('1.0') }}"
    restart_policy: unless-stopped
    published_ports:
      - "{{ app_port }}:8080"
    env:
      DATABASE_URL: "{{ database_url }}"
```
Conclusion
Setting resource limits on Docker containers is essential for maintaining system stability, ensuring predictable performance, and optimizing resource utilization in both development and production environments. This comprehensive guide has covered all aspects of Docker resource management, from basic memory and CPU limits to advanced monitoring and troubleshooting techniques.
Key Takeaways
1. Always Set Limits: Never run containers without resource constraints in production
2. Monitor Continuously: Implement comprehensive monitoring and alerting
3. Plan Capacity: Base limits on actual usage patterns and performance requirements
4. Test Thoroughly: Validate resource limits in staging environments before production deployment
5. Document Everything: Maintain clear documentation of resource requirements and limits
Next Steps
To further enhance your Docker resource management skills:
1. Explore Kubernetes: Learn about Kubernetes resource management for orchestrated environments
2. Study cgroups: Deepen understanding of Linux control groups
3. Implement Monitoring: Set up comprehensive monitoring solutions like Prometheus and Grafana
4. Practice Optimization: Continuously optimize resource allocation based on monitoring data
5. Learn Security: Study container security best practices related to resource management
Additional Resources
- Docker Official Documentation: Resource Management
- Linux cgroups Documentation
- Container Monitoring Best Practices
- Kubernetes Resource Management
- Docker Compose Resource Configuration
By implementing the techniques and best practices outlined in this guide, you'll be able to effectively manage Docker container resources, ensuring optimal performance, stability, and resource utilization across your containerized applications. Remember that resource management is an ongoing process that requires continuous monitoring, analysis, and optimization based on your specific use cases and requirements.