Docker Cheatsheet
What it is
Docker packages an application and its dependencies into a portable image, which runs as an isolated container on any host. For web teams it solves "works on my laptop, breaks on the server" — the container ships your runtime, libraries, and config together, so the staging box and the production box run the same artifact bit-for-bit.
This sheet covers the commands you actually run when building, shipping, and deploying containerised web apps — plus a deployment-workflows section for the patterns that matter once a build leaves CI.
Quick reference
Containers
```shell
docker run -d --name api -p 8080:80 myapp:latest      # run detached, port-mapped
docker run -it --rm alpine sh                         # interactive, auto-cleanup on exit
docker run -v $(pwd):/app -w /app node:20 npm test    # mount cwd, set workdir
docker ps                                             # running containers
docker ps -a                                          # include stopped
docker logs -f api                                    # tail logs
docker exec -it api sh                                # shell into running container
docker stop api && docker rm api                      # graceful stop + remove
docker rm -f api                                      # force kill + remove
docker restart api                                    # in-place restart
docker stats                                          # live CPU/mem/IO per container
```
Images
```shell
docker build -t myapp:1.4.2 .                         # build from Dockerfile in cwd
docker build -t myapp:1.4.2 -f docker/api.Dockerfile .
docker build --no-cache -t myapp:1.4.2 .              # bypass layer cache
docker build --build-arg NODE_ENV=production .
docker images                                         # list local images
docker pull nginx:1.27-alpine                         # fetch from registry
docker push registry.example.com/myapp:1.4.2          # publish (requires auth)
docker tag myapp:1.4.2 registry.example.com/myapp:1.4.2
docker rmi myapp:old                                  # delete image
docker image prune -a                                 # delete unused images
docker history myapp:1.4.2                            # inspect layers + sizes
```
Volumes and bind mounts
```shell
docker volume create pgdata
docker volume ls
docker volume inspect pgdata
docker volume rm pgdata
docker volume prune                                        # delete all unused volumes
docker run -v pgdata:/var/lib/postgresql/data postgres:16  # named volume
docker run -v /host/path:/container/path myapp             # bind mount
docker run --mount type=bind,src=$(pwd),dst=/app myapp     # explicit syntax
docker run --read-only --tmpfs /tmp myapp                  # read-only fs
```
Networks
```shell
docker network create app-net
docker network ls
docker network inspect app-net
docker run --network app-net --name db postgres:16
docker run --network app-net --name api myapp    # api can reach db at hostname "db"
docker network connect app-net existing-container
docker network disconnect app-net existing-container
docker network prune
```
Docker Compose
```shell
docker compose up -d                           # start in background
docker compose up --build                      # rebuild before starting
docker compose down                            # stop and remove
docker compose down -v                         # also remove volumes (destructive)
docker compose logs -f api                     # tail one service
docker compose ps                              # service status
docker compose exec api sh                     # shell into a service
docker compose run --rm api npm run migrate    # one-off task with full deps
docker compose pull                            # refresh images before deploy
docker compose config                          # validate + render merged config
```
System cleanup
```shell
docker system df                     # disk usage breakdown
docker system prune                  # containers, networks, dangling images
docker system prune -a --volumes     # nuclear option (destructive)
docker builder prune                 # build cache only
```
Inspection and debugging
```shell
docker inspect api                      # full JSON state
docker inspect -f '{{.NetworkSettings.IPAddress}}' api
docker top api                          # processes inside container
docker port api                         # published ports
docker diff api                         # filesystem changes since image
docker cp api:/app/log.txt ./log.txt    # copy file out
```
Deployment workflows
The reference above is portable — every Docker tutorial has those commands. The patterns below are the ones that matter once an image leaves CI and has to survive production.
Multi-stage builds: ship the runtime, leave the toolchain behind
A naive build copies your whole node_modules, build tools, and source into the final image. A multi-stage build compiles in one stage and copies only the artifact into a minimal runtime stage — typical result: 1.2GB → 120MB, faster pulls on every deploy, and less attack surface in production.
```dockerfile
# syntax=docker/dockerfile:1.7

# --- build stage ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                     # lockfile-strict install
COPY . .
RUN npm run build              # produces /app/dist

# --- runtime stage ---
FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev          # prod deps only
COPY --from=build /app/dist ./dist
USER node                      # never run as root in prod
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Keep it lean:
- Pin base image versions (node:20-alpine, not node:latest) — latest floats and breaks reproducibility.
- Order layers from least-changing to most-changing. package.json before source means dependency installs hit the cache on every code-only deploy.
- One concern per stage. If your image needs Python for a build script, that goes in build, never runtime.
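The layer-ordering advice only pays off if the build context itself stays small: a `.dockerignore` keeps `node_modules`, git history, and local env files out of every `COPY . .`. A minimal sketch for a typical Node repo (the exact entries are assumptions — match them to your project):

```
# .dockerignore (sketch)
node_modules
dist
.git
.env*
*.log
Dockerfile
compose.yaml
```

Excluding `.env*` also prevents local secrets from being baked into image layers by accident.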
We use the same pattern internally — the PageSpeed by DeployHQ architecture writeup walks through a real multi-stage build and the deploy pipeline that ships it.
Image tagging for rollback
If your deploy pipeline pushes myapp:latest and that's the only tag, you cannot roll back without rebuilding. The fix is two tags per build: an immutable SHA-pinned tag and a moving latest/stable pointer.
```shell
# in CI, after a successful build
docker tag myapp:build-${CI_COMMIT_SHA} myapp:latest
docker push myapp:build-${CI_COMMIT_SHA}    # immutable
docker push myapp:latest                    # moving pointer
```
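Full 40-character SHA tags work fine but are unwieldy in logs and dashboards. A tiny helper can derive a shorter immutable tag (a sketch — the `release_tag` name and the 12-character length are arbitrary choices, not part of any CI convention):

```shell
#!/bin/sh
# Hypothetical helper: derive a short immutable tag from the full commit SHA.
release_tag() {
  printf 'build-%.12s' "$1"    # %.12s truncates the SHA to 12 characters
}

release_tag 9f8e7d6c5b4a3f2e1d0c9b8a7f6e5d4c3b2a1908    # → build-9f8e7d6c5b4a
```

Whatever length you pick, use it consistently so the tag your CI pushes is the tag your deploy script pulls.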
On the production host, the deploy script always pulls the SHA tag, never latest:
```shell
docker pull myapp:build-abc123def
docker tag myapp:build-abc123def myapp:current
docker stop api && docker rm api
docker run -d --name api -p 8080:80 myapp:current
```
Rollback is then a matter of re-running the same script with the previous SHA — no rebuild, no surprise dependency drift. DeployHQ's one-click rollback does this automatically across both code and image deploys.
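Wrapped in a function, the same four steps serve both deploy and rollback. A sketch, assuming the image, container name, and port mapping from the examples above; `DRY_RUN=1` prints the commands instead of executing them, so the logic can be checked without a Docker daemon:

```shell
#!/bin/sh
# deploy.sh (hypothetical) — pull a SHA-pinned image and restart the container.
IMAGE="myapp"        # assumption: adjust to your registry path
CONTAINER="api"
PORTS="8080:80"

run() {
  # Echo instead of execute when DRY_RUN=1.
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

deploy() {
  sha="$1"
  case "$sha" in
    build-*) ;;    # only accept the immutable CI tag format
    *) echo "usage: deploy build-<sha>" >&2; return 1 ;;
  esac
  run docker pull "$IMAGE:$sha"
  run docker tag "$IMAGE:$sha" "$IMAGE:current"
  run docker stop "$CONTAINER"
  run docker rm "$CONTAINER"
  run docker run -d --name "$CONTAINER" -p "$PORTS" "$IMAGE:current"
}
```

Rolling back is then `deploy build-<previous-sha>`: same code path, different tag, nothing rebuilt.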
Docker Compose for stateful production stacks
Compose isn't just a dev tool — for single-host production deployments (small VPS, internal tools, side projects), a versioned compose.yaml is often the sanest deploy unit.
```yaml
# compose.yaml — checked into git, deployed verbatim to prod
services:
  api:
    image: registry.example.com/myapp:${RELEASE_TAG}
    restart: unless-stopped
    env_file: .env.production
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    ports:
      - "127.0.0.1:3000:3000"    # bind to localhost; Nginx fronts it

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      # POSTGRES_USER isn't in the environment when *_FILE secrets are used,
      # so check readiness against the database name instead.
      test: ["CMD-SHELL", "pg_isready -d myapp"]
      interval: 10s
      retries: 5
    secrets:
      - db_user
      - db_password

volumes:
  pgdata:

secrets:
  db_user:
    file: ./secrets/db_user.txt
  db_password:
    file: ./secrets/db_password.txt
```
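Because the `api` service binds only to `127.0.0.1`, a reverse proxy terminates public traffic. A minimal Nginx sketch — the `server_name` is an assumption, and TLS is omitted for brevity:

```nginx
# /etc/nginx/sites-available/myapp (sketch)
server {
    listen 80;
    server_name app.example.com;    # assumption: your public hostname

    location / {
        proxy_pass http://127.0.0.1:3000;    # matches the localhost-only port mapping
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```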
Deploy step:
```shell
export RELEASE_TAG=build-abc123def
docker compose pull     # fetch new images
docker compose up -d    # recreate changed containers only
docker compose ps       # verify all healthy
```
Two non-obvious wins:
- restart: unless-stopped survives host reboots without you wiring systemd separately.
- Healthchecks plus depends_on: condition: service_healthy mean the API doesn't start serving traffic until Postgres is actually accepting connections — the difference between a clean rolling deploy and a 30-second window of 502s.
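Deploy scripts can gate on the same healthcheck. A sketch of a hypothetical post-deploy wait loop — the status command is passed in as arguments so the loop can be exercised without a Docker daemon; in production it would be `wait_healthy 30 docker inspect -f '{{.State.Health.Status}}' api`:

```shell
#!/bin/sh
# Poll until a container reports "healthy", or give up after N tries.
wait_healthy() {
  tries="$1"; shift
  while [ "$tries" -gt 0 ]; do
    if [ "$("$@" 2>/dev/null || echo unknown)" = "healthy" ]; then
      return 0
    fi
    tries=$((tries - 1))
    [ "$tries" -gt 0 ] && sleep 1    # wait before the next poll
  done
  echo "container never became healthy" >&2
  return 1
}
```

Failing the deploy here, before traffic cuts over, is what turns a bad release into a non-event instead of an incident.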
For a real-world example, see How To Deploy Metabase with Docker Compose and DeployHQ.
Registry authentication in CI
Pushing images from CI without leaking credentials is the part most tutorials skip. Use short-lived credentials and --password-stdin, never inline:
```shell
# GHCR (GitHub Container Registry)
echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin

# AWS ECR (the username is always the literal string AWS)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$ECR_REGISTRY"

# Docker Hub with an access token (NOT your password)
echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin
```
On production hosts, the ~/.docker/config.json written by docker login persists until you explicitly docker logout. Rotate registry tokens on a schedule and audit which hosts hold credentials.
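In GitHub Actions, for example, the GHCR login above typically lives in the workflow rather than a script. A sketch — the workflow filename, action versions, and image path are assumptions:

```yaml
# .github/workflows/build.yml (sketch)
name: build-and-push
on:
  push:
    branches: [main]
permissions:
  contents: read
  packages: write    # lets the ephemeral GITHUB_TOKEN push to GHCR
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:build-${{ github.sha }}
```

The `GITHUB_TOKEN` here is scoped to the job and expires when it finishes, which sidesteps the rotation problem entirely for CI pushes.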
Docker on a fresh VPS
A repeatable provisioning script for production hosts:
```shell
# Ubuntu 22.04 / 24.04 — install Docker Engine + Compose plugin
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER    # log out + back in to take effect
sudo systemctl enable --now docker

# verify
docker --version
docker compose version

# (optional) limit log file growth
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```
The default JSON log driver will fill /var/lib/docker/containers/ until the host runs out of disk if you don't cap it — set max-size and max-file on every prod box. We've covered the full provisioning workflow in How to deploy from Windows using WSL2, Docker, and DeployHQ for Windows-based teams, and the Linux equivalent is essentially the same script.
Common errors and fixes
| Error | Cause | Fix |
|---|---|---|
| `permission denied while trying to connect to the Docker daemon socket` | User not in the `docker` group | `sudo usermod -aG docker $USER`, then log out and back in. Don't `sudo docker` as a workaround — your `~/.docker/config.json` ends up owned by root. |
| `port is already allocated` on `docker run` | Another container or process holds the host port | `docker ps` to find it, or `lsof -i :8080` if it's a host process. Map a different port or stop the conflicting container. |
| `manifest unknown` on `docker pull` | Tag doesn't exist in the registry, or you're authenticated as the wrong user | Verify the tag with `docker manifest inspect <image:tag>`. For private registries, re-run `docker login`. |
| `no space left on device` mid-build | Docker layer cache and dangling images filled `/var/lib/docker` | `docker system df` to confirm. `docker system prune -a` to reclaim (destructive — kills unused images). Move `data-root` to a bigger disk if it recurs. |
| Container exits with 137 | OOM-killed by the kernel | Raise the container memory limit (`--memory=2g`), or fix the leak. `docker inspect <container>` shows `OOMKilled: true`. |
| Container exits with 139 | Segfault inside the container | Usually a binary built for a different architecture. Compare `docker image inspect --format '{{.Architecture}}'` against the host (Apple Silicon hosts pulling x86_64 images is the common one — pass `--platform linux/arm64`). |
| Compose volumes wiped after `docker compose down` | You ran `down -v` (the `-v` flag removes named volumes) | Recover from backups; never use `-v` on prod Compose stacks. Wrap `down` in an alias that strips `-v`. |
| Slow builds on every push | Layer cache invalidated by `COPY . .` before `npm ci` | Reorder the Dockerfile: copy `package*.json`, run `npm ci`, then copy the rest. Code-only changes then keep hitting the cached install layer. |
| `error during connect: Get "http+docker://...": EOF` after host reboot | Docker daemon didn't auto-start | `sudo systemctl enable --now docker`. Verify with `systemctl status docker`. |
Companion: full DeployHQ workflow for containerised apps
A typical DeployHQ pipeline for a Docker app looks like this:
1. Push to your `main` branch on GitHub or GitLab.
2. DeployHQ runs your build pipeline — `docker build`, run tests, push the SHA-tagged image to your registry.
3. DeployHQ SSHes into your production hosts and runs your deploy hook (`docker compose pull && docker compose up -d`).
4. The release is pinned to a SHA tag so zero-downtime rolling deploys work — and rolling back to any prior SHA is one click in the DeployHQ dashboard.
If you're starting from scratch, the deploy from GitHub to your server guide walks through the GitHub-side setup; the same pipeline works for GitLab-hosted repos.
If Docker isn't the right fit — for example, if you're on shared hosting that doesn't expose root or a container runtime — the same DeployHQ pipeline ships plain SCP/SFTP releases with atomic deployments and automatic rollback instead. Different artifact, same workflow.
Related cheatsheets
- SSH cheatsheet — the transport layer for every remote `docker` command and SCP-based fallback deploy.
- rsync cheatsheet — for the deploys where you ship files instead of images.
- curl cheatsheet — smoke-testing your `/health` endpoint after every deploy.
- Bash scripting cheatsheet — for the deploy hooks that wrap `docker compose pull && up -d` with proper error handling.
- kubectl cheatsheet — when single-host Compose isn't enough and you need orchestration.
For a comparison with non-Docker container runtimes, Understanding Podman: Docker's Open Source Alternative covers the rootless drop-in replacement.
Ship containerised apps with DeployHQ
DeployHQ deploys Docker workloads, plain file-based releases, or anything in between — from Git to your server, with build pipelines, atomic releases, and one-click rollback. Start a free trial or read the pricing tiers.
Need help? Email support@deployhq.com or follow us on @deployhq on X.