CI/CD Pipelines: The Complete Guide


If you already know what CI/CD is, the next question is practical: how do you actually build a pipeline that works for your team? This guide walks through designing, configuring, securing, and optimizing a CI/CD pipeline from scratch. No enterprise-scale assumptions — just actionable steps you can implement today.

Anatomy of a CI/CD Pipeline

Every CI/CD pipeline follows the same fundamental flow, regardless of the tools you use. Code moves through a series of automated stages, each acting as a quality gate before the next.

flowchart LR
    A[Source] --> B[Build]
    B --> C[Test]
    C --> D[Deploy]
    D --> E[Monitor]
    E -->|Rollback| D

Source is where everything starts. A commit or pull request triggers the pipeline. Your CI system detects the change, checks out the code, and begins processing.

Build compiles your application, installs dependencies, and produces deployable artifacts. For interpreted languages like Python or PHP, this might just be dependency installation and asset compilation. For compiled languages like Go or Java, it includes the actual compilation step. For a deeper look at what a build pipeline is and how to structure one, see our dedicated build pipeline guide. The build pipelines feature in DeployHQ lets you run build commands as part of the deployment process.

Test runs your automated test suite against the built artifacts. This is the most critical gate — if tests fail, the pipeline stops and nothing reaches production.

Deploy pushes the tested artifacts to your target environment. This could be a staging server for manual review or production for fully automated continuous deployment. DeployHQ handles this stage with support for automatic deployments triggered by webhooks or API calls.

Monitor watches the deployed application for errors, performance degradation, or unexpected behavior. Good monitoring closes the feedback loop — if something breaks, you catch it before your users do.
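The five stages above can be sketched as a minimal shell script. This is an illustration of the gate semantics only, with echo statements standing in for your real build and test tools (the commands named in the comments are examples, not requirements):

```shell
#!/usr/bin/env bash
# Each stage is a gate: set -e aborts the pipeline at the first failing stage,
# so nothing downstream (deploy, monitor) runs on a broken build.
set -euo pipefail

build()   { echo "building...";   }  # e.g. npm ci && npm run build
run_tests(){ echo "testing...";   }  # e.g. npm test
deploy()  { echo "deploying...";  }  # e.g. trigger your deployment tool
monitor() { echo "monitoring..."; }  # e.g. poll a health endpoint

build
run_tests
deploy
monitor
echo "pipeline complete"
```

Real CI systems add parallelism, caching, and artifacts on top, but the sequential quality-gate structure is the same.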


Designing Your Pipeline

Before writing any configuration, make three decisions that shape everything else.

Branch Strategy

Trunk-based development works best for small-to-medium teams. Everyone commits to main (or short-lived feature branches that merge within a day or two). The pipeline runs on every push to main, and deployments happen frequently.

GitFlow uses long-lived develop, release, and hotfix branches. It adds ceremony but gives you more control over what reaches production and when. If your release cycle is weekly or longer, GitFlow might make sense.

For most teams, trunk-based development with feature flags is the simpler, faster option. You deploy more often, which means smaller changes, which means fewer things break.

Environment Strategy

A typical setup uses three environments:

| Environment | Purpose | Deploys from | Audience |
| --- | --- | --- | --- |
| Development | Integration testing | Feature branches | Developers |
| Staging | Pre-production validation | main branch | QA, stakeholders |
| Production | Live application | Tagged releases or main | Users |

You can start with just staging and production. Add environments only when you have a concrete reason — each one adds maintenance overhead and slows your feedback loop.

Delivery vs. Deployment

Continuous delivery means every commit that passes the pipeline is ready to deploy, but a human clicks the button. Continuous deployment means every passing commit goes to production automatically, with no manual gate. The comparison between these approaches is worth reading if you are deciding which to adopt.

Start with continuous delivery. Move to continuous deployment once your test suite is comprehensive enough that you trust it to catch regressions without human review.
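In GitHub Actions, the manual gate of continuous delivery maps onto environments with required reviewers. A sketch of a deploy job using this feature (it assumes an environment named production has been created in the repository settings with reviewers configured):

```yaml
  deploy:
    needs: [test-unit, test-integration]
    runs-on: ubuntu-latest
    # The job pauses here until a configured reviewer approves,
    # because the 'production' environment has required reviewers.
    environment: production
    steps:
      - name: Deploy
        run: echo "deploying after approval"
```

Removing the required-reviewers rule from the environment later is all it takes to move from continuous delivery to continuous deployment, with no workflow changes.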


Pipeline Configuration Example

Here is a real GitHub Actions workflow that builds a Node.js application, runs tests in parallel, and triggers a DeployHQ deployment on success.

name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - run: npm run build

      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
          retention-days: 1

  test-unit:
    needs: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1/3, 2/3, 3/3]
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - run: npm run test:unit -- --shard=${{ matrix.shard }}

  test-integration:
    needs: build
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
        # Wait until postgres is actually accepting connections before
        # the steps run; without this, early tests can fail intermittently.
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - run: npm run test:integration
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/test

  deploy:
    needs: [test-unit, test-integration]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Trigger DeployHQ deployment
        run: |
          # -f makes curl exit non-zero on HTTP error responses,
          # so a failed webhook call fails the step visibly
          curl -fsS -X POST \
            "${{ secrets.DEPLOYHQ_WEBHOOK_URL }}" \
            -H "Content-Type: application/json" \
            -d '{"branch": "main"}'

A few things to notice in this configuration:

  • Dependency caching (cache: 'npm') avoids re-downloading packages on every run. This alone can cut 30-60 seconds off each build.
  • Test sharding splits unit tests across three parallel runners. A test suite that takes 6 minutes sequentially finishes in roughly 2 minutes.
  • DeployHQ as the CD step keeps your deployment logic out of CI. The webhook triggers DeployHQ, which handles the actual file transfer, build commands, and server configuration. This separation means you can change your CI provider without touching your deployment setup.

If you are deploying from GitHub, DeployHQ can also detect pushes directly without the webhook — but using a webhook gives you the option to deploy only after CI passes.


Testing in Your Pipeline

Not all tests belong in every pipeline run. The testing pyramid helps you decide what to run and when.

flowchart TB
    subgraph Pyramid["Testing Pyramid"]
        direction TB
        E2E["E2E Tests\n(few, slow, high confidence)"]
        INT["Integration Tests\n(moderate count, moderate speed)"]
        UNIT["Unit Tests\n(many, fast, focused)"]
    end
    E2E --- INT
    INT --- UNIT

Unit tests run on every commit. They are fast (seconds to low minutes), test individual functions in isolation, and catch logic errors early. If your unit tests take longer than 3 minutes, split them into parallel shards.

Integration tests run on every push to main and on pull requests. They verify that components work together — database queries return expected results, API endpoints respond correctly, services communicate as expected.

End-to-end tests run before production deployments, ideally on a staging environment. They simulate real user workflows through a browser. E2E tests are slow and brittle, so keep the count low — cover critical paths (signup, checkout, core features) and nothing more.

The key principle: fast feedback first. A developer should know within 2-3 minutes whether their change broke something. Push expensive tests later in the pipeline where they do not block the inner development loop.


Deployment Strategies

How code reaches production matters as much as how it is tested. The right strategy depends on your risk tolerance, infrastructure, and team size. For a broader look at what software deployment involves, see our introductory guide.

Direct Deployment

The simplest approach: upload new files and replace the old ones. This works for static sites and small applications where a few seconds of downtime during the swap is acceptable.

Zero-Downtime Deployment

Uses symlink switching to swap between releases atomically. The new version is uploaded to a fresh directory, and once ready, the web server's document root symlink is flipped to point at it. There is no moment where the application is unavailable. DeployHQ supports this natively — see zero-downtime deployments with DeployHQ for the setup walkthrough.
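The mechanics are simple enough to sketch in a few lines of shell. This is a simplified illustration of the symlink-switching technique, not DeployHQ's actual implementation, and the directory layout is hypothetical:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout: releases live side by side; 'current' is the
# symlink the web server uses as its document root.
mkdir -p app/releases/20240101 app/releases/20240102

# Point a temporary symlink at the new release, then rename it over
# 'current'. Renaming a symlink is atomic, so no request ever sees a
# half-switched document root.
ln -s releases/20240102 app/current.new
mv -T app/current.new app/current   # -T: treat the destination as a plain entry

readlink app/current
```

Rollback is the same operation in reverse: point a fresh symlink at the previous release directory and rename it over `current`.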

Blue-Green Deployment

Maintains two identical production environments. Traffic routes to blue while green receives the new deployment. After validation, traffic switches from blue to green.

flowchart LR
    LB[Load Balancer]
    LB -->|Active| Blue["Blue Environment\nv1.2.3"]
    LB -.->|Standby| Green["Green Environment\nv1.2.4"]
    Green -->|Validated| Switch{Switch Traffic}
    Switch -->|Cutover| LB

The advantage is instant rollback — just switch traffic back to blue. The downside is cost: you are running two full environments.

Canary Deployment

Routes a small percentage of traffic (typically 5-10%) to the new version while the rest continues hitting the current version. If error rates stay flat, you gradually increase the percentage until 100% of traffic reaches the new version.

Canary deployments catch issues that only surface under real traffic patterns — race conditions, performance regressions at scale, edge cases your test suite missed. They require a load balancer that supports weighted routing.

Rolling Deployment

Updates servers one at a time (or in small batches) behind a load balancer. At any point during the rollout, some servers run the old version and some run the new. This works well for stateless applications but can cause issues if the old and new versions are not compatible with each other (different database schemas, changed API contracts).

For most small-to-medium teams, zero-downtime deployment via symlink switching hits the sweet spot — no downtime, simple rollback, and no extra infrastructure cost.


Security in CI/CD Pipelines

Your pipeline has access to production credentials, source code, and deployment infrastructure. Treat it as a high-value attack surface.

Secrets Management

Never hardcode secrets in pipeline configuration files or repository code. Use your CI provider's encrypted secrets store (GitHub Actions Secrets, GitLab CI Variables, etc.) and inject them as environment variables at runtime.

# Good: secrets injected at runtime
env:
  DATABASE_URL: ${{ secrets.DATABASE_URL }}
  API_KEY: ${{ secrets.API_KEY }}

# Bad: secrets in the repository (never do this)
env:
  DATABASE_URL: "postgres://admin:password123@db.example.com/prod"

Rotate secrets on a schedule. If a secret is ever exposed in logs or a commit, rotate it immediately.

Dependency Scanning

Automated tools like npm audit, Dependabot, or Snyk scan your dependency tree for known vulnerabilities. Run these on every pull request — they add seconds to the pipeline and catch issues before they merge.

- name: Audit dependencies
  run: npm audit --audit-level=high

Static Analysis (SAST)

Static Application Security Testing scans your source code for common vulnerabilities: SQL injection, XSS, insecure deserialization, hardcoded credentials. Tools like Semgrep, SonarQube, or CodeQL integrate directly into CI workflows.
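As one example of wiring a SAST tool in, Semgrep can run as a single CI step. A sketch, assuming a Python toolchain is available on the runner; `--config auto` is one ruleset choice among many:

```yaml
      - name: Static analysis with Semgrep
        run: |
          pip install semgrep
          # --error makes the step fail when findings are reported,
          # turning the scan into a real quality gate
          semgrep scan --config auto --error
```

Run it on pull requests so findings surface before merge, the same place your dependency audit runs.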

Supply Chain Security

Lock files (package-lock.json, Gemfile.lock, poetry.lock) pin exact dependency versions. Always commit them and use deterministic install commands (npm ci instead of npm install) to prevent supply chain attacks where a compromised package publishes a malicious minor version.

Deployment Permissions

Use role-based access control (RBAC) for deployment. Not everyone who can merge code should be able to deploy to production. DeployHQ supports team permissions that let you restrict who can trigger deployments to specific environments.


Monitoring and Rollback

Deploying is only half the job. You need to know whether the deployment actually worked.

Post-Deployment Health Checks

After every deployment, run automated checks:

  • HTTP health endpoint returns 200 with expected response body
  • Database connectivity confirmed via a lightweight query
  • External service dependencies are reachable
  • Key business metrics (error rate, response time) remain within normal bounds

If any check fails within the first 5 minutes, trigger a rollback automatically.
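The first check on that list can be a few lines of shell around curl. A minimal sketch; the `/health` endpoint and the `example.com` host are hypothetical stand-ins for your application:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print the HTTP status code for a URL ("000" if unreachable).
fetch_status() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1" || echo "000"
}

# Succeed only if the health endpoint answers 200.
check_health() {
  [ "$(fetch_status "$1")" = "200" ]
}

# Usage in a deploy script (rollback hook is hypothetical):
#   check_health "https://example.com/health" || trigger_rollback
```

Checking the response body for an expected marker, not just the status code, guards against error pages that still return 200.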

Automated Rollback

Define clear rollback triggers:

| Signal | Threshold | Action |
| --- | --- | --- |
| HTTP 5xx error rate | >5% for 2 minutes | Auto-rollback |
| Response time p95 | >2x baseline for 5 minutes | Alert + manual rollback |
| Health check failure | Any check fails | Auto-rollback |
| Critical exception spike | >10x normal rate | Auto-rollback |
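The first trigger in the table — auto-rollback when the 5xx rate exceeds 5% — reduces to a small arithmetic check. A sketch; where the request and error counts come from depends on your monitoring stack:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Exit 0 ("roll back") if 5xx responses exceed 5% of total requests
# over the observation window; exit 1 otherwise.
should_rollback() {
  local total="$1" errors="$2"
  # awk handles the floating-point percentage; the && short-circuits
  # so a zero-request window never divides by zero
  awk -v t="$total" -v e="$errors" 'BEGIN { exit !(t > 0 && (e / t) * 100 > 5) }'
}

if should_rollback 1000 80; then
  echo "rollback"
else
  echo "keep"
fi
```

With 80 errors out of 1000 requests the rate is 8%, so this prints "rollback".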

DeployHQ provides one-click rollback to any previous deployment, which makes recovery a matter of seconds rather than minutes. For a deeper look at how deployment automation reduces recovery time, see our guide on deployment automation.

Error Monitoring

Integrate error tracking (Sentry, Bugsnag, Honeybadger) into your application. These tools catch unhandled exceptions, group them by root cause, and alert your team. The deployment marker feature in most error trackers lets you correlate error spikes with specific deployments.


Pipeline Performance Optimization

A slow pipeline kills developer productivity. If your pipeline takes 20 minutes, developers context-switch while waiting, and feedback arrives too late to be useful. Target under 10 minutes for the full pipeline — under 5 minutes is excellent.

Cache Dependencies

Every major CI platform supports dependency caching. Use it.

| Language | Cache target | Typical savings |
| --- | --- | --- |
| Node.js | ~/.npm or node_modules | 30-90 seconds |
| Python | ~/.cache/pip | 20-60 seconds |
| Ruby | vendor/bundle | 30-90 seconds |
| Go | ~/go/pkg/mod | 15-45 seconds |
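Where a setup action does not cache for you (the earlier workflow used setup-node's built-in `cache: 'npm'`), the generic actions/cache action covers the same ground. A sketch for the Python row of the table:

```yaml
      - uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          # Key on the lockfile hash so the cache invalidates
          # exactly when dependencies change
          key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
```

The `restore-keys` prefix lets a run with a changed lockfile start from the most recent cache rather than an empty one.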

Parallelize Test Suites

Split tests across multiple runners. GitHub Actions supports matrix strategies that shard your test suite automatically. Three shards typically give you a 2.5-3x speedup for minimal additional cost.

Incremental Builds

Only rebuild what changed. Monorepo tools like Nx and Turborepo track file dependencies and skip unchanged packages. For simpler projects, check whether source files changed before running expensive steps:

- name: Check if build needed
  id: changes
  # Requires actions/checkout with fetch-depth: 2 (the default of 1
  # leaves HEAD~1 unavailable and the diff would fail)
  run: |
    if git diff --name-only HEAD~1 | grep -q '^src/'; then
      echo "build_needed=true" >> "$GITHUB_OUTPUT"
    fi

- name: Build
  if: steps.changes.outputs.build_needed == 'true'
  run: npm run build

Artifact Reuse

Build once, deploy the same artifact everywhere. Upload build artifacts in the build job and download them in subsequent jobs instead of rebuilding. This ensures what you tested is exactly what you deploy — no "it worked on CI" surprises.


Measuring Pipeline Effectiveness

The DORA (DevOps Research and Assessment) metrics are the industry standard for measuring how well your delivery process performs. Track these four metrics to know whether your pipeline is actually helping.

| Metric | Elite | High | Medium | Low |
| --- | --- | --- | --- | --- |
| Deployment frequency | On-demand (multiple/day) | Weekly to monthly | Monthly to 6-monthly | Fewer than once per 6 months |
| Lead time for changes | Less than 1 hour | 1 day to 1 week | 1 week to 1 month | More than 1 month |
| Mean time to recovery | Less than 1 hour | Less than 1 day | 1 day to 1 week | More than 1 week |
| Change failure rate | 0-15% | 16-30% | 31-45% | 46-60% |

Source: DORA State of DevOps Report

You do not need to be elite across all four metrics. Focus on the ones that hurt most. If your change failure rate is high, invest in testing. If your lead time is long, look for pipeline bottlenecks and manual approval gates that could be automated.

The relationship between these metrics matters too. Deploying more frequently reduces change failure rate because each deployment is smaller and easier to reason about. Faster mean time to recovery comes from having reliable rollback and good monitoring — not from deploying less often.
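Some of these metrics fall out of data you already have. Change failure rate, for instance, is just failed deployments over total deployments — a sketch with hypothetical counts:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Change failure rate as a percentage, given total and failed deployments.
change_failure_rate() {
  awk -v t="$1" -v f="$2" 'BEGIN { printf "%.1f", (f / t) * 100 }'
}

# 40 deployments this quarter, 6 needed a fix or rollback
change_failure_rate 40 6   # prints 15.0 -- the top of the elite band
echo ""
```

Deployment frequency is even simpler: count deployments per week from your deployment tool's history or your git tags.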


CI/CD for Small Teams

Most CI/CD content assumes you have a dedicated platform team, a Kubernetes cluster, and a budget for enterprise tools. If that is not you, here is a pragmatic approach.

Start With Two Tools

GitHub Actions for CI handles building, testing, and code quality checks. The free tier includes 2,000 minutes per month for private repositories — enough for most small teams. GitLab CI/CD and Bitbucket Pipelines offer similar free tiers.

DeployHQ for CD handles getting your code onto servers. It connects to your GitHub repository, runs build commands on deployment, and transfers files to your servers over SSH, SFTP, or to cloud storage. The separation between CI and CD tools means you can swap either one independently.

For a comparison of deployment tools available, the best software deployment tools guide covers the landscape.

What You Do Not Need (Yet)

  • Jenkins: Powerful but requires its own server, maintenance, and plugin management. Overkill for teams under 10.
  • ArgoCD/Flux: GitOps controllers for Kubernetes. If you are not running Kubernetes, skip these entirely.
  • Custom deployment scripts: They work until they do not. A managed tool like DeployHQ eliminates an entire class of "it works on my machine" deployment bugs.
  • Microservices: A monolith with a clean CI/CD pipeline ships faster than microservices with manual deployments.

Cost Considerations

| Tool | Free tier | Paid from |
| --- | --- | --- |
| GitHub Actions | 2,000 min/month (private) | $0.008/min overage |
| GitLab CI/CD | 400 min/month | $10/month for 10,000 min |
| DeployHQ | 1 project, 5 deploys/day | From $4/month |
| Bitbucket Pipelines | 50 min/month | $15/month for 2,500 min |

A small team can run a complete CI/CD pipeline for under $20/month. Start there. Add complexity — and cost — only when you have evidence that you need it.


Get Started

A good CI/CD pipeline does not have to be complicated. Start with automated tests on every push, a single staging environment, and a reliable deployment tool. Add deployment strategies, security scanning, and performance optimization as your team and application grow.

Sign up for DeployHQ to handle the deployment side of your pipeline. Connect your repository, configure your server, and deploy with confidence.


If you have questions about setting up your pipeline or need help with your deployment configuration, reach out to us at support@deployhq.com or find us on Twitter at @deployhq.