Last updated on 26th February 2026

Deploy to Kubernetes with kubectl

This guide walks you through deploying to Kubernetes using DeployHQ's Custom Action protocol. Custom Actions run CLI tools inside Docker containers during your deployment pipeline, letting you apply Kubernetes manifests and manage rollouts directly from DeployHQ.

Prerequisites

Before you begin, ensure you have:

  • Beta Features Enabled: Custom Actions require beta features to be enabled in your account settings
  • A Repository with Kubernetes Manifests: Your deployment YAML files (Deployments, Services, ConfigMaps, etc.) stored in your repository
  • Kubeconfig Credentials: Access credentials for your Kubernetes cluster (from EKS, GKE, AKS, or a self-managed cluster)
  • A Container Image: Your application image should be pre-built and pushed to a container registry before deploying with kubectl

Setting Up the Server

  1. Open your project in DeployHQ and navigate to Servers.
  2. Click New Server and select Custom Action as the protocol.
  3. Under Image Source, select Curated Template.
  4. Choose Kubernetes from the template dropdown. This sets the Docker image to bitnami/kubectl:latest.
  5. Enter your deployment commands in the Commands field:
   kubectl apply -f /data/k8s/
   kubectl rollout status deployment/my-app -n production --timeout=300s
  6. Set Halt on error to stop the deployment if kubectl fails.
  7. Click Create Server.

Configuring Credentials

Kubectl needs a kubeconfig to connect to your cluster. There are two approaches for providing this in DeployHQ.

Option A: Kubeconfig via Environment Variable

  1. Base64-encode your kubeconfig file locally (-w 0 is a GNU coreutils flag; on macOS, use base64 < ~/.kube/config | tr -d '\n' instead):
   base64 -w 0 ~/.kube/config
  2. Add an environment variable to your DeployHQ server:

| Variable          | Value                          |
|-------------------|--------------------------------|
| KUBECONFIG_BASE64 | Your base64-encoded kubeconfig |

  3. Decode it in your commands before running kubectl:
   mkdir -p ~/.kube
   echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
   kubectl apply -f /data/k8s/
   kubectl rollout status deployment/my-app -n production --timeout=300s
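The decode step above must reproduce your kubeconfig byte for byte, or kubectl will fail with confusing parse errors. A quick local sanity check (a sketch; the sample file content is illustrative, and the encode pipeline avoids the GNU-only -w 0 flag so it also works on macOS/BSD):

```shell
# Create a stand-in kubeconfig (illustrative content only)
printf 'apiVersion: v1\nkind: Config\nclusters: []\n' > sample-kubeconfig

# Portable encode: base64 may wrap output in newlines, so strip them with tr
ENCODED=$(base64 < sample-kubeconfig | tr -d '\n')

# Decode exactly as the deployment commands do
echo "$ENCODED" | base64 -d > decoded-kubeconfig

# The round trip must be lossless
diff sample-kubeconfig decoded-kubeconfig && echo "round trip OK"
```

If the diff reports differences, the usual culprit is an editor or clipboard that added trailing whitespace to the encoded string.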

Option B: Kubeconfig via Configuration File

  1. In your DeployHQ project, add a configuration file with the path .kube/config containing your kubeconfig YAML.
  2. Enable Copy configuration files when creating your deployment.
  3. Set the KUBECONFIG environment variable to point to the config file:

| Variable   | Value              |
|------------|--------------------|
| KUBECONFIG | /data/.kube/config |

  4. Your commands can then use kubectl directly:
   kubectl apply -f /data/k8s/
   kubectl rollout status deployment/my-app -n production --timeout=300s
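For reference, the configuration file is ordinary kubeconfig YAML. A minimal sketch of its shape (the cluster name, server address, user name, and credential values are all placeholders you must replace with your own):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster                        # placeholder cluster name
    cluster:
      server: https://203.0.113.10:6443     # placeholder API server address
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: deploy-user                       # placeholder user name
    user:
      token: <service account token>
contexts:
  - name: deploy
    context:
      cluster: my-cluster
      user: deploy-user
current-context: deploy
```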

Cloud Provider Authentication

For managed Kubernetes services, you can authenticate directly with cloud provider credentials instead of a kubeconfig. Use a custom Docker image that includes both kubectl and the cloud provider CLI.

EKS (AWS): Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, then use aws eks update-kubeconfig.
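With those variables set on the server, the Commands field might read as follows (a sketch; the cluster name is a placeholder, and it assumes your Docker image bundles both the AWS CLI and kubectl):

```shell
# Write a kubeconfig for the cluster using the AWS credentials in the environment
aws eks update-kubeconfig --region "$AWS_DEFAULT_REGION" --name my-cluster

kubectl apply -f /data/k8s/
kubectl rollout status deployment/my-app -n production --timeout=300s
```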

GKE (Google Cloud): Set GOOGLE_CREDENTIALS with your service account JSON, then use gcloud container clusters get-credentials.
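A corresponding sketch for GKE (cluster name, zone, and project are placeholders; assumes an image containing the gcloud CLI, its GKE auth plugin, and kubectl):

```shell
# Write the service account JSON from the environment to a file and authenticate
echo "$GOOGLE_CREDENTIALS" > /tmp/sa.json
gcloud auth activate-service-account --key-file=/tmp/sa.json

# Fetch cluster credentials into the kubeconfig
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project

kubectl apply -f /data/k8s/
```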

AKS (Azure): Set ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID, and ARM_TENANT_ID, then use az aks get-credentials.
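And a sketch for AKS (resource group and cluster name are placeholders; assumes an image with the Azure CLI and kubectl):

```shell
# Sign in with the service principal credentials from the environment
az login --service-principal -u "$ARM_CLIENT_ID" -p "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"
az account set --subscription "$ARM_SUBSCRIPTION_ID"

# Merge the cluster's credentials into the kubeconfig
az aks get-credentials --resource-group my-resource-group --name my-cluster

kubectl apply -f /data/k8s/
```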

Example Commands

Apply All Manifests

Apply every manifest in a directory:

mkdir -p ~/.kube
echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
kubectl apply -f /data/k8s/

Apply and Wait for Rollout

Apply manifests and wait for the deployment to become ready:

mkdir -p ~/.kube
echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
kubectl apply -f /data/k8s/
kubectl rollout status deployment/my-app -n production --timeout=300s

Update Container Image

If your manifests reference a fixed image tag and you want to update it per deployment:

mkdir -p ~/.kube
echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
kubectl set image deployment/my-app my-app=myregistry/my-app:%revision% -n production
kubectl rollout status deployment/my-app -n production --timeout=300s

The %revision% text variable is replaced with the commit hash being deployed.

Apply with Kustomize

If you use Kustomize overlays for different environments:

mkdir -p ~/.kube
echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
kubectl apply -k /data/k8s/overlays/production/
kubectl rollout status deployment/my-app -n production --timeout=300s

Namespace Management

Deploying to a Specific Namespace

Include the namespace in your commands:

kubectl apply -f /data/k8s/ -n production

Or define the namespace in your manifest files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production

Dynamic Namespace Selection

Use DeployHQ text variables to deploy to different namespaces per server:

kubectl apply -f /data/k8s/ -n %server%
kubectl rollout status deployment/my-app -n %server% --timeout=300s

Troubleshooting

"error: the server doesn't have a resource type":

  • Ensure your kubeconfig points to the correct cluster
  • Verify the Kubernetes API server is accessible from the DeployHQ build server

"error: You must be logged in to the server (Unauthorized)":

  • Check that your kubeconfig credentials are valid and not expired
  • For cloud providers, verify the IAM or service account has the correct RBAC roles

"Unable to connect to the server":

  • The cluster API server must be accessible from the public internet for DeployHQ to reach it
  • If your cluster is private, consider using a bastion or VPN tunnel

"deployment has timed out":

  • Increase the --timeout value in the kubectl rollout status command
  • Check your pod events with kubectl describe pod to identify why pods are not becoming ready
  • Common causes include image pull errors, resource limits, and failing health checks
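When investigating a stalled rollout, a short diagnostic sequence along these lines can narrow down the cause (the label selector, deployment name, and namespace are illustrative):

```shell
# List the deployment's pods and show events for any that are not Ready
kubectl get pods -n production -l app=my-app
kubectl describe pod -n production -l app=my-app

# Check logs from the current container, and from the previous one if it crashed
kubectl logs deployment/my-app -n production
kubectl logs deployment/my-app -n production --previous
```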

"The Deployment does not have minimum availability":

  • Review pod logs and events to identify the root cause
  • Ensure your container image exists in the registry and is accessible
  • Check resource requests and limits match what is available in the cluster

Config file not found:

  • Verify the kubeconfig path matches where you wrote or mounted the file
  • When using configuration files, ensure Copy configuration files is enabled on the deployment