Using DeployHQ's API: Automating Your Deployment Workflows With Scripts and Webhooks

DevOps & Infrastructure and Tips & Tricks

While DeployHQ's cloud-based deployment platform is the best choice for straightforward and error-free workflows, more complex applications and projects sometimes call for more complex solutions. With the DeployHQ API, you can integrate DeployHQ into a larger, automated DevOps pipeline or build fully automated workflows.

In this article, we will explore how this API can transform DeployHQ from a standalone deployment tool into a fully integrated part of your ecosystem. We will show you how to create complex scripts and supplement them with webhooks. Read on to learn all the details and streamline deployment processes across your apps.

Understanding the DeployHQ API

DeployHQ dashboard vs. DeployHQ API: Which is best for your project?

DeployHQ offers a set of tools to comfortably handle your deployments from the moment you push your code until release. You can deploy directly from your Git repository, set specific and multiple deployment targets, create ad-hoc templates, build deployment pipelines, and automate your deployments in minutes.

When working on simpler, straightforward applications, the DeployHQ dashboard might be your best choice. Every feature is contained and manageable within an intuitive UI. However, when working on more complex, layered projects, such as multi-service web applications or microservices architectures, DeployHQ supports your deployment by letting you hook directly into the platform via its API.

In short, the DeployHQ API gives you programmatic control over your deployments. Using simple JSON over HTTP with API key authentication, you can trigger deployments, manage configurations, integrate with third-party tools like Slack or Jira, and automate pipelines: essentially everything you can do from the dashboard. If your project comes with a certain degree of complexity, DeployHQ can become a flexible piece of your larger, scalable workflow.

Key benefits of API integration

Before diving into implementation, it's worth understanding why API integration can be a game-changer for your deployment process:

Scalability: As your application grows, manual deployments become bottlenecks. The API lets you scale your deployment processes alongside your application architecture.

Consistency: Programmatic deployments eliminate human error and ensure that every deployment follows the same process, reducing the risk of configuration drift.

Integration flexibility: The API plays nicely with your existing tools, whether that's CI/CD platforms like Jenkins or GitHub Actions, monitoring solutions, or project management tools.

Audit trails: API calls can be logged and tracked, giving you better visibility into who deployed what and when.

Core capabilities of the DeployHQ API integration

Setting up is easy. Here are the elements to keep in mind.

Authentication and basic request format

DeployHQ users can find their API key on the dashboard (go to "Settings", then "Security". Under "API Credentials", select "Create API Key"). To authenticate, simply use the API key in combination with your username. Here is a simple example request for when you want to see all your projects:

curl -H "Content-type: application/json" \  
-H "Accept: application/json" \   
--user username@email.com:YourAPIKey \  
https://username.deployhq.com/projects/  

The result should be something like this:

Basic DeployHQ API call

Security best practices

When working with API keys, keep these security considerations in mind:

  • Never commit API keys to your repository. Use environment variables or secure secret management systems instead (see the sketch after this list).
  • Rotate your API keys regularly, especially if team members change or if you suspect a key might be compromised.
  • Use different API keys for different environments (development, staging, production) to limit the blast radius if a key is compromised.
  • Monitor API usage through DeployHQ's logging to detect any unusual activity.
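
As a concrete example of the first point, here is a minimal sketch of keeping credentials out of your code (DEPLOYHQ_EMAIL and DEPLOYHQ_API_KEY are environment variable names chosen for illustration):

import os

import requests
from requests.auth import HTTPBasicAuth

# Read credentials from the environment rather than hard-coding them.
auth = HTTPBasicAuth(
    os.environ["DEPLOYHQ_EMAIL"],
    os.environ["DEPLOYHQ_API_KEY"]
)

response = requests.get(
    "https://username.deployhq.com/projects/",
    auth=auth,
    headers={"Accept": "application/json"}
)
print(response.json())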

Rate limits and error handling

The DeployHQ API implements reasonable rate limits to ensure service stability. While the exact limits aren't publicly documented, it's good practice to:

  • Implement exponential backoff in your scripts when you receive rate limit responses (HTTP 429).
  • Handle common HTTP status codes appropriately:
    • 401: Invalid authentication
    • 404: Resource not found
    • 422: Validation errors
    • 500: Server errors

Here's a quick example of robust error handling in your API calls:

import time
import requests
from requests.auth import HTTPBasicAuth

def make_api_request(url, auth, payload=None, max_retries=3):
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json"
    }
    for attempt in range(max_retries):
        try:
            if payload:
                response = requests.post(url, json=payload, auth=auth, headers=headers)
            else:
                response = requests.get(url, auth=auth, headers=headers)

            if response.status_code == 429:  # Rate limited
                wait_time = 2 ** attempt  # Exponential backoff
                time.sleep(wait_time)
                continue

            response.raise_for_status()  # Raises exception for bad status codes
            return response.json()

        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:  # Last attempt
                raise e
            time.sleep(2 ** attempt)

    return None

API endpoints and core capabilities

The DeployHQ API has a number of endpoints that allow you full access to your deployments, including for projects, servers, server groups, templates, config files, and more. You can find a complete list of endpoints in the docs.

Through these endpoints, developers can trigger and schedule deployments, manage servers and server groups, work with templates and config files, and a lot more. Check out the API docs.
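
As one example, the deployment payloads later in this article need a server's parent_identifier. Here is a hedged sketch of looking it up via the servers endpoint (the path follows the pattern used elsewhere in this article; the exact endpoint and response shape may differ, so confirm against the docs):

import requests
from requests.auth import HTTPBasicAuth

account = "username"
project = "my-project"
auth = HTTPBasicAuth("username@email.com", "YourAPIKey")

# List the servers configured for a project and print each entry;
# the identifier shown is what the deployment payloads below expect
# as "parent_identifier". Response shape assumed, check the docs.
url = f"https://{account}.deployhq.com/projects/{project}/servers"
response = requests.get(url, auth=auth, headers={"Accept": "application/json"})

for server in response.json():
    print(server)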

DeployHQ API in action: custom deployment scripts

If you want even more tailored automations, scripts extend the API functionalities to give you total control.

Use case: schedule a deployment

Let's see a real custom deployment script in action. If you wanted to schedule a deployment, you could use a Python script such as this:

import requests  
from requests.auth import HTTPBasicAuth

account = "username"   
project = "my-project"  

url = f"https://{account}.deployhq.com/projects/{project}/deployments/"

auth = HTTPBasicAuth(  
    "username@email.com",  
    "YourAPIKey"  
)

# JSON payload for a weekly deployment every Monday at 9:00  
payload = {  
    "deployment": {  
        "parent_identifier": "xxxxxxxxxxxxxxxxxxxx",  # you can find this under https://username.deployhq.com/projects/my-project/servers  
        "start_revision" : "e84b5937f1132932dd56026db26a76f406555c19",  
        "end_revision" : "e84b5937f1132932dd56026db26a76f406555c19",  
        "mode": "queue",  
        "copy_config_files": 1,  
        "email_notify": 1  
    },  
    "schedule": {  
        "frequency": "weekly",  
        "weekly": {  
            "weekday": "Monday",  
            "hour": "9",  
            "minute": "0"  
        }  
    }  
}

headers = {  
    "Accept": "application/json",  
    "Content-Type": "application/json"  
}

response = requests.post(url, json=payload, auth=auth, headers=headers)  
print(response.json())

And you'd get something like this:

Script to schedule a deployment - response

Use case: automated rollback on deployment failure

Building resilient deployment pipelines means preparing for when things go wrong. Here's a script that monitors deployment status and automatically triggers a rollback to the last successful deployment if the current one fails:

import requests
import time
from requests.auth import HTTPBasicAuth

def monitor_and_rollback(account, project, deployment_id, auth):
    """Monitor a deployment and rollback if it fails"""

    # Check deployment status
    status_url = f"https://{account}.deployhq.com/projects/{project}/deployments/{deployment_id}"

    while True:
        response = requests.get(status_url, auth=auth)
        deployment = response.json()

        status = deployment['status']

        if status == 'completed':
            print("Deployment completed successfully!")
            break
        elif status == 'failed':
            print("Deployment failed! Initiating rollback...")

            # Get the last successful deployment
            deployments_url = f"https://{account}.deployhq.com/projects/{project}/deployments"
            response = requests.get(deployments_url, auth=auth)
            deployments = response.json()

            # Find last successful deployment
            last_successful = None
            for dep in deployments:
                if dep['status'] == 'completed' and dep['id'] != deployment_id:
                    last_successful = dep
                    break

            if last_successful:
                # Trigger rollback
                rollback_payload = {
                    "deployment": {
                        "parent_identifier": deployment['parent_identifier'],
                        "start_revision": last_successful['end_revision'],
                        "end_revision": last_successful['end_revision'],
                        "mode": "queue"
                    }
                }

                rollback_url = f"https://{account}.deployhq.com/projects/{project}/deployments/"
                requests.post(rollback_url, json=rollback_payload, auth=auth)
                print(f"Rollback initiated to revision {last_successful['end_revision']}")

            break

        time.sleep(30)  # Check every 30 seconds
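
Calling it is straightforward; here is a minimal usage sketch with placeholder credentials and a placeholder deployment ID:

from requests.auth import HTTPBasicAuth

# Placeholder values for illustration only.
auth = HTTPBasicAuth("username@email.com", "YourAPIKey")

monitor_and_rollback(
    account="username",
    project="my-project",
    deployment_id="xxxxxxxxxxxxxxxxxxxx",
    auth=auth
)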

Event-driven automation: Webhooks

Let's take a look at webhooks. With them, you can receive real-time HTTP POST payloads whenever a deployment event is detected. Use cases for automated webhooks could be:

  • Notifying you via an integrated messaging app (Slack, for example) when a deployment starts or fails.
  • Triggering a rollback script automatically on failure.
  • Updating error monitoring dashboards (such as Sentry).

While scripts are usually executed by the developer, webhooks are set up within DeployHQ to listen for specific events (like code pushes). DeployHQ notifies your system to perform specified actions by sending a request to the configured URL whenever an event occurs.

When you combine both approaches, you can create a responsive and automated deployment pipeline. For instance, a webhook can notify your system of a failed deployment, prompting a custom script to execute a rollback or alert the team via Slack.

Understanding webhook payloads

When DeployHQ sends a webhook, it includes detailed information about the event. Here's what a typical deployment webhook payload looks like:

{
  "event": "deployment.completed",
  "deployment": {
    "id": "12345",
    "status": "completed",
    "start_revision": "abc123",
    "end_revision": "def456",
    "created_at": "2025-09-26T10:00:00Z",
    "completed_at": "2025-09-26T10:05:00Z",
    "server": {
      "name": "Production",
      "hostname": "prod.example.com"
    },
    "project": {
      "name": "My Project",
      "repository": "git@github.com:user/repo.git"
    }
  }
}

This rich payload allows your webhook handlers to make intelligent decisions about what actions to take.

Building a webhook receiver

If you're building custom webhook handlers, here's a simple example of a webhook receiver that could handle DeployHQ events:

from flask import Flask, request, jsonify
import hmac
import hashlib

app = Flask(__name__)
WEBHOOK_SECRET = "your_webhook_secret_here"

@app.route('/webhook/deployhq', methods=['POST'])
def handle_webhook():
    # Verify webhook signature (if configured)
    signature = request.headers.get('X-Webhook-Signature')
    if signature and not verify_signature(request.data, signature):
        return jsonify({'error': 'Invalid signature'}), 401

    event_data = request.get_json()
    event_type = event_data.get('event')

    if event_type == 'deployment.completed':
        handle_successful_deployment(event_data)
    elif event_type == 'deployment.failed':
        handle_failed_deployment(event_data)
    elif event_type == 'deployment.started':
        handle_deployment_started(event_data)

    return jsonify({'status': 'received'}), 200

def verify_signature(payload, signature):
    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

def handle_successful_deployment(data):
    # Your custom logic here
    deployment = data['deployment']
    print(f"Deployment {deployment['id']} completed successfully!")

def handle_failed_deployment(data):
    # Your custom logic here
    deployment = data['deployment']
    print(f"Deployment {deployment['id']} failed!")
    # Could trigger alerts, rollbacks, etc.

def handle_deployment_started(data):
    # Your custom logic here
    deployment = data['deployment']
    print(f"Deployment {deployment['id']} started.")

if __name__ == '__main__':
    # Run the receiver locally for testing
    app.run(port=5000)

Use case: get a notification on Slack when a deployment finishes

With the previous script, we have scheduled a deployment to run every Monday morning at 9 AM. To get notified of a completed deployment over Slack, you can do the following:

  1. Go to your project
  2. Inside your project, in the left menu, select "Integrations" and select "New Integration"
  3. Give it a name and select Slack
  4. Complete the setup according to your preference. Under "Trigger integration when…", make sure to toggle "A deployment finishes successfully?"
  5. Allow DeployHQ to message you in your dev channel or even your own

Now, whenever your scheduled deployment goes through, you will get a nifty message:

DeployHQ Slack integration message

Advanced integration patterns

CI/CD pipeline integration

One of the most powerful uses of the DeployHQ API is integrating it into your existing CI/CD pipeline. Here's how you might use it with GitHub Actions:

name: Deploy to Production
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger DeployHQ deployment
        run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Accept: application/json" \
            --user ${{ secrets.DEPLOYHQ_EMAIL }}:${{ secrets.DEPLOYHQ_API_KEY }} \
            -d '{
              "deployment": {
                "parent_identifier": "${{ secrets.DEPLOYHQ_SERVER_ID }}",
                "start_revision": "${{ github.sha }}",
                "end_revision": "${{ github.sha }}",
                "mode": "queue"
              }
            }' \
            https://${{ secrets.DEPLOYHQ_ACCOUNT }}.deployhq.com/projects/${{ secrets.DEPLOYHQ_PROJECT }}/deployments/

Multi-environment deployment orchestration

For complex applications, you might need to deploy to multiple environments in sequence. Here's a script that handles staging-to-production promotion:

import time

import requests

def wait_for_deployment(deployment_id, auth, account, project, poll_interval=30):
    """Poll a deployment until it completes or fails (same pattern as monitor_and_rollback above)"""
    status_url = f"https://{account}.deployhq.com/projects/{project}/deployments/{deployment_id}"
    while True:
        deployment = requests.get(status_url, auth=auth).json()
        if deployment['status'] in ('completed', 'failed'):
            return deployment['status']
        time.sleep(poll_interval)

def deploy_to_environments(environments, auth, account, project, revision):
    """Deploy to multiple environments in sequence"""

    for env in environments:
        print(f"Deploying to {env['name']}...")

        payload = {
            "deployment": {
                "parent_identifier": env['server_id'],
                "start_revision": revision,
                "end_revision": revision,
                "mode": "queue"
            }
        }

        url = f"https://{account}.deployhq.com/projects/{project}/deployments/"
        response = requests.post(url, json=payload, auth=auth)
        deployment = response.json()

        # Wait for deployment to complete before moving to next environment
        wait_for_deployment(deployment['id'], auth, account, project)
        print(f"Successfully deployed to {env['name']}")

# Usage
environments = [
    {"name": "Staging", "server_id": "staging_server_id"},
    {"name": "Production", "server_id": "production_server_id"}
]
deploy_to_environments(environments, auth, account, project, latest_commit_sha)

Troubleshooting common issues

Debug mode and logging

When developing your API integrations, enable verbose logging to troubleshoot issues:

import logging

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)

# Your API calls will now show detailed request/response information

Common API response codes and solutions

422 Unprocessable Entity: Usually indicates validation errors in your payload. Check that all required fields are present and properly formatted.

404 Not Found: The resource (project, server, deployment) doesn't exist or you don't have permission to access it.

401 Unauthorized: Check your API credentials and ensure the API key is active.
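
Whichever code you hit, the response body often carries more detail about what went wrong, so it's worth printing it during development. A small sketch with placeholder credentials:

import requests
from requests.auth import HTTPBasicAuth

# Placeholder credentials for illustration.
auth = HTTPBasicAuth("username@email.com", "YourAPIKey")

response = requests.get(
    "https://username.deployhq.com/projects/",
    auth=auth,
    headers={"Accept": "application/json"}
)

if not response.ok:
    # Surface the status code and raw body instead of failing silently.
    print(f"Request failed with {response.status_code}: {response.text}")
else:
    print(response.json())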

Testing your integrations

Before deploying to production, test your API integrations thoroughly:

  1. Use a dedicated testing project in DeployHQ to avoid disrupting live deployments
  2. Test error scenarios by intentionally triggering failures
  3. Validate webhook payloads with sample data before going live (see the sketch after this list)
  4. Monitor API rate limits during high-volume operations
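
For the third point, here is a minimal sketch for exercising the Flask receiver from earlier with the sample "deployment.completed" payload (it assumes the receiver is running locally on port 5000; trim or extend the payload to match the events you actually handle):

import requests

# Sample event mirroring the "deployment.completed" payload shown earlier.
sample_event = {
    "event": "deployment.completed",
    "deployment": {
        "id": "12345",
        "status": "completed",
        "start_revision": "abc123",
        "end_revision": "def456"
    }
}

# Assumes the webhook receiver from the previous section is running locally.
response = requests.post("http://localhost:5000/webhook/deployhq", json=sample_event)
print(response.status_code, response.json())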

Conclusion

With the DeployHQ API, your deployment workflow doesn't have to stay manual or fragmented. You can schedule deployments, manage servers, create custom scripts, integrate with tools like Slack or Jira, and much more, all programmatically. Pair that with event-driven webhooks, and you've got a hands-off pipeline that keeps your projects on schedule and keeps you on the pulse of your deployments.

The key to successful API integration is starting simple and gradually building complexity. Begin with basic deployment triggers, add error handling and monitoring, then expand to multi-environment orchestration and advanced automation patterns. With the foundation we've covered in this guide, you're well-equipped to build robust, scalable deployment pipelines that grow with your applications.

Find out more about DeployHQ's offer here.

FAQs

When should I use the DeployHQ dashboard versus the API?

For simpler apps, the dashboard is perfect: everything is in one intuitive place. For more complex projects, the API lets you integrate DeployHQ into a bigger pipeline, giving you full programmatic control.

What can I do with custom deployment scripts?

Scripts let you go beyond the dashboard. You can schedule deployments, manage servers, and automate tasks however you need, giving you full control over your workflow.

How do webhooks work in DeployHQ?

Webhooks listen for deployment events and notify your system when they are detected. You can use them to trigger scripts, send Slack messages, or update monitoring dashboards whenever a deployment starts, finishes, or fails.

How do I secure my API integrations?

Store API keys as environment variables or in secure secret management systems, never in code. Rotate keys regularly, use different keys for different environments, and implement proper error handling and rate limiting in your scripts.

Can I use the API with languages other than Python?

Absolutely! The DeployHQ API is a standard REST API that works with any language that can make HTTP requests. The examples in this guide use Python, but you can implement the same patterns in Ruby, JavaScript, Go, or any other language.

What should I do if my deployment fails?

Implement robust error handling in your scripts, set up webhook notifications for failures, and consider automated rollback mechanisms. The API provides detailed error information that can help you diagnose and fix issues quickly.

A little bit about the author

Marta | Software Developer | DeployHQ

As a software developer on the DeployHQ team, Marta contributes to building and improving features that help users deploy faster and more reliably. When she's not coding, she enjoys exploring new cuisines and discovering unique flavors.