Manage your deployments using natural language with Claude Code, OpenAI Codex, Gemini CLI, or any other MCP-compatible AI assistant.
The way developers interact with their tools is changing. Instead of switching between dashboards, terminals, and documentation, what if you could simply ask your AI assistant to check a deployment status, debug a failed release, or push the latest changes to production?
Today, we're excited to announce the DeployHQ MCP Server — a new integration that brings the power of natural language to your deployment workflow. Whether you're using Claude Code, OpenAI Codex, Google's Gemini CLI, or any other MCP-compatible assistant, you can now manage your DeployHQ deployments without ever leaving your coding environment.
What is MCP?
The Model Context Protocol (MCP) is an open standard that allows AI assistants to interact with external tools and services in a structured, secure way. Think of it as a universal adapter that lets AI models understand and use your favourite developer tools.
Instead of copying and pasting between your AI assistant and various dashboards, MCP creates a direct bridge. Your AI can discover available tools, understand their parameters, and execute actions on your behalf — all through natural conversation.
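Under the hood, MCP clients and servers exchange JSON-RPC messages. As a rough illustration (exact payloads vary between clients and servers), the assistant first asks the server what it can do:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
The server replies with each tool's name, description, and input schema, which the model then uses to decide when and how to call a tool on your behalf.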
What Can You Do With the DeployHQ MCP Server?
The DeployHQ MCP Server exposes seven powerful tools that cover your entire deployment workflow (a sample tool call is sketched after the list):
- List Projects — View all projects in your DeployHQ account with their repository information and deployment status
- Get Project Details — Dive deep into a specific project's configuration
- List Servers — See all servers configured for any project
- List Deployments — Browse deployment history with pagination support
- Get Deployment Details — Examine specific deployment information
- Get Deployment Log — Retrieve complete logs for debugging failed deployments
- Create Deployment — Queue new deployments with full control over options like branch, build commands, and caching
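Behind each of these, the assistant issues an MCP tools/call request and the server translates it into the corresponding DeployHQ API call using your credentials. A rough sketch for fetching a deployment log might look like this (the tool and argument names are illustrative, not the server's exact identifiers):
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_deployment_log",
    "arguments": {
      "project": "my-website",
      "deployment_id": "your-deployment-id"
    }
  }
}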
Real-World Use Cases
Debugging a Failed Deployment
Instead of clicking through the dashboard to find logs, simply ask:
"Why did the last deployment fail for my-website?"
Your AI assistant will automatically find the failed deployment, retrieve the logs, and analyse what went wrong — often suggesting fixes before you even ask.
Quick Status Checks
During a busy day of shipping features, stay informed without context switching:
"What's the status of my latest deployment for api-backend?"
Deploying with Confidence
When you're ready to ship:
"Deploy the latest changes to production for my-website"
Your AI assistant will find the project, identify the production server, determine the correct revisions, and queue the deployment — then report back with the status.
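Behind that single request, the assistant chains several of the tools listed earlier: it looks up the project, lists its servers to find production, checks recent deployments for the starting revision, and then queues the deployment. The final call carries the options you would otherwise set in the dashboard; the argument names below are illustrative rather than the server's exact schema:
{
  "name": "create_deployment",
  "arguments": {
    "project": "my-website",
    "server": "production",
    "branch": "main",
    "run_build_commands": true,
    "use_cache": true
  }
}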
Monitoring Multiple Projects
For agencies or teams managing multiple sites:
"List all my DeployHQ projects and show which ones have pending deployments"
Setting Up with Claude Code
Claude Code is Anthropic's command-line tool for agentic coding that runs directly in your terminal. Here's how to connect it to DeployHQ:
- Ensure you have Node.js 18 or higher installed
- Get your DeployHQ credentials: your login email, API key (found in Settings → Security), and account name from your DeployHQ URL
- Add the following to your .claude.json file in your project directory:
{
  "mcpServers": {
    "deployhq": {
      "command": "npx",
      "args": ["-y", "deployhq-mcp-server"],
      "env": {
        "DEPLOYHQ_EMAIL": "your-email@example.com",
        "DEPLOYHQ_API_KEY": "your-api-key",
        "DEPLOYHQ_ACCOUNT": "your-account-name"
      }
    }
  }
}
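If you would rather not edit the file by hand, recent releases of Claude Code can also register the server from the terminal with claude mcp add (flag names can vary slightly between versions):
claude mcp add deployhq \
  -e DEPLOYHQ_EMAIL=your-email@example.com \
  -e DEPLOYHQ_API_KEY=your-api-key \
  -e DEPLOYHQ_ACCOUNT=your-account-name \
  -- npx -y deployhq-mcp-server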
Now you can ask Claude Code to manage your deployments while you're coding. Spotted a bug in production? Ask Claude to check the deployment logs. Ready to ship a fix? Ask it to deploy — all without leaving your terminal.
Setting Up with OpenAI Codex
OpenAI's Codex CLI is a lightweight coding agent that runs locally in your terminal, and it has full support for MCP servers. To connect DeployHQ:
- Install Codex CLI if you haven't already: npm i -g @openai/codex
- Add the MCP server using the Codex CLI command:
codex mcp add deployhq \
--env DEPLOYHQ_EMAIL=your-email@example.com \
--env DEPLOYHQ_API_KEY=your-api-key \
--env DEPLOYHQ_ACCOUNT=your-account-name \
-- npx -y deployhq-mcp-server
Alternatively, you can manually edit your ~/.codex/config.toml:
[mcp_servers.deployhq]
command = "npx"
args = ["-y", "deployhq-mcp-server"]
[mcp_servers.deployhq.env]
DEPLOYHQ_EMAIL = "your-email@example.com"
DEPLOYHQ_API_KEY = "your-api-key"
DEPLOYHQ_ACCOUNT = "your-account-name"
Once configured, launch Codex and use /mcp to verify your DeployHQ server is connected. Then simply ask Codex to manage your deployments as part of your coding workflow.
Setting Up with Gemini CLI
Google's Gemini CLI is an open-source AI agent that brings Gemini directly into your terminal, with built-in MCP support. Here's how to connect DeployHQ:
- Install Gemini CLI: npm install -g @google/gemini-cli
- Configure the MCP server in ~/.gemini/settings.json:
{
  "mcpServers": {
    "deployhq": {
      "command": "npx",
      "args": ["-y", "deployhq-mcp-server"],
      "env": {
        "DEPLOYHQ_EMAIL": "your-email@example.com",
        "DEPLOYHQ_API_KEY": "your-api-key",
        "DEPLOYHQ_ACCOUNT": "your-account-name"
      }
    }
  }
}
Launch Gemini CLI and use /mcp to verify the connection. You can then interact with DeployHQ using natural language, leveraging Gemini 2.5 Pro's powerful reasoning capabilities to debug deployments, analyse logs, and manage your infrastructure.
Security First
We've designed the DeployHQ MCP Server with security as a priority:
- Local execution — The server runs on your machine; credentials never leave your environment
- Environment variables — Credentials are passed securely and never written to disk
- HTTPS only — All API communication uses encrypted connections
- No telemetry — We don't collect any usage data or analytics
Your API key has the same permissions as your DeployHQ user account, so you maintain full control over what the AI assistant can access.
Managing Multiple Accounts
Working with multiple DeployHQ accounts? Perhaps separate staging and production environments? Simply configure multiple servers:
{
  "mcpServers": {
    "deployhq-production": {
      "command": "npx",
      "args": ["-y", "deployhq-mcp-server"],
      "env": {
        "DEPLOYHQ_EMAIL": "prod@example.com",
        "DEPLOYHQ_API_KEY": "prod-api-key",
        "DEPLOYHQ_ACCOUNT": "production-account"
      }
    },
    "deployhq-staging": {
      "command": "npx",
      "args": ["-y", "deployhq-mcp-server"],
      "env": {
        "DEPLOYHQ_EMAIL": "staging@example.com",
        "DEPLOYHQ_API_KEY": "staging-api-key",
        "DEPLOYHQ_ACCOUNT": "staging-account"
      }
    }
  }
}
Getting Started
Ready to bring AI-powered deployment management to your workflow?
- Get your credentials — Find your API key in DeployHQ under Settings → Security
- Choose your AI assistant — Claude Code, Codex CLI, Gemini CLI, or any MCP-compatible tool
- Configure the server — Follow the setup instructions above
- Start deploying — Ask your assistant to list your projects and go from there
The DeployHQ MCP Server is available now via npx, so there's nothing to install permanently — it downloads and runs the latest version automatically.
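If you would rather not pick up new releases automatically, you can pin the server to a specific version by adding an npm version specifier in any of the configs above (the version number here is only a placeholder):
"args": ["-y", "deployhq-mcp-server@1.0.0"]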
For detailed documentation, troubleshooting guides, and the full API reference, visit our MCP Server support page.
The future of deployment management is conversational. Try the DeployHQ MCP Server today and experience the difference.