If you followed our guide to deploying OpenClaw on a VPS, you've got a self-hosted AI assistant running on infrastructure you control. Out of the box it can chat, browse the web, run terminal commands, and remember context across sessions. Skills are what turn it from a capable chatbot into an automation engine that handles the repetitive parts of your workflow without being asked twice.
This post covers what Skills are, how to find and install them, a tour of the most useful directories, and how to build one from scratch to automate a real task. We'll also walk through troubleshooting common issues and best practices for writing skills that work reliably across environments.
What Are Skills?
A Skill is a directory containing a SKILL.md file — a Markdown document with YAML frontmatter — that teaches OpenClaw a repeatable procedure. When OpenClaw starts, it reads all eligible skills and injects compressed descriptions of them into its system prompt. The agent then knows which skills are available and invokes them automatically when a user request matches.
The simplest skill is just a natural-language runbook:
skills/
└── my-skill/
└── SKILL.md
More advanced skills can include supporting scripts and configuration, but the SKILL.md is always the entry point. Unlike traditional plugins, which require the host application to define an API surface, Skills work by giving the LLM instructions — the model figures out how to use them in context.
Three locations are checked when loading skills, in precedence order:
<workspace>/skills— workspace-specific, highest priority~/.openclaw/skills— user-level, shared across all agents on the machine- Built-in bundled skills — shipped with the install, lowest priority
This means you can override a bundled skill just by dropping a folder with the same name into your workspace directory.
flowchart LR
A["Workspace Skills\n(./skills)"] -->|highest priority| D["OpenClaw\nSystem Prompt"]
B["User-Level Skills\n(~/.openclaw/skills)"] -->|medium priority| D
C["Bundled Skills\n(built-in)"] -->|lowest priority| D
D --> E["Agent Ready\n— skills injected"]
Finding Skills: The ClawHub Registry
ClawHub is the official public skills registry, hosting over 13,700 community-built skills. Use the CLI to search and install:
# Search by keyword
clawhub search "github pull request"
# Install a skill
clawhub install pr-summary
# Update all installed skills
clawhub update --all
By default clawhub install places the skill in ./skills under your current directory. To install globally (visible to all your OpenClaw workspaces):
clawhub install pr-summary --global
The community has also published a curated list — awesome-openclaw-skills — which filters the registry down to 5,400+ vetted skills organised into categories. It's the faster way to browse if you don't already know what you're looking for.
Security note: Treat third-party skills as untrusted code. Read the
SKILL.mdbefore enabling a skill, especially if it requires API keys or system binaries. A skill that phones home with your environment variables is possible to write — the registry has moderation, but it's not a guarantee.
A Tour of the Skill Categories
Coding & Development (1,200+ skills) — the largest category by far. Highlights include skills for searching academic papers via OpenAlex, generating hash-chained audit logs for agent actions, and full GitHub workflow automation (open PRs, review diffs, triage issues). If your team uses GitHub-based deployment workflows, skills in this category can trigger deploys directly from a conversation — push a branch, open a PR, and let your CI pipeline handle the rest.
DevOps & Cloud (400+ skills) — Docker container management, process health monitoring, cloud provider CLIs. The agentic-devops skill handles common operations like restarting a crashed container or tailing logs from a service, triggered by a plain-language message. Teams running zero downtime deployments can pair these skills with health-check monitoring to automatically roll back a release if the new container fails readiness probes.
Browser & Automation (335 skills) — web scraping, UI testing, form automation. Useful for monitoring pages that don't expose APIs or automating login-gated workflows. A common pattern is combining a browser skill with a notification skill to alert you when a competitor changes their pricing page or when your own staging environment returns errors after a deploy.
Productivity & Tasks (200+ skills) — Notion CRUD, calendar management, email triage. The better-notion skill gives full create/read/update/delete access to Notion pages and databases from any messaging channel. Other standout skills in this category include time-tracking integrations that log hours against Jira tickets and daily digest generators that summarise activity across multiple project management tools.
Communication (149 skills) — Slack, Discord, Teams, and email integrations. Post to channels, summarise threads, draft replies. The Slack skills are particularly mature — you can configure them to post deployment summaries, standup reminders, or incident alerts directly into the relevant channel without leaving the OpenClaw conversation.
Git & GitHub (170 skills) — beyond the basics, there are skills for commit message generation, changelog drafting, and coordinating multi-repo releases. The release-coordinator skill is worth highlighting: it tags a release, generates a changelog from merged PRs, and posts the release notes to Slack in a single invocation.
Building Your Own Skill
The quickest way to understand Skills is to build one. Here's a practical example: a daily standup note generator that reads your recent git commits and drafts your standup update, so you never have to manually trawl through git log before your morning call.
Step 1 — Create the folder
mkdir -p ~/.openclaw/skills/standup-notes
Step 2 — Write the SKILL.md
nano ~/.openclaw/skills/standup-notes/SKILL.md
---
name: standup-notes
description: Generates a standup update from git commits in the last 24 hours across local repositories
user-invocable: true
metadata: {"openclaw":{"requires":{"bins":["git"]}}}
---
# Standup Notes Generator
When the user asks for standup notes, a standup summary, or "what did I work on yesterday", follow these steps:
## Steps
1. Ask the user which directory their repositories live in, or use `~/repos` as the default if they don't specify.
2. Find all git repositories in that directory:
```bash
find ~/repos -maxdepth 2 -name ".git" -type d | sed 's|/.git||'
For each repository found, get commits from the last 24 hours authored by the current git user:
git -C <repo-path> log --since="24 hours ago" --author="$(git config user.name)" --pretty=format:"%h %s" 2>/dev/nullCollect all results. Ignore repos with no recent commits.
Group the commits by repository name and draft a standup update in this format:
- Yesterday: bullet points summarising work done, one per meaningful commit group (collapse trivial commits like
fix typo
into their parent) - Today: leave as
TBD — please fill in
- Blockers: leave as
None
unless the user mentions something
- Yesterday: bullet points summarising work done, one per meaningful commit group (collapse trivial commits like
Keep the language concise and past-tense. Replace jargon-heavy commit messages with plain descriptions where possible.
Output the draft in a code block so it's easy to copy.
Example trigger phrases
Give me my standup notes
What did I work on yesterday?
Draft my standup for today
Standup summary
```
Step 3 — Test it
Restart the OpenClaw gateway to pick up the new skill:
openclaw gateway restart
Then send your bot a message:
Give me my standup notes
OpenClaw will run the git commands, collect the output, and return a formatted standup summary. Because the instructions are plain English, you can iterate on the output by replying — make it shorter
or use first person
— without editing the skill.
Step 4 — Extend it with an automatic post
Once the notes look right, extend the skill to post them directly to your team's Slack channel. Add a section to the SKILL.md:
## Posting to Slack (optional)
If the user asks to "post" or "send" the standup, use the Slack skill (if installed) to post the formatted message to the `#standup` channel. If the Slack skill is not available, output the message and tell the user to copy it manually.
Now the full workflow — generate, review, post — happens in one conversation turn.
Skill SKILL.md Reference
Here are the frontmatter fields you'll use most often:
| Field | Type | Description |
|---|---|---|
name |
string | Unique identifier (used by clawhub) |
description |
string | One-line summary injected into the system prompt |
user-invocable |
boolean | Exposes the skill as a /skill-name slash command (default: true) |
disable-model-invocation |
boolean | Excludes the skill from model prompts — use for slash-command-only tools |
metadata |
JSON (single line) | Gating, emoji, OS requirements, installer specs |
The metadata.openclaw.requires object gates the skill so it only loads when its dependencies are met:
---
name: my-tool
description: Does something with jq and an API key
metadata: {"openclaw":{"requires":{"bins":["jq","curl"],"env":["MY_API_KEY"]}}}
---
If jq isn't on PATH or MY_API_KEY isn't set, OpenClaw skips the skill entirely — it won't show up in the agent's prompt or the UI.
Tips for Writing Effective Skills
Writing a skill that works on your machine is straightforward. Writing one that works reliably across environments and for other users takes a bit more discipline.
Be explicit about prerequisites. Always declare required binaries and environment variables in the metadata.openclaw.requires block. A skill that silently fails because jq isn't installed wastes debugging time. If your skill needs a specific version of a tool, mention it in the instructions — the requires block only checks for presence, not version.
Write instructions for the model, not for humans. The LLM reads your SKILL.md, so phrase steps as direct commands rather than explanations. Run `docker ps` and parse the output
is better than You might want to check which containers are running.
Ambiguity in instructions leads to inconsistent behaviour across different models and context windows.
Keep descriptions short and specific. The description field is injected into the system prompt on every conversation. A verbose description eats tokens that could be used for reasoning. Aim for 10-15 words that clearly state what the skill does and when to use it.
Handle failure gracefully. Include instructions for what the agent should do when a command returns an error or produces unexpected output. If the API returns a 401, tell the user their token may have expired
prevents the agent from guessing or hallucinating a recovery path.
Test with a clean environment. Before publishing, test your skill on a machine that doesn't have your personal dotfiles, aliases, or globally installed packages. What works in your shell might not work on a fresh Ubuntu server where the only tools available are what's declared in the skill's requirements.
Scope each skill to one task. A skill that tries to do everything — search, filter, format, post, and archive — is harder to maintain and more likely to confuse the model. Break complex workflows into smaller skills that compose together. The standup-notes example above delegates posting to the Slack skill rather than reimplementing Slack integration.
Troubleshooting Common Issues
Skill not appearing in the agent's prompt. The most common cause is a missing or malformed metadata.openclaw.requires block. If the skill requires a binary that isn't installed or an environment variable that isn't set, OpenClaw silently skips it. Run openclaw skills list to see which skills loaded and which were skipped, along with the reason.
Skill loads but doesn't trigger. Check the description field — if it's too vague, the model may not recognise when to invoke it. A description like Useful helper tool
gives the LLM almost nothing to work with. Rewrite it to match the kind of phrases users actually say: Generates daily standup notes from recent git commits.
Permission denied errors. Skills that run shell commands inherit the permissions of the OpenClaw process. If you're running OpenClaw as a non-root user (which you should be), commands like systemctl restart nginx will fail. Either configure passwordless sudo for specific commands, or have the skill check permissions first and return a clear message instead of a raw error trace.
Dependency not found on remote servers. A skill that works locally may fail on your VPS because a binary (e.g., jq, gh, docker) isn't installed there. Add all system-level dependencies to your server provisioning script or Dockerfile. Declaring them in the skill's requires.bins block prevents the skill from loading at all, which is better than it loading and then crashing mid-execution.
Conflicting skills. If two skills have overlapping trigger patterns, the model may pick the wrong one. The easiest fix is to make the description fields more distinct. If both skills live in the same precedence tier (e.g., both in ~/.openclaw/skills), the one loaded first alphabetically wins — rename the folder to control order if needed.
Changes not taking effect after editing. OpenClaw reads skills at gateway startup. After editing a SKILL.md, you need to run openclaw gateway restart for the changes to take effect. Hot-reloading is on the roadmap but not yet implemented.
Publishing to ClawHub
When your skill works reliably and you want to share it:
# From the skills directory
clawhub publish ./standup-notes --slug standup-notes --version 1.0.0
Publishing requires a GitHub account at least a week old (abuse prevention). The registry runs automated checks for obviously malicious patterns, but passes it to you to write a clear README and document what environment variables the skill uses and why.
For updates, bump the version and republish:
clawhub publish ./standup-notes --slug standup-notes --version 1.1.0
Keeping Skills in Sync Across Servers with DeployHQ
If you're running OpenClaw on a VPS — or across multiple servers — keeping skills in sync manually is error-prone. The same Git-based approach from the VPS deployment guide applies here: store your custom skills in a repository and let an automated deployment platform push changes to every server automatically.
my-openclaw-config/
├── config.json
└── skills/
├── standup-notes/
│ └── SKILL.md
└── deploy-digest/
└── SKILL.md
Set up Git deployment automation so that every push to your main branch triggers a deployment. Add a post-deployment command using build pipelines to copy the skills into place and restart the gateway:
rsync -av skills/ ~/.openclaw/skills/
openclaw gateway restart
Every time you add or update a skill and push to your repository, DeployHQ deploys it to all connected servers — no SSH sessions required. If a skill breaks something, use one-click rollback to revert to the previous working version from the dashboard.
For teams with larger budgets or multiple environments (staging, production), DeployHQ's pricing plans include multiple server targets per project — deploy to staging first, verify the skill works, then promote to production.
The Skills ecosystem is what makes OpenClaw genuinely composable. Start with a few skills from ClawHub, modify them to match your workflow, and build custom ones for the tasks that are specific to your team. The bar to entry is low — if you can write a clear step-by-step process in plain English, you can write a skill.
Sign up for DeployHQ to automate skill deployments across your VPS fleet, or drop us a line at support@deployhq.com if you have questions. Find us on Twitter at @deployhq.