Amazon S3 is one of the most popular ways to host static websites, single-page applications, and frontend assets. But updating your S3 bucket manually — whether through the AWS Console, the CLI, or a drag-and-drop client — gets old fast. You have to remember which files changed, upload them in the right order, and hope you don't miss anything.
With DeployHQ, you can connect your Git repository directly to an S3 bucket and deploy automatically every time you push. Changed files are detected and uploaded for you — no manual work, no missed files.
This guide covers the full setup for GitHub, Bitbucket, and GitLab repositories, plus S3-compatible storage like Cloudflare R2, Wasabi, and DigitalOcean Spaces.
What you'll need
- A Git repository on GitHub, Bitbucket, or GitLab
- An Amazon S3 bucket (or S3-compatible storage)
- Your AWS Access Key ID and Secret Access Key
- A DeployHQ account (free plan: 1 project, up to 10 deploys/day)
Step 1: Create a new project in DeployHQ
After signing up, click the New Project button at the top of the screen.

Enter a name for your project and select your repository provider — GitHub, Bitbucket, or GitLab. DeployHQ also supports manual repository configuration if you host code elsewhere.
Step 2: Connect your repository
Click Create Project and you'll be prompted to log in to your repository provider and authorize DeployHQ. After that, you'll see a list of your repositories.

Select the repository you want to deploy from. DeployHQ will automatically add an SSH key for access. Keep the "add a webhook" option checked — this enables automatic deployments later.
Step 3: Configure your S3 bucket
After connecting your repository, you'll be taken to the New Server screen. Enter a name, then select Amazon S3 as the protocol.

Fill in the following details:
- Bucket name: The name of your S3 bucket (e.g., my-website-bucket)
- Access Key ID: Your AWS access key
- Secret Access Key: Your AWS secret key
- Region: The AWS region where your bucket lives (e.g., us-east-1)
- Path prefix: The directory within the bucket to deploy to (e.g., public/, or leave blank for the root)
Click Create Server to finish.
IAM permissions for your AWS keys
Your AWS access key needs at minimum these S3 permissions on the target bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```
We recommend creating a dedicated IAM user for DeployHQ rather than using your root account credentials. This limits the blast radius if the keys are ever compromised.
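If you manage several buckets or environments, it can help to generate this policy rather than hand-edit it. A minimal sketch in Python, assuming the same four actions shown above (the function name is illustrative, not part of any AWS SDK):

```python
import json

def minimal_deploy_policy(bucket: str) -> dict:
    """Build the minimal S3 policy DeployHQ needs for a single bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                # s3:ListBucket applies to the bucket ARN itself; the object
                # actions apply to the objects inside it, hence the /* variant.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(minimal_deploy_policy("my-website-bucket"), indent=2))
```

You can paste the printed JSON directly into the IAM policy editor when creating the dedicated user.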
Step 4: Run your first deployment
Click the Deploy button in the top right of your project to run your first deployment.

Review the details — the server and branch are auto-selected, and the start revision shows "The very first commit" since nothing has been deployed yet. Click Deploy to transfer all files to your S3 bucket.

Once complete, you can review the deployment log to see exactly which files were uploaded.
Step 5: Enable automatic deployments
Go to the Automatic Deployments page in the left sidebar and enable it for your server:

Now every git push will trigger a deployment automatically. Only changed files are uploaded — DeployHQ compares the commits and uploads just the diff.
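To see what "uploading just the diff" means in practice, here is an illustrative sketch that compares two snapshots of file hashes and classifies what would need uploading or deleting. DeployHQ actually diffs Git commits; this toy model (names and data are hypothetical) only demonstrates the idea:

```python
def changed_files(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Compare {path: content-hash} snapshots of two revisions."""
    return {
        # Upload anything new or whose content hash changed.
        "upload": sorted(p for p, h in new.items() if old.get(p) != h),
        # Delete anything that no longer exists in the new revision.
        "delete": sorted(p for p in old if p not in new),
    }

previous = {"index.html": "aaa", "app.js": "bbb"}
current = {"index.html": "aaa", "app.js": "ccc", "logo.svg": "ddd"}
print(changed_files(previous, current))  # only app.js and logo.svg are uploaded
```

Unchanged files (like index.html here) are skipped entirely, which is why incremental deploys are fast even on large sites.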

Using S3-compatible storage (Cloudflare R2, Wasabi, DigitalOcean Spaces)
DeployHQ supports any S3-compatible storage, not just AWS. When configuring your server, select Amazon S3 as the protocol but enter the custom endpoint for your provider:
| Provider | Endpoint format | Example |
|---|---|---|
| Cloudflare R2 | `<account-id>.r2.cloudflarestorage.com` | `abc123.r2.cloudflarestorage.com` |
| Wasabi | `s3.<region>.wasabisys.com` | `s3.us-east-1.wasabisys.com` |
| DigitalOcean Spaces | `<region>.digitaloceanspaces.com` | `nyc3.digitaloceanspaces.com` |
| MinIO | Your MinIO server URL | `minio.example.com:9000` |
Enter the provider's access key and secret, and DeployHQ handles the rest. The S3 protocol is the same — only the endpoint differs.
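The endpoint patterns in the table can be captured in a small helper, which is handy if you script your server configuration. A sketch assuming the provider keys and formats shown above (the function is illustrative, not a DeployHQ API):

```python
def s3_endpoint(provider: str, **params: str) -> str:
    """Return the custom endpoint hostname for common S3-compatible providers."""
    formats = {
        "r2": "{account_id}.r2.cloudflarestorage.com",
        "wasabi": "s3.{region}.wasabisys.com",
        "spaces": "{region}.digitaloceanspaces.com",
    }
    return formats[provider].format(**params)

print(s3_endpoint("r2", account_id="abc123"))     # abc123.r2.cloudflarestorage.com
print(s3_endpoint("wasabi", region="us-east-1"))  # s3.us-east-1.wasabisys.com
```

Double-check the pattern against your provider's own documentation — some providers also offer region-less or path-style endpoints.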
Setting custom request headers
If you're using S3 to host a static website, you may want to set custom headers like Cache-Control or Content-Type for specific file types. You can configure these in your DeployHQ server settings under Custom Headers.
For example, setting a long cache lifetime for assets:
- *.css, *.js → Cache-Control: public, max-age=31536000
- *.html → Cache-Control: public, max-age=3600
Building assets before deploying to S3
One limitation of S3 is that you can't run server-side commands. If your project uses Sass, TypeScript, or a bundler like Webpack or Vite, you need to compile assets before they're uploaded.
DeployHQ's Build Pipelines feature solves this. You can define build commands that run in an isolated environment before deployment:

Common build commands:
```shell
npm install
npm run build
```
DeployHQ's build environment comes with Node, Ruby, PHP, and many more languages pre-installed. After the build completes, only the output files (e.g., the dist/ directory) are uploaded to S3.
This means you never have to commit compiled assets to Git — your repository stays clean, and your S3 bucket always gets fresh builds.
Troubleshooting S3 deployments
Access Denied errors: Check that your IAM user has the correct permissions on the bucket. The s3:ListBucket permission is often forgotten but required for DeployHQ to detect existing files.
Wrong region: If the bucket region doesn't match your configuration, uploads will fail. Double-check in the AWS Console under Bucket Properties > AWS Region.
Files deploying to the wrong path: Check the Path prefix in your DeployHQ server settings. Leave it blank to deploy to the bucket root.
For more server connection issues, see our server troubleshooting guide.
If you have questions about deploying to S3 or any other aspect of DeployHQ, drop us an email at support@deployhq.com or find us on Twitter/X.