How to Use Windsurf: The AI-Powered Code Editor with Cascade Agent
Windsurf is an AI-powered code editor that treats artificial intelligence as a first-class collaborator rather than an autocomplete add-on. Originally built by Codeium (and acquired by Cognition in 2025, after a planned OpenAI acquisition fell through), Windsurf combines a familiar VS Code-based interface with deep AI integration through its signature feature, Cascade — an agentic AI system that can understand your entire codebase, reason across multiple files, and execute multi-step coding tasks autonomously. Whether you're scaffolding a new feature, refactoring a legacy module, or debugging a production issue, Windsurf aims to keep you in a flow state while the AI handles the heavy lifting.
Why Windsurf Matters for Web Developers
The AI code editor space has gotten crowded. VS Code has Copilot, Cursor has its Composer agent, and dozens of extensions promise AI-assisted coding. Windsurf differentiates itself in three key ways:
Cascade's agentic architecture. Unlike chat-based AI assistants that respond to prompts one at a time, Cascade operates as a persistent agent. It reads your codebase, builds a mental model of your project structure, and executes multi-step plans — creating files, editing existing code, running terminal commands, and iterating on its own output. You describe the intent; Cascade figures out the implementation path.
Deep codebase awareness. Windsurf indexes your entire project and maintains context across your editing session. When you ask Cascade to add authentication to your Express app, it already knows your route structure, middleware stack, and database models. This isn't retrieval-augmented generation bolted onto a chat window — it's integrated into the editor's core.
VS Code compatibility. Windsurf is built on the VS Code foundation, which means your extensions, keybindings, themes, and settings carry over. The migration from VS Code or Cursor takes minutes, not days. You get the AI capabilities without abandoning your existing workflow.
For web developers specifically, Windsurf shines when you're working across the full stack — modifying a React component, updating the API route that feeds it, and adjusting the database migration, all in a single Cascade session. The AI maintains context across all three layers simultaneously.
Step 1: System Requirements
Windsurf runs on all three major desktop platforms. Here are the minimum requirements:
macOS:
- macOS 10.15 (Catalina) or later
- Apple Silicon (M1/M2/M3/M4) or Intel processor
- 4 GB RAM minimum (8 GB recommended)
Windows:
- Windows 10 (version 1903) or later
- x64 processor
- 4 GB RAM minimum (8 GB recommended)
Linux:
- Ubuntu 20.04+, Fedora 36+, or equivalent
- x64 processor
- 4 GB RAM minimum (8 GB recommended)
Additional requirements:
- 2 GB disk space for the editor plus extensions
- Internet connection required for AI features (all inference runs in the cloud)
- Git installed and available in your PATH (for version control integration)
Windsurf's AI processing happens server-side, so you don't need a powerful GPU locally. The editor itself is lightweight — comparable to VS Code in resource usage.
Step 2: Install Windsurf
Download and Install
Visit windsurf.com and download the installer for your operating system.
- macOS: Open the `.dmg` file and drag Windsurf to your Applications folder. On Apple Silicon Macs, Windsurf runs natively without Rosetta.
- Windows: Run the `.exe` installer. The default installation path is `C:\Users\<username>\AppData\Local\Programs\Windsurf`.
- Linux: Download the `.deb` (Debian/Ubuntu) or `.rpm` (Fedora) package, or use the `.tar.gz` for manual installation:
# Debian/Ubuntu
sudo dpkg -i windsurf_*.deb
# Fedora
sudo rpm -i windsurf_*.rpm
Import Settings from VS Code
On first launch, Windsurf detects existing VS Code installations and offers to import:
- Extensions — Most VS Code extensions work directly in Windsurf since they share the same extension marketplace
- Settings — Your `settings.json`, keybindings, and snippets
- Themes — Color themes and icon packs
- Profiles — If you use VS Code profiles, select which one to import
You can also import settings later by opening the Command Palette (Cmd+Shift+P / Ctrl+Shift+P) and searching for "Import Settings."
Install the CLI
Windsurf includes a command-line launcher. Open the Command Palette and run "Install 'windsurf' command in PATH" to enable launching from the terminal:
# Open a project in Windsurf from terminal
windsurf /path/to/your/project
# Open the current directory
windsurf .
Step 3: Choose Your Plan
Windsurf offers tiered pricing based on usage volume and feature access:
Free Tier
- Cascade and autocomplete access with limited credits
- Access to base AI models
- Community support
- Good for trying Windsurf and light personal projects
Pro ($15/month)
- Significantly higher usage limits for Cascade and autocomplete
- Access to premium AI models (GPT-4o, Claude Sonnet, and others)
- Priority model access during peak times
- Unlimited autocomplete suggestions
Teams ($30/user/month)
- Everything in Pro
- Centralized billing and user management
- Team-wide settings and policy controls
- Admin dashboard for usage monitoring
- Priority support
Enterprise (custom pricing)
- Self-hosted deployment options
- SSO/SAML integration
- Custom model configurations
- Dedicated support
The free tier provides enough credits to evaluate Windsurf properly for a week or two of regular use. If you're using AI features heavily throughout the day, you'll likely want Pro. The Teams tier makes sense when you need organizational controls and consistent provisioning across a development team.
You can upgrade or switch plans anytime from the Windsurf account settings panel within the editor.
Step 4: Configure Windsurf for Your Workflow
AI Model Selection
Windsurf gives you control over which AI models power different features. Open settings (Cmd+, / Ctrl+,) and navigate to the AI section:
- Cascade model: The model used for agentic tasks (multi-file edits, code generation, debugging). Premium models like GPT-4o and Claude tend to produce better results for complex tasks.
- Autocomplete model: The model powering Tab completions. Faster, smaller models work well here since latency matters more than depth.
For most development work, use a premium model for Cascade and let the default handle autocomplete. The premium models are noticeably better at understanding project-wide context and generating correct multi-file changes.
Project-Level Rules
Windsurf supports project-level configuration through a .windsurfrules file in your project root. This file tells the AI about your project's conventions, stack, and preferences:
# .windsurfrules
## Project Overview
This is a Next.js 14 application with App Router, TypeScript, and Tailwind CSS.
The backend uses Prisma with PostgreSQL.
## Code Style
- Use functional components with hooks, never class components
- All components must be TypeScript with explicit prop types
- Use `cn()` utility from lib/utils for conditional class names
- Prefer server components; use 'use client' only when necessary
## File Structure
- Components in src/components/ organized by feature
- API routes in src/app/api/
- Database models in prisma/schema.prisma
- Shared types in src/types/
## Testing
- Use Vitest for unit tests
- Test files colocated with source: component.test.tsx
- Use Testing Library for component tests
## Deployment
- Deployed via DeployHQ to a VPS running Node.js 20
- Build command: npm run build
- Output directory: .next/
This context file significantly improves Cascade's output quality. Instead of guessing your conventions, it follows them from the first generation.
Global Rules
For conventions that apply across all your projects, Windsurf supports global rules in your user settings. These might include your preferred formatting standards, language preferences, or documentation style.
Cascade Configuration
Fine-tune Cascade's behavior in settings:
- Auto-run terminal commands: When enabled, Cascade can execute terminal commands (like `npm install` or `npx prisma migrate`) without asking for confirmation each time. Disable this if you prefer to approve every command.
- File creation permissions: Control whether Cascade can create new files or only edit existing ones.
- Context window: Adjust how much of your codebase Cascade includes when generating responses.
Step 5: Core Features
Cascade Agent
Cascade is Windsurf's flagship feature — an AI agent that operates at the project level rather than the line level. Open it with Cmd+L (macOS) or Ctrl+L (Windows/Linux).
How Cascade works:
- You describe what you want in natural language
- Cascade analyzes your codebase to understand the current state
- It creates a step-by-step plan
- It executes the plan — editing files, creating new ones, running commands
- You review the changes in a diff view and accept or reject them
Example — adding an API endpoint:
Prompt: "Add a POST /api/users/invite endpoint that accepts an email address,
creates a pending invitation in the database, and sends an invitation email
using the existing email service."
Cascade will:
- Read your existing API route structure
- Check your database schema for relevant models
- Create the route handler with proper validation
- Add a database migration for the invitation model
- Wire up the existing email service
- Add error handling consistent with your other endpoints
Each change appears as a diff you can review inline. Accept individual changes, modify them, or ask Cascade to take a different approach.
Tab Autocomplete
Windsurf's autocomplete predicts your next edit — not just the next token. It can suggest multi-line completions, fill in function implementations based on the signature, and predict structural patterns from your codebase.
The autocomplete is context-aware: it reads your recent edits, open files, and project conventions. Press Tab to accept a suggestion, Esc to dismiss, or keep typing to refine.
For repetitive patterns (like adding similar API endpoints or test cases), the autocomplete learns the pattern after one or two examples and can generate subsequent instances with high accuracy.
Inline Edits
Select a block of code, press Cmd+I (macOS) or Ctrl+I (Windows/Linux), and describe the change you want. Windsurf edits the selected code in place:
Select a function -> "Add input validation and return proper HTTP error codes"
Select a CSS block -> "Make this responsive with a mobile-first approach"
Select a test -> "Add edge cases for empty input and null values"
Inline edits are faster than Cascade for targeted, single-location changes. Use them when you know exactly what needs to change and where.
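As a hedged illustration of the first example above ("Add input validation and return proper HTTP error codes"), a bare handler might come back looking something like this. The `createUser` shape is invented for illustration, not actual Windsurf output:

```typescript
// Hypothetical handler after the inline edit "Add input validation and
// return proper HTTP error codes". Names and shapes are illustrative.
interface CreateUserBody {
  email?: unknown;
  name?: unknown;
}

function createUser(body: CreateUserBody): { status: number; payload: object } {
  // 400 for missing or malformed fields
  if (typeof body.email !== "string" || !/^[^@\s]+@[^@\s]+$/.test(body.email)) {
    return { status: 400, payload: { error: "A valid email is required" } };
  }
  if (typeof body.name !== "string" || body.name.trim().length === 0) {
    return { status: 400, payload: { error: "Name must be a non-empty string" } };
  }
  // 201 on success
  return { status: 201, payload: { email: body.email, name: body.name.trim() } };
}

console.log(createUser({ email: "not-an-email", name: "Ada" }).status); // 400
console.log(createUser({ email: "ada@example.com", name: "Ada" }).status); // 201
```

The point of the inline edit is that Windsurf makes this kind of mechanical hardening pass in place, without you leaving the file.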
Chat
The chat panel provides conversational AI access without the agentic capabilities. Use it for:
- Explaining unfamiliar code ("What does this regex do?")
- Architecture discussions ("Should I use WebSockets or SSE for this feature?")
- Quick lookups ("What's the Prisma syntax for a many-to-many relation?")
Chat reads your codebase for context but doesn't make changes. It's a thinking tool, not an execution tool.
Building features with Windsurf's Cascade? DeployHQ connects your Git repo to any server and deploys automatically when you push — SFTP, SSH, or cloud. Try it free.
Step 6: Advanced Features
Memories
Windsurf's Memories system persists context across sessions. When you correct Cascade's output or establish a preference ("always use named exports, not default exports"), Windsurf stores that as a memory. Future sessions reference these memories automatically.
Memories are scoped to your workspace, so preferences for one project don't bleed into another. You can view and manage stored memories through the Cascade panel settings.
This is particularly useful for teams: when team-wide memories are shared, every developer gets consistent AI output that follows the team's conventions from day one.
Cascade Flows
Flows are Cascade's multi-step reasoning chains. When you give Cascade a complex task, it breaks it into discrete steps and shows you the plan before executing. Each step in the flow is:
- Visible — You can see what Cascade intends to do
- Editable — Modify the plan before execution
- Interruptible — Stop the flow at any point
- Resumable — Continue from where you left off
For example, a flow for "migrate from Express to Fastify" might include steps like: analyze current routes, map middleware equivalents, create Fastify server configuration, convert route handlers, update tests, and verify the build.
MCP Integration
Windsurf supports the Model Context Protocol (MCP), allowing you to connect external tools and data sources directly into the AI's context. This means Cascade can interact with:
- Databases — Query your development database to understand schema and data
- APIs — Read API documentation or interact with external services
- Documentation — Pull in library docs, internal wikis, or style guides
- Custom tools — Connect proprietary tools and services
Configure MCP servers in your Windsurf settings or project configuration:
{
"mcpServers": {
"postgres": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-postgres"],
"env": {
"DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
}
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
}
}
}
With MCP configured, you can ask Cascade things like "look at the current users table schema and add a preferences column with a JSON type" — and it will query the actual database schema rather than guessing.
Multi-File Editing
Cascade handles multi-file operations natively. When a change requires touching several files — say, renaming a component and updating all its imports — Cascade:
- Identifies all files that reference the component
- Shows you a consolidated diff across all affected files
- Applies changes atomically (all or nothing)
This eliminates the manual find-and-replace workflow and catches references that simple text search would miss (like dynamic imports, test fixtures, or documentation references).
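To see why a plain identifier search falls short, here is a minimal sketch (an invented example, not Windsurf's implementation) of matching a module specifier across static imports, dynamic imports, and test mocks:

```typescript
// Sketch: the kinds of references a rename has to catch. A search for the
// identifier "Button" alone would miss renaming the module path in the
// dynamic import and the jest.mock fixture below.
const source = `
import Button from "./components/Button";
const Lazy = () => import("./components/Button");
jest.mock("./components/Button");
`;

function findModuleReferences(code: string, specifier: string): string[] {
  const escaped = specifier.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const pattern = new RegExp(
    `import\\s+[^"']*?["']${escaped}["']` + // static: import X from "spec"
      `|import\\(["']${escaped}["']\\)` + // dynamic: import("spec")
      `|jest\\.mock\\(["']${escaped}["']\\)`, // test fixture: jest.mock("spec")
    "g"
  );
  return code.match(pattern) ?? [];
}

console.log(findModuleReferences(source, "./components/Button").length); // 3
```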
Step 7: Best Practices
Provide Clear Context
The quality of Cascade's output depends directly on the quality of your input. Be specific:
Vague (poor results):
"Fix the auth"
Specific (good results):
"The JWT refresh token endpoint at /api/auth/refresh returns 401 when the
refresh token is valid but the access token has expired. The issue is likely
in middleware/auth.ts where we validate tokens. Fix this so refresh tokens
work correctly even when the access token has expired."
Include file names, error messages, and expected behavior. Cascade uses all of this to narrow its search and generate accurate fixes.
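As a toy illustration of the bug class in that prompt (all names and token shapes here are invented, and real JWTs would be signature-verified), the fix is to gate the refresh route on the refresh token alone:

```typescript
// Toy sketch: the refresh route must not fail just because the *access*
// token is expired. Tokens are plain JSON payloads standing in for JWTs;
// no signature checks, illustration only.
interface TokenPayload { sub: string; exp: number } // exp = unix seconds

const now = () => Math.floor(Date.now() / 1000);
const decode = (token: string): TokenPayload => JSON.parse(token);
const isExpired = (t: TokenPayload) => t.exp <= now();

function handleRefresh(accessToken: string, refreshToken: string): number {
  const refresh = decode(refreshToken);
  if (isExpired(refresh)) return 401; // only the refresh token gates this route
  // Buggy middleware would also do: if (isExpired(decode(accessToken))) return 401;
  // but an expired access token is the expected state on a refresh endpoint.
  return 200; // issue a fresh access token
}

const expiredAccess = JSON.stringify({ sub: "u1", exp: now() - 60 });
const validRefresh = JSON.stringify({ sub: "u1", exp: now() + 3600 });
console.log(handleRefresh(expiredAccess, validRefresh)); // 200
```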
Use .windsurfrules Effectively
Your .windsurfrules file is the single highest-impact configuration for output quality. Include:
- Stack details: Framework versions, language, major libraries
- Conventions: Naming patterns, file organization, import styles
- Anti-patterns: Things you explicitly don't want ("never use the `any` type", "don't use default exports")
- Architecture: How your project is structured and why
Update this file as your project evolves. A stale rules file is worse than no rules file.
Review Changes Carefully
Cascade is powerful but not infallible. Always review its output before accepting:
- Check the diff — Windsurf shows every change in a diff view. Read it.
- Run tests — If Cascade modified logic, run your test suite before committing.
- Verify edge cases — AI-generated code sometimes handles the happy path well but misses edge cases.
- Watch for hallucinated imports — Cascade occasionally imports libraries that aren't in your project. Check `package.json` after accepting changes.
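A quick way to catch hallucinated imports is to diff a file's bare import specifiers against your declared dependencies. This sketch uses inline sample data; in practice you would read the source file from disk and take `deps` from your parsed `package.json`:

```typescript
// Collect bare package specifiers from a source file and flag any that
// are missing from the declared dependencies. Sample inputs are inline.
const source = `
import express from "express";
import { z } from "zod";
import { sendMail } from "./lib/email";
`;
const deps = new Set(["express"]); // e.g. Object.keys(pkg.dependencies)

const importRe = /import\s[^"']*["']([^"']+)["']/g;
const missing: string[] = [];
for (const match of source.matchAll(importRe)) {
  const spec = match[1];
  if (spec.startsWith(".")) continue; // relative imports are local code
  // Scoped packages keep "@scope/name"; others keep the first path segment
  const pkg = spec.startsWith("@")
    ? spec.split("/").slice(0, 2).join("/")
    : spec.split("/")[0];
  if (!deps.has(pkg)) missing.push(pkg);
}
console.log(missing); // packages imported but never declared
```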
Work Iteratively
Don't try to build an entire feature in a single Cascade prompt. Break complex work into stages:
- "Create the database schema and migration for the notification system"
- "Add the API endpoints for creating and listing notifications"
- "Build the React components for the notification bell and dropdown"
- "Add real-time updates using WebSockets"
Each stage gives you a checkpoint to review, test, and course-correct before moving on.
Use Inline Edits for Precision
For targeted changes where you know exactly what needs to change, inline edits (Cmd+I / Ctrl+I) are faster and more precise than Cascade. Reserve Cascade for tasks that require codebase-wide reasoning.
Step 8: Deploy with DeployHQ
AI-assisted development accelerates how fast you write code — but the real productivity gain comes when you pair it with automated deployment. Here's how Windsurf and DeployHQ work together in a real development workflow.
The Workflow: Cascade to Production
1. Build the feature with Cascade
You open Cascade and describe the feature you need:
"Add a /webhooks/stripe endpoint that handles subscription.created and
subscription.deleted events. Verify the webhook signature using the
STRIPE_WEBHOOK_SECRET env var. Update the user's subscription status
in the database and send a confirmation email."
Cascade creates the route handler, adds signature verification, updates the database model, and wires up the email service — across four or five files.
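For the signature-verification step, the `stripe` SDK's `stripe.webhooks.constructEvent(rawBody, signatureHeader, secret)` does the work in production code. As a self-contained sketch of the scheme it verifies (an HMAC-SHA256 of `"<timestamp>.<raw body>"`, carried in the `Stripe-Signature` header as `t=<timestamp>,v1=<hex digest>`):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Stripe's webhook signature scheme. In a real handler, prefer
// stripe.webhooks.constructEvent, which also enforces timestamp tolerance.
function verifyStripeSignature(rawBody: string, header: string, secret: string): boolean {
  const parts = new Map(header.split(",").map((p) => p.split("=") as [string, string]));
  const t = parts.get("t");
  const v1 = parts.get("v1");
  if (!t || !v1) return false;
  const expected = createHmac("sha256", secret).update(`${t}.${rawBody}`).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(v1);
  // Constant-time comparison; lengths must match before timingSafeEqual
  return a.length === b.length && timingSafeEqual(a, b);
}

// Self-test with a made-up secret and body:
const secret = "whsec_test";
const body = '{"type":"subscription.created"}';
const t = "1700000000";
const sig = createHmac("sha256", secret).update(`${t}.${body}`).digest("hex");
console.log(verifyStripeSignature(body, `t=${t},v1=${sig}`, secret)); // true
console.log(verifyStripeSignature(body, `t=${t},v1=${"0".repeat(64)}`, secret)); // false
```

Reviewing Cascade's generated handler against this scheme is exactly the kind of check step 2 below is for.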
2. Review the diff in Windsurf
Windsurf shows you every change Cascade made. You review the diff for each file, checking that the webhook signature verification is correct, the database query uses the right model, and the email template matches your existing ones. You make a small manual adjustment to the error handling and accept the changes.
3. Commit via Windsurf's Git integration
Open the Source Control panel (Cmd+Shift+G / Ctrl+Shift+G), review the staged changes one more time, and commit:
git add -A
git commit -m "feat: add Stripe webhook endpoint for subscription events"
Windsurf's built-in Git support shows you the same diff view as VS Code — branches, staging, conflict resolution, and commit history all work as expected.
4. Push and let DeployHQ take over
Push your branch to the remote:
git push origin feature/stripe-webhooks
If you've configured DeployHQ with automatic deployments, the push triggers a deployment immediately. Here's what happens on the DeployHQ side:
- Webhook notification: Your Git provider (GitHub, GitLab, Bitbucket) notifies DeployHQ of the new commit
- Build commands: DeployHQ executes your configured build commands — `npm install && npm run build`, `composer install`, or whatever your project requires
- File transfer: Changed files are transferred to your server via SFTP, SSH, or your configured protocol
- Atomic deployment: DeployHQ uses symlink-based atomic deployments, so your site switches from the old version to the new version in a single operation — zero downtime, no half-deployed states
- Deployment hooks: Post-deployment commands run automatically (restart services, clear caches, run migrations)
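DeployHQ's internals aren't shown here, but the symlink-swap technique itself is simple enough to sketch: each release gets its own directory, and a `current` symlink is repointed via a single atomic rename (paths below use a throwaway temp directory):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// General symlink-swap technique behind zero-downtime deploys (a sketch,
// not DeployHQ's actual implementation). Each release lives in its own
// directory; "current" is a symlink repointed in one atomic rename.
const root = fs.mkdtempSync(path.join(os.tmpdir(), "deploy-"));
for (const release of ["releases/v1", "releases/v2"]) {
  fs.mkdirSync(path.join(root, release), { recursive: true });
}

function activate(release: string): void {
  const tmp = path.join(root, "current.tmp");
  fs.symlinkSync(path.join(root, release), tmp); // build the new link aside
  fs.renameSync(tmp, path.join(root, "current")); // atomic swap, no gap
}

activate("releases/v1");
activate("releases/v2"); // old "current" replaced in a single rename
console.log(fs.readlinkSync(path.join(root, "current")).endsWith("v2")); // true
```

Because the rename replaces the old link in one filesystem operation, requests never see a half-deployed tree.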
5. Verify in production
DeployHQ's deployment log shows you exactly which files were transferred and which commands ran. If something goes wrong, you can roll back to the previous deployment with a single click.
Configuring DeployHQ for Your Windsurf Project
If you haven't set up DeployHQ yet, here's the quick version:
- Create a project at deployhq.com and connect your Git repository
- Add a server — enter your server's hostname, deployment path, and credentials (SFTP, SSH, or cloud provider)
- Set build commands — for a typical Node.js project built with Windsurf:
npm ci
npm run build
- Enable automatic deployments — DeployHQ watches your repository and deploys when you push to your configured branch
- Configure deployment hooks — add post-deploy commands like cache clearing or service restarts:
# Post-deployment: restart the application
pm2 restart ecosystem.config.js --update-env
The entire setup takes under five minutes. From that point on, every git push from Windsurf triggers a production deployment — no manual steps, no FTP uploads, no SSH sessions.
Branch-Based Deployment Environments
DeployHQ supports deploying different branches to different environments. A common setup for teams using Windsurf:
| Branch | Environment | DeployHQ Server |
|---|---|---|
| `main` | Production | prod.example.com |
| `staging` | Staging | staging.example.com |
| `develop` | Development | dev.example.com |
When a developer pushes a Cascade-generated feature branch to staging, DeployHQ deploys it to the staging server for review. After approval, merging to main triggers the production deployment automatically.
Step 9: Troubleshooting
Cascade Not Seeing Your Files
If Cascade generates code that ignores your existing project structure:
- Check indexing status: Look at the bottom status bar for indexing progress. Large projects take a few minutes to index fully on first open.
- Verify `.windsurfrules`: Ensure this file is in the project root and doesn't have syntax errors.
- Restart the AI session: Open the Command Palette and run "Cascade: Reset Session" to clear the context and re-index.
Autocomplete Suggestions Are Slow or Missing
- Check your internet connection — All AI processing happens server-side.
- Disable conflicting extensions — Other AI completion extensions (Copilot, Tabnine, Codeium standalone) can conflict with Windsurf's built-in autocomplete. Disable them.
- Check plan limits — Free tier users may experience throttling during peak hours. The Pro plan includes priority access.
Extensions Not Working After Migration
Most VS Code extensions work in Windsurf, but some with deep VS Code API dependencies may not:
- Check the extension marketplace within Windsurf for compatibility
- Disable and re-enable the extension
- Check for Windsurf-specific versions — some popular extensions publish Windsurf-compatible builds
High Memory Usage
If Windsurf consumes excessive memory:
- Close unused tabs — Each open file consumes memory for AI context
- Disable unused extensions — Some extensions (like large language packs) use significant memory
- Adjust the AI context window in settings to reduce the amount of project context held in memory
Cascade Produces Incorrect Code
When Cascade generates code that doesn't work:
- Provide error context — Copy the error message back to Cascade: "Running this gives `TypeError: Cannot read property 'id' of undefined` at line 45"
- Be specific about what's wrong — "The query returns all users instead of filtering by organization"
- Break down the task — If a complex prompt produces bad output, split it into smaller steps
- Check your `.windsurfrules` — Outdated rules can mislead the AI
Conclusion
Windsurf represents a meaningful shift in how AI integrates with the development workflow. Rather than bolting a chatbot onto an editor, it builds the AI into the editing experience itself — Cascade understands your project, reasons across files, and executes multi-step plans while you stay in control of every change.
The real productivity multiplier comes when you combine Windsurf's AI-assisted development with automated deployment. Writing code faster only matters if you can ship it reliably. Pair Windsurf with DeployHQ to close the loop: Cascade builds the feature, you review and commit, DeployHQ deploys it to your server automatically. No manual steps between writing code and running it in production.
Ready to streamline your workflow? Sign up for DeployHQ and connect your repository in under five minutes.
If you have questions about deploying your Windsurf projects, reach out to us at support@deployhq.com or find us on Twitter/X.