Self-host your Next.js Turborepo app with Docker in 5 minutes
Discover the step-by-step process to containerize your Next.js Turborepo application using Docker and learn how to optimize and deploy it to any environment.

Are you looking to deploy your Next.js Turborepo application with Docker? You're in the right place. Deploying a Next.js application built with Turborepo can be challenging, especially when you want to ensure consistent behavior across different environments. While TurboStarter provides a rock-solid foundation for your Next.js applications, deploying them to production still requires careful consideration.
In this comprehensive guide, we'll show you how to containerize your Next.js Turborepo app step by step, with detailed explanations that will help you achieve a production-ready Docker deployment.
Why Docker?
Before we dive into the implementation details, let's understand why Docker is the perfect choice for deploying Next.js Turborepo applications:
- Consistency: Your application runs identically across local, staging, and production environments
- Isolation: Complete environment containment with all dependencies
- Scalability: Easily scale your application horizontally
- CI/CD Integration: Perfect for modern continuous integration and deployment pipelines
- Dependency Management: Efficient handling of monorepo structure and packages
The Docker Monorepo Challenge
Let's address the core challenge of dockerizing applications in a monorepo structure. The primary issue developers face is that unrelated changes can trigger unnecessary work during Docker builds.
Consider this scenario: you have a monorepo with both `web` and `api` applications. Changes to the `web` app's dependencies would traditionally trigger a rebuild of the `api` app's Docker image. This happens because the global `package-lock.json` (or `pnpm-lock.yaml`) changes, invalidating Docker's layer cache.
This challenge can lead to several issues in large monorepos:
- ⏱️ Unnecessarily long build times
- 💻 Wasted computing resources
- 💰 Increased deployment costs
- 🐌 Slower development cycles
But don't worry - our guide will help you overcome these challenges with proven solutions.
Prerequisites
Before starting your Docker deployment journey, ensure you have:
- A Next.js application in a Turborepo monorepo (we recommend using TurboStarter for the best starting point)
- Docker installed on your development machine
- Basic familiarity with terminal commands
Step 1: Optimizing Next.js for Container Deployment
First, we'll configure Next.js to output a standalone build. This crucial optimization ensures your Docker image contains only the necessary files to run the application.
Add this configuration to your `next.config.js`:
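A minimal sketch of what that configuration might look like (your real config will likely contain other options; `output: "standalone"` is the relevant line):

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit a self-contained production server to .next/standalone,
  // bundling only the files needed to run the app.
  output: "standalone",
};

module.exports = nextConfig;
```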
The standalone output mode provides several key benefits for containerized deployments:
- Creates a minimal production build by eliminating unnecessary files and dependencies
- Includes only the essential server files needed to run your application
- Optimizes specifically for containerized environments by reducing complexity
- Significantly reduces final image size (often by 50% or more)
Step 2: Create a .dockerignore File
Before we create our Dockerfile, it's crucial to set up a `.dockerignore` file. This file tells Docker which files and directories to exclude from the build context, similar to how `.gitignore` works for Git. A well-configured `.dockerignore` is essential for efficient Docker builds in a monorepo.
Create a `.dockerignore` file in your project root:
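Here's a starting point that covers the categories discussed below; treat the exact entries as a suggestion and adapt them to your repository layout:

```text
# Version control
.git
.gitignore

# Dependencies
node_modules
**/node_modules
.pnpm-store

# Build outputs
**/.next
**/dist
**/build
.turbo

# Development files
.env*
.vscode
.idea
*.log
Dockerfile
.dockerignore
```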
Let's understand why each category of ignored files is important:
- Version Control (`.git`, `.gitignore`):
  - Reduces build context size significantly
  - Prevents unnecessary cache invalidation
  - Keeps version control separate from deployment
- Dependencies (`node_modules`, `.pnpm-store`):
  - Ensures clean installation in the container
  - Prevents platform-specific modules from causing issues
  - Reduces build context size dramatically
- Build Outputs (`dist`, `.next`, `build`):
  - Prevents mixing of local and container builds
  - Ensures consistent builds across environments
  - Reduces unnecessary file copying
- Development Files (`.env`, `.vscode`, etc.):
  - Maintains security by excluding sensitive information
  - Keeps development-specific configurations separate
  - Reduces image size
Step 3: Understanding the Multi-stage Build
Our Dockerfile uses a multi-stage build process to create an optimized production image. Let's break down each stage and understand its purpose:
Stage 1: Base Image
This stage sets up our foundation:
- Uses Alpine Linux for a minimal footprint (less than 50MB)
- Configures Node.js 20 for modern JavaScript features
- Sets up PNPM for efficient package management
- Enables corepack for consistent package manager versions
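A sketch of this stage (assuming Node 20 on Alpine with PNPM enabled through corepack, as described above):

```dockerfile
FROM node:20-alpine AS base
# Pin and enable PNPM via corepack; libc6-compat helps native Node.js deps on Alpine
RUN corepack enable && apk update && apk add --no-cache libc6-compat
```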
Stage 2: Pruning
The pruning stage is where Turborepo's magic happens. When working with monorepos, a key challenge is that changes to the global lockfile (like `package-lock.json` or `pnpm-lock.yaml`) can trigger unnecessary rebuilds. For example, if you add a package to your `web` app, it would traditionally trigger a rebuild of your `api` app's Docker image.
The `turbo prune` command with the `--docker` flag creates an optimized subset of your monorepo in two important directories:
- `./out/json/` contains:
  - Only the necessary package.json files
  - A pruned version of your lockfile
  - Dependencies specific to the target application
  - Workspace configurations needed for the build
- `./out/full/` contains:
  - All source files needed for the build
  - Configuration files (like turbo.json)
  - Other assets required by the application
This separation is crucial for Docker layer caching because:
- Dependencies change less frequently than source code
- By copying and installing dependencies first, we can cache this layer
- Source code changes don't invalidate the dependency installation layer
Here's what happens during the pruning process:
- Workspace Analysis:
  - Analyzes your workspace dependencies
  - Creates a dependency graph
  - Identifies all packages required by the target app
- Lockfile Optimization:
  - Creates a subset of your lockfile
  - Includes only the dependencies needed
  - Prevents unnecessary package downloads
  - Avoids cache invalidation from unrelated changes
- File Organization:
  - Separates dependencies from source code
  - Enables efficient Docker layer caching
  - Reduces build context size
  - Optimizes rebuild performance
This optimization is particularly powerful because:
- Only relevant dependencies are installed
- Changes to other apps don't trigger rebuilds
- Build times are significantly reduced
- Cache hits are maximized
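Putting this together, the pruning stage of the Dockerfile might look like the following sketch. We assume the target app's workspace name is `web`; older Turborepo releases use `turbo prune --scope=web` instead of the positional argument:

```dockerfile
FROM base AS pruner
WORKDIR /app
COPY . .
# Produces ./out/json (package.json files plus a pruned lockfile)
# and ./out/full (only the source files the web app needs)
RUN pnpm dlx turbo prune web --docker
```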
Stage 3: Building
The building stage compiles our application:
- Creates a fresh build environment
- Copies and installs only necessary dependencies
- Uses efficient PNPM flags for faster, reliable installation
- Sets production environment variables
- Builds the application with Turborepo
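A sketch of the building stage, continuing the `web` assumption from above. Recent Turborepo versions place the pruned lockfile inside `./out/json/`; on older versions it lands at `./out/pnpm-lock.yaml` and needs its own COPY line:

```dockerfile
FROM base AS builder
WORKDIR /app
# 1. Install dependencies from the pruned manifests first, so this layer
#    stays cached until the lockfile or a package.json actually changes
COPY --from=pruner /app/out/json/ .
RUN pnpm install --frozen-lockfile
# 2. Copy the pruned source code and build only the web app
COPY --from=pruner /app/out/full/ .
ENV NODE_ENV=production
RUN pnpm turbo run build --filter=web
```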
Stage 4: Production Runner
The final stage creates our production image:
- Starts fresh from the base image
- Creates a non-root user for security
- Copies only the necessary production files:
  - Standalone server bundle
  - Static assets
  - Public files
- Configures proper file permissions
- Exposes the application port
- Sets up the startup command
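A sketch of the runner stage, assuming the Next.js app lives in `apps/web` (standalone output preserves the monorepo layout, so the server entry point ends up at `apps/web/server.js`):

```dockerfile
FROM base AS runner
WORKDIR /app
# Run as an unprivileged user for better security
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs
USER nextjs
# Copy only what `next build` produced in standalone mode
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME=0.0.0.0
CMD ["node", "apps/web/server.js"]
```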
Complete Dockerfile
Here's the complete Dockerfile that combines all these stages:
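The version below assembles the stages sketched above. It assumes a PNPM workspace with the Next.js app named `web` under `apps/web`, so adjust names and paths to your setup:

```dockerfile
# Stage 1: base image with Node.js 20, Alpine and PNPM
FROM node:20-alpine AS base
RUN corepack enable && apk update && apk add --no-cache libc6-compat

# Stage 2: prune the monorepo down to what the web app needs
FROM base AS pruner
WORKDIR /app
COPY . .
RUN pnpm dlx turbo prune web --docker

# Stage 3: install dependencies and build the app
FROM base AS builder
WORKDIR /app
# Dependencies first, so this layer is cached until the lockfile changes
COPY --from=pruner /app/out/json/ .
RUN pnpm install --frozen-lockfile
# Then the pruned source code
COPY --from=pruner /app/out/full/ .
ENV NODE_ENV=production
RUN pnpm turbo run build --filter=web

# Stage 4: minimal production runner
FROM base AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME=0.0.0.0
CMD ["node", "apps/web/server.js"]
```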
This carefully designed multi-stage build process is a key part of our Docker optimization strategy. By breaking down the build into distinct stages - base, pruning, building, and running - we achieve several critical benefits for production deployment.
The final image is highly optimized at under 200MB, making it quick to deploy and scale.
The layer caching system ensures subsequent builds are lightning fast, while the security-focused design (like running as a non-root user) keeps your application protected.
Additionally, the pruned dependency management means you're only shipping what you need, reducing attack surface and resource usage.
Leveraging Remote Caching
Turborepo's remote caching feature can significantly speed up your Docker builds by reusing build artifacts across different environments. This is especially powerful in CI/CD pipelines and team environments.
To enable remote caching:
- Configure Environment Variables: add `TURBO_TEAM` and `TURBO_TOKEN` as build arguments in your Dockerfile's builder stage (see the first snippet below)
- Provide Credentials: pass the values when building your image (see the build command below)
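For example, using `TURBO_TEAM` and `TURBO_TOKEN` (the variables Turborepo reads for remote cache authentication), passed in as build arguments rather than hard-coded values:

```dockerfile
# In the builder stage of your Dockerfile
ARG TURBO_TEAM
ARG TURBO_TOKEN
ENV TURBO_TEAM=$TURBO_TEAM
ENV TURBO_TOKEN=$TURBO_TOKEN
```

```bash
docker build \
  --build-arg TURBO_TEAM="your-team-slug" \
  --build-arg TURBO_TOKEN="your-remote-cache-token" \
  -t web .
```

Because the token is only referenced in the builder stage, it doesn't persist into the final runner image; for stricter setups, consider BuildKit build secrets instead of build arguments.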
Benefits of remote caching:
- Reduces build times by up to 90%
- Shares cache across team members
- Optimizes CI/CD pipeline performance
- Saves computing resources
Step 4: Building and Running the Container
Now that we have our Dockerfile and .dockerignore set up, let's build and run our container:
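For example (the image tag `web` is arbitrary; pick whatever fits your naming scheme):

```bash
# Build the image from the repository root
docker build -t web .

# Run the container and expose the app on http://localhost:3000
docker run -p 3000:3000 web
```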
For production applications, you'll want to pass your environment variables. Here's how to do it securely:
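One common approach is an env file that never gets committed or baked into the image; the file name below is just an example:

```bash
# Pass runtime secrets at startup instead of building them into the image
docker run -p 3000:3000 --env-file .env.production web
```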
Optimization Tips
To maximize your Docker deployment efficiency:
Performance Optimizations
- Layer Caching Strategies: Our Dockerfile is structured to maximize layer caching, speeding up builds by up to 70%
- Multi-stage Build Architecture: We use multiple stages to keep the final image size small, typically under 200MB
- Security Best Practices: We run the application as a non-root user, following Docker security best practices
Advanced Configuration Tips
- Efficient Dependency Management: We leverage PNPM and Turborepo's pruning for efficient package management
- Environment Configuration: Use environment validation to ensure all required variables are present
- Build Cache Optimization: Utilize Turborepo's remote caching for even faster builds in CI/CD pipelines
Performance Tuning
- Dependency Optimization: Use `turbo prune` to minimize the files copied into your Docker image
- Remote Cache Implementation: Enable Turborepo's remote caching in your Docker builds for faster CI/CD pipelines
- Layer Strategy Optimization: Structure your Dockerfile to maximize cache hits for dependency installation
Troubleshooting
When deploying your Next.js Turborepo application with Docker, you might encounter these common challenges:
Build and Configuration Issues
- Build Process Failures: Make sure all required dependencies are in your package.json
- Environment Variable Configuration: Use environment validation to catch missing variables early
- Port Configuration Issues: If port 3000 is in use, map to a different port using `-p 3001:3000`
Advanced Troubleshooting
- Monorepo Dependencies: Ensure your workspace dependencies are correctly configured
- Build Context Optimization: Check your .dockerignore file if builds are slow or the image is too large
- Cache Configuration: Ensure your TURBO_TEAM and TURBO_TOKEN are correctly configured
- Dependency Resolution: Verify your package.json dependencies are correctly specified in your workspace
Conclusion: Mastering Next.js Turborepo Docker Deployment
Containerizing your Next.js Turborepo application with Docker provides a robust, secure, and scalable deployment solution that's perfect for modern web applications. The multi-stage Dockerfile we've created optimizes for both build time and production performance while maintaining security best practices.
Next Steps
You can now deploy your containerized application to any major cloud platform that supports Docker containers:
- AWS for enterprise-scale deployments
- Google Cloud for integrated cloud services
- DigitalOcean for cost-effective hosting
Remember to follow security best practices, keep your dependencies updated, and monitor your container's performance in production. For more deployment guides and best practices, check out our TurboStarter documentation.
Ready to start your Next.js project? Try TurboStarter today and get a production-ready foundation for your next web application!