
Self-host your Next.js Turborepo app with Docker in 5 minutes

11 min read

Discover the step-by-step process to containerize your Next.js Turborepo application using Docker and learn how to optimize and deploy it to any environment.

Are you looking to deploy your Next.js Turborepo application with Docker? You're in the right place. Deploying a Next.js application built with Turborepo can be challenging, especially when you want to ensure consistent behavior across different environments. While TurboStarter gives you a rock-solid foundation for your Next.js applications, deploying them still requires careful consideration.

In this comprehensive guide, we'll show you how to containerize your Next.js Turborepo app step by step, with detailed explanations that will help you achieve a production-ready Docker deployment.

Why Docker?

Before we dive into the implementation details, let's understand why Docker is the perfect choice for deploying Next.js Turborepo applications:

  • Consistency: Your application runs identically across local, staging, and production environments
  • Isolation: Complete environment containment with all dependencies
  • Scalability: Easily scale your application horizontally
  • CI/CD Integration: Perfect for modern continuous integration and deployment pipelines
  • Dependency Management: Efficient handling of monorepo structure and packages

The Docker Monorepo Challenge

Let's address the core challenge of dockerizing applications in a monorepo structure. The primary issue developers face is that unrelated changes can trigger unnecessary work during Docker builds.

Consider this scenario: You have a monorepo with both web and api applications. Changes to the web app's dependencies would traditionally trigger a rebuild of the api app's Docker image. This occurs because the global package-lock.json (or pnpm-lock.yaml) changes, causing Docker's layer cache to invalidate.

This challenge can lead to several issues in large monorepos:

  • ⏱️ Unnecessarily long build times
  • 💻 Wasted computing resources
  • 💰 Increased deployment costs
  • 🐌 Slower development cycles

But don't worry - our guide will help you overcome these challenges with proven solutions.

Prerequisites

Before starting your Docker deployment journey, ensure you have:

  1. A Next.js application in a Turborepo monorepo (we recommend using TurboStarter for the best starting point)
  2. Docker installed on your development machine
  3. Basic familiarity with terminal commands
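If you'd like to double-check your local setup before continuing, a quick sanity check such as the one below confirms the tooling used in this guide is available (pnpm is assumed as the package manager, as in all later examples):

# Verify the tooling used in this guide is available
docker --version    # Docker Engine
node --version      # Node.js 20.x, matching the base image used later
pnpm --version      # pnpm, which can be enabled via corepack if missing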

Step 1: Optimizing Next.js for Container Deployment

First, we'll configure Next.js to output a standalone build. This crucial optimization ensures your Docker image contains only the necessary files to run the application.

Add this configuration to your next.config.js:

/** @type {import("next").NextConfig} */
const config = {
  output: "standalone",
  // ... other config options
};

The standalone output mode provides several key benefits for containerized deployments:

  • Creates a minimal production build by eliminating unnecessary files and dependencies
  • Includes only the essential server files needed to run your application
  • Optimizes specifically for containerized environments by reducing complexity
  • Significantly reduces final image size (often by 50% or more)
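If you want to see what the standalone output looks like before involving Docker, you can build and run it directly with Node. This is only a rough local sanity check; the paths assume your app lives in apps/web, matching the Dockerfile later in this guide:

# Build the web app, then run the standalone server directly with Node
pnpm dlx turbo build --filter=web
node apps/web/.next/standalone/apps/web/server.js
# Note: .next/static and public/ are not copied into the standalone folder
# automatically; the Docker runner stage below handles that copying.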

Step 2: Create a .dockerignore File

Before we create our Dockerfile, it's crucial to set up a .dockerignore file. This file tells Docker which files and directories to exclude during the build process, similar to how .gitignore works for Git. A well-configured .dockerignore is essential for efficient Docker builds in a monorepo.

Create a .dockerignore file in your project root:

# Version control
.git
.gitignore
 
# Dependencies
**/node_modules
.pnpm-store
 
# Build outputs
**/dist
**/.next
**/build
**/out
 
# Development files
**/.env*
!**/.env.example
**/.vscode
**/.idea
**/coverage
**/.turbo
**/.cache
 
# System files
.DS_Store
**/Thumbs.db
 
# Logs
**/npm-debug.log*
**/yarn-debug.log*
**/yarn-error.log*
**/pnpm-debug.log*
 
# Test files
**/__tests__
**/*.test.*
**/*.spec.*

Let's understand why each category of ignored files is important:

  1. Version Control (.git, .gitignore):
    • Reduces build context size significantly
    • Prevents unnecessary cache invalidation
    • Keeps version control separate from deployment
  2. Dependencies (node_modules, .pnpm-store):
    • Ensures clean installation in the container
    • Prevents platform-specific modules from causing issues
    • Reduces build context size dramatically
  3. Build Outputs (dist, .next, build):
    • Prevents mixing of local and container builds
    • Ensures consistent builds across environments
    • Reduces unnecessary file copying
  4. Development Files (.env, .vscode, etc.):
    • Maintains security by excluding sensitive information
    • Keeps development-specific configurations separate
    • Reduces image size

Step 3: Understanding the Multi-stage Build

Our Dockerfile uses a multi-stage build process to create an optimized production image. Let's break down each stage and understand its purpose:

Stage 1: Base Image

FROM node:20-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

This stage sets up our foundation:

  • Uses Alpine Linux for a minimal footprint (less than 50MB)
  • Configures Node.js 20 for modern JavaScript features
  • Sets up PNPM for efficient package management
  • Enables corepack for consistent package manager versions
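One detail worth noting: corepack reads the packageManager field in your root package.json, so pinning the pnpm version there keeps local and container builds in sync. As a rough illustration (the version shown is just a placeholder, use whatever your repo actually pins):

# Pin the pnpm version that corepack activates (placeholder version)
corepack enable
corepack prepare pnpm@9.12.0 --activate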

Stage 2: Pruning

The pruning stage is where Turborepo's magic happens. When working with monorepos, a key challenge is that changes to the global lockfile (like package-lock.json or pnpm-lock.yaml) can trigger unnecessary rebuilds. For example, if you add a package to your web app, it would traditionally trigger a rebuild of your api app's Docker image.

FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune web --docker

The turbo prune command with the --docker flag creates an optimized subset of your monorepo in two important directories:

  1. ./out/json/ - Contains:
    • Only the necessary package.json files
    • A pruned version of your lockfile
    • Dependencies specific to the target application
    • Workspace configurations needed for the build
  2. ./out/full/ - Contains:
    • All source files needed for the build
    • Configuration files (like turbo.json)
    • Other assets required by the application

This separation is crucial for Docker layer caching because:

  • Dependencies change less frequently than source code
  • By copying and installing dependencies first, we can cache this layer
  • Source code changes don't invalidate the dependency installation layer
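You can see this split for yourself by running the prune step locally before wiring it into Docker. A minimal check, assuming your app is named web as in the examples above:

# Generate the pruned workspace and inspect the two outputs
pnpm dlx turbo prune web --docker
ls out/json   # package.json files plus the pruned lockfile
ls out/full   # the full source needed to build the web app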

Here's what happens during the pruning process:

  1. Workspace Analysis:
    • Analyzes your workspace dependencies
    • Creates a dependency graph
    • Identifies all packages required by the target app
  2. Lockfile Optimization:
    • Creates a subset of your lockfile
    • Includes only the dependencies needed
    • Prevents unnecessary package downloads
    • Avoids cache invalidation from unrelated changes
  3. File Organization:
    • Separates dependencies from source code
    • Enables efficient Docker layer caching
    • Reduces build context size
    • Optimizes rebuild performance

This optimization is particularly powerful because:

  • Only relevant dependencies are installed
  • Changes to other apps don't trigger rebuilds
  • Build times are significantly reduced
  • Cache hits are maximized

Stage 3: Building

FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
 
# Dependencies installation
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
 
# Application build
ENV SKIP_ENV_VALIDATION=1 \
    NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=web

The building stage compiles our application:

  • Creates a fresh build environment
  • Copies and installs only necessary dependencies
  • Uses efficient PNPM flags for faster, reliable installation
  • Sets production environment variables
  • Builds the application with Turborepo
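If a build fails partway through, it can help to build only up to a given stage and poke around inside it. A debugging sketch, assuming the Dockerfile lives at apps/web/Dockerfile as in the build commands later in this guide:

# Build only up to the builder stage and open a shell inside it
docker build -f ./apps/web/Dockerfile --target builder -t turbostarter-builder .
docker run --rm -it turbostarter-builder sh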

Stage 4: Production Runner

FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
    adduser -S web -u 1001 -G nodejs
 
# Copy built application
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=web:nodejs /app/apps/web/public ./apps/web/public
 
USER web
EXPOSE 3000
CMD ["node", "apps/web/server.js"]

The final stage creates our production image:

  • Starts fresh from the base image
  • Creates a non-root user for security
  • Copies only the necessary production files:
    • Standalone server bundle
    • Static assets
    • Public files
  • Configures proper file permissions
  • Exposes the application port
  • Sets up the startup command

Complete Dockerfile

Here's the complete Dockerfile that combines all these stages:

# Base image with Node.js and pnpm
FROM node:20-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
 
# Pruning stage - creates an optimized monorepo subset
FROM base AS pruner
WORKDIR /app
RUN apk add --no-cache libc6-compat
COPY . .
RUN pnpm dlx turbo prune web --docker
 
# Building stage - installs dependencies and builds the app
FROM base AS builder
WORKDIR /app
RUN apk add --no-cache libc6-compat
 
# Copy only the necessary package.json files and lockfile
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile --ignore-scripts --prefer-offline && pnpm store prune
 
# Copy source code and build
ENV SKIP_ENV_VALIDATION=1 \
    NODE_ENV=production
COPY --from=pruner /app/out/full/ .
RUN pnpm dlx turbo build --filter=web
 
# Runner stage - final, optimized image
FROM base AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
    adduser -S web -u 1001 -G nodejs
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=web:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=web:nodejs /app/apps/web/public ./apps/web/public
USER web
EXPOSE 3000
CMD ["node", "apps/web/server.js"]

This carefully designed multi-stage build process is a key part of our Docker optimization strategy. By breaking down the build into distinct stages - base, pruning, building, and running - we achieve several critical benefits for production deployment.

The final image is highly optimized at under 200MB, making it quick to deploy and scale.

The layer caching system ensures subsequent builds are lightning fast, while the security-focused design (like running as a non-root user) keeps your application protected.

Additionally, the pruned dependency management means you're only shipping what you need, reducing attack surface and resource usage.
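Once you've built the image (Step 4 below tags it turbostarter), you can verify these claims yourself:

# Check the final image size and how each layer contributes to it
docker images turbostarter
docker history turbostarter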

Leveraging Remote Caching

Turborepo's remote caching feature can significantly speed up your Docker builds by reusing build artifacts across different environments. This is especially powerful in CI/CD pipelines and team environments.

To enable remote caching:

  1. Configure Environment Variables: Add these to your Dockerfile's builder stage:
# Add before the build command in the builder stage
ARG TURBO_TEAM
ENV TURBO_TEAM=$TURBO_TEAM
 
ARG TURBO_TOKEN
ENV TURBO_TOKEN=$TURBO_TOKEN
 
RUN pnpm dlx turbo build --filter=web
  2. Provide Credentials: When building your image:
docker build -f ./apps/web/Dockerfile . \
  --build-arg TURBO_TEAM="your-team-name" \
  --build-arg TURBO_TOKEN="your-token" \
  --no-cache

Benefits of remote caching:

  • Reduces build times by up to 90%
  • Shares cache across team members
  • Optimizes CI/CD pipeline performance
  • Saves computing resources
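If you haven't set up remote caching yet, you can link your repository to a remote cache (Vercel's, by default) from your own machine first; the same TURBO_TEAM and TURBO_TOKEN pair is then reused in CI and Docker builds. A quick sketch:

# Authenticate and link the repo to a remote cache provider
pnpm dlx turbo login
pnpm dlx turbo link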

Step 4: Building and Running the Container

Now that we have our Dockerfile and .dockerignore set up, let's build and run our container:

# Build the Docker image
docker build -f ./apps/web/Dockerfile . -t turbostarter
 
# Run the container
docker run -p 3000:3000 turbostarter

For production applications, you'll want to pass your environment variables. Here's how to do it securely:

docker run -p 3000:3000 \
  -e DATABASE_URL=your_url \
  -e NEXTAUTH_SECRET=your_secret \
  -e NEXTAUTH_URL=http://localhost:3000 \
  turbostarter
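Passing many variables with repeated -e flags gets unwieldy; Docker can also read them from a file. A variant of the command above, assuming you keep production values in a local .env.production file (never committed, and excluded by the .dockerignore above):

# Load all environment variables from a file instead of individual -e flags
docker run -p 3000:3000 --env-file .env.production turbostarter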

Optimization tips

To maximize your Docker deployment efficiency:

Performance Optimizations

  • Layer Caching Strategies: Our Dockerfile is structured to maximize layer caching, speeding up builds by up to 70%
  • Multi-stage Build Architecture: We use multiple stages to keep the final image size small, typically under 200MB
  • Security Best Practices: We run the application as a non-root user, following Docker security best practices

Advanced Configuration Tips

  • Efficient Dependency Management: We leverage PNPM and Turborepo's pruning for efficient package management
  • Environment Configuration: Use environment validation to ensure all required variables are present
  • Build Cache Optimization: Utilize Turborepo's remote caching for even faster builds in CI/CD pipelines

Performance Tuning

  • Dependency Optimization: Use `turbo prune` to minimize the files copied into your Docker image
  • Remote Cache Implementation: Enable Turborepo's remote caching in your Docker builds for faster CI/CD pipelines
  • Layer Strategy Optimization: Structure your Dockerfile to maximize cache hits for dependency installation

Troubleshooting

When deploying your Next.js Turborepo application with Docker, you might encounter these common challenges:

Build and Configuration Issues

  • Build Process Failures: Make sure all required dependencies are listed in your package.json
  • Environment Variable Configuration: Use environment validation to catch missing variables early
  • Port Configuration Issues: If port 3000 is in use, map to a different port using -p 3001:3000

Advanced Troubleshooting

  • Monorepo Dependencies: Ensure your workspace dependencies are correctly configured
  • Build Context Optimization: Check your .dockerignore file if builds are slow or the image is too large
  • Cache Configuration: Ensure your TURBO_TEAM and TURBO_TOKEN are correctly configured
  • Dependency Resolution: Verify your package.json dependencies are correctly specified in your workspace
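When something still doesn't add up, inspecting the container directly is usually the fastest route. A few generally useful commands (container IDs are placeholders):

# Follow the container's logs
docker logs -f <container-id>
# Open a shell inside the running container (Alpine images ship sh, not bash)
docker exec -it <container-id> sh
# List containers, including ones that have exited
docker ps -a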

Conclusion: Mastering Next.js Turborepo Docker Deployment

Containerizing your Next.js Turborepo application with Docker provides a robust, secure, and scalable deployment solution that's perfect for modern web applications. The multi-stage Dockerfile we've created optimizes for both build time and production performance while maintaining security best practices.

Next Steps

You can now deploy your containerized application to any major cloud platform that supports Docker containers, such as AWS, Google Cloud, Azure, or Fly.io.

Remember to follow security best practices, keep your dependencies updated, and monitor your container's performance in production. For more deployment guides and best practices, check out our TurboStarter documentation.

Ready to start your Next.js project? Try TurboStarter today and get a production-ready foundation for your next web application!
