Docker Compose – multi-container applications

TL;DR: Docker Compose lets you define and run multi-container applications with YAML files. A single docker-compose.yml file can orchestrate the whole application: web server, database, cache, and other services, with proper networking and volume management.

Why does Docker Compose change the way applications are deployed?

Starting an application made up of several containers by hand is like assembling furniture without the manual – possible, but frustrating and error-prone. Docker Compose is the manual for multi-container applications. A single YAML file defines the whole architecture, the networking, and the dependencies between services.

Docker Compose was originally created as Fig, which Docker acquired in 2014. Version 1.10+ (2017) supports the Compose file format 3.x with better Swarm integration.

What you will learn:

  • Docker Compose basics and docker-compose.yml syntax
  • Multi-container applications: web + database + cache
  • Networking between containers
  • Volume management and data persistence
  • Environment variables and secrets

Prerequisites: Docker basics (containers, images), familiarity with YAML syntax, the concept of multi-tier applications, basic networking concepts.

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure the application's services, networks, and volumes. A single command can spin up the whole application.

A service in Docker Compose is a logical grouping of containers that runs a specific component of the application (e.g. a web server, a database, a cache).
Docker Compose is like an orchestra conductor – it coordinates multiple musicians (containers) so that they play in harmony. Every musician has their own part, but everything is synchronized by a single conductor.

Docker commands vs Docker Compose

Task            | Docker Commands                    | Docker Compose
Start services  | docker run (multiple times)        | docker-compose up
Stop services   | docker stop (multiple containers)  | docker-compose down
View logs       | docker logs (per container)        | docker-compose logs
Scale services  | manual container management        | docker-compose scale
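
For comparison, here is a rough sketch of what the same two-service stack looks like when managed by hand with plain docker commands (the image names and environment values are illustrative):

# Manual approach: create the network and start each container yourself
docker network create app-network
docker run -d --name db --network app-network \
  -e POSTGRES_PASSWORD=secret postgres:9.6
docker run -d --name web --network app-network \
  -p 8080:8080 -e DATABASE_URL=jdbc:postgresql://db:5432/myapp myapp:latest

# Compose approach: one file, one command
docker-compose up -d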

Your first docker-compose.yml – web + database

Simple web application stack

# docker-compose.yml
version: '3'

services:
  # Web application service
  web:
    build: .                    # Build from the Dockerfile in the current directory
    ports:
      - "8080:8080"            # Host:Container port mapping
    environment:
      - DATABASE_URL=jdbc:postgresql://db:5432/myapp
      - DATABASE_USER=postgres
      - DATABASE_PASSWORD=secret
    depends_on:
      - db                      # Start db before web (waits for container start, not DB readiness)
    volumes:
      - ./logs:/app/logs        # Mount logs directory
    networks:
      - app-network

  # PostgreSQL database service  
  db:
    image: postgres:9.6         # Use official PostgreSQL image
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres_data:/var/lib/postgresql/data  # Persistent storage
    ports:
      - "5432:5432"            # Optional: expose for debugging
    networks:
      - app-network

# Named volumes for data persistence
volumes:
  postgres_data:

# Custom network for service communication
networks:
  app-network:
    driver: bridge

Corresponding Dockerfile for the web service

# Dockerfile
FROM openjdk:8-jre-alpine

# Copy application JAR
COPY target/myapp.jar /app/myapp.jar

# Create logs directory
RUN mkdir -p /app/logs

# Set working directory
WORKDIR /app

# Expose port
EXPOSE 8080

# Start application
CMD ["java", "-jar", "myapp.jar"]

Pro tip: Use service names (like "db") as hostnames in connection strings. Docker Compose automatically creates DNS entries for service discovery.
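
To see this service discovery in action, resolve the db service name from inside the web container. A quick check (assuming the BusyBox tools that ship with Alpine-based images):

# Resolve and reach the database by its service name
docker-compose exec web nslookup db
docker-compose exec web ping -c 1 db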

Complex application stack

A full-stack application with a cache and a load balancer

# docker-compose.yml - Production-like setup
version: '3.2'

services:
  # Load balancer
  nginx:
    image: nginx:1.12-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web1
      - web2
    networks:
      - frontend

  # Web application instances (for load balancing)
  web1:
    build: .
    environment:
      - DATABASE_URL=jdbc:postgresql://db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - INSTANCE_NAME=web1
    depends_on:
      - db
      - redis
    networks:
      - frontend
      - backend
    volumes:
      - app_logs:/app/logs

  web2:
    build: .
    environment:
      - DATABASE_URL=jdbc:postgresql://db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - INSTANCE_NAME=web2
    depends_on:
      - db
      - redis
    networks:
      - frontend
      - backend
    volumes:
      - app_logs:/app/logs

  # Redis cache
  redis:
    image: redis:3.2-alpine
    command: redis-server --appendonly yes  # Enable persistence
    volumes:
      - redis_data:/data
    networks:
      - backend

  # PostgreSQL database
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend

  # Database backup service
  db_backup:
    image: postgres:9.6
    environment:
      - PGPASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - ./backups:/backups
      - backup_script:/scripts
    networks:
      - backend
    entrypoint: |
      sh -c '
        while true; do
          pg_dump -h db -U postgres myapp > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql
          sleep 3600  # Backup every hour
        done
      '
    depends_on:
      - db

# Secrets management
secrets:
  db_password:
    file: ./secrets/db_password.txt

# Named volumes
volumes:
  postgres_data:
  redis_data:
  app_logs:
  backup_script:

# Networks for service isolation
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

Nginx load balancer configuration

# nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream webapp {
        server web1:8080;
        server web2:8080;
    }

    server {
        listen 80;
        
        location / {
            proxy_pass http://webapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        
        # Health check endpoint
        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }
    }
}

Pitfall: Services on different networks cannot communicate directly. The web services must be on both networks (frontend and backend) to receive traffic from the load balancer and reach the database.
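
You can verify that isolation yourself: nginx shares the frontend network with web1, but has no access to db, which lives only on backend. A quick check (again assuming BusyBox ping in the Alpine images):

# nginx and web1 are both on "frontend" – this works
docker-compose exec nginx ping -c 1 web1

# nginx is not on "backend" – the db hostname does not even resolve
docker-compose exec nginx ping -c 1 db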

Docker Compose commands

Basic lifecycle commands

# Start all services (build if needed)
docker-compose up

# Start in detached mode (background)
docker-compose up -d

# Build and start services
docker-compose up --build

# Start only specific services
docker-compose up web db

# Stop all services
docker-compose down

# Stop and remove volumes (⚠️ data loss)
docker-compose down -v

# Stop services without removing containers
docker-compose stop

# Restart services
docker-compose restart
docker-compose restart web  # Restart only the web service

Development workflow commands

# View logs from all services
docker-compose logs

# Follow logs in real time
docker-compose logs -f

# Logs from specific service
docker-compose logs web

# Scale services horizontally
docker-compose scale web=3           # newer versions: docker-compose up --scale web=3

# Execute a command in a running container
docker-compose exec web bash
docker-compose exec db psql -U postgres myapp

# Run one-off command
docker-compose run web python manage.py migrate

# View running services
docker-compose ps

# View service configuration
docker-compose config

Environment management

Environment files

# .env file - default environment variables
POSTGRES_VERSION=9.6
REDIS_VERSION=3.2
APP_PORT=8080
DEBUG=false

# Database configuration
DB_NAME=myapp
DB_USER=postgres
DB_PASSWORD=secret123

# Redis configuration  
REDIS_PORT=6379

# docker-compose.yml using environment variables
version: '3'

services:
  web:
    build: .
    ports:
      - "${APP_PORT}:8080"
    environment:
      - DATABASE_URL=jdbc:postgresql://db:5432/${DB_NAME}
      - DATABASE_USER=${DB_USER}
      - DATABASE_PASSWORD=${DB_PASSWORD}
      - DEBUG=${DEBUG}

  db:
    image: postgres:${POSTGRES_VERSION}
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}

  redis:
    image: redis:${REDIS_VERSION}
    ports:
      - "${REDIS_PORT}:6379"

Multiple environment files

# Development environment
docker-compose --env-file .env.dev up

# Production environment  
docker-compose --env-file .env.prod up

# Override specific services
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
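
An override file typically holds development-only tweaks that Compose merges on top of the base file (docker-compose.override.yml is even picked up automatically by docker-compose up). A minimal sketch, with illustrative values:

# docker-compose.override.yml - development overrides
version: '3'

services:
  web:
    environment:
      - DEBUG=true              # dev-only flag
    volumes:
      - ./src:/app/src          # live-mount sources for quick iteration

  db:
    ports:
      - "5432:5432"             # expose the database for local tooling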

Volume management and data persistence

Different types of volumes

services:
  web:
    image: myapp
    volumes:
      # Bind mount - host directory mounted into the container
      - ./app:/usr/src/app
      
      # Named volume - managed by Docker
      - app_data:/data
      
      # Anonymous volume - temporary storage
      - /tmp
      
      # Read-only mount
      - ./config:/etc/config:ro
      
      # Specific file mount
      - ./app.properties:/app/config/app.properties

volumes:
  app_data:
    driver: local
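
Named volumes created by Compose are regular Docker volumes, prefixed with the project name (by default the name of the directory containing the compose file), so the standard commands work on them. The myproject_ prefix below is illustrative:

# List volumes - Compose prefixes them with the project name
docker volume ls

# Show where the volume data actually lives on the host
docker volume inspect myproject_app_data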

Database backup strategy

# Backup database
docker-compose exec db pg_dump -U postgres myapp > backup.sql

# Restore database
docker-compose exec -T db psql -U postgres myapp < backup.sql

# Backup volumes (note: Compose prefixes the volume name with the project name, e.g. myproject_postgres_data)
docker run --rm -v postgres_data:/data -v $(pwd):/backup alpine \
  tar czf /backup/postgres_backup.tar.gz -C /data .

# Restore volumes
docker run --rm -v postgres_data:/data -v $(pwd):/backup alpine \
  tar xzf /backup/postgres_backup.tar.gz -C /data

What is the difference between docker-compose up and docker-compose start?

docker-compose up creates and starts containers, networks, and volumes. docker-compose start only starts existing stopped containers. Use 'up' for the initial deployment and 'start' for restarts.

Can I use Docker Compose in production?

Yes, but for single-host deployments. For multi-host clusters use Docker Swarm, Kubernetes, or another orchestrator. Compose works great for development and small production environments.
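
For single-host production use it is worth at least pinning the restart behaviour and capping log growth. A possible hardening sketch for one service (the values are illustrative):

services:
  web:
    build: .
    restart: unless-stopped      # come back up after crashes and host reboots
    logging:
      driver: json-file
      options:
        max-size: "10m"          # rotate container logs
        max-file: "3"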

How do I debug networking problems in Compose?

Use docker-compose exec [service] nslookup [other-service] to check DNS resolution. docker network ls lists the networks. docker-compose logs helps identify connection issues.
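
Put together, a typical debugging session might look like this (the network names depend on your project name, so myproject_ is illustrative):

# Check DNS resolution from inside the web container
docker-compose exec web nslookup db

# List networks and inspect the one the project created
docker network ls
docker network inspect myproject_backend

# Look for connection errors in the service logs
docker-compose logs db
docker-compose logs web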

What happens to my data when I run docker-compose down?

docker-compose down removes containers and networks, but named volumes persist. Use docker-compose down -v to remove the named and anonymous volumes as well (⚠️ data loss). Always back up important data before running down -v.

How do I check which containers are running?

docker-compose ps shows the status of the services in the current project. docker ps shows ALL running containers. docker-compose top shows the processes in each container.

🚀 A task for you

Build a complete e-commerce application stack:

  1. Frontend: Nginx serving static files
  2. API: a Spring Boot application
  3. Database: PostgreSQL with initial data
  4. Cache: Redis for session storage
  5. Monitoring: a container with health checks
  6. Volumes: persistent storage for the database and logs

Include environment variables, proper networking, and a backup strategy. Test the complete user flow from the frontend down to the database.

Do you use Docker Compose in your projects? What do you see as its biggest advantages compared to managing containers by hand?