Docker Mastery: Complete Installation, Setup, and Configuration Guide with Hands-On Labs

Introduction

Docker has revolutionized how we develop, ship, and run applications by providing lightweight, portable containers that can run consistently across different environments. This comprehensive guide will walk you through downloading, installing, setting up, and configuring Docker, complete with practical labs to reinforce your learning.

Understanding Docker: The Foundation

Before diving into installation, let’s understand what Docker is and why it’s essential in modern development workflows.

Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers. Unlike traditional virtual machines that virtualize entire operating systems, Docker containers share the host OS kernel, making them more efficient and faster to start.
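The kernel-sharing point is directly observable on a Linux host: a container reports the same kernel as the host, because there is no guest OS underneath it. A quick sketch (note that on Docker Desktop the kernel shown is that of the bundled Linux VM, not macOS or Windows):

```shell
# Host kernel version
uname -r

# Kernel version seen inside a container: identical on a Linux host,
# because the container shares the host kernel rather than booting its own
docker run --rm alpine uname -r
```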

Key Benefits:

  • Consistency: “Works on my machine” becomes “works everywhere”
  • Isolation: Applications run in isolated environments
  • Scalability: Easy to scale applications up or down
  • Resource Efficiency: Lower overhead compared to VMs
  • DevOps Integration: Streamlines CI/CD pipelines

System Requirements and Prerequisites

Before installation, ensure your system meets Docker’s requirements:

For Windows:

  • Windows 10 64-bit: Pro, Enterprise, or Education (Build 15063 or later)
  • Windows 11 64-bit
  • WSL 2 feature enabled
  • Virtualization enabled in BIOS
  • At least 4GB RAM

For macOS:

  • macOS 10.15 or newer
  • At least 4GB RAM
  • VirtualBox prior to version 4.3.30 must be uninstalled

For Linux:

  • 64-bit kernel and CPU support
  • Kernel version 3.10 or higher
  • iptables version 1.4 or higher
  • git version 1.7 or higher

Installation Guide by Operating System

Installing Docker on Windows

Step 1: Download Docker Desktop

  1. Visit the official Docker website: https://www.docker.com/products/docker-desktop
  2. Click “Download for Windows”
  3. Save the installer file

Step 2: Install Docker Desktop

  1. Run the downloaded installer as administrator
  2. Follow the installation wizard
  3. Select the WSL 2 backend when prompted (newer installers label this “Use WSL 2 instead of Hyper-V”; on older builds, check “Enable Hyper-V Windows Features” instead)
  4. Complete the installation and restart your computer

Step 3: Post-Installation Setup

  1. Launch Docker Desktop from the Start menu
  2. Sign in with your Docker Hub account (or create one)
  3. Configure resource limits in Settings > Resources

Installing Docker on macOS

Step 1: Download Docker Desktop

  1. Visit https://www.docker.com/products/docker-desktop
  2. Click “Download for Mac”
  3. Choose the appropriate version (Intel or Apple Silicon)

Step 2: Install Docker Desktop

  1. Open the downloaded .dmg file
  2. Drag Docker.app to Applications folder
  3. Launch Docker from Applications
  4. Authorize with your system password when prompted

Step 3: Verify Installation

Open Terminal and run:

docker --version
docker compose version

Installing Docker on Linux (Ubuntu/Debian)

Step 1: Update Package Index

sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release

Step 2: Add Docker’s Official GPG Key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Step 3: Add Docker Repository

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Step 4: Install Docker Engine

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Step 5: Add User to Docker Group

sudo usermod -aG docker $USER
# newgrp applies the new group to the current shell only;
# log out and back in to apply it to all sessions
newgrp docker

Initial Configuration and Setup

Configuring Docker Daemon

The Docker daemon can be configured through the daemon.json file:

Linux Location: /etc/docker/daemon.json
Windows Server Location: C:\ProgramData\docker\config\daemon.json
Docker Desktop (Windows/macOS): edit the same JSON under Settings > Docker Engine

Sample Configuration:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-address-pools": [
    {
      "base": "172.17.0.0/12",
      "size": 24
    }
  ]
}
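On Linux, the daemon must be restarted before changes to daemon.json take effect. A quick sketch, assuming a systemd-based distribution:

```shell
# Validate the JSON first: a malformed daemon.json prevents the daemon from starting
python3 -m json.tool /etc/docker/daemon.json

# Restart the daemon to apply the new configuration
sudo systemctl restart docker

# Confirm the logging driver from the sample configuration was picked up
docker info --format '{{.LoggingDriver}}'
```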

Setting Up Docker Hub Authentication

docker login
# Enter your Docker Hub username and password
# (Docker Hub recommends a personal access token in place of your password)

Configuring Resource Limits (Docker Desktop)

  1. Open Docker Desktop Settings
  2. Navigate to Resources
  3. Adjust CPU, Memory, and Disk limits based on your system
  4. Apply and restart Docker
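After Docker restarts, the limits actually in effect can be read back from the CLI rather than the GUI. A small sketch using docker info's Go-template output:

```shell
# CPUs and total memory visible to Docker (on Docker Desktop,
# these reflect the VM limits set under Settings > Resources)
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'
```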

Lab 1: Verify Installation and Basic Commands

Let’s start with fundamental Docker commands to verify your installation:

# Check Docker version
docker --version

# Display system-wide information
docker system info

# Show Docker disk usage
docker system df

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# List Docker images
docker images

# Pull a test image
docker pull hello-world

# Run your first container
docker run hello-world

Expected Output: The hello-world container should display a welcome message confirming Docker is working correctly.

Lab 2: Working with Images and Containers

Downloading and Running Images

# Pull Ubuntu image
docker pull ubuntu:20.04

# Run Ubuntu container interactively
docker run -it ubuntu:20.04 /bin/bash

# Inside the container, try these commands:
apt update
apt install curl -y
curl --version
exit

Container Management Commands

# Run container in detached mode
docker run -d --name my-nginx nginx:latest

# View running containers
docker ps

# Stop a container
docker stop my-nginx

# Start a stopped container
docker start my-nginx

# Remove a container
docker rm my-nginx

# Remove an image (fails if a container still references it;
# remove stopped containers first with docker rm, or force with docker rmi -f)
docker rmi ubuntu:20.04

Lab 3: Building Your First Docker Image

Create a simple web application and containerize it:

Step 1: Create Application Files

app.py:

from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    return f"Hello from Docker! Hostname: {os.environ.get('HOSTNAME', 'unknown')}"

@app.route('/health')
def health():
    return {"status": "healthy"}

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

requirements.txt:

Flask==2.3.3

Dockerfile:

# Use official Python runtime as base image
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Copy requirements file
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app.py .

# Expose port
EXPOSE 5000

# Define environment variable
ENV FLASK_APP=app.py

# Run the application
CMD ["python", "app.py"]

Step 2: Build and Run the Image

# Build the image
docker build -t my-python-app .

# Run the container
docker run -d -p 5000:5000 --name flask-app my-python-app

# Test the application
curl http://localhost:5000
curl http://localhost:5000/health

# View container logs
docker logs flask-app

# Stop and remove
docker stop flask-app
docker rm flask-app

Lab 4: Docker Compose Multi-Container Application

Create a multi-service application with Docker Compose:

docker-compose.yml:

version: '3.8'  # informational only; Compose V2 ignores this key

services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:alpine
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge

Commands (the docker-compose-plugin installed earlier provides these as the “docker compose” subcommand; the older standalone binary is invoked as “docker-compose”):

# Start all services
docker compose up -d

# View running services
docker compose ps

# View logs
docker compose logs web

# Scale a service (note: remove the fixed "5000:5000" host port mapping
# first, or the extra replicas will fail to bind the same port)
docker compose up -d --scale web=3

# Stop all services
docker compose down

# Stop all services and remove volumes
docker compose down -v

Advanced Configuration

Docker Networks

# Create custom network
docker network create my-network

# Run containers on custom network
docker run -d --name web1 --network my-network nginx
docker run -d --name web2 --network my-network nginx

# Inspect network
docker network inspect my-network

# List networks
docker network ls

Docker Volumes

# Create named volume
docker volume create my-data

# Use volume with container
docker run -d -v my-data:/data --name data-container alpine

# List volumes
docker volume ls

# Inspect volume
docker volume inspect my-data

Environment Variables and Secrets

# Pass environment variables
docker run -e MY_VAR=value -e ANOTHER_VAR=value2 my-app

# Use environment file
echo "MY_VAR=value" > .env
echo "ANOTHER_VAR=value2" >> .env
docker run --env-file .env my-app

Lab 5: Docker Registry and Image Management

Tagging and Pushing Images

# Tag image for registry (replace "username" with your Docker Hub username)
docker tag my-python-app username/my-python-app:v1.0

# Push to Docker Hub
docker push username/my-python-app:v1.0

# Pull from registry
docker pull username/my-python-app:v1.0

Setting Up Private Registry

# Run local registry
docker run -d -p 5000:5000 --name registry registry:2

# Tag image for local registry
docker tag my-python-app localhost:5000/my-python-app

# Push to local registry
docker push localhost:5000/my-python-app

# Pull from local registry
docker pull localhost:5000/my-python-app
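The registry exposes a small HTTP API that can confirm a push actually landed. A sketch against the local registry started above (note it reuses host port 5000, so stop the Lab 3 container first if it is still running):

```shell
# List repositories known to the local registry
curl http://localhost:5000/v2/_catalog

# List tags for a specific repository
curl http://localhost:5000/v2/my-python-app/tags/list
```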

Performance Optimization and Best Practices

Dockerfile Best Practices

  1. Use Multi-stage Builds:
# Build stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Production stage
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
  2. Minimize Layers:
# Instead of multiple RUN commands
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git

# Combine into single RUN
RUN apt-get update && \
    apt-get install -y curl git && \
    rm -rf /var/lib/apt/lists/*
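The effect of combining RUN commands is directly observable: each filesystem-changing instruction adds a layer, and docker history lists them with their sizes. A quick check against the image built in Lab 3 (assumes it still exists locally):

```shell
# One row per layer, newest first, with the instruction that created it
docker history my-python-app

# Compact view: just each layer's size and its (truncated) instruction
docker history --format '{{.Size}}\t{{.CreatedBy}}' my-python-app
```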

Resource Management

# Limit container resources
docker run -m 512m --cpus="1.5" my-app

# Monitor resource usage
docker stats

# Clean up unused resources (-a also removes all images not
# referenced by a container, not just dangling ones, so use with care)
docker system prune -a

Troubleshooting Common Issues

Container Won’t Start

# Check container logs
docker logs container-name

# Inspect container configuration
docker inspect container-name

# Check exit codes
docker ps -a

Port Conflicts

# Check which process is using a port (Linux; on macOS use: lsof -i :5000)
netstat -tulpn | grep :5000

# Use different host port
docker run -p 5001:5000 my-app

Permission Issues (Linux)

# Add user to docker group
sudo usermod -aG docker $USER

# Fix socket permissions (quick local workaround only: mode 666 lets
# any local user control the daemon; prefer the docker group above)
sudo chmod 666 /var/run/docker.sock

Security Considerations

Running as Non-Root User

FROM ubuntu:20.04

# Create non-root user
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

# Set ownership
COPY --chown=appuser:appgroup . /app

# Switch to non-root user
USER appuser

CMD ["./app"]

Scanning Images for Vulnerabilities

# Scan image with Docker Scout (if available)
docker scout cves my-image:latest

# Use third-party scanners (e.g. Trivy's "image" subcommand)
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/.cache:/root/.cache/ \
  aquasec/trivy image my-image:latest

Production Deployment Considerations

Health Checks

# Note: curl must be installed in the image for this probe to succeed
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:5000/health || exit 1
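Once a HEALTHCHECK is defined, Docker records the probe results in the container state, where they can be queried. A sketch, assuming the Lab 3 container is running under the name flask-app with the health check built in:

```shell
# Prints one of: starting, healthy, unhealthy
docker inspect --format '{{.State.Health.Status}}' flask-app

# Full probe log, including each check's exit code and output
docker inspect --format '{{json .State.Health}}' flask-app
```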

Logging Configuration

# Configure log driver
docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 my-app

Backup and Recovery

# Backup volume
docker run --rm -v my-data:/data -v $(pwd):/backup alpine \
  tar czf /backup/backup.tar.gz -C /data .

# Restore volume
docker run --rm -v my-data:/data -v $(pwd):/backup alpine \
  tar xzf /backup/backup.tar.gz -C /data
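Before relying on a backup, it is worth confirming the archive actually contains the volume's files. A small sketch using the backup.tar.gz produced above:

```shell
# List the archive contents without extracting
tar tzf backup.tar.gz

# Compare the archive size against the volume's reported usage
ls -lh backup.tar.gz
docker system df -v | grep my-data
```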

Conclusion

Docker is a powerful tool that transforms how we build, ship, and run applications. Through this guide, you’ve learned to install Docker across different platforms, understand its core concepts, and work with containers, images, and compose files through hands-on labs.

The key to mastering Docker is practice. Continue experimenting with different configurations, explore advanced features like Docker Swarm for orchestration, and integrate Docker into your development workflow.

Remember these essential practices:

  • Keep images small and secure
  • Use multi-stage builds for production
  • Implement proper logging and monitoring
  • Regular cleanup of unused resources
  • Follow security best practices

Docker’s ecosystem continues to evolve, so stay updated with the latest features and best practices. The containerization skills you’ve developed will serve as a foundation for modern DevOps practices and cloud-native development.
