
Containers are lightweight, portable units that package an application with its dependencies, libraries, and configuration files, allowing it to run consistently across different environments. Unlike virtual machines, which ship an entire guest operating system, containers share the host OS kernel, making them far more efficient in resource usage and startup time. Containers isolate applications at the process level, which keeps them portable and easy to scale, but it also means they do not provide the full OS-level isolation of a VM.
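One consequence of sharing the kernel is easy to see from inside a container; this quick check assumes a Linux host (on Docker Desktop it reports the kernel of the bundled Linux VM):
# The container reports the host's kernel version because there is no guest kernel
docker run --rm alpine uname -r
# Compare with the host
uname -r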
Key characteristics of containers include a small resource footprint, fast startup, process-level isolation, an image-based packaging format, and consistent behavior across environments.
Containers are the foundation of modern application development, enabling microservices architectures and cloud-native solutions.
Docker is an open-source platform that simplifies the creation, deployment, and management of containers. It provides tools to build, share, and run containerized applications efficiently. Docker uses a client-server architecture, in which the Docker client communicates with the Docker daemon, which handles container operations. The core components are the Docker client (the docker CLI), the Docker daemon (dockerd), images, containers, and registries.
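The client-server split is easy to observe on any installation: docker version reports the client and the daemon separately, and docker info queries the daemon for system-wide state.
# Show version information for the Docker client and the daemon it talks to
docker version
# Show daemon-side details such as the storage driver and the number of containers
docker info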
Docker simplifies workflows by standardizing environments, reducing the “it works on my machine” problem, and enabling seamless deployment across development, testing, and production.
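The standardization comes from describing the environment in a Dockerfile. The sketch below is for a hypothetical Node.js service (the file names, port, and start command are placeholders); building it with docker build -t my-app . yields an image that behaves the same on any Docker host.
# Dockerfile for a hypothetical Node.js service
FROM node:20-slim
WORKDIR /app
# Copy the dependency manifest first so the install step is cached between builds
COPY package*.json ./
RUN npm ci
# Copy the application source
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]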
Docker Hub is a cloud-based registry service for sharing and managing Docker images. It serves as a centralized repository where developers can store, distribute, and collaborate on container images. Key features include public and private repositories, official images maintained by Docker and upstream projects, automated builds, and organization and team access controls.
Docker Hub is integral for discovering pre-built images and sharing custom ones, streamlining development workflows. Note: Docker Hub enforces rate limits for image pulls, especially for unauthenticated users.
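A typical publish-and-consume workflow looks like the following; your-username/my-app is a placeholder repository, and authenticating with docker login also raises the pull rate limit.
# Log in to Docker Hub (authenticated pulls get a higher rate limit)
docker login
# Tag a local image under your Docker Hub namespace
docker tag my-app your-username/my-app:1.0
# Push the tagged image to Docker Hub
docker push your-username/my-app:1.0
# Anyone with access can now pull it
docker pull your-username/my-app:1.0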
Docker runs natively on Linux, because it relies on Linux kernel features such as namespaces and control groups. On Windows and macOS, Docker Desktop provides a user-friendly interface plus a lightweight Linux virtual machine in which the containers actually run.
Docker Desktop on Windows and macOS includes a GUI for managing containers, images, and volumes, while Linux users typically rely on the command line or third-party tools.
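Whichever platform you are on, a quick smoke test confirms that the client can reach a working engine.
# Verify the installation end to end: pulls a tiny image, runs it, and removes the container
docker run --rm hello-world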
Here are common Docker commands to manage containers and images:
# Pull an image from Docker Hub
docker pull nginx:latest
# Run a container from an image
docker run -d -p 8080:80 --name my-nginx nginx:latest
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Show logs of a running container
docker logs my-nginx
# Access a running container's shell
docker exec -it my-nginx /bin/bash
# Stop a running container
docker stop my-nginx
# Remove a container
docker rm my-nginx
These commands demonstrate basic operations. Use docker --help for an overview of available commands, or docker <command> --help (for example, docker run --help) for details on a specific one.
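Two housekeeping commands are also worth knowing when images and stopped containers start to pile up.
# Show how much disk space images, containers, and volumes are using
docker system df
# Remove stopped containers, unused networks, and dangling images (prompts for confirmation)
docker system prune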
GPU passthrough allows containers to access the host’s GPU, enabling high-performance tasks like machine learning, gaming, or rendering. Docker supports GPU passthrough primarily for NVIDIA GPUs via the NVIDIA Container Toolkit.
Note: on Windows 11, the latest NVIDIA Windows GPU driver fully supports WSL 2. With CUDA support in the driver, existing applications (compiled elsewhere on a Linux system for the same target GPU) can run unmodified within the WSL environment.
Once the Windows NVIDIA GPU driver is installed on the system, CUDA becomes available within WSL 2. The CUDA driver from the Windows host is exposed inside WSL 2 as a stub libcuda.so, so users must not install a separate NVIDIA Linux GPU driver inside WSL 2.
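Before the --gpus flag works, the NVIDIA Container Toolkit has to be installed and registered with Docker, on a native Linux host or inside the WSL 2 distribution. A rough sketch for Debian/Ubuntu, assuming NVIDIA's package repository has already been added:
# Install the NVIDIA Container Toolkit (assumes NVIDIA's apt repository is configured)
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker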
# Run a container with GPU access
docker run --gpus all -it --rm nvidia/cuda:11.0-base nvidia-smi
This command runs a container with access to all available GPUs and executes nvidia-smi to verify that they are visible (nvidia/cuda:11.0-base is an older tag; substitute a currently published CUDA image tag if the pull fails). GPU passthrough is critical for compute-intensive workloads. Support for AMD and Intel GPUs is less mature and uses different mechanisms, such as AMD's ROCm stack with explicit --device flags, rather than the --gpus option.
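GPU access does not have to be all-or-nothing; specific devices or a fixed count can be requested instead.
# Expose only the first GPU to the container
docker run --gpus device=0 -it --rm nvidia/cuda:11.0-base nvidia-smi
# Expose two GPUs, whichever the runtime selects
docker run --gpus 2 -it --rm nvidia/cuda:11.0-base nvidia-smi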
Docker and containers have transformed many industries. Common use cases include packaging microservices, providing reproducible development and CI/CD environments, migrating existing applications to the cloud, and running data-processing and machine-learning workloads.
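As a small example of the reproducible-environment case, a disposable development shell needs nothing more than an image and a bind mount (python:3.12-slim and the /app path here are just illustrative choices).
# Open an interactive shell in a clean Python environment with the current directory mounted
docker run --rm -it -v "$PWD":/app -w /app python:3.12-slim bash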
By enabling consistency, scalability, and portability, Docker and containers have become essential for modern software development and deployment.