Demystifying Docker: How It Works


Docker is a containerization tool that uses operating-system-level virtualization to create isolated environments called containers. Unlike virtual machines (VMs), which emulate entire operating systems, Docker containers share the host’s kernel, making them faster and more efficient.



What We’ll Cover

  • An exploration of Docker’s core concepts, including images, containers, and networking.
  • Insights into how Docker works under the hood, such as namespaces, cgroups, and the Docker architecture.
  • Practical examples to reinforce key concepts, like running containers and inspecting images.



Why Docker Matters

Docker plays a big role in modern IT:

  • Consistency: Ensures applications run the same across all environments.
  • Efficiency: Containers are lightweight and use fewer resources compared to VMs.
  • Portability: Easily moves applications between different environments: local setups, on-premises servers, and cloud platforms.
  • Scalability: Simplifies dynamic scaling, particularly for microservices architectures.
  • Automation: Streamlines development and CI/CD workflows.



Core Concepts

IMAGES
Docker images are the blueprints for containers. An image contains everything an application needs to run – code, dependencies, and configuration. Here’s how images work:

Built from Dockerfiles

  • A Dockerfile is a script with instructions to build an image.
  • Each instruction (e.g., FROM, RUN, COPY) creates a new layer.
  • Example:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl


  • What this Dockerfile does:
    • FROM ubuntu:latest specifies the base for the Docker image. It uses the latest version of Ubuntu as the starting point for the build.
    • RUN apt-get update && apt-get install -y curl uses standard Linux commands to install the curl utility for making HTTP requests. The -y flag auto-confirms the installation to avoid interactive prompts.
    • Each instruction creates a new layer in the image; chaining commands with && keeps them in a single layer for efficiency.
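To turn a Dockerfile like the one above into an image, you run docker build from the directory containing it. A minimal sketch (the tag myimage is an arbitrary name chosen for illustration):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it "myimage" so it can be referenced later.
docker build -t myimage .

# List the layers the build produced; each Dockerfile instruction
# shows up as a layer here.
docker history myimage
```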

Layered Filesystem

  • Each layer represents a change (e.g., installing software or adding files)
  • Layers are cached and reused for faster builds and efficient storage.

Example: Pulling and inspecting an image:

docker pull ubuntu
docker inspect ubuntu

CONTAINERS
Containers are lightweight, isolated environments created from images.

How Containers Work

  • Containers are running instances of images.
  • They include a writable layer for temporary changes, separate from the image’s readonly layers.

Key Features

  • Isolation: Containers run independently and don’t interfere with other containers or the host.
  • Portable: Containers run the same across different environments.
  • Ephemeral: Changes in a container are lost when the container is removed unless they’re explicitly saved.
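To illustrate the ephemeral point, one way to preserve changes is to commit a container’s writable layer to a new image. A sketch (the demo and ubuntu-with-marker names are placeholders):

```shell
# Start a container, make a change inside it, then exit:
docker run --name demo -it ubuntu bash
#   (inside the container) touch /marker && exit

# Save the container's writable layer as a new image.
docker commit demo ubuntu-with-marker

# Containers started from the new image include the change.
docker run --rm ubuntu-with-marker ls /marker
```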

Example: Run a container from the Ubuntu image we pulled.
We can view the images we have pulled locally with docker image ls.

The command below starts a container from the ubuntu image and opens a bash session inside it. Try running commands like ls or cat /etc/os-release. Running exit ends the session.

docker run -it ubuntu bash

Image showing running Ubuntu container

REGISTRY
A Docker registry is a storage and distribution system for Docker images.
The most popular public registry is Docker Hub.

How Docker Registries Work

  • When you pull an image (e.g., docker pull ubuntu), Docker fetches it from a registry.
  • Registries host and organize images, often using repositories and tags (e.g., ubuntu:latest).
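A tag identifies a specific version within a repository; omitting it defaults to latest. A sketch of pulling a specific tag and retagging an image under a registry account (the myuser account name is a placeholder):

```shell
# Pull a specific tag instead of the default "latest".
docker pull ubuntu:22.04

# Retag the image under a Docker Hub account, then push it.
docker tag ubuntu:22.04 myuser/ubuntu:22.04
docker push myuser/ubuntu:22.04   # requires "docker login" first
```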



Docker Architecture

CLIENT-SERVER MODEL
Docker uses a client-server model to manage containers.

  • Docker Client: The client is the part of Docker responsible for interacting with the Docker Daemon. It includes the Docker CLI, which is the command-line interface used to run commands like docker run or docker build.
  • Docker Daemon: A background process (dockerd) that handles tasks like building, running and managing containers.
  • Communication happens over a REST API, enabling flexibility (e.g., remote management).
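Because the client and daemon talk over a REST API, you can bypass the CLI entirely. A sketch using curl against the daemon’s default Unix socket on Linux (the socket path may differ on other setups):

```shell
# Query the daemon's version endpoint directly over its Unix socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers as JSON -- the same data "docker ps" displays.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```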

Image showing Docker's Client-Server model

Run docker version to display client and server versions.

DOCKER ENGINE COMPONENTS
The Docker Engine consists of key components:

  • Image Builder: Constructs Docker images using instructions in a Dockerfile.
  • Container Runtime: Runs and manages containers, handling isolation and resource allocation.
  • Orchestration Layer: Coordinates multi-container setups via Docker Swarm, or integrates with external tools like Kubernetes.



How Docker Works Under the Hood

NAMESPACES

  • Isolates processes, networks, and file systems.
  • Example: Inside a container, ps aux shows only processes running inside the container, not the host system. This demonstrates how namespaces isolate the container’s process view.
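You can see the PID namespace at work by comparing process lists on the host and in a container:

```shell
# On the host, ps shows every process on the system.
ps aux

# Inside a container, the same command sees only the container's
# own processes -- typically just ps itself and PID 1.
docker run --rm ubuntu ps aux
```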

CONTROL GROUPS (cgroups)

  • Limit and allocate resources such as CPU, memory, and disk I/O for each container.
  • Example: Use docker stats to view resource usage.
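cgroups are also what enforce the per-container resource limits you can set at run time. A sketch of capping memory and CPU (the values and the container name limited are illustrative):

```shell
# Cap the container at 512 MB of RAM and half a CPU core;
# the daemon translates these flags into cgroup settings.
docker run -d --name limited --memory=512m --cpus=0.5 nginx

# Confirm the limits are being applied.
docker stats --no-stream limited
```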

UNION FILE SYSTEMS

  • Allow image layers to stack into a single, unified filesystem.
  • Changes during container runtime are saved in the top writable layer, preserving the original image.
  • Example: Run docker history <image> to see the image’s layers and how they stack.
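You can also observe the writable layer directly with docker diff, which lists files a container added (A), changed (C), or deleted (D) relative to its image:

```shell
# Create a file inside a container's writable layer.
docker run --name scratchpad ubuntu touch /tmp/new-file

# Show what changed relative to the ubuntu image;
# the new file appears with an "A" (added) marker.
docker diff scratchpad
```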



Networking in Docker

Docker provides different networking options to connect containers to each other and to external systems:

  • BRIDGE (default)

    • Creates an isolated network for containers on the same host.
    • Containers can communicate with each other via internal IPs but require port mapping to be accessed externally.
  • HOST

    • Removes network isolation and directly uses the host’s network stack.
    • Useful for performance-critical applications but sacrifices isolation.
  • OVERLAY

    • Enables communication between containers running on different hosts in a Docker Swarm or Kubernetes cluster.

Example: Run a container with a mapped port and test it in a browser:

docker run -p 8080:80 nginx

  • This maps port 8080 on your host to port 80 in the container (where Nginx serves HTTP requests).
  • Open a browser and navigate to http://localhost:8080 to verify Nginx is running. You’ll see the default Nginx welcome page.

Image showing Nginx container running on mapped port
This shows how port mapping allows external access to the containerized application running in Docker’s default bridge network.

Inspecting and Creating Networks

  • Use docker network ls to list available networks and docker network inspect <network> for detailed information.
  • Create custom networks with docker network create <network_name> to better control container communication.
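A useful property of custom networks: containers attached to the same user-defined network can reach each other by name via Docker’s built-in DNS. A sketch (the appnet and web names are placeholders):

```shell
# Create a user-defined bridge network.
docker network create appnet

# Start a container attached to it.
docker run -d --name web --network appnet nginx

# A second container on the same network can resolve "web" by name.
docker run --rm --network appnet busybox ping -c 1 web
```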



Persistent Storage

VOLUMES vs. BIND MOUNTS
Purpose: Both provide ways to store data that persists beyond the lifecycle of a container.

DIFFERENCES

  • Volumes:

    • Managed by Docker.
    • Stored in Docker’s directory (e.g., /var/lib/docker/volumes/).
    • Recommended for portability and easier management.
  • Bind Mounts:

    • Use a specific directory on the host system.
    • Offer more control but require manual management.
    • Good for sharing directories or files from the host system with the container.
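A bind-mount sketch, mapping the current host directory into a container (the /app path is an arbitrary choice for illustration):

```shell
# Mount the current host directory at /app inside the container.
# Edits made on the host are immediately visible in the container.
docker run --rm -v "$(pwd)":/app ubuntu ls /app
```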

Example: Create a volume and run a container with persistent data.

Create a volume:

docker volume create mydata

Run a container and map the volume:

docker run --name ubuntu1 -v mydata:/data -it ubuntu bash

Now from the terminal inside the container, write some data to /data:

echo "Hello!" > /data/hello.txt
exit

Start a new container with the same volume to verify persistence:

docker run --name ubuntu2 -v mydata:/data -it ubuntu bash
cat /data/hello.txt

This shows that data written to /data in the ubuntu1 container persists in the mydata volume even after that container exits. Running ubuntu2 with the same volume verifies the persistence.

Image showing data persistence with volume



Challenges and Limitations

Shared kernel issues

  • Containers on the same host share the host’s kernel.
  • A vulnerability in the kernel can compromise all containers, making security isolation less robust than with virtual machines.

Resource overhead for large deployments

  • While lightweight individually, managing large numbers of containers can strain system resources like CPU, memory and networking.
  • Scaling requires planning and may necessitate orchestration tools like Kubernetes, which also introduces complexity.



Conclusion

Docker has transformed how we build, ship, test and run applications, making containerization a cornerstone of modern IT. By exploring its core concepts and understanding how it works under the hood, you’ll develop a skill that streamlines workflows and enables smarter, more efficient solutions.



By stp2y
