Containers demystified — what Docker is, why it matters, and how to get it running on your Linux machine.
Docker is a platform that lets you package an application and everything it needs to run — code, runtime, libraries, config — into a single portable unit called a container.
Think of it like a shipping container: the same box travels by truck, ship, and train without anyone caring what's inside. Docker containers work the same way — they run identically on your laptop, a colleague's machine, or a cloud server.
Containers vs Virtual Machines: A VM emulates an entire computer, including its own OS kernel — heavy and slow to start. A container shares the host OS kernel and only packages what's different, making it much lighter and faster to spin up.
"Works on my machine" becomes a thing of the past. Everyone runs the same environment.
Projects can't interfere with each other. PHP 7 and PHP 8 happily coexist.
Containers start in seconds, not minutes. Spin up and tear down with ease.
No more installing half of the internet onto your laptop. Delete a container and it's gone.
You'll encounter these terms constantly. Here's what they actually mean:
**Image:** A read-only template used to create containers. Think of it as a recipe or a snapshot. You pull images from Docker Hub (Docker's public registry) or build your own with a Dockerfile.
**Container:** A running instance of an image. You can run many containers from the same image simultaneously — each is isolated from the others. Remove a container and its changes are lost unless you've used a volume.
**Dockerfile:** A plain-text script that defines how to build your own image. Each line is an instruction — copy these files, run this command, expose this port.
**Volume:** Persistent storage that survives container restarts and deletions. Mount a volume to a container path and data written there is safe even if the container is destroyed.
**Docker Compose:** A tool for defining and running multi-container apps using a single docker-compose.yml file. One command brings up your entire stack — web server, database, cache — all wired together.
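To tie these terms together, here's a minimal sketch: build a tiny image from a Dockerfile, run two containers from it, and attach a named volume. The image tag myapp, the container names, and the ports are illustrative choices, not from any particular project:

```shell
# Dockerfile: the recipe for our own image (written here via a heredoc)
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EXPOSE 80
EOF
echo '<h1>Hello from a container</h1>' > index.html

# Image: build it from the Dockerfile and tag it "myapp"
docker build -t myapp .

# Containers: two isolated instances of the same image, on different ports
docker run -d --name web1 -p 8081:80 myapp
docker run -d --name web2 -p 8082:80 myapp

# Volume: named storage that outlives any single container
docker volume create mydata
docker run -d --name web3 -v mydata:/usr/share/nginx/html myapp
```

Removing web1, web2, or web3 discards their writable layers, but anything written into the mydata volume survives.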
Tools like DDEV sit on top of Docker and Docker Compose, providing a friendly developer interface. When DDEV starts your project, it's launching a set of pre-configured containers behind the scenes.
The recommended approach is to install Docker CE (Community Edition) directly from Docker's own repository — this gives you the latest stable version and proper update support.
Avoid installing Docker via your distro's package manager (sudo apt install docker.io). This often installs an outdated version. Use Docker's official repository instead.
```shell
# Remove any old Docker packages first
$ sudo apt remove docker docker-engine docker.io containerd runc

# Install prerequisites
$ sudo apt update
$ sudo apt install -y ca-certificates curl gnupg lsb-release
```
```shell
# Add Docker's official GPG key
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
    | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository
$ echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
    https://download.docker.com/linux/ubuntu \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
```shell
$ sudo apt update
$ sudo apt install -y docker-ce docker-ce-cli containerd.io \
    docker-buildx-plugin docker-compose-plugin

# Verify it worked
$ sudo docker run hello-world
→ Hello from Docker!
  This message shows your installation appears to be working correctly.
```
By default, Docker requires sudo for every command. To fix that, add your user to the docker group:
```shell
# Add your user to the docker group
$ sudo usermod -aG docker $USER

# Apply the change in your current session (or log out/in)
$ newgrp docker

# Test — should work without sudo now
$ docker run hello-world
```
The newgrp docker trick applies the group to your current terminal session only. For a permanent fix, log out and log back in completely.
```shell
# Start Docker when the system boots
$ sudo systemctl enable docker

# Start it manually right now if needed
$ sudo systemctl start docker

# Check its status
$ sudo systemctl status docker
```
| Command | What it does |
|---|---|
| docker pull <image> | Download an image from Docker Hub |
| docker images | List all locally stored images |
| docker rmi <image> | Remove a local image |
| docker build -t myapp . | Build an image from the current directory's Dockerfile |
| Command | What it does |
|---|---|
| docker run <image> | Create and start a container from an image |
| docker run -d -p 8080:80 <image> | Run detached, map port 8080 on host to 80 in container |
| docker ps | List running containers |
| docker ps -a | List all containers including stopped ones |
| docker stop <id> | Gracefully stop a running container |
| docker rm <id> | Delete a stopped container |
| docker exec -it <id> bash | Open an interactive shell inside a running container |
| docker logs <id> | View the output/logs of a container |
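Putting the container commands above together, a typical first session looks like this. The container name web and host port 8080 are arbitrary choices for illustration:

```shell
docker pull nginx                          # download the image from Docker Hub
docker run -d -p 8080:80 --name web nginx  # run detached, host 8080 mapped to 80
docker ps                                  # confirm the container is running
curl http://localhost:8080                 # fetch the nginx welcome page
docker logs web                            # see the access-log line curl just produced
docker exec -it web bash                   # open a shell inside the container
docker stop web                            # gracefully stop it
docker rm web                              # delete the stopped container
```

Note that docker rm only works on stopped containers; stop first, then remove.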
| Command | What it does |
|---|---|
| docker system df | Show disk usage by Docker |
| docker system prune | Remove stopped containers, dangling images, unused networks & build cache |
| docker volume ls | List all volumes |
| docker volume prune | Delete all unused volumes |
| docker info | Display system-wide Docker information |
| docker version | Show Docker client and daemon version |
Docker Compose lets you define a multi-container stack in a single docker-compose.yml file and manage it with simple commands.
```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: myapp
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```
| Command | What it does |
|---|---|
| docker compose up | Start the stack (in foreground) |
| docker compose up -d | Start the stack detached (background) |
| docker compose down | Stop and remove the stack's containers |
| docker compose down -v | Same, but also delete volumes |
| docker compose logs -f | Follow live logs from all services |
| docker compose ps | List the stack's containers and their status |
| docker compose exec db bash | Shell into a specific service container |
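Assuming the docker-compose.yml shown earlier sits in the current directory, a typical session with this stack looks like the following sketch (the service names web and db come from that file):

```shell
docker compose up -d         # pull images and start both services in the background
docker compose ps            # web and db should both show as running
docker compose logs -f web   # follow the web server's logs (Ctrl-C to stop following)
docker compose exec db mysql -u root -p myapp   # open a MySQL shell in the db service
docker compose down          # stop and remove the containers; the db_data volume survives
```

Because db_data is a named volume, your database contents persist across down/up cycles; only docker compose down -v would delete them.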
```shell
# Error: permission denied while trying to connect to the Docker daemon socket
# Fix: apply group membership to current session
$ newgrp docker

# Permanent fix: log out and log back in
```
```shell
# Cannot connect to the Docker daemon — is it running?
$ sudo systemctl start docker
$ sudo systemctl status docker
```
```shell
# Find what's using a port (e.g. 8080)
$ sudo lsof -i :8080

# Or use a different host port in your docker run / compose file
$ docker run -p 9090:80 nginx
```
```shell
# See how much space Docker is using
$ docker system df

# Nuclear option: remove everything unused
$ docker system prune -a
```
docker system prune -a will remove ALL unused images, not just dangling ones. Only run this if you're happy to re-pull or rebuild images you might want later.
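If the all-or-nothing prune feels too aggressive, Docker also offers per-resource prune commands that are safer to run routinely:

```shell
docker image prune       # dangling (untagged) images only — tagged images are kept
docker container prune   # stopped containers only
docker network prune     # unused networks only
docker builder prune     # build cache only
```

Each one asks for confirmation before deleting anything, so they're a reasonable first step before reaching for docker system prune -a.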