Getting Started with Docker: A Beginner’s Guide to Docker Basics (Volume 2)

IOPSHub
13 min read · Jun 3, 2023


Continuing from our previous post, Docker Volume 1, this Volume 2 covers the following Docker topics:

  • Docker File Concepts
  • Docker File Best Practices
  • Docker Image Build Commands
  • Docker Multi-Stage Build
  • Docker Logs
  • Docker Security
  • Docker in a Production Environment

Dockerfile Concepts

  • FROM: Specifies the base image for the Dockerfile.
  • RUN: Executes a command at image build time, creating a new layer.
  • CMD: Specifies the default command to run when the container starts.
  • EXPOSE: Documents the port(s) on which the container listens.
  • ENV: Sets an environment variable in the container.
  • COPY: Copies files from the host (build context) into the image.
  • ADD: Like COPY, but can also fetch URLs and extract local archives.
  • WORKDIR: Sets the working directory for subsequent instructions and the running container.
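Tying these together, here is a minimal Dockerfile sketch; the application file (app.py), the port, and the Flask dependency are illustrative placeholders, not requirements:

```dockerfile
# Base image for the build (FROM)
FROM python:3.9-slim

# Working directory for subsequent instructions (WORKDIR)
WORKDIR /app

# Environment variable available in the container (ENV)
ENV PORT=8000

# Copy a file from the host build context into the image (COPY)
COPY app.py .

# Execute a command at build time (RUN); flask is an illustrative dependency
RUN pip install --no-cache-dir flask

# Document the port the container listens on (EXPOSE)
EXPOSE 8000

# Default command when the container starts (CMD)
CMD ["python", "app.py"]
```

ADD behaves like COPY but can also extract local archives; COPY is generally preferred when that extra behaviour is not needed.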

Dockerfile Best Practices

  1. Start with a Base Image: Select a suitable base image that provides the necessary runtime environment for your application. Use an official image whenever possible, as they are typically well-maintained and regularly updated.
  2. Keep Images Lightweight: Aim for minimal and optimised images by including only the required dependencies. This helps reduce image size and improves container startup time.
  3. Use Appropriate Tags: Specify a specific version or tag for the base image to ensure reproducibility and avoid unexpected changes. Avoid using the “latest” tag, as it may not provide a stable and predictable environment.
  4. Leverage Layer Caching: Docker builds images in layers, and it utilises layer caching to improve build times. Arrange your instructions in the Dockerfile from least frequently changed to most frequently changed. This allows Docker to leverage caching for intermediate layers during subsequent builds, speeding up the process.
  5. Minimise the Number of Instructions: Each instruction in a Dockerfile creates a new layer in the image, so keep the instruction count low to reduce the layer count. Combine related commands into a single RUN instruction, chaining them with && (or ;) so they run on one line and produce one layer.
  6. Use .dockerignore: Create a .dockerignore file in the same directory as your Dockerfile to exclude unnecessary files and directories from being copied into the image during the build process. This helps reduce the image size and build time.
  7. Copy Files Efficiently: When copying files into the image, leverage the build context efficiently. Place frequently changing files or directories at the end of the Dockerfile to optimise caching. Use the COPY or ADD instructions as needed, and consider using wildcards to copy multiple files at once.
  8. Define the Execution Command: Use the CMD or ENTRYPOINT instructions to specify the command that should be executed when a container is launched from the image. This command defines the container's primary process.
  9. Provide Configuration Options: Use environment variables (ENV) or runtime arguments (CMD or ENTRYPOINT) to allow users to customise the container's behavior without modifying the Dockerfile. This provides flexibility and simplifies deployment in different environments.
  10. Regularly Update Images: Stay up to date with security patches and new features by periodically rebuilding your images based on updated base images. This helps ensure the security and stability of your containerised applications.

Remember to regularly test and validate your Dockerfile to ensure it produces the desired results and behaves as expected.
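As a concrete sketch of practice 6, the following writes a typical .dockerignore for a Python project; the entries are illustrative and should be adapted to your repository:

```shell
# Create an illustrative .dockerignore next to the Dockerfile
cat > .dockerignore <<'EOF'
.git
__pycache__/
*.pyc
.env
node_modules/
*.md
EOF

# Show what will be excluded from the build context
cat .dockerignore
```

Anything matching these patterns is skipped when the build context is sent to the daemon, shrinking uploads and keeping secrets such as .env files out of the image.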

Dockerfile Best Practices: Quick Reference (a condensed summary of the practices above)

  • Use the official base image: Use the official base image from Docker Hub instead of creating a new one.
  • Use a specific tag: Use a specific tag for the base image instead of latest to ensure consistency.
  • Minimise the number of layers: Minimise the number of layers in the Dockerfile to reduce the image size.
  • Use COPY instead of ADD: Use COPY instead of ADD to avoid unexpected behavior when copying files.
  • Run one process per container: Run only one process per container to ensure a clear separation of concerns.
  • Use environment variables: Use environment variables to make the Dockerfile configurable and easier to maintain.
  • Clean up after each step: Clean up after each step to reduce the image size.
  • Use multi-stage builds: Use multi-stage builds to reduce the size of the final image.

Dockerfile Example Based on the Above Best Practices

# Use an official, minimal Python base image with a specific tag
FROM python:3.9-slim-buster

# Set the working directory to /app
WORKDIR /app

# Copy the requirements.txt file first to leverage caching
COPY requirements.txt .

# Install project dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Set environment variables
ENV PORT=8000
ENV DEBUG=False

# Expose the specified port
EXPOSE $PORT

# Define the command to run the application
CMD ["python", "app.py"]

Docker Image Build Commands

  • docker build: Build a Docker image from a Dockerfile.
docker build -t myimage:latest .

This command builds a Docker image with the tag myimage:latest from the Dockerfile in the current directory (.).

  • docker build --no-cache: Build a Docker image without using the cache.
docker build --no-cache -t myimage:latest .

This command builds a Docker image with the tag myimage:latest from the Dockerfile in the current directory (.) without using the cache.

  • docker build --build-arg: Pass build arguments to the Dockerfile.
docker build --build-arg MYVAR=myvalue -t myimage:latest .

This command builds a Docker image with the tag myimage:latest from the Dockerfile in the current directory (.) and passes the build argument MYVAR with the value myvalue.

  • docker build --target: Build a specific stage in a multi-stage build.
docker build --target mystage -t myimage:latest .

This command builds the Docker image with the tag myimage:latest from the Dockerfile in the current directory (.) and only builds the stage named mystage in a multi-stage build.

  • docker build --file: Build a Docker image from a specific Dockerfile.
docker build --file Dockerfile.prod -t myimage:latest .

This command builds a Docker image with the tag myimage:latest from the Dockerfile named Dockerfile.prod in the current directory (.).

  • docker history: Show the history of a Docker image.
docker history myimage:latest

This command shows the history of the Docker image with the tag myimage:latest.

  • docker tag: Tag a Docker image.
docker tag myimage:latest myrepo/myimage:latest

This command tags the Docker image with the tag myimage:latest as myrepo/myimage:latest.

  • docker push: Push a Docker image to a registry.
docker push myrepo/myimage:latest

This command pushes the myrepo/myimage:latest image to a Docker registry.

  • docker save: Save a Docker image to a file.
docker save myimage:latest -o myimage.tar

This command saves the Docker image with the tag myimage:latest to a file named myimage.tar.
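The saved archive can later be restored with docker load, the counterpart of docker save (requires a Docker daemon on the target host):

```shell
# Recreate the image, including its tag, from the tar archive
docker load -i myimage.tar
```

This save/load pair is handy for moving images between hosts without going through a registry.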

Docker Multi-Stage Build

Docker multi-stage builds are a feature that allows you to build multiple stages or phases within a single Dockerfile. Each stage can use a different base image and include specific instructions. This concept is useful for optimising Docker images, reducing their size, and separating the build environment from the runtime environment. Here’s a use case and an example to illustrate the concept:

Use Case: Imagine you have a web application written in a compiled language like Go, Rust, or Java. To build the application, you need a specific build environment with compilers, libraries, and development tools. However, once the application is built, you don’t need all those build dependencies in the final image. Using multi-stage builds, you can have one stage for building the application and another stage for the runtime environment, resulting in a smaller and more secure final image.

Example: Let’s take a Java application and a React application as examples. In each case, we’ll use a multi-stage build to separate the build environment from the runtime environment.

Java Application Example:

# Stage 1: Build the application
FROM maven:3.8.4-openjdk-11-slim AS builder

WORKDIR /app

COPY pom.xml .
RUN mvn dependency:go-offline

COPY src ./src
RUN mvn package -DskipTests

# Stage 2: Create the final runtime image
FROM adoptopenjdk:11-jre-hotspot

WORKDIR /app

COPY --from=builder /app/target/myapp.jar .

EXPOSE 8080

CMD ["java", "-jar", "myapp.jar"]

In this example, we’re using Maven as the build tool for a Java application.

Stage 1 (builder): uses the Maven image as the build environment. It copies the project files, downloads dependencies, and builds the application using mvn package.

Stage 2 (final runtime image): uses the AdoptOpenJDK image as the runtime environment. It copies the built JAR file from the builder stage using the --from=builder flag and specifies the command to run the application.

React Application Example:

# Stage 1: Build the application
FROM node:16.3.0 AS builder

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build

# Stage 2: Create the final runtime image
FROM nginx:1.21.3-alpine

COPY --from=builder /app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
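To see the payoff of the multi-stage approach, you can build the React image and compare sizes; the tag names here are illustrative, and a running Docker daemon is required:

```shell
# Build the final image; only the nginx stage's layers end up in it
docker build -t myreactapp:latest .

# Optionally build just the first stage, e.g. to debug a failing build step
docker build --target builder -t myreactapp:build .

# List both images; the nginx-based runtime image is typically far smaller
docker images
```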

Docker Logs:

Docker logs are a way to view and monitor the output generated by a Docker container. Logs can be used to troubleshoot issues, debug applications, and monitor the health of a container. By default, Docker logs are sent to stdout and stderr.

To view the logs of a running container, you can use the docker logs command:

docker logs <container-name>

To follow the logs in real-time, you can use the -f option:

docker logs -f <container-name>
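A few other standard docker logs flags are often useful for narrowing the output:

```shell
# Show only the last 100 lines
docker logs --tail 100 <container-name>

# Show logs from the last 10 minutes, with timestamps
docker logs --since 10m --timestamps <container-name>
```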

Logs Format:

Docker logs can be output in different formats, such as JSON or plain text. The format can be specified using the --log-driver option when starting a container. By default, Docker uses the json-file log driver.

To change the log driver and format, you can use the --log-driver and --log-opt options when starting a container:

docker run --log-driver=<driver-name> --log-opt <option>=<value> <image-name>

Some commonly used log drivers are json-file, syslog, journald, and awslogs. Each driver has its own set of options.

Log Rotation:

Docker logs can grow quickly and take up a lot of disk space. To prevent this, Docker provides log rotation, which automatically rotates (and optionally compresses) log files once they reach a configured size.

Log rotation can be configured using the --log-opt option when starting a container. The options max-size and max-file can be used to specify the maximum size of a log file and the number of log files to keep, respectively.

Example:

docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 <image-name>

This command starts a container with the json-file log driver and configures log rotation to keep a maximum of 3 log files, each with a maximum size of 10 MB.

Note that the drivers with built-in rotation (such as json-file and local) only support size-based rotation via max-size and max-file; they have no age-based retention option. If you need time-based retention, ship logs to an external system (for example with the syslog or awslogs drivers) and apply the retention policy there.

You can also configure the log driver and rotation options globally in the Docker daemon configuration file (/etc/docker/daemon.json on Linux).

Here is an example of how to configure the json-file log driver with log rotation options in the daemon configuration file:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

You can also use other log drivers and options as needed.

Note that you need to restart the Docker daemon after making changes to the configuration file for them to take effect:

sudo systemctl restart docker
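After the restart, you can confirm that the daemon-level setting took effect:

```shell
# Prints the default logging driver, e.g. json-file
docker info --format '{{.LoggingDriver}}'
```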

Docker Security

The Center for Internet Security (CIS) Docker benchmark provides guidelines and best practices for securing Docker containers. Here are some of the security fixes recommended by CIS with their corresponding commands and examples:

  1. Ensure the Docker socket is not mounted inside any containers.
docker ps --quiet | xargs docker inspect --format '{{ .Id }}: {{ range .Mounts }}{{ .Destination }} {{ end }}' | grep /var/run/docker.sock

If the command returns any container IDs, then those containers have the Docker socket mounted and should be reviewed to determine whether this is necessary for their operation.

  2. Ensure that containers use trusted base images.
docker history --format "{{.CreatedBy}}: {{.Comment}}" image-name

Review the output to verify that the base image is from a trusted source.

  3. Ensure that HEALTHCHECK instructions have been added to the container image.
docker inspect --format='{{.Config.Healthcheck}}' image-name

The output should show a HEALTHCHECK instruction.

  4. Ensure that image signing is configured.
docker trust key generate key-pair-name

This command generates a key pair that can be used to sign Docker images. The private key is saved to the local Docker trust store, and the public key can be shared with others to verify the signature of the signed images.

  5. Ensure that Docker Content Trust is enabled.
export DOCKER_CONTENT_TRUST=1

This command enables Docker Content Trust, which requires that Docker images be signed and verified before they can be pulled and run.

  6. Ensure that sensitive host system directories are not mounted inside containers.
docker ps --quiet | xargs docker inspect --format '{{ .Id }}: {{ range .Mounts }}{{ .Destination }} {{ end }}' | grep -E '/etc|/lib|/usr|/var/lib|/var/run'

If the command returns any container IDs, then those containers have sensitive host system directories mounted and should be reviewed to determine whether this is necessary for their operation.

These commands and examples can help you secure your Docker containers according to the CIS Docker benchmark.

The CIS Docker Benchmark provides guidelines for securing Docker containers. Here are the guidelines with their corresponding commands and examples:

  1. Ensure the Docker service is running with the latest version.
  2. Ensure that the Docker daemon is running with the minimum necessary privileges. Command: ps -ef | grep dockerd. This shows the running Docker daemon process; check that it is running with only the privileges it needs.
  3. Ensure that Docker Content Trust is enabled. Command: export DOCKER_CONTENT_TRUST=1. This requires that Docker images be signed and verified before they can be pulled and run.
  4. Ensure that Docker is configured to use a proxy server if required. Command: export HTTP_PROXY=http://proxy.example.com:80. This sets the HTTP proxy server that Docker should use to connect to the internet.
  5. Ensure that Docker logging is configured. Command: docker info | grep -i logging. This shows the configured logging driver; ensure logs are shipped to a centralised logging server.
  6. Ensure that Docker is configured to limit resources. Command: docker run --cpu-shares 512 image-name. This limits the CPU shares of a container to 512 so it cannot consume all available CPU.
  7. Ensure that Docker containers are configured with resource limits. Command: docker run --memory 512m image-name. This limits the container's memory to 512 megabytes so it cannot consume all available memory.
  8. Ensure that Docker images are scanned for vulnerabilities. Command: docker scan image-name. This scans a Docker image for known vulnerabilities.
  9. Ensure that Docker images are signed and verified. Command: docker trust key generate key-pair-name. This generates a key pair for signing Docker images; the private key is saved to the local Docker trust store, and the public key can be shared with others to verify signatures.
  10. Ensure that Docker containers are not run with privileged access. Example of what to avoid: docker run --privileged image-name. Do not use --privileged unless it is absolutely necessary, as it grants the container near-root access to the host.
  11. Ensure that sensitive host system directories are not mounted inside containers. Command: docker ps --quiet | xargs docker inspect --format '{{ .Id }}: {{ range .Mounts }}{{ .Destination }} {{ end }}' | grep -E '/etc|/lib|/usr|/var/lib|/var/run'. This lists containers that mount sensitive host directories; review them to confirm the mounts are necessary.

These guidelines and commands can help you secure your Docker containers according to the CIS Docker Benchmark.
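As a companion to guideline 10, the following check (built from standard docker inspect template fields) lists which running containers were started with privileged access:

```shell
# Flag containers running with --privileged
docker ps --quiet | xargs docker inspect --format '{{ .Name }}: Privileged={{ .HostConfig.Privileged }}'
```

Any container reporting Privileged=true should be reviewed against the benchmark.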

Docker in a Production Environment

When running Docker in production, it is important to tune the Linux environment to ensure optimal performance and stability. Here are some tips for Linux environment tuning for Docker in production:

  1. Adjust kernel settings: Docker uses many system resources, so it is important to adjust the kernel settings to optimize performance. Some recommended kernel settings include increasing the number of available file descriptors, increasing the maximum number of processes, and increasing the maximum number of open files.
  2. Optimize storage: Docker images and containers consume disk space, so it is important to optimize storage to prevent disk space shortages. One approach is to use a dedicated partition for Docker data, and another approach is to use a storage driver that supports thin provisioning to minimize disk usage.
  3. Increase network limits: Docker containers can generate a lot of network traffic, so it is important to raise network-related limits to optimize performance. This includes increasing the maximum number of open file descriptors (sockets count as open files) and the socket listen backlog.
  4. Secure the Docker daemon: Docker daemon is a critical component of the Docker architecture, so it is important to secure it against attacks. This includes configuring access control, setting up a firewall, and using TLS certificates for secure communication.
  5. Monitor system resources: Monitoring system resources is essential to ensure that Docker containers are running smoothly. It is important to regularly monitor CPU, memory, disk usage, and network traffic to detect any issues and make optimizations as needed.
  6. Use a container orchestration platform: A container orchestration platform like Kubernetes can help manage Docker containers in a production environment, making it easier to manage and scale large numbers of containers.

By tuning the Linux environment to optimize performance and security, and monitoring system resources, Docker can be used effectively in a production environment.

Adjust kernel settings:

  • To increase the maximum number of available file descriptors, you can modify the /etc/sysctl.conf file by adding the following line: fs.file-max = 100000
  • To increase the maximum number of processes, you can modify the /etc/security/limits.conf file by adding the following lines: * soft nproc 65535 and * hard nproc 65535
  • To increase the maximum number of open files, you can modify the /etc/security/limits.conf file by adding the following lines: * soft nofile 65535 and * hard nofile 65535
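Edits to /etc/sysctl.conf take effect after a reload, while limits.conf changes apply to new login sessions; a quick way to apply and verify the values above:

```shell
# Reload kernel parameters from /etc/sysctl.conf
sudo sysctl -p

# Verify the system-wide file descriptor limit
sysctl fs.file-max

# Verify the per-process open-file limit in the current shell
ulimit -n
```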

Optimize storage:

  • To use a dedicated partition for Docker data, you can create a new partition and mount it to /var/lib/docker.
  • To use a storage driver that supports thin provisioning, you can modify the Docker daemon configuration file (/etc/docker/daemon.json) by adding the following line: "storage-driver": "overlay2"

Increase network limits:

  • To increase the maximum number of open files, you can modify the /etc/security/limits.conf file by adding the following lines: * soft nofile 65535 and * hard nofile 65535
  • To increase the socket listen backlog (the queue of pending connections), you can modify the /etc/sysctl.conf file by adding the following line: net.core.somaxconn = 65535

Secure the Docker daemon:

  • To configure access control, you can create a new user group for Docker and add users to that group: sudo groupadd docker and sudo usermod -aG docker username
  • To set up a firewall, you can use iptables to restrict incoming traffic to the Docker daemon port. Add the ACCEPT rule for your trusted address first, then the DROP rule, since iptables evaluates rules in order: sudo iptables -A INPUT -p tcp --dport 2376 -s your-ip-address -j ACCEPT followed by sudo iptables -A INPUT -p tcp --dport 2376 -j DROP
  • To use TLS certificates for secure communication, you can create a self-signed certificate using OpenSSL (for production, prefer certificates issued by a proper CA): openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
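Once certificates exist, the daemon can be started so that it requires TLS and verifies clients against the CA; the certificate paths here are illustrative, while the flags are standard dockerd options:

```shell
# Require TLS and verify client certificates against the CA
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376
```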

Monitor system resources:

  • To monitor CPU and memory usage, you can use the top command: top
  • To monitor disk usage, you can use the df command: df -h
  • To monitor network traffic, you can use the iftop command: iftop

Use a container orchestration platform:

  • To use Kubernetes as a container orchestration platform, you can follow the installation instructions provided by the official Kubernetes documentation.

IOPSHub offers a range of DevSecOps advisory and implementation services, including security assessments, compliance reviews, and automation solutions.

Their team of experts can help you identify security vulnerabilities and implement solutions that align with your business objectives. With IOPSHub, you can ensure that your organization is secure and compliant, while also driving innovation and growth.

So, if you’re serious about DevSecOps and want to take your security to the next level, check out IOPSHub today. Their extensive services and expertise can help you stay ahead of the curve in the ever-changing world of cybersecurity.


IOPSHub

IOPSHub is a Delhi-based DevSecOps consulting provider for startups, SaaS, and enterprises. Our services include IT, Cloud, DevOps, Containerisation, and more.