# Docker containerization and Dockerfile rules

Docker is a technology for running applications inside an isolated container, as if they were running on a virtual machine. Everything that must be installed inside the container has to be declared in your `Dockerfile`. You can see a Dockerfile example in the boilerplate.

> When you are going to make a pull request, **always** test your app inside the Docker container.

To create a container based on an image and give it a name, use the following command:

```text
sudo docker run --name <container_name> -d <image_name>
```

Creating a container named `example` based on an nginx image:

```text
sudo docker run --name example -d nginx
```

Use this command to view the currently running containers:

```text
sudo docker ps
```

To start the created container in the background, use the following command:

```text
sudo docker container start <container_name>
```

To get a shell inside a container that is running in the background, run the following command:

```text
sudo docker exec -i -t <container_name> /bin/bash
```

To exit the container, use the standard `exit` command.

To remove a container, use the `rm` option:

```text
sudo docker rm -f <container_name>
```

> To connect your container with other containers, you need to create networks; see the [official documentation](https://docs.docker.com/network/) for details.

Dockerfile example for a multi-stage build:

```dockerfile
FROM node:fermium-alpine AS environment

ARG MS_HOME=/app
ENV MS_HOME="${MS_HOME}"
ENV MS_SCRIPTS="${MS_HOME}/scripts"
ENV USER_NAME=node USER_UID=1000 GROUP_NAME=node GROUP_UID=1000

WORKDIR "${MS_HOME}"

# Build
FROM environment AS develop

COPY ["./package.json", "./package-lock.json", "${MS_HOME}/"]

FROM develop AS builder

COPY . "${MS_HOME}"

RUN PATH="$(npm bin)":${PATH} \
  && npm ci \
  && npm run test:ci \
  && npm run test:e2e \
  && npm run-script build \
  # Clean up dependencies for the production image
  && npm ci --production \
  && npm cache clean --force

# Serve
FROM environment AS prod

COPY ["./scripts/docker-entrypoint.sh", "/usr/local/bin/entrypoint"]
COPY ["./scripts/bootstrap.sh", "/usr/local/bin/bootstrap"]
COPY --from=builder "${MS_HOME}/node_modules" "${MS_HOME}/node_modules"
COPY --from=builder "${MS_HOME}/dist" "${MS_HOME}/dist"

RUN \
  apk --update add --no-cache tini bash \
  && deluser --remove-home node \
  && addgroup -g ${GROUP_UID} -S ${GROUP_NAME} \
  && adduser -D -S -s /sbin/nologin -u ${USER_UID} -G ${GROUP_NAME} "${USER_NAME}" \
  && chown -R "${USER_NAME}:${GROUP_NAME}" "${MS_HOME}/" \
  && chmod a+x \
    "/usr/local/bin/entrypoint" \
    "/usr/local/bin/bootstrap" \
  && rm -rf \
    "/usr/local/lib/node_modules" \
    "/usr/local/bin/npm" \
    "/usr/local/bin/docker-entrypoint.sh"

USER "${USER_NAME}"

EXPOSE 8085

ENTRYPOINT [ "/sbin/tini", "--", "/usr/local/bin/entrypoint" ]
```

Use a multi-stage build in the Dockerfile so that you simply _**copy the necessary artifacts into the runtime environment**_. Many dependencies and files used at build time are redundant in production and are not needed to run your application. With multi-stage builds, those resources stay in the intermediate build stages, and the runtime image contains only what is actually required. In other words, multi-stage builds are an effective way to get rid of excess image weight and security threats.

Common Dockerfile rules:

- Do not run the container as root.
- Make executable files owned by root, but without write permission for the runtime user.
- Keep the image as minimal as possible.
- Avoid leaking confidential data.
- Build a proper `.dockerignore` file and keep the build context small.
- Do not add an env-file to the Docker container.
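The Dockerfile above copies `scripts/docker-entrypoint.sh` and `scripts/bootstrap.sh` into the image but does not show their contents. Below is a minimal sketch of what such an entrypoint could look like; the paths and the `dist/main.js` build artifact are hypothetical and depend on your application.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of scripts/docker-entrypoint.sh; adapt the paths to your project.
set -euo pipefail

# Run one-time setup (e.g. waiting for dependencies) before the application starts.
# "bootstrap" is the script copied to /usr/local/bin/bootstrap in the Dockerfile above.
/usr/local/bin/bootstrap

# Replace the shell with the Node process so that signals forwarded by tini
# reach the application directly. dist/main.js is an assumed build output.
exec node dist/main.js
```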
[Article about Dockerfile best practices by Moeid Heidari](https://moeidheidari.pro/blog/how-to-write-a-dockerfile-considering-best-practicecs)

### Docker Compose

Docker Compose is an add-on to Docker that allows you to run multiple containers simultaneously and route data streams between them. The Docker Compose file describes how the containers are loaded and configured. An example `docker-compose.yml` file looks like this:

```yaml
version: "2.3" # Set the version of docker-compose.yml
services: # Specify containers
  nginx: # Set the name of the first container - nginx - and configure it
    build: ./nginx # Specify where to build from
    ports: # Specify ports to be forwarded outside
      - "80:80"
    volumes: # Connect the working directory with the project code
      - ./www:/var/www
    depends_on: # Set the order in which the containers will be loaded
      php: # The php container starts before nginx
        condition: service_healthy # Set the condition to start the nginx container
  php: # Set the name of the second container - php - and configure it
    build: ./php # Specify where to build from
    volumes: # Connect the same working directory with the project code
      - ./www:/var/www
    healthcheck: # Check that the application works inside the container
      test: ["CMD", "php-fpm", "-t"] # Test command we want to execute
      interval: 3s # Interval between test attempts
      timeout: 5s # How long to wait for the test command to finish
      retries: 5 # Number of retries
      start_period: 1s # Grace period after the container starts before failures count
```

To run the containers in the background:

```bash
sudo docker compose up -d
```
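Beyond starting the stack, a few other Compose commands are useful for checking and stopping it. A short sketch, assuming the `nginx` and `php` service names from the example above:

```bash
# Show the status of the services, including the health-check state of php
sudo docker compose ps

# Follow the logs of a single service (service name taken from the example above)
sudo docker compose logs -f php

# Stop and remove the containers and the default network created by "up"
sudo docker compose down
```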