Docker Tutorial
“Build, Ship, and Run Any App, Anywhere” – this is the motto under which the open-source container platform Docker offers a flexible, low-resource alternative to the emulation of hardware components using virtual machines (VMs). In our Docker tutorial for beginners, we explore the differences between the two virtualization techniques and walk you through the open-source project Docker with clear step-by-step instructions.
While traditional hardware virtualization is based on launching multiple guest systems on a common host system, Docker applications are run as isolated processes on the same system with the help of containers. This is called container-based virtualization, also referred to as operating-system-level virtualization.
The following graphic shows the fundamental differences in the architectural structure of both virtualization techniques:
Both techniques offer developers and system administrators the possibility to use various applications with different requirements in parallel to one another on the same system. The biggest differences between the two are in terms of resource consumption and portability.
Container: Virtualization with minimal overhead
When applications are encapsulated using traditional hardware virtualization, this is done by means of a hypervisor, which acts as an abstraction layer between the host system and the virtual guest systems. Each guest system is implemented as a complete machine with a separate operating system kernel. The hypervisor assigns the hardware resources of the host system (CPU, memory, hard disk space, available peripherals) proportionately to the guest systems.
With container-based virtualization, on the other hand, no complete guest systems are simulated. Instead, applications are launched in containers. These share the same kernel – that of the host system – but run as isolated processes in the user space.
Modern operating systems generally divide virtual memory into two separate areas: kernel space and user space. While the kernel space is exclusively reserved for the operation of the kernel and other core components of the operating system, the user space is the memory area available to applications. The strict separation of kernel and user space primarily serves to protect the system from harmful or flawed applications.
One big advantage of container-based virtualization is that applications with different requirements can run in isolation from one another without the overhead of a separate guest system. For this, container technology makes use of two basic functions of the Linux kernel: control groups (cgroups) and kernel namespaces.
- Cgroups limit the access of processes to memory, CPU, and I/O resources, and prevent a process’s resource requirements from affecting other running processes.
- Namespaces limit a process and its child processes to a specific part of the underlying system. To encapsulate processes, Docker uses namespaces in five different areas (see the example after this list):
- System identification (UTS): UTS namespaces are used in container-based virtualization to assign containers their host and domain names.
- Process IDs (PID): Every Docker container uses a unique namespace for process IDs. Processes that run outside of a container are not visible inside the container. So, container-encapsulated processes on the same host system can have the same PID without conflicts.
- Inter-process communication (IPC): IPC namespaces isolate processes in a container in such a way that communication with processes outside of the container is prevented.
- Network resources (NET): With network namespaces, each container can be assigned separate network resources like IP addresses or routing tables.
- Mountpoints of the file system (MNT): Thanks to mount namespaces, an isolated process never sees the entire file system of the host, but instead only a small portion of it – usually an image created specifically for this container.
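Once Docker is installed (see below), these namespaces can be observed in practice. A minimal sketch, assuming a running container named test – the container name and the sleep command are arbitrary examples:
$ sudo docker run -d --name test ubuntu sleep 300
$ sudo docker inspect --format '{{.State.Pid}}' test
$ sudo ls -l /proc/[PID]/ns
Replace [PID] with the process ID returned by the second command; the listing then shows the separate uts, pid, ipc, net, and mnt namespaces assigned to the containerized process.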
Up to version 0.8.1, Docker was based solely on Linux containers (LXC). Since version 0.9, the self-developed container format libcontainer has also been available to users. This enables Docker to be deployed across platforms and to run the same container on various host systems. It also allowed Docker versions to be offered for Windows and macOS.
Scalability, high availability, and portability
Container technology doesn’t just represent a low-resource alternative to traditional hardware virtualization. Software containers also allow applications to be deployed across platforms and in different infrastructures without needing special hardware or software configurations on the respective host system.
Docker uses portable images for software containers. Container images contain individual applications including all the libraries, binary files, and configuration files necessary for running the encapsulated application processes, and make only minimal demands on the respective host system. This allows an application container to be moved between various Linux, Windows, or macOS systems without further configuration, as long as the Docker platform is installed there as an abstraction layer. As a result, Docker is an ideal basis for implementing scalable, high-availability software architectures. Companies such as Spotify, Google, eBay, and Zalando use Docker on production systems.
Docker: Structure and functions
Docker is the most popular software project providing users with container-based virtualization technology. The open-source platform is based on three basic components – the Docker engine, Docker images, and the Docker hub. To run containers, users only need the Docker engine as well as special Docker images, which can be obtained via the Docker hub or created by the users themselves.
Docker images
Similar to virtual machines, Docker containers are based on images. An image is a read-only template that contains all the instructions the Docker engine needs to create a container. Such a portable image of a container is described in the form of a text file, also called a Docker file. If a container is to be launched on a system, a package with the respective image is loaded first – provided it isn’t available locally. The loaded image provides the file system required at runtime, including all parameters. A container can be viewed as a running process of an image.
The Docker hub
The Docker hub is a cloud-based registry for software repositories – in other words, a kind of library for Docker images. The online service is split into a public and a private section. The public section offers users the option to upload their own developed images and share them with the community. A number of official images from the Docker developer team and established open-source projects are available here. Images uploaded to the private section of the registry aren’t publicly accessible and can thus be shared, for example, within a company or with friends and acquaintances. The Docker hub can be accessed at hub.docker.com.
The Docker engine
At the heart of the Docker project is the Docker engine, an open-source client-server application that is available to all users in its current version on all established platforms.
The basic architecture of the Docker engine is divided into three components: a daemon with server functions, a programming interface (API) based on the REST (Representational State Transfer) paradigm, and the operating system’s terminal (command line interface, CLI) as the client-side user interface.
- Docker daemon: A daemon process serves as the server of the Docker engine. The Docker daemon runs in the background of the host system and is used for the central control of the Docker engine: it creates and manages all images, containers, and networks.
- REST API: The REST API specifies a set of interfaces that allow other programs to communicate with the Docker daemon and give it instructions. One of these programs is the operating system’s terminal.
- Terminal: As a client program, Docker uses the operating system’s terminal. This communicates with the Docker daemon via the REST API and enables users to control the daemon through scripts or user input.
With Docker, software containers can be started, stopped, and managed directly from the terminal. The daemon is addressed via the command docker and instructions like build, pull, or run. Client and server can reside on the same system, but users can also access a Docker daemon on a remote system. Depending on the type of connection being established, communication between client and server takes place via the REST API, via UNIX sockets, or via a network interface.
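For example, the client can address a daemon on another system via the -H (host) option – a brief sketch, assuming the remote daemon has been configured to listen on a TCP port; the address shown is a placeholder:
$ docker -H tcp://203.0.113.10:2375 info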
The following graphic illustrates the interplay of the individual Docker components with the example commands docker build, docker pull, and docker run:
The command docker build instructs the Docker daemon to create an image (dotted line). For this, a corresponding Docker file needs to be available. If the image isn’t to be created, but instead loaded from a repository in the Docker hub, then the docker pull command is used (dashed line). If the Docker daemon is instructed via docker run to launch a container, the background program checks whether or not the corresponding container image is locally available. If it is, then the container is run (solid line). If the daemon can’t find the image, it automatically initiates a pull from the repository.
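This interplay can also be reproduced directly in the terminal. A minimal sketch – the image names are arbitrary, and docker build assumes that a Docker file exists in the current directory:
$ docker pull ubuntu
$ docker run ubuntu echo 'Hello from a container'
$ docker build -t myimage .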
The installation of the Docker engine
While Docker was initially used exclusively on Linux distributions, the current version of the container engine is characterized by a high degree of platform independence. Installation packages can be found for Microsoft Windows and macOS, as well as for cloud services like Amazon Web Services (AWS) and Microsoft Azure. Supported Linux distributions include:
- CentOS
- Debian
- Fedora
- Oracle Linux
- Red Hat Enterprise Linux
- Ubuntu
- openSUSE
- SUSE Linux Enterprise
In addition, community-managed Docker packages are available for:
- Arch Linux
- CRUX Linux
- Gentoo Linux
The following illustrates the installation process of the Docker engine using the popular Linux distribution Ubuntu. Detailed installation advice for the other platforms can be found in the Docker documentation.
Depending on which requirements are to be met, there are three different ways to install the Docker container platform on your Ubuntu system:
- Manual installation via DEB package
- Installation from the Docker repository
- Installation from the Ubuntu repository
Before doing so, though, you should look at the system requirements of the Docker engine.
System requirements
To install the current version of Docker on your Ubuntu distribution, you need the 64-bit variant of one of the following Ubuntu versions:
- Yakkety 16.10
- Xenial 16.04 (LTS)
- Trusty 14.04 (LTS)
On productive systems, we recommend using software products with long-term support (LTS). These are provided by the vendor with updates, even if a successor version is already on the market.
Before the Docker installation
The following tutorial is based on the Ubuntu version Xenial 16.04 (LTS). The installation process follows the same steps with Yakkety 16.10. For users of Trusty 14.04, it’s recommended to install the linux-image-extra-* packages before the Docker installation. These allow the Docker engine to access the AUFS storage driver.
A convenient method of updating a Linux system is provided by the integrated package manager APT (Advanced Packaging Tool). To install the additional package for Trusty 14.04, perform the following steps:
1. Call the terminal: Launch Ubuntu and open the terminal – for example, by using the key combination [CTRL] + [ALT] + [T].
2. Update the package list: Enter the following command to update the local package index of your operating system. Confirm your entry by hitting [ENTER].
$ sudo apt-get update
The apt-get update command doesn’t install new packages. Instead, locally available package descriptions are updated.
The addition of sudo allows you to run commands as an administrator (superuser “root”). Under certain circumstances, some commands may require root permissions. In this case, Ubuntu prompts you to enter the administrator password. You also have the option to permanently switch to the administrator role via sudo -s.
To install the container platform Docker, you need root permissions for the respective host system.
Once you’ve authenticated as an administrator with your password, Ubuntu starts the update process. The status is displayed in the terminal.
3. Install additional packages: If all package descriptions have been updated, you can proceed to install new packages. The package manager APT has the command apt-get install “PackageName” available for this. To load the recommended additional packages for Trusty 14.04 from the Ubuntu repository and install them on your system, enter the following command in the terminal and confirm with [ENTER]:
$ sudo apt-get install -y --no-install-recommends \
linux-image-extra-$(uname -r) \
linux-image-extra-virtual
If you use the command with the option -y, all interactive questions are automatically answered with ‘Yes’. The option --no-install-recommends prevents Ubuntu from automatically installing recommended packages.
After downloading the additional packages for Trusty 14.04, all functions of the Docker platform are available on this Ubuntu version as well.
Not sure which Ubuntu version is running on your system? Or whether you have the 64-bit architecture required for a Docker installation? The kernel version and system architecture can be identified in the Ubuntu terminal with the help of the following commands:
$ sudo uname -rm
The respective Ubuntu version, release, and codename are output with the following entry:
$ sudo lsb_release -a
Manual installation via DEB package
In principle, Docker can also be downloaded as a DEB package and installed manually. The required installation package is available at the following URL:
https://apt.dockerproject.org/repo/pool/main/d/docker-engine/
Download the DEB file of the desired Ubuntu version and start the installation process by entering this command in the Ubuntu terminal:
$ sudo dpkg -i /path/to/package.deb
Modify the placeholder /path/to/ so that the file path points to the storage location of the DEB package.
In the case of a manual installation, all software updates must also be carried out manually. The Docker documentation therefore recommends using Docker’s own repository, which allows the container platform to be conveniently installed from the Ubuntu terminal and kept up to date.
The following illustrates a Docker installation according to the recommended approach.
Installation from the Docker repository
The recommended way to install the container platform is from the Docker repository. Below, we show you how to configure your system so that the package manager APT can access the Docker repository via HTTPS.
1. Install packages: Enter the following command to install the necessary packages for accessing the Docker repository:
$ sudo apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
2. Add GPG key: Add Docker’s official GPG key:
$ curl -fsSL https://apt.dockerproject.org/gpg | sudo apt-key add -
3. Verify GPG key: Make sure that the GPG key coincides with the following ID: 5811 8E89 F3A9 1289 7C07 0ADB F762 2157 2C52 609D. Use the following command:
$ apt-key fingerprint 58118E89F3A912897C070ADBF76221572C52609D
The output appears in the terminal:
pub   4096R/2C52609D 2015-07-14
      Key fingerprint = 5811 8E89 F3A9 1289 7C07 0ADB F762 2157 2C52 609D
uid                  Docker Release Tool (releasedocker) <docker@docker.com>
4. Configure Docker repository: Enter the following command to guarantee access to the stable Docker repository:
$ sudo add-apt-repository \
"deb https://apt.dockerproject.org/repo/ \
ubuntu-$(lsb_release -cs) \
main"
Your system is now fully preconfigured to install the container platform from the Docker repository.
As an alternative to the stable repository, you can also use the Docker test repository. To do so, open the file /etc/apt/sources.list and replace the word main with testing. The use of the test repository is not recommended on productive systems.
5. Update package index: Before you go ahead with the installation of the Docker engine, it’s recommended to update the package index of your operating system once more. Use this command to refresh:
$ sudo apt-get update
6. Docker installation from the repository: There are two available options for loading the Docker engine from the Docker repository and installing it on your Ubuntu system. If you want to load the current version of the Docker engine, use the command:
$ sudo apt-get -y install docker-engine
The container platform is ready to be used as soon as the installation process is finished. The Docker daemon starts automatically. If an older version of the container platform was on your system before the Docker installation, it will be replaced by the newly installed software.
As an alternative to the most current version, any older version of the Docker engine can also be installed. This can be useful on productive systems, for example, where users sometimes prefer established releases with a proven track record over newer software versions.
An overview of the available Docker versions for your system can be obtained using the following command:
$ apt-cache madison docker-engine
To install a specific Docker version, simply add the corresponding version string (e.g. 1.12.5-0) to the installation command, separated from the package name (in this case, docker-engine) by an equals sign.
$ sudo apt-get -y install docker-engine=<VERSION_STRING>
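For example, pinning the version string mentioned above would look as follows (the exact strings available on your system may differ; check the output of apt-cache madison):
$ sudo apt-get -y install docker-engine=1.12.5-0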
Installation from the Ubuntu repository
Users who don’t want to rely on the Docker repository have the option to load the container platform from the operating system’s repository.
Use the following command line directive to install a Docker package previously created by the Ubuntu community:
$ sudo apt-get install -y docker.io
The installation package of the container platform ‘docker.io’ is not to be confused with the ‘docker’ package, a system tray for KDE3/GNOME2 Docklet applications.
Test run
After the installation process has successfully concluded, you should make sure that the container platform functions properly. The development team has a simple hello-world container available for this purpose. Check your Docker installation by entering the following command into the Ubuntu terminal and confirming with [ENTER]:
$ sudo docker run hello-world
The Docker daemon is bound to a Unix socket (i.e. a communication endpoint provided by the operating system), which is assigned to the root user by default. Other users can therefore only use Docker commands with the sudo prefix. This can be changed by creating a Unix group with the name docker and adding the desired users to it. Further information can be found in the Docker project documentation.
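A minimal sketch of this configuration, assuming your current user account is to be added (log out and back in afterwards for the group membership to take effect):
$ sudo groupadd docker
$ sudo usermod -aG docker $USER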
The command docker run instructs the Docker daemon to search for and start a container with the name hello-world. If your Docker installation is configured without any errors, you should obtain an output matching the following:
This terminal output is interpreted as follows: To run the command docker run hello-world, the Docker daemon searches the local files of your system for the corresponding container image. Since you’re running the hello-world container for the first time, the daemon’s search will be unsuccessful. You’ll receive the message ‘Unable to find image’.
$ sudo docker run hello-world
[sudo] password for osboxes:
Unable to find image 'hello-world:latest' locally
If Docker can’t find a desired image in the local system, then the daemon introduces a download process (pulling) from the Docker repository.
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest
Following a successful download, you’ll receive the following message: ‘Downloaded newer image for hello-world:latest.’ The container is then launched. This includes a simple hello-world script with the following message from the developers:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Basically, what this text means for you is: Your Docker installation is working properly.
Uninstall Docker
Just as easy as installing the Docker engine via the terminal is uninstalling the container platform. If you want to remove the Docker package from your system, enter the following command in the Ubuntu terminal and confirm with [ENTER]:
$ sudo apt-get purge docker-engine
To continue, enter ‘Y’ (Yes) and confirm with [ENTER]. Enter ‘n’ to cancel the uninstallation.
Images and containers are not automatically removed when the Docker engine is uninstalled. Delete these with the help of this command:
$ sudo rm -rf /var/lib/docker
If additional configuration files have been installed, they also have to be removed manually.
Work with Docker
Have you ensured that your Docker engine is fully installed and running flawlessly? Then it’s time to familiarize yourself with the application possibilities of the container platform. In the following, you’ll learn how to control the Docker engine via the terminal, what possibilities the Docker hub offers, and why Docker containers could revolutionize the handling of applications.
How to control the Docker engine
From version 16.04, Ubuntu has used the background program systemd (short for ‘system daemon’) to manage processes. Systemd is an init process, also used on other Linux distributions like RHEL, CentOS, or Fedora. Typically, systemd receives the process ID 1. As the first process of the system, the daemon is responsible for starting, monitoring, and ending all following processes. For previous Ubuntu versions (14.10 and older), the background program upstart takes over this function.
The Docker daemon can also be controlled via systemd. In the standard installation, the container platform is configured so that the daemon starts automatically when the system boots. This default setting can be customized via the command line tool systemctl.
With systemctl, you send commands to systemd to control a process or request its status. The syntax of such a command is as follows:
systemctl [OPTION] [COMMAND]
Some commands refer to specific resources (for example, Docker). In the terminology of systemd, these are referred to as units. In this case, the command results from the respective instruction and the name of the unit to be addressed.
If you would like to activate the autostart of the Docker daemon (enable) or deactivate it (disable), use the command line tool systemctl with the following commands:
$ sudo systemctl enable docker
$ sudo systemctl disable docker
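Whether the autostart is currently active can also be queried directly:
$ sudo systemctl is-enabled docker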
The command line tool systemctl allows you to query the status of a unit:
$ sudo systemctl status docker
If you would like to manually start, stop, or restart your Docker engine, address systemd with one of the following commands.
To start the deactivated daemon, use systemctl in combination with the command start:
$ sudo systemctl start docker
If the Docker daemon is to be ended, use the command stop instead:
$ sudo systemctl stop docker
A restart of the engine is prompted with the command restart:
$ sudo systemctl restart docker
How to use the Docker hub
If the Docker engine represents the heart of the container platform, then the Docker hub is the soul of the open source project. This is where the community meets. In the cloud-based registry, users can find everything that they need to breathe life into their Docker installation.
The online service offers diverse official repositories with more than 100,000 free apps. Users have the option to create image archives and use them collectively in workgroups. In addition to the professional support offered by the development team, beginners can find connections to the user community here. A forum for community support is available on GitHub.
Registration in the Docker hub
Registering in the Docker hub is free. Users just need an e-mail address and their chosen Docker ID. This serves as a personal repository namespace later and grants users access to all Docker services. Currently, this offer includes the Docker cloud, Docker store, and selected beta programs in addition to the Docker hub. It also allows the Docker ID to be used as a log-in for the Docker support center as well as the Docker success portal and Docker forum.
The registration process unfolds in five steps:
1. Choose your Docker ID: As the first part of the application, choose a username that will later be used as your personal Docker ID. Your username for the Docker hub and all other Docker services must be between 4 and 30 characters and may only contain numbers and lowercase letters.
2. Enter an e-mail address: Enter your current e-mail address. Note that you will have to confirm your registration with Docker hub via e-mail.
3. Choose a password: Choose a secret password between 6 and 128 characters long.
4. Submit your registration: Click on ‘Sign up’ to submit your registration. Once the data has been fully transmitted, Docker will send a link for the verification of your e-mail address to your specified inbox.
5. Confirm your e-mail address: Confirm your e-mail address by clicking on the verification link.
The online services of the Docker project are immediately available following your registration in the browser. Here you can create repositories and workgroups, or search the Docker hub for public resources using ‘Explore’.
You can also register directly on your operating system’s terminal via docker login. A detailed description of the command can be found in the Docker documentation.
In principle, Docker hub is also available to users without an account or Docker ID. In this case, though, only images from public repositories can be loaded. An upload (push) of your own images isn’t possible without a Docker ID.
Create repositories in the Docker hub
The free Docker hub account contains one private repository, and offers the possibility to create any number of public repositories. If you should need more private repositories, you can unlock these with a paid upgrade.
To create a repository, proceed as follows:
1. Choose a namespace: Newly created repositories are automatically assigned to the namespace of your Docker ID. You also have the option to enter the ID of an organization that you belong to.
2. Label the repository: Enter a name for the newly created repository.
3. Add a description: Add a short description as well as detailed usage instructions.
4. Set visibility: Decide whether the repository should be publicly visible (public) or only accessible by you or your organization (private).
Confirm your entries by clicking ‘Create’.
Create teams and organizations
With the hub, Docker provides a cloud-based platform on which self-created images are centrally managed and conveniently shared with workgroups. In the Docker terminology, these are called organizations. Just like user accounts, organizations receive individual IDs via which images can be provided and downloaded. Rights and roles within an organization can be assigned via teams. For example, users assigned to the ‘Owners’ team have the authority to create private or public repositories and assign access rights.
Workgroups can also be created and managed directly via the dashboard. Further information about organizations and teams can be found in the Docker documentation.
Working with images and containers
As the first point of contact for official Docker resources, the Docker hub is our starting point for this introduction to handling images and containers. The developer team provides the demo image whalesay here, among others, which will serve as the basis for the following Docker tutorial.
Download Docker images
Search the Docker hub for ‘whalesay’ and, in the search results, click on the resource with the title docker/whalesay to access the public repository for this image.
Docker repositories are always built according to the same pattern: In the header of the page, users find the title of the image, the category of the repository, and the time of the last upload (last pushed).
Each Docker repository also offers the following info boxes:
- Short description: Short description of the resource
- Full description: Detailed description, usually including directions for use
- Docker pull command: Command line directive used to download the image from the repository (pull)
- Owner: Information about the creator of the repository
- Comments: Comment section at the end of the page
The information boxes of the repository show that whalesay is a modification of the open source Perl script cowsay. The program, developed by Tony Monroe in 1999, generates an ASCII graphic in the form of a cow, which appears together with a message in the user’s terminal.
To download docker/whalesay, use the command docker pull according to the following pattern:
$ docker pull [OPTIONS] NAME [:TAG|@DIGEST]
The command docker pull instructs the daemon to load an image from the repository. You specify which image this is by entering the image title (NAME). You can also instruct Docker on how the desired command should be carried out (OPTIONS). Optional input includes tags (:TAG) and individual identification numbers (@DIGEST), which allow you to download a specific version of an image.
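For example, a specific image version can be requested via a tag – shown here with the official ubuntu image as an illustration:
$ docker pull ubuntu:16.04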
A local copy of the docker/whalesay image is obtained with the following command:
$ docker pull docker/whalesay
In general, you can skip this step. If you’d like to launch a container, the Docker daemon automatically downloads the images from the repository that it can’t find on the local system.
Launch Docker images as containers
To start a Docker image, use the command docker run according to the following pattern:
$ docker run [OPTIONS] IMAGE [:TAG|@DIGEST] [CMD] [ARG...]
The only obligatory part of the docker run command is the name of the desired Docker image. But when you launch a container, you also have the chance to define extra options, TAGs, and DIGESTs. In addition, the docker run command can be combined with other commands that are run as soon as the container starts. In this case, the CMD (COMMAND, defined by the image creator and executed automatically when the container is started) is overwritten. Other optional configurations can be defined through additional arguments (ARG…). This makes it possible, for example, to add users or to transfer environment variables.
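A brief sketch of these options – the image, variable name, and command are arbitrary examples. The following call passes an environment variable via the option -e, overrides the image’s CMD with its own command, and removes the container again after it exits via --rm:
$ docker run --rm -e GREETING=hello ubuntu /bin/sh -c 'echo $GREETING'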
Use the command line directive
$ docker run docker/whalesay cowsay boo
If the image docker/whalesay is run, the script outputs an ASCII graphic in the form of a whale as well as the text message ‘boo’, passed with the cowsay command in the terminal.
As with the test run, the daemon first looks for the desired image in the local file directory. Since there is no package of the same name, a pulling from the Docker repository is initiated. Then the daemon starts the modified cowsay program. If this has run through, then the container is ended automatically.
Like cowsay, Docker’s whalesay also offers the option to intervene in the program sequence to influence the text output in the terminal. Test this function by replacing the ‘boo’ in the output command with any string – for example, with a lame whale joke.
$ sudo docker run docker/whalesay cowsay "What did the shark say to the whale? What are you blubbering about?"
Display all Docker images on the local system
If you aren’t sure whether you’ve already downloaded a particular image, you can call an overview of all the images on your local system. Use the following command line directive:
$ sudo docker images
If you start a container, the underlying image is downloaded as a copy from the repository and permanently stored on your computer. This saves you time if you want to access the image at a later time. A new download is only initiated if the image source changes – for example, if a current version is available in the repository.
Display all containers on the local system
If you want to output an overview of all containers that are running on your system or have been run in the past, use the command line directive docker ps in combination with the option --all (-a for short):
$ sudo docker ps -a
The terminal output contains information like the respective container ID, the underlying image, the command run when the container was started, the time when the container was started, and the status.
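The column headers of this overview look like this:
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES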
If you only want to show the containers that are currently running on your system, use the command line directive docker ps without any other options:
$ sudo docker ps
Currently, though, there should be no running containers on your system.
Create Docker images
You now know how to find images in the Docker hub, download them, and run them on any system with the Docker engine installed. But with Docker, you won’t only be able to access the extensive range of apps available in the registry. The platform also offers a wide range of options for creating your own images and sharing them with other developers.
In the introductory chapters of this tutorial, you already learned that each Docker image is based on a Docker file. You can imagine Docker files as a kind of building template for images. These are simple text files that contain all the instructions Docker needs to create an image. In the following steps, you’ll learn how to write this type of Docker file and instruct Docker to use this as the basis for your own image.
1. Create new directory: The Docker developer team recommends creating a new directory for each Docker file. Directories are easily created under Linux in the terminal. Use the following command line directive to create a directory with the name mydockerbuild:
$ mkdir mydockerbuild
2. Navigate in the new directory: Use the command cd to navigate in the newly created working directory.
$ cd mydockerbuild
3. Create new text file: You can also easily create text files via the terminal with Ubuntu. To do this, use an editor like Nano or Vim. Create a text file with the name Dockerfile in the mydockerbuild directory.
$ nano Dockerfile
4. Write Docker file: The newly created text file serves as a building plan for your self-developed image. Instead of programming the image from the ground up, in this Docker tutorial we’ll use the demo image docker/whalesay as a template. This is integrated using the command FROM in your Docker file. Use the tag :latest to point to the newest version of the image.
FROM docker/whalesay:latest
So far, docker/whalesay works by having you put words into its mouth: the terminal displays exactly the text that you entered along with the command starting the container. It would be more interesting if the script generated new text output on its own. This can be done, for example, with the fortunes program available for every Linux system. The basic function of fortunes is to generate fortune-cookie sayings and humorous aphorisms. Add the following line to your Docker file to update the local package index and install fortunes within the image:
RUN apt-get -y update && apt-get install -y fortunes
Then define a CMD statement. This is executed after the RUN command, unless it’s been overwritten by the call (docker run image CMD). Use
CMD /usr/games/fortune -a | cowsay
to run the fortunes program with the option -a (‘Select from all databases’) and display the output in the terminal using the cowsay program.
Your Docker file should look as follows:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
Note: Each instruction within a Docker file occupies its own line and always starts with a keyword. The underlying syntax is case-insensitive – so it doesn’t matter whether you write in upper- or lowercase. Writing keywords consistently in uppercase has become the established convention, though.
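Docker files may also contain comment lines introduced by a hash sign (#), which the daemon ignores during the build. Purely as an illustration, the Docker file from this tutorial could be annotated as follows:
# Base image: the whalesay demo in its latest version
FROM docker/whalesay:latest
# Install the fortunes program while building the image
RUN apt-get -y update && apt-get install -y fortunes
# Default command executed when the container starts
CMD /usr/games/fortune -a | cowsay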
5. Save text file: Save your entry. If you’re using the Nano editor, save with the key combination [CTRL] + [O] and confirm with [ENTER]. Nano gives you the message that three lines have been written to the selected file. Close the text editor with the key combination [CTRL] + [X].
6. Create image from Docker file: To create an image from a Docker file, first navigate to the directory where the text file is located. Start the image creation with the command line directive docker build. If you want to give the image an individual name or a tag, use the option -t followed by the desired combination of name and tag. The standard format is name:tag.
In the current example, an image with the name docker-whale should be created:
$ docker build -t docker-whale .
The build process starts as soon as the command is confirmed with [ENTER]. First, the Docker daemon checks whether it has all the files it needs to create the image – in Docker terminology, these are summarized under the term ‘context’. The dot at the end of the command designates the current directory as this context. The following status message is displayed in the terminal:
Sending build context to Docker daemon 2.048 kB
Then, the docker/whalesay image with the tag :latest is located:
Step 1/3 : FROM docker/whalesay:latest
---> 6b362a9f73eb
If the required context for the image creation exists in its entirety, the Docker daemon starts the image template referenced via FROM in a temporary container and moves on to the next command in the Docker file. In the current example, this is the RUN command, which causes the fortunes program to be installed.
Step 2/3 : RUN apt-get -y update && apt-get install -y fortunes
---> Running in 80b81eda1a11
…etc.
At the end of each step of the image creation process, Docker gives you an ID for the corresponding layer that’s created in the step. This means that each line in the underlying Docker file corresponds to a layer of the image built on it.
When the RUN command is finished, the Docker daemon stops the container created for it, removes it, and starts a new temporary container for the layer of the CMD statement.
Step 3/3 : CMD /usr/games/fortune -a | cowsay
---> Running in c3ac46675e7a
---> 4419af61d32d
Removing intermediate container c3ac46675e7a
At the end of the creation process, the temporary container created in step three is also ended and removed. Docker gives you the ID for the new image:
Successfully built 4419af61d32d
Your newly created image can be found under the name docker-whale in the overview of your locally saved images.
$ sudo docker images
To start a container from your newly created image, use the command line directive sudo docker run in combination with the name of the image:
$ sudo docker run docker-whale
Tag Docker images and upload them to Docker hub
If you want to upload your custom docker-whale image to the hub and make it available to either the community or a workgroup, you first need to link it with a repository of the same name in your own personal namespace. In the Docker terminology, this step is known as tagging.
To publish an image in the Docker hub, proceed as follows:
1. Create a repository: Log in to the Docker hub using your Docker ID and personal password, and create a public repository with the name docker-whale.
2. Determine the image ID: Determine the ID of your custom image docker-whale using the command line directive docker images.
$ sudo docker images
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
docker-whale      latest   4419af61d32d   22 hours ago    275 MB
hello-world       latest   48b5124b2768   6 weeks ago     1.84 kB
docker/whalesay   latest   6b362a9f73eb   21 months ago   247 MB
3. Tag the image: Tag the docker-whale image using the command line program docker tag according to the following pattern:
$ sudo docker tag [IMAGE-ID] [DOCKER-ID]/[IMAGE-NAME]:[TAG]
For the current example, the command line directive for tagging reads:
$ sudo docker tag 4419af61d32d myfreedockerid/docker-whale:latest
4. Upload the image: To upload the image, you first need to log in to the Docker hub. This takes place using the docker login command.
$ sudo docker login
If the login was successful, then use the command line directive docker push to upload your image into the newly created repository.
$ sudo docker push myfreedockerid/docker-whale
If you want to upload more than one image per repository, use varying tags to offer your images in different versions. For example:
myfreedockerid/docker-whale:latest
myfreedockerid/docker-whale:version1
myfreedockerid/docker-whale:version2
Images of different projects should be offered in separate repositories, though.
If the upload was successful, your custom image is now available in the public repository to every Docker user across the globe.
5. Test run: Test the success of the upload by attempting a download of the image.
Note that the local version of the image first needs to be deleted in order to download a new copy with the same tag. Otherwise, Docker will report that the desired image already exists in the current version.
To delete the local Docker image, use the command line directive docker rmi in combination with the corresponding image ID. This is determined, as usual, via docker images. If Docker logs a conflict – e.g. because an image ID is used in multiple repositories or is used in a container – reiterate your command with the option --force (-f for short) to force a deletion.
$ sudo docker rmi -f 4419af61d32d
Display an overview of all local images again:
$ sudo docker images
The deleted elements should no longer appear in the terminal output. Now use the pull command given in the repository to download a new copy of the image from the Docker hub.
$ sudo docker pull myfreedockerid/docker-whale
From beginners to Docker professionals
In this Docker tutorial, we showed how the lean container platform differs from traditional hardware virtualization at several essential points. Docker relies on software containers and so avoids the overhead of a virtual guest operating system. Containers share the kernel of a common host system and generate everything applications need at runtime as isolated processes in the user space. The result is maximum portability: with Docker, a single software container can be run across platforms on various systems and infrastructures. The only requirements are a local installation of the Docker engine and access to the cloud-based Docker hub.
Our example demonstrated how quickly a fully functioning container platform can be implemented on the popular Linux distribution Ubuntu using Docker. You’ve learned how to install and set up Docker on Ubuntu, how to download applications as images from the Docker hub, and how to run them locally in containers. You’ve written a Docker file yourself, created your own image, and made it available to other Docker users over the cloud service. In short: you are now acquainted with the basics of the container platform.
But the Docker universe is large. Over time, the prominent open-source project has developed into a living ecosystem, and numerous competitors are pushing to place alternative software products on the market. Docker is particularly interesting for administrators, especially those operating complex applications with multiple containers in parallel on different systems. Docker offers diverse functions for the orchestration of such clusters. You can find more information about this in our article on the Docker tools Swarm and Compose.