Create your first Cluster with Docker Swarm

In this blog post, we will create several Amazon EC2 instances and join them into a cluster using Docker Swarm, the cluster management functionality integrated into Docker Engine. Why Docker Swarm? Because it is a production-grade cluster management engine and the easiest entry point for getting your hands dirty with containers, clusters, and orchestrators. We will use AWS because its free tier gives you a 12-month account with 750 hours of compute per month, which is enough for this showcase and lets you play with containers for at least a year. In this showcase we will create 5 EC2 Linux instances; each instance will act as a Docker host, and they will all run in swarm mode. Let’s dive in.

First, let’s create an AWS account and log in to the AWS Management Console. Change the default region to the region of your choice; I will use Europe (Frankfurt).

Region section in AWS Management Console

From the top left menu, select Services and choose EC2 under the Compute section.

Tick the Free tier only checkbox to filter for images that you can launch and run on micro instances for a year for free. We will filter for Ubuntu images and take the latest one (at the time of writing, the latest image is Ubuntu Server 18.04 LTS (HVM)).

In the next step, make sure that a Free tier eligible instance type is selected.

Under Configure Instance Details, enter 5 as the number of instances.
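If you prefer the command line, the same launch can be sketched with the AWS CLI (assuming the CLI is installed and configured; the AMI ID and key pair name are placeholders, and the key pair itself is created in a later step):

aws ec2 run-instances \
    --image-id <UBUNTU-1804-AMI-ID> \
    --count 5 \
    --instance-type t2.micro \
    --key-name <KEY-NAME>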


All instances need a static (fixed) public IP address. There are two ways to do this on AWS, and here is a link with more info about it: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/.
You can create the instances now and assign the static IP addresses later, but make sure you do it before you initialize the cluster: if an instance goes down (for example, on shutdown), it will get a new public IP address when it starts again, and your cluster will be interrupted. We will configure the static IP addresses after we create the instances. It is also good practice to configure a security group that allows SSH connections only from a specific IP address and allows inbound and outbound traffic between the instances. There is a default security group, selectable under Configure Security Group, that is already configured to allow inbound and outbound traffic between instances, so you can use it, but feel free to configure your own security group.

Select an existing security group and assign the default one

Before you launch the new instances, a key pair needs to be created. This is an important step: store the key in a secure and accessible location, because you will not be able to download the file again after it’s created. If you lose your key you will have a bad day, because you will not be able to change the password (in the case of a Windows AMI) or connect to your instance with an SSH client.
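If you want to script this step, the key pair can also be created with the AWS CLI (a sketch; the key name matches the one I use later in this post). Tightening the file permissions is required before SSH will accept the key:

aws ec2 create-key-pair --key-name markosaric --query 'KeyMaterial' --output text > markosaric.pem
chmod 400 markosaric.pem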

Our 5 instances are up and running, and now we can install Docker Engine on Ubuntu.

To connect to the instances with SSH, you will need to add an inbound rule to your security group. Find its name (in my case this is default) and click on the security group under the Security Groups column in the Instances section of the EC2 service.

This will open the selected security group, where you can edit inbound and outbound rules for your instances. Let’s create an SSH inbound rule (select SSH as the type) and add My IP as the source. With this rule, you will be able to connect to your instances using the SSH command in a later section of this blog post.

In case you created your own security group, you will need to add one more rule (inbound and outbound) with All traffic as the type and Anywhere as the destination. This rule will allow inbound and outbound traffic between our five instances. It isn’t a secure way to do it, but for this showcase it will be OK.
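For reference, both rules can also be sketched with the AWS CLI (this assumes the default VPC, where a security group can be addressed by name; <YOUR-IP> is a placeholder for your own address):

# Allow SSH only from your own IP address
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 22 --cidr <YOUR-IP>/32
# Allow all traffic between members of the same security group
aws ec2 authorize-security-group-ingress --group-name default --protocol -1 --source-group default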

Now it’s time to configure the static IP addresses. Under this link https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/ you will find two ways to assign a static public IP address. We will allocate an Elastic IP address from Amazon’s pool of public IPv4 addresses. In the EC2 management console, search for the Elastic IPs menu, click Allocate Elastic IP address, and then click Allocate.

Repeat these steps until you have created 5 static IP addresses. Keep in mind that 5 is also the maximum number of Elastic IP addresses that can be allocated from Amazon’s pool per region; if you need more, you have to use the second method and bring your own pool of public IP addresses.

Let’s associate those IP addresses with the instances by selecting an IP address and choosing Associate Elastic IP address under Actions. Use Instance as the resource type and choose the instance.
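The same allocation and association can be done with the AWS CLI (a sketch; the instance ID is a placeholder, and the allocation ID is returned by the first command):

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id <INSTANCE-ID> --allocation-id <ALLOCATION-ID>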

All public IPs are now associated with the instances.

Under the Connect section of a selected instance you can find a tutorial on how to connect to the instance from Windows and Linux, but if you are using macOS the best way is to do it through the terminal (make sure that you are in the same directory where your private key is stored). Here is an example of the command that I am using:

ssh -i "markosaric.pem" ubuntu@ec2-3-124-27-48.eu-central-1.compute.amazonaws.com

Once you’re connected, your shell prompt will change, and now you can follow the steps under the Install Docker Engine – Community section.


I’ve taken the six commands that you will need from that section:

1. Update the apt package index:
sudo apt-get update

2. Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

3. Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Use the following command to set up the stable repository: 
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
	
5. Update the apt package index:
sudo apt-get update

6. Install the latest version of Docker Engine:
sudo apt-get install docker-ce docker-ce-cli containerd.io

Once the steps are completed, we can run the command docker -v and see that Docker is installed. Let’s do the same on the other 4 instances.
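If you don’t feel like logging in to each instance by hand, here is a sketch that repeats the installation remotely, assuming you have saved steps 1-6 above into a hypothetical install-docker.sh script:

# Run the install script on each remaining instance over SSH
for host in <HOST-2> <HOST-3> <HOST-4> <HOST-5>; do
    ssh -i "markosaric.pem" ubuntu@$host 'bash -s' < install-docker.sh
done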

Now it’s time to configure swarm mode. Let’s briefly explain what a swarm is. The following definition is taken from the Swarm mode key concepts section of the official Docker documentation:

A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles. When you create a service, you define its optimal state (number of replicas, network and storage resources available to it, ports the service exposes to the outside world, and more). Docker works to maintain that desired state. For instance, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container which is part of a swarm service and managed by a swarm manager, as opposed to a standalone container.

Let’s add names to our 5 EC2 instances for the manager and worker nodes. It’s common practice to create an odd number of manager nodes (the best practice is 3, 5, or 7 managers), so we will create three manager nodes and two worker nodes. Overall this gives us 5 worker roles, because manager nodes also act as workers.
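You can set the names in the console by editing the Name column, or sketch it with the AWS CLI (the instance ID is a placeholder; repeat for manager-2, manager-3, worker-1, and worker-2):

aws ec2 create-tags --resources <INSTANCE-ID> --tags Key=Name,Value=manager-1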

Connect to the Manager 1 instance using SSH and run sudo docker info. You will see that Swarm is currently inactive.

We will use the following command to initialize a swarm mode and to create our first manager:

docker swarm init --advertise-addr <MANAGER-IP>

MANAGER-IP is the current instance’s public IP address (this is why the IP addresses need to be static). When we run the command:

sudo docker swarm init --advertise-addr 3.124.27.48:2377

we will get the following message:

Swarm initialized: current node (radg9hu2ygtkkindn5rqttd3d) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0urjpy7cz2i1g1kg6kqtpl2jz9azgxy13l1grvzrecmedtcc3l-2icejk047u79frq51jym45u6g 3.124.27.48:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The first command contains the token that is used for joining workers to the swarm:

docker swarm join --token SWMTKN-1-0urjpy7cz2i1g1kg6kqtpl2jz9azgxy13l1grvzrecmedtcc3l-2icejk047u79frq51jym45u6g 3.124.27.48:2377

and the second command, docker swarm join-token manager, generates a token for joining manager nodes to the swarm. Copy the first command for later and execute the second one to create the manager token. You will get the following message with a manager token:

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0urjpy7cz2i1g1kg6kqtpl2jz9azgxy13l1grvzrecmedtcc3l-enmqpq04x7ido4bzn6gmiknrc 3.124.27.48:2377

Let’s connect to the other instances marked as manager nodes and join them to the cluster as managers by using the manager token.

Let’s do the same for the instances marked as workers, but now using the worker token.
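If you lose a token, you can print it again at any time on a manager node:

sudo docker swarm join-token worker
sudo docker swarm join-token -q manager    # -q prints only the token itself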

Once you have finished adding the manager and worker nodes, running sudo docker info will show that Swarm is active, with 3 managers and 5 nodes in total.
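You can also query just the swarm state without scrolling through the full docker info output (a small sketch using its Go template support):

sudo docker info --format '{{.Swarm.LocalNodeState}}'    # should print: active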

Let’s check our cluster now by executing sudo docker node ls on a manager node.
As you can see, 5 nodes are available; three of them have manager status, and one manager is always the leader. Our cluster is ready to run some services.

Let’s test our cluster with one simple service. As an example we will use the nginxdemos/hello container from Docker Hub. Here is the command that we will use to create our service:

sudo docker service create --replicas 4 -p 8080:80 --name test nginxdemos/hello
  • The docker service create command creates the service.
  • The --name flag names the service test.
  • The --replicas flag specifies the desired state of 4 running instances.
  • The -p flag maps port 8080 on the host (our instance) to the container’s port 80.
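Once the command returns, we can check that all replicas have converged to the desired state; the REPLICAS column should show 4/4:

sudo docker service ls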

Let’s run the docker service ps test command to see on which instances our application is running.

And if we open http://ec2-35-157-182-134.eu-central-1.compute.amazonaws.com:8080 in our browser, we will get our sample page.

We can hit any instance in our cluster on port 8080 and we will get the same result, thanks to the ingress load balancing that the Docker Swarm engine gives you out of the box, and that’s beautiful.
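A quick way to watch the load balancing in action is to request the page repeatedly; the demo page embeds the name of the serving container, so the names should rotate across requests (a sketch, assuming curl is installed; the hostname is a placeholder for any of your instances):

for i in 1 2 3 4; do curl -s http://<ANY-INSTANCE-PUBLIC-DNS>:8080 | grep -i "name:"; done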

Now let’s examine what happens if we shut down the instance running the test.2 task. Connect to that instance and run sudo shutdown now, then run sudo docker service ps test again on any manager node. As you can see, our instance really was shut down, but miraculously we still have 4 instances up and running, because Docker rescheduled the task on another node, and that’s the beauty of it. You can easily scale, configure rolling updates, and your app keeps running 24/7. The cluster management engine does the heavy lifting for you, and you can sleep tight.
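Scaling, by the way, really is a one-liner; for example, to go from 4 to 8 replicas and watch the new tasks being scheduled:

sudo docker service scale test=8
sudo docker service ps test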

And we have reached the end of this blog post. Where to go from here? As I’ve already said, you can play with the EC2 instances for a year, spin up several containers, and dive deeper into Docker features like networking, logging, volumes, stack deploys, etc. When you are confident with containers, you can try to break an existing monolithic application into microservices and run them on the Docker Swarm cluster. When you have reached a good knowledge of containers and microservices, it’s time for Kubernetes. But let’s leave that for some future writings.