Tuesday, February 6, 2018

Drooling Over Docker #4 — Installing Docker CE on Linux

Choosing the right product

Docker engine comes in two avatars — Docker Community Edition (CE) and Docker Enterprise Edition (EE). While the CE product is free to experiment with, EE comes for a fee (a 1-month free trial is available).
Docker CE — The Docker Community Edition (CE) webpage describes it as a good platform for developers and small teams to learn about containerized applications.
Docker CE is made available via two update channels –
1. Stable — Updates are released quarterly (Check the available RPM packages)
2. Edge — Updates are released monthly (Check the available RPM packages)
Since 2017, Docker has moved to a YY.MM style of versioning. So if you explore a Docker CE Edge repository, you will find monthly releases there, e.g. the last few versions as 17.11, 17.12, and 18.01; the Stable repository, however, would have the last few versions as 17.06, 17.09, and 17.12.

Installing Docker CE using repositories

As I collect details from the Docker CE installation page today, amongst Linux distributions, Docker CE is available for CentOS, Fedora, Debian and Ubuntu only. Let us see how to install it on CentOS & Ubuntu -
1. Prerequisites
  • For CentOS — to have CentOS 7; centos-extras repository enabled (remains enabled by default)
  • For Ubuntu — to have any of the following 64-bit Ubuntu versions: Artful 17.10 (Docker CE 17.11 Edge and higher only); Zesty 17.04; Xenial 16.04 (LTS); Trusty 14.04 (LTS)
2. Installing Necessary Packages
  • For CentOS — # yum install -y yum-utils device-mapper-persistent-data lvm2
  • For Ubuntu — # apt-get update ; # apt-get install apt-transport-https ca-certificates curl software-properties-common
3. Adding Docker GPG Key
4. Adding Docker CE Repository
5. Installing Docker CE From The Repositories
Installing the latest update from the repository
  • For CentOS — # yum install docker-ce
  • For Ubuntu — # apt-get update ; # apt-get install docker-ce
Installing the chosen version from the repository
  • For CentOS — # yum list docker-ce --showduplicates ; # yum install docker-ce-&lt;VERSION&gt;
  • For Ubuntu — # apt-cache madison docker-ce ; # apt-get install docker-ce=&lt;VERSION&gt;
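Steps 3 and 4 above appear without their commands (the original screenshots are not reproduced here); the following is a sketch based on Docker's official install pages for CentOS and Ubuntu — the repository URLs are the documented ones:

```shell
# Step 3 - Add Docker's official GPG key
#   For CentOS - the repo file added in step 4 carries the key reference; nothing separate to do
#   For Ubuntu:
#     curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# Step 4 - Add the Docker CE (stable) repository
#   For CentOS:
#     yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#   For Ubuntu:
#     add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```

After this, step 5 (installing docker-ce) proceeds exactly as shown below.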
Contd…
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 
Resources:
  1. https://blog.docker.com/2017/03/docker-enterprise-edition/ — Docker Versioning
  2. https://docs.docker.com/install/linux/docker-ce/centos/ — Docker CE on CentOS
  3. https://docs.docker.com/install/linux/docker-ce/ubuntu/ — Docker CE on Ubuntu

Saturday, January 20, 2018

Using templates in 'docker service create'

Let us take a quick look at some useful commands here before using and 'inspecting' a template -

Node ID / Host Name:

A docker node
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
s5id2jv8uofjfmbbiqiqe6nnx *   ubuntu     Ready    Active         Leader

Note that the above-mentioned ID is the node id.


Each host that participates in a docker swarm is called a node. A node (a host) has a NODE ID (a swarm given identity) and a hostname (an independent name that has always been there with the host even before joining a swarm). I have highlighted the NODE ID (the swarm thing) and the hostname in the above image.

Service ID / Service Name:

Creating a single replica of a service

When we create a service - we are actually creating a running instance of the image, which in swarm mode is called a task (a container). Since we are creating just one replica here, we are creating just one task.

$ docker service create --name django httpd

Now we want to check the service(s) running on this swarm -

$ docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
ep9luuvn21y2   django   replicated   1/1        httpd:latest

Note that the above-mentioned ID is the service ID.


Task ID / Task Name:

$ docker service ps django   OR
$ docker service ps ep9luuvn21y2
ID             NAME       IMAGE          NODE     DESIRED STATE   CURRENT STATE            ERROR   PORTS
y2n63eaa1ln3   django.1   httpd:latest   ubuntu   Running         Running 17 minutes ago

When you want to list the tasks of one or more services, use the service ps command.
The list of tasks shows the task ID as well as the task name (servicename.#) and the node name (hostname) it is running on.

docker node ps and docker service ps SERVICENAME show this information from the node perspective (all services on a node) and the service perspective (all nodes running a service) respectively.

Note -

$ docker service ps SERVICE_NAMES   <--- List the tasks of one or more services
$ docker node ps [NODES]            <--- List tasks running on one or more nodes, defaults to current node

So by now, we are familiar with NODE ID / NODE NAME (hostname), SERVICE ID / SERVICE NAME, TASK ID / TASK NAME and the commands to list them. The commands are 'docker node ls', 'docker service ls', 'docker service ps SERVICE_NAME or SERVICE_ID' OR 'docker node ps NODE_NAME or NODE_ID'.

Let us make it easier to understand through the following image - 


Task ID & Task Name Vs. Container ID & Container Name:


From the following commands, also notice that the task name and task ID together form the underlying container name, while the container ID is something else altogether.
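To make that concrete, here is a minimal sketch (plain shell, not run against a live swarm; the values are the sample ones from the listings above) of how the container name is composed:

```shell
# A swarm task's backing container is named <service>.<slot>.<task id>;
# the container ID, by contrast, is generated independently by the engine.
service="django"
slot=1
task_id="y2n63eaa1ln3"     # (truncated) task ID as shown by 'docker service ps'
container_name="${service}.${slot}.${task_id}"
echo "$container_name"
# On a live node, compare with:  docker ps --format '{{.ID}}  {{.Names}}'
```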



Putting everything together - Using Templates:

Using a template, we can pass customized hostname, mount, or env values to a container at the time of service creation.

Now let us try to understand the Templates through this sequence of commands - 


Here we first create a service with the --hostname option set to a Go-template-styled variable; it sets the hostname within the container that is going to be created under this service. In the example here, the hostname recorded within the container is composed of the node's hostname, the node's ID, and the service name.
The last command in the sequence prints that hostname recorded within the container back to the screen (reproduced below) -

docker container inspect --format="{{.Config.Hostname}}" hosttempl.1.xudl0i6soylcnhep47nvog4nn
ubuntu-s5id2jv8uofjfmbbiqiqe6nnx-hosttempl
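For completeness (the command screenshots are not reproduced above), the service would have been created along these lines — the --hostname template syntax is per Docker's service-create documentation, and the shell expansion below merely mimics what the engine does with the placeholders:

```shell
# The service creation command, roughly:
#   docker service create --name hosttempl \
#     --hostname="{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}" httpd
# Mimicking the engine's placeholder substitution with the values seen earlier:
node_hostname="ubuntu"
node_id="s5id2jv8uofjfmbbiqiqe6nnx"
service_name="hosttempl"
container_hostname="${node_hostname}-${node_id}-${service_name}"
echo "$container_hostname"
```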





Some useful docker orchestration commands

Listed below are the commands that are available in the 17.11 version of Docker CE. Listing them here together makes it easy to compare them and see when to use each -

docker swarm (working with the cluster)

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens
  leave       Leave the swarm
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm

docker node (working with nodes - individual hosts that make up a swarm cluster)

Commands:
  demote      Demote one or more nodes from manager in the swarm
  inspect     Display detailed information on one or more nodes
  ls          List nodes in the swarm
  promote     Promote one or more nodes to manager in the swarm
  ps          List tasks running on one or more nodes, defaults to current node
  rm          Remove one or more nodes from the swarm
  update      Update a node


docker service (the service that runs instances aka containers aka tasks on nodes)
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service

Friday, January 5, 2018

Drooling Over Docker #3 - Docker CLI Structure & Management Commands

Prior to the Docker 1.13 release (January 2017), docker commands were logically grouped around their purpose, e.g. build, ship, or run commands; however, this list of commands was quite unstructured and often confusing for the lack of context they would operate inside of, e.g. for a certain command, it was difficult to determine whether it would run on an image or on the container spawned from that image. A detailed list of a few such challenges in using the earlier Docker CLI can be seen here.
With Docker 1.13 release, the Docker commands are given a structure where a command has three parts (with a few exceptions) in addition to other options or arguments as needed -
docker <management-command> <sub-command>
The command "$ docker image ls" has the following components:
  • The docker client command - docker
  • To provide a context, management command e.g. image
  • The sub-command to inform of the specific action required e.g. ls to list the downloaded docker images
Let us see how these commands look like at the terminal:
Listing downloaded Docker images
Removing one of the downloaded images
The following list presents all management commands available in the current docker version:
Each of the above listed management commands has a set of sub-commands to initiate a specific action; the following screen-shot shows the sub-commands under management command image :
The following table shows all available docker image commands:
Though the earlier Docker commands are compatible with the newer Docker versions, the new commands are easier to remember and invoke. The following table shows a comparison between earlier (still working) vs. newly structured commands -
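As the comparison table is a screenshot, here is a small illustrative lookup (a sketch, not an exhaustive mapping) covering a few of the pairs — both forms still work:

```shell
# Map a few legacy docker commands to their management-command equivalents.
new_style() {
  case "$1" in
    images) echo "docker image ls" ;;
    ps)     echo "docker container ls" ;;
    rmi)    echo "docker image rm" ;;
    rm)     echo "docker container rm" ;;
    *)      echo "unknown" ;;
  esac
}
new_style images    # prints: docker image ls
new_style rm        # prints: docker container rm
```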
Contd...
---------------------------------------------------------------------------------------------
Resources:
  1. Docker 1.13 Management Commands
  2. Docker Management Commands by Romin Irani
  3. Working with Docker v1.13 CLI -Management commands
  4. Docker CLI documentation

Saturday, October 28, 2017

So Amazon Develops Its Own NIC - AWS Enhanced Networking!

Ethernet adapter types available for EC2
AWS EC2 instances offer 3 types of network adapters (depending upon the instance types) - the default VIF, which offers low (~100Mbps) to moderate (~300Mbps) network throughput, and two other network adapters that support enhanced networking (~10Gbps or greater) - Intel 82599 Virtual Function adapter, and the next generation Elastic Network Adapter (ENA). Enhanced networking also requires specialized H/W support (the above mentioned physical network adapters installed on the EC2 instance host) and thus this feature is only available through specific EC2 instance types.
Enhanced Networking
The VIF adapter present in an EC2 instance is provided by the underlying virtualization layer, i.e. the XEN hypervisor for AWS. This adapter employs the usual network virtualization technique, which involves significant overhead as it relies on the traditional interrupt-based (IRQ) approach inherent in the PCIe NIC design.
In an attempt to support higher network throughput, Amazon Web Services introduced enhanced networking, for which they relied on PCIe NICs equipped with SR-IOV technology that allows a VM (EC2 instance) to bypass the underlying hypervisor (XEN) and use direct memory access (DMA) instead of interrupting the CPU (more about SR-IOV in the last section of this article).
Initially, Amazon offered 10Gbps network throughput for a few EC2 instance types using Intel's 82599 VF PCIe NIC. In January 2015, Amazon bought an Israel-based chipmaker, Annapurna Labs; based upon the newly acquired company's flagship product Alpine, Amazon launched its own new-generation PCIe NIC that supported up to 25Gbps of network throughput. Amazon christened this NIC the Elastic Network Adapter (ENA). ENA is only available for some specific EC2 instance types.
The following table, based on a similar table from Amazon Knowledge Center page on Enhanced Networking, summarizes the network adapter types and which EC2 instances types they are available for as of today -

How do I enable enhanced networking on EC2 instances
Refer Amazon documentation on enabling Enhanced Networking for EC2 instances.
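In short, on the instance itself the reported NIC driver tells you which path you are on (modinfo/ethtool usage per the AWS documentation; the driver value below is a hard-coded sample, not a live reading):

```shell
# On the EC2 instance (per AWS docs):
#   modinfo ena        # is the ENA driver module available?
#   ethtool -i eth0    # driver: ena | ixgbevf (Intel 82599 VF) | vif (Xen, no enhanced networking)
driver="ena"           # sample value, as ethtool -i would report on an ENA-enabled instance
case "$driver" in
  ena|ixgbevf) mode="enhanced networking" ;;
  *)           mode="standard networking" ;;
esac
echo "$mode"
```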

The technology behind Enhanced Networking - SR-IOV

Assigning a dedicated PCI network H/W adapter (or port) to each VM on a host can give line-rate throughput, but it is not feasible; software-based sharing of an IO device (IO virtualization), on the other hand, imposes significant overhead and is thus unable to use the capabilities of the physical device fully.
The Single Root - Input Output Virtualization (SR-IOV) specification, released by PCI-SIG in 2007, is one of the technologies to achieve Network Function Virtualization (NFV). It gets its name "Single Root" from the PCIe Root Complex. SR-IOV enables the physical network adapter to be shared directly with the VMs, bypassing the hypervisor, as shown below.
SR-IOV architecture offers two function types -
  • Physical Functions (PFs) - A NIC feature for configuring and managing SR-IOV functionality on the device. It is exploited by the PF device driver that is part of the hypervisor.
  • Virtual Functions (VFs) - It is a PCIe function that is used by the respective VMs for communicating directly with the physical NIC. The hypervisor, using the PF device driver, assigns VFs to VMs and then VFs use native virtual function device drivers to directly communicate with the NIC.
So when a data packet is received by the NIC adapter, the classifier on the SR-IOV-capable NIC - as configured by the PF device driver - places it in the queue mapped to the appropriate virtual function and its target VM.
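A toy sketch of that classification step — the MAC addresses and VF assignments below are made-up illustration values, standing in for the mapping the PF driver programs into the NIC:

```shell
# Classify an incoming frame by destination MAC into the VF queue of its VM.
classify() {
  case "$1" in
    02:aa:bb:cc:dd:01) echo "queue for VF0 -> VM-A" ;;
    02:aa:bb:cc:dd:02) echo "queue for VF1 -> VM-B" ;;
    *)                 echo "PF (hypervisor path)" ;;
  esac
}
classify 02:aa:bb:cc:dd:02    # prints: queue for VF1 -> VM-B
```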
Question to ponder upon ...
Since SR-IOV lets a VM directly map to a PCI port bypassing the hypervisor, how does Amazon achieve Inter-EC2 network switching, implement Security Groups or ACLs?
---------------------------------------------------------------------------------------------
Resources:
Amazon's docs, blogs & videos:
  1. Read #9 "The importance of the network" @ All Things Distributed
  2. How do I enable and configure enhanced networking on my EC2 instances?
  3. Enhanced Networking - FAQs
  4. AWS re:Invent 2016: Optimizing Network Performance for Amazon EC2 Instances
  5. AWS re:Invent 2016: Optimizing Network Performance @ YouTube
  6. AWS re:Invent 2016: James Hamilton @ YouTube (between 23:00-36:00 minutes)
  7. Elastic Network Adapter – High Performance Network Interface for Amazon EC2
  8. Amazon EC2 Instance Types
Independent blogs & videos:
  1. How did they build that — EC2 Enhanced Networking?
  2. Single Root I/O Virtualization (SR-IOV) Primer @ RedHat
  3. An Introduction to SR-IOV Technology @ Intel
  4. Single Root I/O Virtualization (SR-IOV) @ VMWare
  5. Accelerating the NFV Data Plane: SR-IOV and DPDK… in my own words
  6. Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics
  7. Kernel bypass
  8. Network function virtualization (NFV)
  9. PCI Express Architecture In a Nutshell
  10. Amazon buys secretive chip maker Annapurna Labs for $350 million
  11. The chip company Amazon bought for $350 million has a new product that could terrify Intel
  12. Annapurna Labs
  13. Intel VMDq Explanation by Patrick Kutch @ YouTube
  14. Intel SR-IOV Explanation by Patrick Kutch @ YouTube
  15. What is SR-IOV?

Saturday, October 7, 2017

Drooling Over Docker #2 - Understanding Union File Systems

To understand the composition of Container images better, let us take a detour and learn about Union File Systems first.
File Systems and Mounting in Unix/GNU Linux
In Unix (and GNU Linux), everything is a file i.e. - apart from regular data files, even system devices are exposed through a file system name space, e.g. a hard disk can be seen as a file named sda in the directory for device files i.e. /dev and can be accessed through its absolute path being /dev/sda.
Even disk partitions are seen as device files within rootfs, e.g. /dev/sda1 is the first primary partition of the disk represented by /dev/sda. Such a partition is further formatted with a file system driver (ext3, ext4, etc.) so that it can store files and directories within it.
Now, if we have a formatted (have a file system) partition /dev/sda1 that has some data stored on it in files organized within a few sub-directories and we want to read from or write to those files - we will have to attach this file system (on file /dev/sda1) to some directory in the logical file system tree. This process is known as mounting (to an existing directory) a file system.
#mount -t ext4 /dev/sda1 /mnt
Union File System - what is that!?
The keyword here is UNION as in SET THEORY.
If you experiment and mount two separate file systems at the same mount point, one after the other, you'll only get to see the files from the file system that was mounted last.
Try out these commands in your Linux terminal and see for yourself -
Unlike a simple mount, a union mount provides a UNION of both the file systems mounted at the same mount point -
After these examples, now consider that you have a read-only file system and you want to modify a certain file in there so that you can go ahead with your computing needs - on the lines of the above-mentioned example, a Union File System can help us here. We can create another read-write file system, either on disk or in RAM as the case may be, and mount both these file systems to another mount point using a Union File System. Now, this mount point can give you access to all the files in both the ro and rw file systems. In case you want to modify any of the files residing on the ro file system, the Union File System driver would search for that file and perform a CoW (Copy on Write) to make another copy of the file in the rw file system that overrides the copy that exists on the ro file system. This newly created copy is finally updated with the new contents. Any new files created as part of a software installation would also go into the rw file system.
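The copy-up behaviour just described can be sketched without a real union driver — the directories and file name below are made up for illustration, and the lookup loop mimics how the driver searches the rw branch before the ro branch:

```shell
# Two branches: a read-only lower layer and a writable upper layer.
mkdir -p /tmp/union_demo/ro /tmp/union_demo/rw
echo "original" > /tmp/union_demo/ro/app.conf

# CoW: to modify a ro file, copy it up to the rw branch, then write there.
cp /tmp/union_demo/ro/app.conf /tmp/union_demo/rw/app.conf
echo "modified" > /tmp/union_demo/rw/app.conf

# Union lookup: the rw branch shadows same-named files in the ro branch.
for branch in /tmp/union_demo/rw /tmp/union_demo/ro; do
  if [ -f "$branch/app.conf" ]; then
    seen=$(cat "$branch/app.conf")
    break
  fi
done
echo "$seen"    # prints: modified  (the ro copy still holds "original")
```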
Please check Link1 and Link2 for some very good examples on the points discussed so far. I have also given these links in the resources section at the end of the article.
A use case for Union File System - Knoppix
Again, what if the file system we are attempting to mount is read-only (e.g. from a CD-ROM) and we intend to change its contents by editing/removing existing files or adding new files to the file system. Is it possible?
Let us understand another example from Knoppix - a Live CD version of Linux that could boot your machine and allow you to work on your system. You could change system settings or download additional software when the Knoppix OS was running in memory and even save these changes for subsequent runs.
The possibility of making Knoppix settings persistent allowed Knoppix to be a good portable desktop OS - all it required was a Live Knoppix CD and a USB drive. You could boot system through Knoppix and load changed settings from the USB drive and you could get the same desktop environment on any machine.
With support from Union File Systems (Knoppix 3.9 brought in UnionFS and its later versions used aufs), Knoppix mounted multiple file systems on top of each other (in the logical space, as mounting happens in logical space), i.e. mounting the rw RamDisk on top of the ro CD file system, forming a UNION of both these file systems.
If you write to a file that is in the ro area, the aufs driver copies it into the rw area and performs the write operation there. The next time you access the file, you get the modified version from the rw area (which hides the same-named file in the ro area). You work on it transparently and aufs does the work for you - you keep working as if it were a writable system.
How Union File Systems help Docker Containers
Docker Containers bring you immutable (unchanging over time) software in the form of layers. During build time, you stack multiple such immutable layers of software to get the desired applications with their dependencies. Many a time, there are software layers that override the functionality given by the lower layers in the stack. This is only possible by implementing a Union File System. Also, at run time, you may download some additional software within the container, perhaps a newer version of an existing package; such a case would trigger CoW and have the updated copy written to the rw layer of the software stack. Even any newly installed software would settle in this very rw top layer of the stack. I'll discuss more about Docker Layers in the next chapter...

Contd...
---------------------------------------------------------------------------------------------------------
  1. DON'T MISS THIS VIDEO FROM VMware about Containers
  2. Knoppix 3.8 and UnionFS. Wow. Just Wow. by Kyle Rankin
  3. Knoppix Hacks: Tips and Tools for Hacking, Repairing, and Enjoying Your PC - Hack#25
  4. Docker Storage: An Introduction
  5. Union file systems: Implementations, part I
  6. Digging into Docker layers
  7. Docker Container’s Filesystem Demystified
  8. Lightweight Virtualization LXC containers & AUFS
  9. Why to use AuFS instead of UnionFS
  10. Union Filesystem - FreeBSD
  11. Linux AuFS Examples: Another Union File System Tutorial
  12. Why does Docker need a Union File System
  13. Manage data in Docker
  14. Select a storage driver
  15. Use the AUFS storage driver
  16. Use the BTRFS storage driver
  17. Use the Device Mapper storage driver
  18. Use the OverlayFS storage driver
  19. Filesystems in LiveCD by Junjiro R. Okajima 
  20. AuFS2 - ReadMe
  21. AuFS4 - ReadMe
  22. AuFS - Ubuntu Man Page
  23. AUFS: How to create a read/write branch of only part of a directory tree?
  24. Unionfs: User- and Community-Oriented Development of a Unification File System
  25. UnionMount and Union-type Filesystem (Google Translated from Japanese)
