Thursday 5 July 2018

16. Hyper-V - Windows Containers 2016 & Hyper-V Containers – Part 3


When it comes to building your own container images, you have two options. First, you can do it manually by launching a container, customizing it, and then saving an image based on that. Second, you can do it in an automated fashion using something called a Dockerfile. A Dockerfile lists out all the customizations that you want, and it’s a great way to store your container images as code.
Let’s create a new container which we will modify and then save. This time I will create a new container running Windows Server Core, start PowerShell when the container is ready, and make some customizations inside it.
docker run -it --name windowscontainer_core microsoft/windowsservercore powershell
screen.134.jpg
Once the container is created, you can run Get-WindowsFeature | where installed to check which roles and features are installed on it.
screen.135.jpg
Let’s install the DNS Server role, for example.
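Assuming the standard Windows feature name of DNS for the DNS Server role, the command inside the container would look something like this:
Install-WindowsFeature -Name DNS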
screen.137.jpg
In practice you’d probably install some other Windows features, maybe IIS, but for this example my custom image is just going to have the DNS Server role. Now let me detach from this container by pressing Ctrl+P followed by Ctrl+Q, and then run docker ps to get the container ID for that container; it starts with 8a.
screen.138.jpg
Now we need to stop the container in order to create an image using the customizations that we have made to this container.
docker stop 8a
screen.139.jpg
So in this simple example, my container image will consist of a base layer for the operating system, and then one layer where I added the DNS role. To capture the image we will use the docker commit command.
docker commit 8a <name of the new image>
screen.140.jpg
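With the image name used later in this post, the concrete command would be:
docker commit 8a mehic_dns_corecontainerimage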
So we use the docker commit command to actually capture the image, and you would use this workflow any time you want to create a container image manually: launch a container using the base operating system container image that you want, customize it, stop the container, and then run docker commit to capture your image.
If you run docker images you will be able to see our new image.
screen.141.jpg

HOW TO PUSH AND PULL CONTAINER IMAGES

Now that we know how to build container images, we’re ready to push them up to Docker Hub so we can pull them down to any container host that we want. To do that, go to hub.docker.com. It is free to use, so you just need to sign up.
An important thing to keep in mind here: I will use mehiccontainers for my username, and that is going to be the namespace that should be included in my image names.
screenshot.17.jpg
screenshot.19.jpg
screenshot.1.jpg
What I mean by that is, if we go back to PowerShell and type docker images, look at our base container OS images: notice that those are formatted with the name microsoft, so we’ve basically got a namespace, then a slash, then the name of the repository where the images live.
screen.142.jpg
If we look at our image, it is currently tagged with just the name mehic_dns_corecontainerimage, so I need to re-tag it before I can push it up to Docker Hub.
docker tag mehic_dns_corecontainerimage mehiccontainers/mehic_dns_corecontainerimage
screen.144.jpg
Now that we have a new image name with the proper namespace, we are ready to push it to Docker Hub. To do that you will need to run docker login and log in with your username and password.
docker login 
screen.145.jpg
Now I’m ready to run the push command to send my image up into my account.
docker push mehiccontainers/mehic_dns_corecontainerimage
screen.146.jpg
When that is done we can go back to Docker Hub, do a refresh, and that’s it.
screenshot.21.jpg
I’ve got my repository set to public; we can change that to private if needed. If you click on your new image you will be able to see the pull command, change settings, customize the description, view tags, etc.
screenshot.22.jpg
If you would like to pull it down to another container host, you would need to run the docker pull command.
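For the image pushed above, that would be:
docker pull mehiccontainers/mehic_dns_corecontainerimage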

CONTAINER VOLUMES

In addition to storing data inside your container images as you build them, you can also set things up so that containers can access data outside their own environment. For example, you can mount a folder on the container host inside a container, and you can even create something called data volumes, where containers can share data.
Let’s see how this works.
On my container host I created a folder called ContainerVM and inside I created a couple of files.
screen.147.jpg
What I want to do is to launch a container and mount a folder inside the container to this folder on the host machine. We can do it by running
docker run -it -v c:\ContainerVM:c:\Containerdata microsoft/windowsservercore powershell
screen.148.jpg
The -v flag maps the local folder to a folder inside the container. The source is C:\ContainerVM and the destination inside the container is C:\Containerdata.
If we do a directory listing inside the container we can see that we now have a containerdata folder, and if we run ls containerdata we can see the two files in that folder.
screen.150.jpg

CREATE PERSISTENT DATA VOLUMES

Now the other thing that we can do is create persistent data volumes that can be shared across multiple containers. It’s kind of like a virtual volume that can be connected to different containers. The volume itself is a Docker construct. Let’s see how we can create one.
docker run -it -v VolumeData:c:\containervolumedata microsoft/windowsservercore powershell
What this command does is create a data volume called VolumeData and mount it to a folder inside the container called c:\containervolumedata. VolumeData is a Docker volume that is accessible by other containers, so I can connect it to other containers and they can share the data inside that volume.
screen.151.jpg
Now, to connect another container to this volume, run the same command again with the same mapping.
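In other words, launch a second container from the host with the same -v mapping pointing at the same volume name:
docker run -it -v VolumeData:c:\containervolumedata microsoft/windowsservercore powershell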
Just to point out: you will not be able to see this Docker volume on C:\ on the container host. To see it you will need to run
docker volume ls
screen.152.jpg

CONTAINER RESOURCE CONTROL

When you create a virtual machine, one of the very first things you do is specify how much CPU and memory that virtual machine should have. We have some resource controls for our Windows containers as well, and we can apply them when we run our containers. More specifically, this is a function of the docker run command. For example, if I run docker run --help and look at the help output, I can see that there are flags that specify CPU resources. Probably the most commonly used one for Windows containers is cpu-percent, which lets you specify the percentage of the container host’s CPU resources that the container may use.
screen.153.jpg
If you scroll down you can find flags for memory as well.
screen.154.jpg
The reason this is important is that containers will simply use resources on the container host as they need them, so if a process hogs a lot of memory and you don’t specify these controls when you create the container, the container could chew up all the memory on the host machine.
Let’s see how we can configure this.
screen.155.jpg
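For reference, a docker run command applying both kinds of caps would look something like the following; the 1 GB memory limit matches the example discussed below, while the 10 percent CPU value is just an illustrative figure:
docker run -it --cpu-percent 10 --memory 1g microsoft/windowsservercore powershell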
Keep in mind that you’re not pre-provisioning or reserving those resources; these are just caps that we’re setting on the container itself. We’re allowing the container to use up to 1 GB of memory, but we’re not pre-allocating that memory. The same goes for the CPU percentage, which is logical, but keep this in mind as you work with this concept and deploy containers on your own Windows container host.

CONTAINER NETWORKING

Windows container networks are very similar to the virtual networks that you have in something like Hyper-V. The idea is that your containers have a virtual NIC, those virtual NICs are connected to a virtual switch, and, just as you would expect, you can create your own virtual networks and customize the IP address space used. There are a few different options.
We can check the container networks with the Get-ContainerNetwork cmdlet. You run this on the container host.
screen.156
We can see here that we have a NAT network defined, and this is going to be defined for you by default when you build your container hosts on Windows Server.
I didn’t have to set this up manually; it was done for me when I installed the Docker components and set up my container host. You can see that the subnet defined for this NAT network is 172.22.32.0/20, so as I’ve been spinning up containers, the virtual NICs in those containers have been landing in this network. If those containers need to leave that network, for example to go out to the internet, they use NAT through the host’s network to gain access.
In addition to NAT networks, there are a few other network types available, and there are basically two ways to create them: there’s a PowerShell cmdlet you can use to create the networks, and there’s a Docker command as well. Let’s run help New-ContainerNetwork. What I want to point out is that as you create these virtual networks, there’s a Mode parameter that lets you define the mode for the network you’re creating. As you can see in the help file, there are four different network driver options available: NAT, Transparent, L2Bridge and L2Tunnel.
prints.1

NAT -> we get a NAT network by default, like we just talked about, and containers in such a network sit in an isolated network; if they need to go out to the internet they use the IP address of the container host to do that.
TRANSPARENT -> this network type is a little different: when you create a transparent network, each container in that network gets an IP address from the physical network that the container host is using. Transparent networks for Windows containers are very similar to bridged networks in a virtualized environment; we’re essentially saying that the container is going to be on the same physical network as the container host. The nice thing about transparent networks is that the containers can get IP addresses from the DHCP server on your physical network, or you can statically assign those IP addresses if you need to.
L2BRIDGE and L2TUNNEL -> these two network drivers are typically used for public or private cloud deployments.
If I run ipconfig on my container host, we can see that I have a virtual Ethernet adapter with an IP address in that subnet, 172.22.32.1, and I also have my physical Ethernet adapter, which is on a 10.52.99 network. So by default the containers that I launch on this container host go into that 172 network, and we can see that by simply running a container.
prints.2

CONFIGURE TRANSPARENT NETWORK

A transparent network allows your containers to be on the same network as the container host. As a review, we can use the New-ContainerNetwork cmdlet to create one, and we can also use the docker network command. One quick note: if you’re going to create a transparent network on a Hyper-V based virtual machine, so your container host is a virtual machine, you’ll need to enable MAC address spoofing on the VM’s network adapter.
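As a sketch, assuming the container host VM is called ContainerHost (substitute your own VM name), you would run this on the Hyper-V host itself:
Set-VMNetworkAdapter -VMName ContainerHost -MacAddressSpoofing On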
The first step is to stop the Docker service.
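Assuming the default Windows service name of docker, that is:
Stop-Service docker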
prints.13.jpg
Now I’m going to run Get-ContainerNetwork and pipe it to Remove-ContainerNetwork, which blows away all the container networks on this machine; type A to confirm.
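As a single line, that is:
Get-ContainerNetwork | Remove-ContainerNetwork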
prints.14.jpg
The next step is to modify the daemon.json file (typically found at C:\ProgramData\docker\config\daemon.json). You can use Notepad++ to open the JSON file.
prints.15.jpg
The value is
{ "bridge" : "none" }
This is going to tell the Docker Engine that we don’t want a NAT network to be built when we start the service. Save the file.
prints.16.jpg
Start the Docker service. Now if I run Get-ContainerNetwork, nothing is returned, so at this point we’re ready to create our transparent network.
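Again assuming the default service name, that is:
Start-Service docker
Get-ContainerNetwork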
prints.17.jpg
To create a transparent network we need to run the docker network create command, which is the equivalent of the PowerShell cmdlet New-ContainerNetwork (the -d flag tells the command which driver to use):
docker network create -d transparent <network name>
prints.18.jpg
Now if I run Get-ContainerNetwork I will be able to see it.
prints.19.jpg
If you do a docker network ls to list your networks, you can see that we have a network called MEHIC using the transparent network driver, so at this point we can go ahead and launch a container into this transparent network.
prints.20.jpg
When we create a new container, the --network flag is how we tell the Docker Engine which container network to use. This flag has to be used because we no longer have the default NAT network in place; we removed it.
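With the MEHIC network created above, the command would look something like this:
docker run -it --network MEHIC microsoft/windowsservercore powershell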
prints.21.jpg
Now if we run ipconfig inside the container we will see that the container is connected to the host network.
prints.23.jpg

VIEW WINDOWS LOG FILES (NANO SERVER CONTAINER)

The Get-EventLog cmdlet is not supported on Nano Server, so how can we get the Windows event logs in a Nano Server container? There is a command we need to run to get this information.
Let’s create a new Nano Server container.
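Assuming the microsoft/nanoserver base image, the command would look something like this:
docker run -it microsoft/nanoserver powershell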
prints.48.jpg
Now, to see for example the Application log, we need to run
Get-CimInstance -Class Win32_NTLogEvent -Filter "(logfile = 'application')"
prints.49.jpg
If you want to see the System log, just replace application with system.
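That is:
Get-CimInstance -Class Win32_NTLogEvent -Filter "(logfile = 'system')"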
prints.50.jpg
