Ever since Docker made containers accessible to mere mortals, they have become a technology of choice for many companies. This popularity also brings container management hell, which led to the emergence of container orchestration technologies like Kubernetes. This article targets those taking their first steps into the Kubernetes world and shows a simple solution for setting up a cluster in Google Cloud through Terraform.
A brief history: How did we get here?
Docker is relatively new, but containers are not. In fact, back in the late ’70s, the chroot command made it possible to change the apparent root directory of a running process so it could live in partial isolation. But kudos to Docker for making containers so accessible to the whole software development community, not only to the infrastructure wizards.
When companies grow to some hundreds of developers, microservices architectures emerge from the need to accelerate delivery and reduce conflicts between teams. Some hundreds of developers also mean different skill sets. At some point, it becomes more productive to consider different technologies, even for the same types of use-cases, to keep up the pace of hiring great developers and building different features without technology impedance getting in the way.
With a big technology stack, infrastructure can quickly become a mess. Suddenly, the number of systems administrators starts to grow and a separate infrastructure team needs to exist to support post-release issues across a huge number of different technologies and to guarantee availability. At the same time, developers become less aware of such problems.
Another common problem is the setup of environments for different purposes (e.g. QA, Staging, Production). When each environment is built as a snowflake, they quickly become drastically different, leading to problems that only surface when it’s too late.
It’s undeniable that even small companies can now benefit from containers but, like everything, they come at a price, so they may well be overkill for such scenarios.
DevOps Culture: The customer is way beyond the local workstation
The DevOps culture gives more power to those who are most familiar with the business features, the developers, and over time the fence of trust widens, leading them to take an active part in infrastructure deployment.
With great power comes great responsibility. Developers need to care about the whole delivery chain until the customer actually uses the product, not just until they run git push. They also become more active in solving problems, likely caused by themselves.
On the other side, system administrators, now called by the fancy title “Infrastructure Engineers” because they actually started doing engineering (or wrongly called DevOps Engineers, by those who don’t get DevOps at all), become more focused on building reliable infrastructure and adopting/developing tools that simplify and accelerate this delivery chain instead of just being helpdesk agents for the development team. In this culture, both roles are now driven by the same goal, which is to get software delivered faster to the customer, without stumbling on each other.
Put a great feedback loop strategy on top of all this, and we can observe the beauty of infrastructure and development continuously improving together, as the DevOps culture intended in the first place.
Of course, this does not mean that infrastructure engineers should not learn how these new technologies work, nor that developers should do more work than they are supposed to. This way we simply spread the infrastructure knowledge across development teams and bind the right responsibilities to the right teams.
What do containers have to do with this?
The massive adoption of containers we see today is not only due to the simplicity of Docker itself, but also because containers provide a layer of abstraction that greatly simplifies infrastructure management, making it easier and cheaper to keep deployments homogeneous across a company. Containers also make it feasible to create integrated environments locally, avoiding all the conflicts that arise in shared integrated environments.
Developers become more proficient at building software that can live in isolation, and at building container images with the right parameters, tuned just right for the use-cases they are developing, creating a better awareness of the product needs in terms of NFRs (Non-Functional Requirements).
Infrastructure engineers are able to reduce the scope of the tools they create or adopt to container management, instead of potentially everything. This reduced scope not only saves money but also puts less inertia into the delivery chain, promoting business growth.
What about container orchestrators?
With the popularization of containers, it also becomes difficult to manage them at scale. Companies launch so many isolated containers that they become hard to maintain, troubleshoot, or upgrade.
Container orchestration simplifies all of this by providing a great number of tools that make it easy to spin up, scale, stop, and perform rolling upgrades of containers.
The most popular container orchestration technology today, and the one with the greatest active community, is Kubernetes (K8s), although some other alternatives quickly come to mind (e.g. Mesos, Docker Swarm).
Kubernetes Cluster Architecture
This article is about deploying and using a Kubernetes (K8s) cluster. Before diving into the practical guide, we need to understand what Kubernetes is made of. To better understand the architecture of a K8s cluster and all the components involved, what better than the K8s documentation itself?
One practical way to create the cluster is using Terraform, a leading technology for building cloud infrastructure from code. It’s able to create new resources, like compute instances, storage volumes, etc. We can then interact with Kubernetes through a command-line utility called kubectl.
It’s also possible to deploy containers into an existing cluster through Terraform using a specific provider, but that’s out of the scope of this article.
In this article Google Cloud Platform (GCP) will be used, although Terraform also supports other providers, such as Amazon Web Services (AWS). The remainder of the article will show how to create a simple cluster using Terraform and how to deploy a container into it using kubectl, as well as the most basic operations for scaling and upgrading the image.
Creating a K8s cluster on Google Cloud With Terraform
Enough with the theory, let’s build the Terraform files. For that, we need the following:
First, we need a service account with permissions to edit GCP projects (Roles / Project / Editor). This will allow Terraform to interact with the Google Cloud APIs. To create it, open the Service Accounts page and create a new Service Account (e.g. named terraform).
Then we just need to create a new key for that account. This will generate and download a JSON key file that Terraform will use to authenticate against GCP.
The following Terraform spec allows building the simplest K8s cluster possible using Google Kubernetes Engine (GKE).
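As a rough sketch, such a spec could look like the following. The cluster name (hello-cluster) and the credentials file name (account.json) are illustrative assumptions of this sketch; the network, region, and node count match the description in this article.

```hcl
# Minimal sketch of a GKE cluster spec. "hello-cluster" and "account.json"
# are assumed names, not taken from the original article.
provider "google" {
  credentials = file("account.json")
  project     = var.gcp_project
  region      = var.gcp_region
}

variable "gcp_project" {
  description = "GCP project identifier (no default; pass it with -var)"
}

variable "gcp_region" {
  description = "GCP region to create the cluster in"
  default     = "europe-west1"
}

resource "google_container_cluster" "cluster" {
  name               = "hello-cluster"
  location           = var.gcp_region
  network            = "default"
  initial_node_count = 3
}
```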
The above spins up a GKE cluster in the default network, in the europe-west1 region as provided by the variables, with an initial node count of 3. It uses the default node pool required to create a K8s cluster. Terraform provides the means to create a separately managed node pool, but that won’t be covered in this article.
By running terraform plan we see that only one resource needs to be created, which maps to multiple resources on GCP. As the variable gcp_project is left without a default, the command fails when run, unless we append -var 'gcp_project=<my-project>' to it. Please replace <my-project> with whatever GCP project identifier you are using.
The terraform apply command will then effectively perform the cluster setup in Google Cloud.
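The whole flow can be sketched as shell commands, assuming Terraform is installed and the spec lives in the current directory:

```shell
# Download the Google provider plugin into the working directory
terraform init

# Preview the changes; gcp_project has no default, so it must be passed in
terraform plan -var 'gcp_project=<my-project>'

# Actually create the cluster in Google Cloud
terraform apply -var 'gcp_project=<my-project>'
```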
Deploying a container
hellofromhost is a simple Web application written in Golang, whose image is made available on Docker Hub. It simply provides an HTTP GET endpoint that renders a page saying Hello from whatever host it’s running on. This application includes a file named kube.yaml that contains the definition of the pod deployment and service associated with this application.
The first part contains the definition of the Service. This tells K8s that the application runs on port 8080 inside the container and will be exposed through a service of type LoadBalancer through a single IP address in port 80.
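A sketch of what that Service definition might look like (the service name and the app label are assumptions, not taken from the repository; the ports are the ones described here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hellofromhost        # assumed name
spec:
  type: LoadBalancer         # exposed through a single external IP address
  ports:
    - port: 80               # port exposed by the service
      targetPort: 8080       # port the application listens on inside the container
  selector:
    app: hellofromhost       # assumed label; matches the pods of the Deployment
```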
To learn about other types of services, please refer to Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?, which summarises them greatly.
The rest of the file (after the --- delimiter) contains the definition of the Deployment.
We are using the picadoh/hellofromhost:v1 image from Docker Hub and we want only one replica of the application for now. We’ll scale our massively popular application in a second.
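The Deployment part might be sketched like this (the name and labels are again assumptions; the image and replica count are the ones described above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellofromhost                     # assumed name
spec:
  replicas: 1                             # a single replica for now
  selector:
    matchLabels:
      app: hellofromhost                  # assumed label
  template:
    metadata:
      labels:
        app: hellofromhost
    spec:
      containers:
        - name: hellofromhost
          image: picadoh/hellofromhost:v1   # the image from Docker Hub
          ports:
            - containerPort: 8080           # port the application listens on
```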
Using the new cluster
For this we need the gcloud command-line utility that is part of the Google Cloud SDK; with it installed, we run the following command:
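Assuming the cluster was named hello-cluster in the Terraform spec (an assumption of this sketch), the command would look roughly like:

```shell
# Fetch cluster credentials and register a context in kubectl's configuration;
# "hello-cluster" is an assumed name, the region matches the Terraform variables
gcloud container clusters get-credentials hello-cluster --region europe-west1
```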
This will make the new cluster available as a configuration context in the kubectl tool. To see the available contexts, just run the following command:
And, to switch to another cluster:
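Both operations are plain kubectl config subcommands:

```shell
# List the contexts kubectl knows about (the current one is marked with *)
kubectl config get-contexts

# Switch kubectl to another cluster's context
kubectl config use-context <context-name>
```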
To list all the existing Pods, the kubectl get pods command can be used.
Deploying the application
Using the kubectl create command with the above kube.yaml file, we can deploy the container into the cluster in a single step.
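With kubectl pointing at the new cluster, the deployment is a single command:

```shell
# Create the Service and the Deployment defined in kube.yaml
kubectl create -f kube.yaml
```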
It may take a while to have a fully operational application with a Public External IP associated with it. We can check if it’s ready with the following command:
The output will look something like:
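One way to check is to list the services and wait for the EXTERNAL-IP column to change from pending to a real address. The service name and the output below are illustrative, not captured from a real cluster:

```shell
kubectl get services

# Illustrative output once the load balancer is ready:
# NAME            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
# hellofromhost   LoadBalancer   10.3.240.10   35.187.90.10   80:31000/TCP   2m
```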
When the Public External IP is available, we can just query it using the curl command or any browser.
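For instance, with curl (replace <external-ip> with the address reported by kubectl get services):

```shell
# Query the application through the load balancer's external IP
curl http://<external-ip>/
```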
If we run the above command several times, the same host, qc962, will keep responding.
Although there are 3 worker instances in the cluster, only one replica of the application exists.
Scaling the application
If we decide we need one more replica, we can just increase it using the kubectl scale command, setting a new number of replicas for the deployment.
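Assuming the deployment is named hellofromhost (an assumption of this sketch), scaling to two replicas would look like:

```shell
# Set the desired number of replicas on the deployment
kubectl scale deployment hellofromhost --replicas=2
```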
We can now check the list of Pods by running kubectl get pods. If we curl the service several times now, we can see that two different hosts are responding.
Upgrading to a new version
After a small change, the application now includes the “Last updated” date at the bottom of the rendered page, tagged on Docker Hub as v2. Now we want our cluster to get this new version. For this, we can just replace the image by running the following command:
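Assuming the deployment is named hellofromhost (the container name comes from the article, the deployment name is an assumption), the command would be along these lines:

```shell
# Point the "hellofromhost" container of the deployment at the v2 image;
# K8s rolls the change out gradually
kubectl set image deployment/hellofromhost hellofromhost=picadoh/hellofromhost:v2
```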
This sets the container named hellofromhost to the v2 tag of the image, picadoh/hellofromhost:v2. Behind the scenes, K8s will perform a rolling upgrade by creating new containers, waiting for them to start, and terminating the old ones.
Wrapping things up…
In this article a lot of different concepts were covered that might look unrelated at first sight. Why would we need to understand the DevOps culture to set up a Kubernetes cluster, right?
For many years I’ve seen siloed teams: A team for development, a team for automation, a team for architecture, a team for domain experts, a team for security. These teams typically communicate with each other once a week, if they are lucky. Such organizations do not even realize the constraints they put on the business and how badly this affects the customers.
So it’s not just “let’s use containers” and that’s it. If everything else stays the same, failure is guaranteed.
Containers require that developers deeply understand not only the wheel they are building, but also whether that wheel will fit a car or a truck. It demands a shift in mindset, a change in the entire culture.
Everyone needs to understand the importance of building healthy and slim container images that run without flaws. Running applications are now part of the code itself; there won’t be any more manual infrastructure tricks to compensate for poor designs. Everything is code, automated, and fast-paced.
Having this in mind, we were able to set up a simple Kubernetes cluster and deploy containers into it. This is the simplest version of a Kubernetes cluster that must be customized before going into production. It’s the first step towards a better infrastructure.
Hope this helps you get started.