If you have ever played with Kubernetes, you will know that setting it up manually can be quite painful. Automating your Kubernetes deployment the first time round gives you an environment that is easy to replicate on future projects. DevOps best practice also recommends being able to create and destroy your Kubernetes infrastructure in an automated fashion, which is especially handy when you need to upgrade your Kubernetes clusters.
Each Kubernetes deployment is quite different depending on its specific use case, and there are several ways to deploy Kubernetes. With so many options, picking one and sticking to it is challenging. Here are a few of them:
EKS offers you the possibility of going the manual route: you can set up the managed control plane and worker nodes yourself through the AWS console or CLI.
Kubernetes Operations (KOPS) is an official project for managing clusters on AWS and other cloud providers.
Several providers offer managed, hosted Kubernetes that can be deployed to GCP, AWS or DigitalOcean.
Supergiant Control automates the deployment of Kubernetes clusters on multiple clouds, and is quite an interesting project in itself.
Terraform already has support for EKS built in, so you can deploy to EKS using Terraform.
Deciding on the best way forward could be quite overwhelming.
I’ve tried KOPS, which is one of my favourites: it is maintained by the Kubernetes organization and has a very large user base, which means there is plenty of support for it. My personal choice is usually between KOPS and eksctl.
It really depends on your use case. If you prefer not to manage or upgrade the underlying Kubernetes infrastructure yourself, then EKS might be for you, and eksctl would be a viable option. The only downside with EKS is that it is not yet widely available and is limited to a few specific AWS regions.
AWS provides a really cool workshop for people new to EKS or Kubernetes in the form of an interactive tutorial. Keep in mind that you need to run it locally.
To make things easier I suggest that you try using eksctl when playing around with EKS.
I ran into one issue with eksctl when I first started playing around with it: I could only launch clusters with huge instance types for the nodes. I wanted to be able to use t2.medium or t2.large instances when playing around with EKS (just for the sake of frugality). For some reason I couldn’t launch t2.large instances in Stockholm, but when I tried one of the American regions it worked fine:
$ eksctl create cluster --name my-eks-cluster --nodes 3 --nodes-min 3 --nodes-max 5 --node-type t2.medium --region us-west-2
That command takes quite a while to run, as it creates CloudFormation stacks that take some time to complete. If it finishes without any problems, you will be able to run this command:
$ kubectl get nodes
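If kubectl can’t find the new cluster, eksctl can write the kubeconfig entry for you. A small sketch, assuming the cluster name and region from the create command above:

```shell
# Write/merge the kubeconfig entry for the new cluster
$ eksctl utils write-kubeconfig --cluster my-eks-cluster --region us-west-2

# Verify that the worker nodes have joined and are Ready
$ kubectl get nodes
```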
If you have ever worked with ECS (Amazon Elastic Container Service, not to be confused with EKS), you will be familiar with its easy integration with AWS’s three types of load balancers, whereas in the Kubernetes world you tend to use an Nginx Ingress to expose an application to the outside world. EKS is not as tightly integrated with AWS as a more established service such as ECS, but you will still be able to create load balancers to expose your services, either internally or to the outside world.
As the Medium article “Setting up Amazon EKS and what you must know” points out:
“Most Kubernetes examples will set up an Ingress based on the nginx ingress controller to make themselves visible to the internet. This won’t do anything at all on EKS out of the box.”
This is rather useful if you are new to Kubernetes, coming from ECS, and trying to figure out how to add load balancers to your services on EKS. EKS makes this pretty easy, as you can just define a load balancer in your service like this:
This is from the eks-workshop examples (type: LoadBalancer will set up an ELB on AWS):
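A minimal sketch of such a Service, loosely based on the eks-workshop ecsdemo-frontend example (the selector label and ports here are assumptions for illustration; adjust them to match your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  # type: LoadBalancer tells Kubernetes to provision an AWS ELB for this service
  type: LoadBalancer
  selector:
    app: ecsdemo-frontend   # assumed pod label
  ports:
    - port: 80              # port exposed on the ELB
      targetPort: 3000      # assumed container port
```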
You can then deploy your service to EKS and use the following command to get the endpoint for your ELB:
$ kubectl get service ecsdemo-frontend -o wide
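If you only want the ELB hostname (for scripting, say), kubectl’s jsonpath output can pull it out directly; a small sketch:

```shell
# Print just the ELB DNS name for the service
$ kubectl get service ecsdemo-frontend \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```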
If you decide to manage your own Kubernetes infrastructure, then I would suggest setting up a staging environment to stage upgrades to the cluster as a whole. This is a good way to find issues that could occur when upgrading your cluster’s Kubernetes version. It is probably a good idea to stage these kinds of upgrades and let them run for a few days without problems before attempting them in production.
My tool of choice for managing your own infrastructure would be KOPS. Here are some of my notes on how to use KOPS to quickly deploy a small cluster for playing around with Kubernetes on EC2.
Start by creating an S3 bucket; this bucket will be used as the KOPS state store, which holds the cluster’s configuration:
$ aws s3 mb s3://swipeix.digital --profile swipeix
Now you will need to tell KOPS about this bucket by exporting it as an environment variable:
$ export KOPS_STATE_STORE=s3://swipeix.digital
If you have plenty of AWS profiles in ~/.aws/credentials then I suggest running this command too:
$ export AWS_PROFILE=swipeix
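The KOPS documentation also recommends enabling versioning on the state store bucket, so you can recover a previous cluster configuration if something goes wrong. A sketch using the same bucket and profile as above:

```shell
# Enable versioning on the KOPS state store bucket
$ aws s3api put-bucket-versioning \
    --bucket swipeix.digital \
    --versioning-configuration Status=Enabled \
    --profile swipeix
```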
I looked at several tutorials that I could find on using KOPS on AWS, but most of them made the part about setting up your domain or subdomain on route53 seem rather confusing. I will try to explain what I did and what worked for me. Hopefully, this makes more sense than the other KOPS tutorials you’ve read so far.
Most tutorials that I read suggested that you need to create a new hosted zone in order to use KOPS with Route53. I don’t agree with this; you don’t really need to do that to get KOPS working. (Also keep in mind that creating additional hosted zones can incur extra charges on your AWS bill.) If you already have a domain with a hosted zone on Route53 then try using that; there is no need to do anything special like creating subdomains. Everything should work with your regular Route53 hosted zone. In my case, my hosted zone is swipeix.digital.
You will need to run the following to create the cluster configuration and preview what will be built:
$ kops create cluster --zones=eu-west-2b swipeix.digital
If the preview looks good, you can then build the cluster from that configuration like this:
$ kops update cluster swipeix.digital --yes
After I ran this I was left with a cluster running a c4.large as the master node. This was a bit out of my budget, so I had to resize the master node to run on a smaller instance type. Start by checking the list of instance groups by running:
$ kops validate cluster
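kops validate cluster prints the instance groups along with their status; if you only want the instance group list itself, kops get can show that too (using the cluster name from above):

```shell
# List the instance groups (masters and nodes) for the cluster
$ kops get instancegroups --name swipeix.digital
```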
To resize the master node we have to run:
$ kops edit ig master-eu-west-2b --name swipeix.digital
This brings up a text editor with the instance group config that you can edit (mine opened up in vim). Change the machineType to a smaller size; in my case I changed it to a t2.large.
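For reference, the config you are editing looks roughly like this (an abbreviated sketch of a KOPS InstanceGroup spec; the exact fields vary by KOPS version):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: master-eu-west-2b
spec:
  role: Master
  machineType: t2.large   # change this line to resize the master
  minSize: 1
  maxSize: 1
  subnets:
  - eu-west-2b
```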
Now update the cluster to use this new config:
$ kops update cluster swipeix.digital --yes
Now we need to run the following for the changes to be applied to the cluster:
$ kops rolling-update cluster swipeix.digital --yes
The master node we just resized should now be running on the new, smaller instance type.
If at some point you feel like the cluster you have created with KOPS is causing you too much of a headache, then I suggest deleting the cluster and starting from scratch. If you are still playing around with Kubernetes and don’t specifically need to be set up in a particular AWS region, then I would also suggest deleting your cluster or cluster config and trying again in another region.
This is how you delete your cluster:
$ kops delete cluster swipeix.digital --yes
In an upcoming tutorial, we will look at how to set up load balancers in front of a cluster created by KOPS, and how to do a proper deploy from ECR to Kubernetes.