Creating & Deploying Your Own Image to Google Kubernetes Engine (GKE) Using Terraform (Part 1)

Abhishek Sharma
7 min read · May 31, 2021

Hello Folks,

New week, new learnings…………

Overview

In this blog, we’re going to learn some DevOps tools and integrate them to understand how the development and deployment process works. So let’s see what we’re going to learn:

  • How to create a simple Docker container image using a Dockerfile
  • General use of the Docker container registry DockerHub
  • Deployment of our own Docker image to Kubernetes using a YAML file
  • Google Kubernetes Engine cluster creation using a Terraform script
  • How to create a Terraform cluster deployment file with the help of a Kubernetes configuration file (YAML)
  • Some optional deployments using Terraform, like a Google Virtual Private Cloud (VPC) network & more

Let’s start our learning

Step 1. Creating a simple docker container image

In this step, we’re going to create a simple hello-world type of application using the Python Flask framework, & then we’ll create a Docker container image of this application using a Dockerfile.

Firstly, create a workspace where we’ll put all our files, or you can create this in a separate virtual environment.

mkdir blog-workspace
cd blog-workspace

Now create a python application file in this directory

nano main.py

Note: I’m using the nano editor to create files; use whichever editor you prefer.

This is just simple Python code that returns “Hello From Abhishek”; you can change it according to your choice or add some web page code to make it feel more realistic.
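The original post shows main.py as an image, so here’s a minimal sketch of what it likely contains, assuming Flask (the route and message are illustrative):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Change this message to whatever you like
    return "Hello From Abhishek"

# gunicorn will serve this module as "main:app" (see the Dockerfile step).
# If you prefer CMD ["python", "main.py"], add:
#   app.run(host="0.0.0.0", port=80)
```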

nano requirements.txt

Every application has its own dependencies: specific module versions it needs to work properly. For that, we create a “requirements.txt” file where we list all the dependencies, & we use this file to install all the required packages when building the Docker container image.
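The requirements.txt in the original post is shown as an image; for a Flask app served by gunicorn, a minimal version might look like this (pinning exact versions is recommended, but the pins themselves would depend on your setup):

```
# Assumed dependencies for the Flask app above
flask
gunicorn
```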

Now, Let’s create a Dockerfile to create a docker image for this application

nano Dockerfile

This Dockerfile takes python:3.7 as the base image, creates a working directory /app, copies all contents from the current folder into the new Docker container environment, installs all the dependencies which we’ve already defined in the requirements.txt file, & selects a port on which this container will be exposed to the outside world (every Docker container is an isolated environment). Finally, it runs our app with the help of the gunicorn server. You can also use CMD [“python”, “main.py”].
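The Dockerfile itself is shown as an image in the original post; a minimal sketch matching the description above might look like this (the gunicorn bind address and the main:app module name are assumptions):

```dockerfile
# Base image with Python 3.7
FROM python:3.7

# Create the working directory and copy the app into it
WORKDIR /app
COPY . /app

# Install the dependencies listed in requirements.txt
RUN pip install -r requirements.txt

# Port on which the container is exposed to the outside world
EXPOSE 80

# Serve the Flask app (main.py exposes `app`) with gunicorn
CMD ["gunicorn", "-b", "0.0.0.0:80", "main:app"]
```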

Note: We’re ready to create our own image, but make sure all the files I’ve mentioned above are in the same directory. Here it is blog-workspace, and it has 3 files: main.py, requirements.txt & Dockerfile.

For building a Docker image from a Dockerfile, we have a simple command:

docker build -t <your_image_name>:<image_tag(Optional)> .

This command creates a Docker image with your specified name & tag with the help of the Dockerfile. You can see all the steps we mentioned in the Dockerfile executing one by one: pulling the python:3.7 base image, creating the /app directory, installing all the dependencies listed in the requirements.txt file, and so on.

After successful execution of the above command, you can see your own Docker image listed among your local Docker images using this command:

docker images

Let’s test our newly created image by running a container from it & checking whether it gives us the expected output (for me, “Hello From Abhishek”).

docker run -it -p 2000:80 --name <name_you_want> <your_image_name>:<image_tag(Optional)>

This command runs a container using our own image. Open your browser and go to http://localhost:2000, or just copy the URL from your command output, & you’ll see your desired output. The -p flag maps a host port of your choosing (I used 2000) to the container’s exposed port (here, 80).

Now that our Docker image is working fine, let’s push it to DockerHub so that we can use it from anywhere, & anyone else can use it too; as you know, we generally pull base images from DockerHub.

Step 2. General use of docker container registry Dockerhub

Docker Hub is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images with an array of content sources including container community developers, open-source projects, and independent software vendors (ISV) building and distributing their code in containers. (source)

In this section, we are going to make our own docker image publicly available & from here we’re going to pull our image for deployment in the Kubernetes cluster.

If you don’t have a DockerHub account, first create one & then log in to DockerHub from your local Docker CLI using the command below:

docker login

If prompted, enter your DockerHub username and password.

Okay, so now let’s tag & push our own Docker image to DockerHub. There are several ways to do this, but I’m using the re-tagging method. You can find other methods Here.

docker tag <your-image-name> <hub-username>/<repo-name/you-can-give-any-name>:<version-tag>

This will re-tag your own Docker image. Now we’re finally ready to push the image to DockerHub.

docker push <hub-username>/<repo-name/you-can-give-any-name>:<version-tag>

If you don’t get any errors, validate your work by navigating to the DockerHub console in your browser; you’ll see that your image is available on DockerHub & ready to pull.

Note: Cloud providers also have their own container image building & registry tools. For example, Google Cloud Platform has its own registry & image-building services called Container Registry & Cloud Build. These services have their own advantages & use cases, so you can explore them as well.

Step 3: Deployment of own docker image to Kubernetes using YAML file

Let’s try to deploy this image to a Kubernetes cluster using a YAML file & validate that it’s working. We’ll also use this YAML file as a reference when creating the deployment file for Terraform.

For this step, I’m using Cloud Shell. If you’re working locally, first set up the Google Cloud SDK properly with the help of this link & make sure you have kubectl installed in your local environment.

Firstly, set the region & zone where you want to create the Kubernetes cluster.

gcloud config set compute/region <your google cloud region>
gcloud config set compute/zone <your google cloud zone>

Now, create a one-node cluster; I named mine blog-cluster.

gcloud container clusters create <your cluster's name> --num-nodes=<put no of nodes you want>

After the creation of your cluster, fetch its credentials into your local environment so that you can use the kubectl tool for your deployment.

gcloud container clusters get-credentials <your cluster's name> --zone=<your cluster's zone>

Now create a Kubernetes deployment configuration file where we define everything: the container image, deployment, services, labels, etc.

nano config.yaml

This file will create a Deployment & a LoadBalancer-type Service, which will serve over port 80.
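The config.yaml in the original post is shown as an image; a minimal sketch that creates such a Deployment and LoadBalancer Service might look like this (the resource names, labels, and image reference are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
        - name: blog-app
          # Replace with your pushed image: <hub-username>/<repo-name>:<version-tag>
          image: <hub-username>/<repo-name>:<version-tag>
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
spec:
  type: LoadBalancer
  selector:
    app: blog-app
  ports:
    - port: 80
      targetPort: 80
```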

Note: With your own image, you’ll likely need to change the container image name & the port numbers for the container as well as the LoadBalancer service, so edit the configuration file accordingly.

To deploy this file, we’re going to use the kubectl tool.

kubectl apply -f config.yaml

This will deploy your own application.

Now let’s check that our pods, service & deployment are working fine.

kubectl get all

If all works fine, copy the external IP of the service & paste it into your browser; you’ll see your desired output.

Well done, You did it………….

You created your own image & deployed it successfully in the Google Kubernetes Engine.

Note: Doing this step will be very helpful when we create our deployment file for Terraform.

Never forget to delete the cluster after experimenting.

gcloud container clusters delete <your cluster's name> --zone=<your cluster's zone>

Note: This is just for demo purposes, so I haven’t focused on machine type, network configuration & other advanced settings. If you’re working on a real-world use case, think about all of these; they’re very important.

If you have any queries about this, or if I’ve explained something wrong, ping me on LinkedIn. I’m very happy to help & to correct my mistakes.

Stay Tuned!

There is more learning in the next part…………..
