Getting started with GKE – Google Kubernetes Engine

I have not spent much time with Google Cloud Platform because I have mostly used AWS cloud services like EKS, but I wanted to give Google’s GKE (Kubernetes Engine) a try to compare the two offerings. My first impression is how easy it is to create a cluster and to enable options like NetworkPolicy or the Istio service mesh without having to install these manually, compared to AWS EKS.

The GKE integration into the cloud offering is excellent: there is no need for a Kubernetes dashboard or custom monitoring and logging solutions, as everything is nicely integrated into the Google Cloud services and can be used straight away once you have created the cluster.

I created a new project called Kubernetes for deploying the GKE cluster. The command below creates a GKE cluster with the defined settings and options, and I really like the simplicity of a single command to create and manage the cluster, similar to what eksctl does:

gcloud beta container --project "kubernetes-xxxxxx" clusters create "cluster-1" \
       --region "europe-west1" \
       --no-enable-basic-auth \
       --cluster-version "1.14.8-gke.17" \
       --machine-type "n1-standard-2" \
       --image-type "COS" \
       --disk-type "pd-standard" \
       --disk-size "100" \
       --metadata disable-legacy-endpoints=true \
       --scopes "","","","","","","" \
       --num-nodes "1" \
       --enable-stackdriver-kubernetes \
       --enable-ip-alias \
       --network "projects/kubernetes-xxxxxx/global/networks/default" \
       --subnetwork "projects/kubernetes-xxxxxx/regions/europe-west3/subnetworks/default" \
       --default-max-pods-per-node "110" \
       --enable-network-policy \
       --addons HorizontalPodAutoscaling,HttpLoadBalancing \
       --enable-autoupgrade \
       --enable-autorepair \
       --maintenance-window "02:00"

With the following gcloud command you can authenticate and generate a kubeconfig file for your cluster, and start using kubectl directly to deploy your applications:

gcloud beta container clusters get-credentials cluster-1 --region europe-west1 --project kubernetes-xxxxxx
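Once the kubeconfig has been written, a quick check confirms that kubectl can reach the cluster:

# list the worker nodes to verify cluster access
kubectl get nodes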

As mentioned before, there is no need for a separate Kubernetes dashboard because it is integrated into the Google Kubernetes Engine console. You can see cluster information and deployed workloads, and drill down to detailed information about running pods:

Google offers the Kubernetes control plane for free, which is a massive advantage for GKE; AWS, on the other hand, charges around $144 per month for the EKS control plane.

You can keep your GKE control plane running and scale the node pool down to zero when no compute capacity is needed, then scale it up again later when required:

# scale down node pool
gcloud container clusters resize cluster-1 --size=0 --region "europe-west1"

# scale up node pool 
gcloud container clusters resize cluster-1 --size=3 --region "europe-west1"

Let’s deploy the Google microservices demo application with Istio Service Mesh enabled:

# label default namespace to inject Envoy sidecar
kubectl label namespace default istio-injection=enabled

# check istio sidecar injector label
kubectl get namespace -L istio-injection

# deploy Google microservices demo manifests
kubectl create -f
kubectl create -f
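
Once the pods are running you can verify that the Envoy sidecars were injected; each pod should report two ready containers:

# READY column should show 2/2 (application container plus Envoy sidecar)
kubectl get pods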

Get the public IP addresses for the frontend service and ingress gateway to connect with your browser:

# get frontend-external service IP address
kubectl get svc frontend-external --no-headers | awk '{ print $4 }'

# get istio ingress gateway service IP address
kubectl get svc istio-ingressgateway -n istio-system --no-headers | awk '{ print $4 }'

To delete the GKE cluster simply run the following gcloud command:

gcloud beta container --project "kubernetes-xxxxxx" clusters delete "cluster-1" --region "europe-west1"

Google’s Kubernetes Engine is in my opinion the better offering; AWS EKS seems a bit too basic in comparison.

Deploy OpenShift 3.11 Container Platform on Google Cloud Platform using Terraform

Over the past few days I have converted the OpenShift 3.11 infrastructure on Amazon AWS to run on Google Cloud Platform. I have kept a similar VPC network layout and instance setup to run OpenShift.

Before you start you need to create a project on Google Cloud Platform, then create a service account, generate a private key and download the credentials as a JSON file.

Create the new project:
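The same step from the command line; a minimal sketch with gcloud, where the project ID is a placeholder:

# create a new project (replace the placeholder with your own project ID)
gcloud projects create <--your-project--> --name "openshift-311"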

Create the service account:
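With gcloud this looks roughly as follows; the service account name terraform is an assumption:

# create the terraform service account
gcloud iam service-accounts create terraform --display-name "terraform"

# generate a private key and download the credentials as JSON file
gcloud iam service-accounts keys create ./credentials.json \
    --iam-account terraform@<--your-project-->.iam.gserviceaccount.com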

Give the service account compute admin and storage object creator permissions:
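On the command line the two role bindings would look like this (a sketch, assuming the terraform service account from above):

# grant compute admin permissions
gcloud projects add-iam-policy-binding <--your-project--> \
    --member "serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com" \
    --role "roles/compute.admin"

# grant storage object creator permissions
gcloud projects add-iam-policy-binding <--your-project--> \
    --member "serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com" \
    --role "roles/storage.objectCreator"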

Then create a storage bucket for the Terraform backend state and assign the correct bucket permission to the terraform service account:
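For example with gsutil; bucket name and location are placeholders:

# create the bucket for the Terraform backend state
gsutil mb -p <--your-project--> -l europe-west3 gs://<--your-bucket-name-->/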

Bucket permissions:
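One way to grant the terraform service account read/write access to the state objects, assuming the account created above:

# grant object admin on the state bucket
gsutil iam ch serviceAccount:terraform@<--your-project-->.iam.gserviceaccount.com:objectAdmin gs://<--your-bucket-name-->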

To start, clone my openshift-terraform github repository and checkout the google-dev branch:

git clone
cd ./openshift-terraform/ && git checkout google-dev

Add your previously downloaded credentials JSON file:

cat << EOF > ./credentials.json
{
  "type": "service_account",
  "project_id": "<--your-project-->",
  "private_key_id": "<--your-key-id-->",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<--your-private-key-->\n-----END PRIVATE KEY-----\n",
  "client_email": "<--your-service-account-email-->",
  "client_id": "<--your-client-id-->",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "<--your-cert-url-->"
}
EOF

There are a few things you need to modify in the Terraform backend and variables configuration before you can start:

terraform {
  backend "gcs" {
    bucket      = "<--your-bucket-name-->"
    prefix      = "openshift-311"
    credentials = "credentials.json"
  }
}

variable "gcp_region" {
  description = "Google Compute Platform region to launch servers."
  default     = "europe-west3"
}

variable "gcp_project" {
  description = "Google Compute Platform project name."
  default     = "<--your-project-name-->"
}

variable "gcp_zone" {
  type        = "string"
  default     = "europe-west3-a"
  description = "The zone to provision into"
}

Add the needed environment variables to apply changes to CloudFlare DNS:
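The Terraform CloudFlare provider (v1) reads its credentials from environment variables; assuming that provider version, they would be:

# CloudFlare credentials for the Terraform provider
export CLOUDFLARE_EMAIL="<--your-cloudflare-email-->"
export CLOUDFLARE_TOKEN="<--your-cloudflare-api-token-->"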


Let’s start creating the infrastructure, and afterwards verify the created resources on GCP.

terraform init && terraform apply -auto-approve

VPC with public and private subnets in region europe-west3:

Created instances:

Created load balancers for master and infra nodes:
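You can also verify the created resources from the command line, for example:

# list the subnets in region europe-west3
gcloud compute networks subnets list --regions europe-west3

# list the created instances
gcloud compute instances list

# list the load balancer forwarding rules
gcloud compute forwarding-rules list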

Copy the SSH key and the ansible-hosts file to the bastion host, from where you need to run the Ansible OpenShift playbooks:

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -r ./inventory/ansible-hosts centos@$(terraform output bastion):/home/centos/ansible-hosts

I recommend waiting a few minutes while the cloud-init script prepares the bastion host. Afterwards, continue with the openshift-pre and openshift-install playbooks. You can connect to the bastion host and run the playbooks directly:

ssh -A -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-pre.yml -i ~/ansible-hosts"
ssh -A -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ./helper_scripts/id_rsa -l centos $(terraform output bastion) "cd /openshift-ansible/ && ansible-playbook ./playbooks/openshift-install.yml -i ~/ansible-hosts"

Once the installation has completed, continue to create your projects and applications:
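A quick smoke test could look like this; the console URL, project and application names are just examples:

# log in to the OpenShift cluster (URL is an example)
oc login https://console.<--your-domain-->:443

# create a project and deploy a simple test application
oc new-project test-project
oc new-app openshift/hello-openshift
oc get pods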

When you are finished with testing, tear everything down with terraform destroy:

terraform destroy -force 

Please share your feedback and leave a comment.