Deploying OpenShift Container Platform using Terraform and Ansible on Amazon AWS

After my previous articles on OpenShift and Terraform I wanted to show how to create the necessary infrastructure and deploy OpenShift Container Platform in a more real-world scenario. I highly recommend reading my other posts about using Terraform to deploy an Amazon AWS VPC and AWS EC2 instances and load balancers. Once the infrastructure is created, we will use the bastion host to connect to the environment and deploy OpenShift Origin using Ansible.

I think this is an interesting example of what tools like Terraform and Ansible can do together.

I will not go into detail about the configuration and will only show the output of deploying the infrastructure. Please check out my GitHub repository to see the detailed configuration: https://github.com/berndonline/openshift-terraform

Before we start, you need to clone the repository and generate the SSH key that the bastion host uses to access the OpenShift nodes:

git clone https://github.com/berndonline/openshift-terraform.git
cd ./openshift-terraform/
ssh-keygen -b 2048 -t rsa -f ./helper_scripts/id_rsa -q -N ""
chmod 600 ./helper_scripts/id_rsa

We are ready to create the infrastructure and run terraform apply:

[email protected]:~/openshift-terraform$ terraform apply

...

Plan: 56 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

Apply complete! Resources: 19 added, 0 changed, 16 destroyed.

Outputs:

bastion = ec2-34-244-225-35.eu-west-1.compute.amazonaws.com
openshift master = master-35563dddc8b2ea9c.elb.eu-west-1.amazonaws.com
openshift subdomain = infra-1994425986.eu-west-1.elb.amazonaws.com
[email protected]:~/openshift-terraform$

Terraform successfully creates the VPC, load balancers and all needed instances. Before we continue, wait 5 to 10 minutes because the cloud-init script takes a bit of time and all the instances reboot at the end.
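
If you would rather check than wait blindly, the instance and system status checks can be polled with the AWS CLI; a small, optional example, assuming the CLI is configured for the same account and the eu-west-1 region used here:

# list status checks for all running instances in the region
aws ec2 describe-instance-status --region eu-west-1 \
  --query 'InstanceStatuses[].[InstanceId,InstanceStatus.Status,SystemStatus.Status]' \
  --output table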

Instances:

Security groups:

Target groups for the Master and the Infra load balancers:

Master and the Infra load balancers:

Terraform also automatically creates the inventory file for the OpenShift installation and adds the hostnames for the master, infra and worker nodes to the correct inventory groups. The next step is to copy the private SSH key and the inventory file to the bastion host. I am using the terraform output command to get the public hostname of the bastion host:

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./inventory/ansible-hosts centos@$(terraform output bastion):/home/centos/ansible-hosts
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l centos $(terraform output bastion)
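
For reference, the inventory file itself can be rendered by Terraform with the template and local providers; the sketch below is illustrative (the template path, resource and variable names are assumptions, not necessarily the exact ones from my repository):

# render the Ansible inventory from a template with the node hostnames
data "template_file" "ansible_hosts" {
  template = "${file("${path.module}/helper_scripts/ansible-hosts.template")}"

  vars {
    master_hostnames = "${join("\n", aws_instance.master.*.private_dns)}"
    infra_hostnames  = "${join("\n", aws_instance.infra.*.private_dns)}"
    node_hostnames   = "${join("\n", aws_instance.worker.*.private_dns)}"
  }
}

# write the rendered inventory into the local inventory folder
resource "local_file" "ansible_hosts" {
  content  = "${data.template_file.ansible_hosts.rendered}"
  filename = "${path.module}/inventory/ansible-hosts"
}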

On the bastion node, change to the /openshift-ansible/ folder and start running the prerequisites and the deploy-cluster playbooks:

cd /openshift-ansible/
ansible-playbook ./playbooks/prerequisites.yml -i ~/ansible-hosts
ansible-playbook ./playbooks/deploy_cluster.yml -i ~/ansible-hosts

Here is the output from running the prerequisites playbook:

[[email protected] ~]$ cd /openshift-ansible/
[[email protected] openshift-ansible]$ ansible-playbook ./playbooks/prerequisites.yml -i ~/ansible-hosts

PLAY [Initialization Checkpoint Start] ****************************************************************************************************************************

TASK [Set install initialization 'In Progress'] *******************************************************************************************************************
Saturday 15 September 2018  11:04:50 +0000 (0:00:00.407)       0:00:00.407 ****
ok: [ip-10-0-1-237.eu-west-1.compute.internal]

PLAY [Populate config host groups] ********************************************************************************************************************************

TASK [Load group name mapping variables] **************************************************************************************************************************
Saturday 15 September 2018  11:04:50 +0000 (0:00:00.110)       0:00:00.517 ****
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.033)       0:00:00.551 ****
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.024)       0:00:00.575 ****
skipping: [localhost]

TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.024)       0:00:00.599 ****
skipping: [localhost]

...

PLAY RECAP ********************************************************************************************************************************************************
ip-10-0-1-192.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-1-237.eu-west-1.compute.internal : ok=64   changed=15   unreachable=0    failed=0
ip-10-0-1-248.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-5-174.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-5-235.eu-west-1.compute.internal : ok=58   changed=14   unreachable=0    failed=0
ip-10-0-5-35.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-9-130.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-9-51.eu-west-1.compute.internal : ok=58   changed=14   unreachable=0    failed=0
ip-10-0-9-85.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
localhost                  : ok=11   changed=0    unreachable=0    failed=0


INSTALLER STATUS **************************************************************************************************************************************************
Initialization             : Complete (0:00:41)

[[email protected] openshift-ansible]$

Continue with the deploy cluster playbook:

[[email protected] openshift-ansible]$ ansible-playbook ./playbooks/deploy_cluster.yml -i ~/ansible-hosts

PLAY [Initialization Checkpoint Start] ****************************************************************************************************************************

TASK [Set install initialization 'In Progress'] *******************************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.102)       0:00:00.102 ****
ok: [ip-10-0-1-237.eu-west-1.compute.internal]

PLAY [Populate config host groups] ********************************************************************************************************************************

TASK [Load group name mapping variables] **************************************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.064)       0:00:00.167 ****
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.031)       0:00:00.198 ****
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.026)       0:00:00.225 ****
skipping: [localhost]

...

PLAY RECAP ********************************************************************************************************************************************************
ip-10-0-1-192.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-1-237.eu-west-1.compute.internal : ok=591  changed=256  unreachable=0    failed=0
ip-10-0-1-248.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-5-174.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-5-235.eu-west-1.compute.internal : ok=325  changed=145  unreachable=0    failed=0
ip-10-0-5-35.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-9-130.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-9-51.eu-west-1.compute.internal : ok=325  changed=145  unreachable=0    failed=0
ip-10-0-9-85.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
localhost                  : ok=13   changed=0    unreachable=0    failed=0

INSTALLER STATUS **************************************************************************************************************************************************
Initialization             : Complete (0:00:55)
Health Check               : Complete (0:00:01)
etcd Install               : Complete (0:01:03)
Master Install             : Complete (0:05:17)
Master Additional Install  : Complete (0:00:26)
Node Install               : Complete (0:08:24)
Hosted Install             : Complete (0:00:57)
Web Console Install        : Complete (0:00:28)
Service Catalog Install    : Complete (0:01:19)

[[email protected] openshift-ansible]$

Once the deploy playbook finishes, we have a working OpenShift cluster:

Log in with username demo and password demo.
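
If you prefer the command line over the web console, you can log in with the same credentials against the master load balancer; this assumes the oc client is installed and the default OpenShift API port 8443, and you may have to confirm the certificate warning as shown in the later examples:

oc login https://master-35563dddc8b2ea9c.elb.eu-west-1.amazonaws.com:8443 -u demo -p demo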

You cannot access OpenShift routes via the Amazon-generated DNS name of the infra load balancer; this is not allowed. You need to create a wildcard DNS CNAME record like *.paas.domain.com that points to the AWS load balancer DNS record.
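
If the zone is hosted in Route 53, the wildcard record can also be created with Terraform; a minimal sketch, assuming a hosted zone ID variable and using the subdomain output shown above:

# wildcard CNAME pointing the application subdomain at the infra load balancer
resource "aws_route53_record" "openshift_wildcard" {
  zone_id = "${var.dns_zone_id}"
  name    = "*.paas.domain.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["infra-1994425986.eu-west-1.elb.amazonaws.com"]
}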

Let’s continue with some basic cluster checks to see that the nodes are in Ready state:

[[email protected] ~]$ oc get nodes
NAME                                       STATUS    ROLES     AGE       VERSION
ip-10-0-1-192.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-1-237.eu-west-1.compute.internal   Ready     master    16m       v1.9.1+a0ce1bc657
ip-10-0-1-248.eu-west-1.compute.internal   Ready         11m       v1.9.1+a0ce1bc657
ip-10-0-5-174.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-5-235.eu-west-1.compute.internal   Ready     master    15m       v1.9.1+a0ce1bc657
ip-10-0-5-35.eu-west-1.compute.internal    Ready         11m       v1.9.1+a0ce1bc657
ip-10-0-9-130.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-9-51.eu-west-1.compute.internal    Ready     master    14m       v1.9.1+a0ce1bc657
ip-10-0-9-85.eu-west-1.compute.internal    Ready         11m       v1.9.1+a0ce1bc657
[[email protected] ~]$
[[email protected] ~]$ oc get projects
NAME                                DISPLAY NAME   STATUS
default                                            Active
kube-public                                        Active
kube-service-catalog                               Active
kube-system                                        Active
logging                                            Active
management-infra                                   Active
openshift                                          Active
openshift-ansible-service-broker                   Active
openshift-infra                                    Active
openshift-node                                     Active
openshift-template-service-broker                  Active
openshift-web-console                              Active
[[email protected] ~]$
[[email protected] ~]$ oc get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
docker-registry-1-8798r    1/1       Running   0          10m       10.128.2.2   ip-10-0-5-35.eu-west-1.compute.internal
registry-console-1-zh9m4   1/1       Running   0          10m       10.129.2.3   ip-10-0-9-85.eu-west-1.compute.internal
router-1-96zzf             1/1       Running   0          10m       10.0.9.85    ip-10-0-9-85.eu-west-1.compute.internal
router-1-nfh7h             1/1       Running   0          10m       10.0.1.248   ip-10-0-1-248.eu-west-1.compute.internal
router-1-pcs68             1/1       Running   0          10m       10.0.5.35    ip-10-0-5-35.eu-west-1.compute.internal
[[email protected] ~]$

At the end just destroy the infrastructure with terraform destroy:

[email protected]:~/openshift-terraform$ terraform destroy

...

Destroy complete! Resources: 56 destroyed.
[email protected]:~/openshift-terraform$

I will continue improving the configuration and I plan to use Jenkins to deploy the AWS infrastructure and OpenShift fully automatically.

Please let me know in the comments below if you like the article or have any questions.

Terraform deploying Amazon EC2 Autoscaling Group and AWS Load Balancers

This is the next article about using Terraform, this time to create an EC2 Auto Scaling group and the different load balancing options for EC2 instances. This setup builds on my previous blog post about using Terraform to deploy an AWS VPC, so please read that first. In my GitHub repository you will find all the needed Terraform files, ec2.tf and vpc.tf, to deploy the full environment.

EC2 resource overview:

Let’s start with the launch configuration and the autoscaling group. I am using eu-west-1 and a standard Ubuntu 16.04 AMI. The instances are created in the private subnets and don’t get a public IP address assigned, but they have internet access via the NAT gateway:

resource "aws_launch_configuration" "autoscale_launch" {
  image_id = "${lookup(var.aws_amis, var.aws_region)}"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.sec_web.id}"]
  key_name = "${aws_key_pair.auth.id}"
  user_data = <<-EOF
              #!/bin/bash
              sudo apt-get -y update
              sudo apt-get -y install nginx
              EOF
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "autoscale_group" {
  launch_configuration = "${aws_launch_configuration.autoscale_launch.id}"
  vpc_zone_identifier = ["${aws_subnet.PrivateSubnetA.id}","${aws_subnet.PrivateSubnetB.id}","${aws_subnet.PrivateSubnetC.id}"]
  load_balancers = ["${aws_elb.elb.name}"]
  min_size = 3
  max_size = 3
  tag {
    key = "Name"
    value = "autoscale"
    propagate_at_launch = true
  }
}

I also created a few security groups to allow the traffic; please have a look at ec2.tf for more detail.
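
As an illustration, the web security group referenced by the launch configuration could look roughly like this; a simplified sketch, the actual rules are in ec2.tf:

resource "aws_security_group" "sec_web" {
  name   = "sec_web"
  vpc_id = "${aws_vpc.default.id}"

  # allow http only from the load balancer security group
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = ["${aws_security_group.sec_lb.id}"]
  }

  # allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}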

Autoscaling Group

Now the configuration for an AWS Elastic (Classic) Load Balancer:

resource "aws_elb" "elb" {
  name = "elb"
  security_groups = ["${aws_security_group.sec_lb.id}"]
  subnets            = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  cross_zone_load_balancing   = true
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:80/"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "80"
    instance_protocol = "http"
  }
}

Elastic Load Balancer (Classic LB)

Use an Application Load Balancer (ALB) for more advanced web load balancing; it only supports the HTTP and HTTPS protocols. You start by creating the ALB resource, then the target group where you can define stickiness and health checks. The listener defines which protocol the ALB uses and assigns the target group. In the end you attach the target group to the autoscaling group:

resource "aws_lb" "alb" {  
  name            = "alb"  
  subnets         = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  security_groups = ["${aws_security_group.sec_lb.id}"]
  internal        = false 
  idle_timeout    = 60   
  tags {    
    Name    = "alb"    
  }   
}

resource "aws_lb_target_group" "alb_target_group" {  
  name     = "alb-target-group"  
  port     = "80"  
  protocol = "HTTP"  
  vpc_id   = "${aws_vpc.default.id}"   
  tags {    
    name = "alb_target_group"    
  }   
  stickiness {    
    type            = "lb_cookie"    
    cookie_duration = 1800    
    enabled         = true 
  }   
  health_check {    
    healthy_threshold   = 3    
    unhealthy_threshold = 10    
    timeout             = 5    
    interval            = 10    
    path                = "/"    
    port                = 80
  }
}

resource "aws_lb_listener" "alb_listener" {  
  load_balancer_arn = "${aws_lb.alb.arn}"  
  port              = 80  
  protocol          = "HTTP"
  
  default_action {    
    target_group_arn = "${aws_lb_target_group.alb_target_group.arn}"
    type             = "forward"  
  }
}

resource "aws_autoscaling_attachment" "alb_autoscale" {
  alb_target_group_arn   = "${aws_lb_target_group.alb_target_group.arn}"
  autoscaling_group_name = "${aws_autoscaling_group.autoscale_group.id}"
}

Application Load Balancer (ALB)

ALB Target Group

The Network Load Balancer (NLB) configuration is very similar to the ALB, except that it uses the TCP protocol, which should only be used where raw performance matters because of the limited health check functionality:

resource "aws_lb" "nlb" {
  name               = "nlb"
  internal           = false
  load_balancer_type = "network"
  subnets            = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  enable_cross_zone_load_balancing  = true
  tags {
    Name = "nlb"
  }
}

resource "aws_lb_target_group" "nlb_target_group" {  
  name     = "nlb-target-group"  
  port     = "80"  
  protocol = "TCP"  
  vpc_id   = "${aws_vpc.default.id}"   
  tags {    
    name = "nlb_target_group"    
  }     
}

resource "aws_lb_listener" "nlb_listener" {  
  load_balancer_arn = "${aws_lb.nlb.arn}"  
  port              = 80  
  protocol          = "TCP"
  
  default_action {    
    target_group_arn = "${aws_lb_target_group.nlb_target_group.arn}"
    type             = "forward"  
  }
}

resource "aws_autoscaling_attachment" "nlb_autoscale" {
  alb_target_group_arn   = "${aws_lb_target_group.nlb_target_group.arn}"
  autoscaling_group_name = "${aws_autoscaling_group.autoscale_group.id}"
}

Network Load Balancer (NLB)

NLB Target Group

Let’s run terraform apply:

[email protected]:~/aws-terraform$ terraform apply
data.aws_availability_zones.available: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_autoscaling_attachment.alb_autoscale
      id:                                          
      alb_target_group_arn:                        "${aws_lb_target_group.alb_target_group.arn}"
      autoscaling_group_name:                      "${aws_autoscaling_group.autoscale_group.id}"

  + aws_autoscaling_attachment.nlb_autoscale
      id:                                          
      alb_target_group_arn:                        "${aws_lb_target_group.nlb_target_group.arn}"
      autoscaling_group_name:                      "${aws_autoscaling_group.autoscale_group.id}"

...

Plan: 41 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

aws_lb.nlb: Creation complete after 2m53s (ID: arn:aws:elasticloadbalancing:eu-west-1:...:loadbalancer/net/nlb/235e69c61779b723)
aws_lb_listener.nlb_listener: Creating...
  arn:                               "" => ""
  default_action.#:                  "" => "1"
  default_action.0.target_group_arn: "" => "arn:aws:elasticloadbalancing:eu-west-1:552276840222:targetgroup/nlb-target-group/7b3c10cbdd411669"
  default_action.0.type:             "" => "forward"
  load_balancer_arn:                 "" => "arn:aws:elasticloadbalancing:eu-west-1:552276840222:loadbalancer/net/nlb/235e69c61779b723"
  port:                              "" => "80"
  protocol:                          "" => "TCP"
  ssl_policy:                        "" => ""
aws_lb_listener.nlb_listener: Creation complete after 0s (ID: arn:aws:elasticloadbalancing:eu-west-1:.../nlb/235e69c61779b723/dfde2530387b470f)

Apply complete! Resources: 41 added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = alb-1295224636.eu-west-1.elb.amazonaws.com
elb_dns_name = elb-611107604.eu-west-1.elb.amazonaws.com
nlb_dns_name = nlb-235e69c61779b723.elb.eu-west-1.amazonaws.com
[email protected]:~/aws-terraform$

Together with the VPC configuration from my previous article, this deploys the different load balancers and provides the DNS names as outputs, ready to use.

Over the coming weeks I will optimise the Terraform code and move some of the resource settings into the variables.tf file to make this more scalable.
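
As a first step in that direction, values like the instance type and the scaling limits could move into variables.tf, roughly like this (a sketch, not the final layout), with the launch configuration then referencing "${var.instance_type}" and the autoscaling group "${var.asg_min_size}" and "${var.asg_max_size}":

variable "instance_type" {
  description = "EC2 instance type for the autoscaling group"
  default     = "t2.micro"
}

variable "asg_min_size" {
  description = "minimum number of instances in the autoscaling group"
  default     = 3
}

variable "asg_max_size" {
  description = "maximum number of instances in the autoscaling group"
  default     = 3
}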

If you like this article, please share your feedback and leave a comment.

Getting started with OpenShift Container Platform

In recent months I have spent a lot of time on networking and automation, but I want to shift more towards running modern container platforms like Kubernetes or OpenShift. Both rely heavily on networking services and, as I shared in my previous article about the AVI software load balancer, it all fits nicely into networking in my opinion.

But before we start, please have a look at my previous article about Deploying OpenShift Origin Cluster using Ansible to create a small OpenShift platform for testing.

Create a bash completion file for oc commands:

[[email protected] ~]# oc completion bash > /etc/bash_completion.d/oc
[[email protected] ~]# . /etc/bash_completion.d/oc

Let’s start and log in to OpenShift as a normal user account:

[[email protected] ~]# oc login https://console.lab.hostgate.net:8443/
The server is using a certificate that does not match its hostname: x509: certificate is valid for lab.hostgate.net, not console.lab.hostgate.net
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://console.lab.hostgate.net:8443 (openshift)
Username: demo
Password:
Login successful.

[[email protected] ~]#

Instead of username and password you can use a token, which you can get from the web console:

oc login https://console.lab.hostgate.net:8443 --token=***hash token***

Now create the project where we want to run our web application:

[[email protected] ~]# oc new-project webapp
Now using project "webapp" on server "https://console.lab.hostgate.net:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[[email protected] ~]#

Afterwards we need to create a build configuration; in my example I use an external Dockerfile without starting the build directly:

[[email protected] ~]#  oc new-build --name webapp-build --binary
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
    * A Docker build using binary input will be created
      * The resulting image will be pushed to image stream "webapp-build:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label build=webapp-build ...
    imagestream "webapp-build" created
    buildconfig "webapp-build" created
--> Success
[[email protected] ~]#

Create a Dockerfile:

[[email protected] ~]# vi Dockerfile

Copy and paste the line below into the Dockerfile:

FROM openshift/hello-openshift

Let’s continue and start the build from the Dockerfile we specified previously:

[[email protected] ~]#  oc start-build webapp-build --from-file=Dockerfile --follow
Uploading file "Dockerfile" as binary input for the build ...
build "webapp-build-1" started
Receiving source from STDIN as file Dockerfile
Pulling image openshift/hello-openshift ...
Step 1/3 : FROM openshift/hello-openshift
 ---> 7af3297a3fb4
Step 2/3 : ENV "OPENSHIFT_BUILD_NAME" "webapp-build-1" "OPENSHIFT_BUILD_NAMESPACE" "webapp"
 ---> Running in 422f63f69364
 ---> 2cd93085ec93
Removing intermediate container 422f63f69364
Step 3/3 : LABEL "io.openshift.build.name" "webapp-build-1" "io.openshift.build.namespace" "webapp"
 ---> Running in 0c3e6cce6f0b
 ---> cf178dda8238
Removing intermediate container 0c3e6cce6f0b
Successfully built cf178dda8238
Pushing image docker-registry.default.svc:5000/webapp/webapp-build:latest ...
Push successful
[[email protected] ~]#

Alternatively, you can inject the Dockerfile content directly in a single command, and the build starts immediately:

[[email protected] ~]#  oc new-build --name webapp-build -D $'FROM openshift/hello-openshift'

Create the web application:

[[email protected] ~]# oc new-app webapp-build
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
--> Found image cf178dd (4 minutes old) in image stream "webapp/webapp-build" under tag "latest" for "webapp-build"

    * This image will be deployed in deployment config "webapp-build"
    * Ports 8080/tcp, 8888/tcp will be load balanced by service "webapp-build"
      * Other containers can access this service through the hostname "webapp-build"

--> Creating resources ...
    deploymentconfig "webapp-build" created
    service "webapp-build" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/webapp-build'
    Run 'oc status' to view your app.
[[email protected] ~]#

As you see below, we are currently running a single pod:

[[email protected] ~]#  oc get pod -o wide
NAME                   READY     STATUS      RESTARTS   AGE       IP            NODE
webapp-build-1-build   0/1       Completed   0          8m        10.131.0.27   origin-node-1
webapp-build-1-znk98   1/1       Running     0          3m        10.131.0.29   origin-node-1
[[email protected] ~]#

Let’s check our endpoints and services:

[[email protected] ~]# oc get ep
NAME           ENDPOINTS                           AGE
webapp-build   10.131.0.29:8080,10.131.0.29:8888   1m
[[email protected] ~]# oc get svc
NAME           CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
webapp-build   172.30.64.97           8080/TCP,8888/TCP   1m
[[email protected] ~]#

Running a single pod is not great for redundancy, so let’s scale out:

[[email protected] ~]# oc scale --replicas=5 dc/webapp-build
deploymentconfig "webapp-build" scaled
[[email protected] ~]#  oc get pod -o wide
NAME                   READY     STATUS      RESTARTS   AGE       IP            NODE
webapp-build-1-4fb98   1/1       Running     0          15s       10.130.0.47   origin-node-2
webapp-build-1-build   0/1       Completed   0          9m        10.131.0.27   origin-node-1
webapp-build-1-dw6ww   1/1       Running     0          15s       10.131.0.30   origin-node-1
webapp-build-1-lswhg   1/1       Running     0          15s       10.131.0.31   origin-node-1
webapp-build-1-z4nk9   1/1       Running     0          15s       10.130.0.46   origin-node-2
webapp-build-1-znk98   1/1       Running     0          4m        10.131.0.29   origin-node-1
[[email protected] ~]#

We can check our endpoints and services again, and see that we have more endpoints and still one service:

[[email protected] ~]# oc get ep
NAME           ENDPOINTS                                                        AGE
webapp-build   10.130.0.46:8080,10.130.0.47:8080,10.131.0.29:8080 + 7 more...   4m
[[email protected] ~]# oc get svc
NAME           CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
webapp-build   172.30.64.97           8080/TCP,8888/TCP   4m
[[email protected] ~]#

OpenShift uses an internal DNS service called SkyDNS to expose services for internal communication:

[[email protected] ~]# dig webapp-build.webapp.svc.cluster.local

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> webapp-build.webapp.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20933
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;webapp-build.webapp.svc.cluster.local. IN A

;; ANSWER SECTION:
webapp-build.webapp.svc.cluster.local. 30 IN A	172.30.64.97

;; Query time: 1 msec
;; SERVER: 10.255.1.214#53(10.255.1.214)
;; WHEN: Sat Jun 30 08:58:19 UTC 2018
;; MSG SIZE  rcvd: 71

[[email protected] ~]#

Let’s expose our web application so that it is accessible from the outside world:

[[email protected] ~]# oc expose svc webapp-build
route "webapp-build" exposed
[[email protected] ~]#

Connect with a browser to the URL you see under routes:
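
The same hostname can also be read from the command line if you do not want to open the web console (generic oc command, output omitted):

oc get route webapp-build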

Let’s modify the web app and inject variables into our application via a config map:

[[email protected] ~]# oc create configmap webapp-map --from-literal=RESPONSE="My first OpenShift WebApp"
configmap "webapp-map" created
[[email protected] ~]#

Afterwards we need to add the previously created config map to our deployment config’s environment:

[[email protected] ~]# oc env dc/webapp-build --from=configmap/webapp-map
deploymentconfig "webapp-build" updated
[[email protected] ~]#

Now when we check our web application again, we see that the new variables are injected into the pods and displayed:
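
To double-check which variables the deployment config now carries, the same oc env command can list them:

oc env dc/webapp-build --list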

I will share more about running OpenShift Container Platform and my experience in the coming months. I hope you find this article useful; please share your feedback and leave a comment.

Running WordPress on OpenShift

This is just a simple example of deploying a web application like WordPress on an OpenShift cluster.

I am doing this via the command line because it is much quicker, but you can also do it via the web UI. First, we need to log in:

[[email protected] ~]$ oc login https://console.paas.domain.com:8443
The server is using a certificate that does not match its hostname: x509: certificate is not valid for any names, but wanted to match console.paas.domain.com
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://console.paas.domain.com:8443 (openshift)
Username: demo
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project 

[[email protected] ~]$

OpenShift tells us that we have no project, so let’s create a new project with the name testing:

[[email protected] ~]$ oc new-project testing
Now using project "testing" on server "https://console.paas.domain.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[[email protected] ~]$

After we have created the project, we need to create the first app; in our case WordPress needs MySQL for the database. This is just an example because this MySQL template does not use persistent storage, and I use the sample DB information it generates:

[[email protected] ~]$ oc new-app mysql-ephemeral
--> Deploying template "openshift/mysql-ephemeral" to project testing2

     MySQL (Ephemeral)
     ---------
     MySQL database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/5.7/README.md.

     WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing

     The following service(s) have been created in your project: mysql.

            Username: user1M5
            Password: 5KmiOFJ4UH2UIKbG
       Database Name: sampledb
      Connection URL: mysql://mysql:3306/

     For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/5.7/README.md.

     * With parameters:
        * Memory Limit=512Mi
        * Namespace=openshift
        * Database Service Name=mysql
        * MySQL Connection Username=user1M5
        * MySQL Connection Password=5KmiOFJ4UH2UIKbG # generated
        * MySQL root user Password=riPsYFaVEpHBYAWf # generated
        * MySQL Database Name=sampledb
        * Version of MySQL Image=5.7

--> Creating resources ...
    secret "mysql" created
    service "mysql" created
    deploymentconfig "mysql" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/mysql'
    Run 'oc status' to view your app.
[[email protected] ~]$

Next, let us create a PHP app and pull the latest WordPress source from GitHub.

[[email protected] ~]$ oc new-app php~https://github.com/wordpress/wordpress
--> Found image fa73ae7 (5 days old) in image stream "openshift/php" under tag "7.0" for "php"

    Apache 2.4 with PHP 7.0
    -----------------------
    PHP 7.0 available as docker container is a base platform for building and running various PHP 7.0 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.

    Tags: builder, php, php70, rh-php70

    * A source build using source code from https://github.com/wordpress/wordpress will be created
      * The resulting image will be pushed to image stream "wordpress:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "wordpress"
    * Ports 8080/tcp, 8443/tcp will be load balanced by service "wordpress"
      * Other containers can access this service through the hostname "wordpress"

--> Creating resources ...
    imagestream "wordpress" created
    buildconfig "wordpress" created
    deploymentconfig "wordpress" created
    service "wordpress" created
--> Success
    Build scheduled, use 'oc logs -f bc/wordpress' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/wordpress'
    Run 'oc status' to view your app.
[[email protected] ~]$

Last but not least I need to expose the PHP WordPress app:

[[email protected] ~]$ oc expose service wordpress
route "wordpress" exposed
[[email protected] ~]$

Here is a short overview of how it looks in the OpenShift web console; you can see the deployed MySQL pod and the PHP pod which runs WordPress:

Next, open the URL http://wordpress-testing.paas.domain.com/ and you see the WordPress install menu, where you configure the database connection using the info OpenShift printed when I created the MySQL pod:

I can customise the DB connection settings if I want to have a more permanent solution:

Et voilà, your WordPress is deployed on OpenShift:

This is just a very simple example, but it took me less than a minute to deploy the web application. Also read my other post about Deploying OpenShift Origin Cluster using Ansible.

Please share your feedback.


Deploying OpenShift Origin Cluster using Ansible

Something completely different from my more network-related posts: this time it is about Platform as a Service with OpenShift Origin. There is a big push from development teams for containerised platform services.

I was testing the official OpenShift Origin Ansible Playbook to install a small 5 node cluster and created an OpenShift Vagrant environment for this.

Cluster overview:

I recommend having a look at the official RedHat OpenShift documentation to understand the architecture because it is quite a complex platform.

As a prerequisite, you need to install the vagrant-hostmanager plugin because OpenShift needs to resolve hostnames and I don’t want to install a separate DNS server. You can find more information here: https://github.com/devopsgroup-io/vagrant-hostmanager

vagrant plugin install vagrant-hostmanager

sudo bash -c 'cat << EOF > /etc/sudoers.d/vagrant_hostmanager2
Cmnd_Alias VAGRANT_HOSTMANAGER_UPDATE = /bin/cp <your-home-folder>/.vagrant.d/tmp/hosts.local /etc/hosts
%sudo ALL=(root) NOPASSWD: VAGRANT_HOSTMANAGER_UPDATE
EOF'

Next, clone my Vagrant repository and the official openshift-ansible repository:

git clone git@github.com:berndonline/openshift-origin-vagrant.git
git clone git@github.com:openshift/openshift-ansible.git

Let’s start first by booting the OpenShift vagrant environment:

cd openshift-origin-vagrant/
./vagrant_up.sh

The vagrant-hostmanager plugin dynamically updates the /etc/hosts file on both the guest and the host machine:

...
## vagrant-hostmanager-start id: 55ed9acf-25e9-4b19-bfab-e0812a292dc0
10.255.1.81	origin-master
10.255.1.231	origin-etcd
10.255.1.182	origin-infra
10.255.1.72	origin-node-1
10.255.1.145	origin-node-2
## vagrant-hostmanager-end
...

Let’s have a quick look at the OpenShift inventory file. This has settings for the different node types and custom OpenShift and Vagrant variables. You need to modify a few things like public hostname and default subdomain:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=vagrant
ansible_become=yes

deployment_type=origin
openshift_release=v3.7.0
containerized=true
openshift_install_examples=true
enable_excluders=false
openshift_check_min_host_memory_gb=4
openshift_disable_check=docker_image_availability,docker_storage,disk_availability

# use htpasswd authentication with demo/demo
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'demo': '$apr1$.MaA77kd$Rlnn6RXq9kCjnEfh5I3w/.'}

# put the router on dedicated infra node
openshift_hosted_router_selector='region=infra'
openshift_master_default_subdomain=origin.paas.domain.com

# put the image registry on dedicated infra node
openshift_hosted_registry_selector='region=infra'

# project pods should be placed on primary nodes
osm_default_node_selector='region=primary'

# Vagrant variables
ansible_port='22' 
ansible_user='vagrant'
ansible_ssh_private_key_file='/home/berndonline/.vagrant.d/insecure_private_key'

[masters]
origin-master  openshift_public_hostname="console.paas.domain.com"

[etcd]
origin-etcd

[nodes]
# master needs to be included in the node to be configured in the SDN
origin-master
origin-infra openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
origin-node-[1:2] openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
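
The htpasswd hash for the demo user shown above can be regenerated with the htpasswd utility if you want a different password; this assumes the httpd-tools package (or any equivalent providing htpasswd) is installed:

# print a username:hash pair for user demo with password demo
htpasswd -nb demo demo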

Now that we are ready, we need to check out the matching release branch and execute the Ansible playbook:

cd openshift-ansible/
git checkout release-3.7
ansible-playbook ./playbooks/byo/config.yml -i ../openshift-origin-vagrant/inventory

The playbook takes forever to run, so do something else for the next 10 to 15 mins.

...

PLAY RECAP **********************************************************************************************************************************************************
localhost                  : ok=13   changed=0    unreachable=0    failed=0
origin-etcd                : ok=147  changed=47   unreachable=0    failed=0
origin-infra               : ok=202  changed=61   unreachable=0    failed=0
origin-master              : ok=561  changed=224  unreachable=0    failed=0
origin-node                : ok=201  changed=61   unreachable=0    failed=0


INSTALLER STATUS ****************************************************************************************************************************************************
Initialization             : Complete
Health Check               : Complete
etcd Install               : Complete
Master Install             : Complete
Master Additional Install  : Complete
Node Install               : Complete
Hosted Install             : Complete
Service Catalog Install    : Complete

Sunday 21 January 2018  20:55:16 +0100 (0:00:00.011)       0:11:56.549 ********
===============================================================================
etcd : Pull etcd container ---------------------------------------------------------------------------------------------------------------------------------- 79.51s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 31.54s
openshift_node : Pre-pull node image when containerized ----------------------------------------------------------------------------------------------------- 31.28s
template_service_broker : Verify that TSB is running -------------------------------------------------------------------------------------------------------- 30.87s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 30.41s
docker : Install Docker ------------------------------------------------------------------------------------------------------------------------------------- 26.32s
openshift_cli : Pull CLI Image ------------------------------------------------------------------------------------------------------------------------------ 23.03s
openshift_service_catalog : wait for api server to be ready ------------------------------------------------------------------------------------------------- 21.32s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ----------------------------------------------------------------------------- 16.27s
restart master api ------------------------------------------------------------------------------------------------------------------------------------------ 10.69s
restart master controllers ---------------------------------------------------------------------------------------------------------------------------------- 10.62s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.42s
openshift_node : Start and enable node ---------------------------------------------------------------------------------------------------------------------- 10.30s
openshift_master : Start and enable master api on first master ---------------------------------------------------------------------------------------------- 10.21s
openshift_master : Start and enable master controller service ----------------------------------------------------------------------------------------------- 10.19s
os_firewall : Install iptables packages --------------------------------------------------------------------------------------------------------------------- 10.15s
os_firewall : Wait 10 seconds after disabling firewalld ----------------------------------------------------------------------------------------------------- 10.07s
os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail --------------------------------------------------- 10.05s
openshift_node : Pre-pull node image when containerized ------------------------------------------------------------------------------------------------------ 7.85s
openshift_service_catalog : oc_process ----------------------------------------------------------------------------------------------------------------------- 7.44s

To publish both the openshift_public_hostname and the openshift_master_default_subdomain, I have an Nginx reverse proxy running that forwards port 8443 to the origin-master node and ports 80 and 443 to the origin-infra node.

Here is an Nginx example for the console:

server {
  listen 8443 ssl;
  listen [::]:8443 ssl;
  server_name console.paas.domain.com;

  ssl on;
  ssl_certificate /etc/nginx/ssl/paas.domain.com-cert.pem;
  ssl_certificate_key /etc/nginx/ssl/paas.domain.com-key.pem;

  access_log  /var/log/nginx/openshift-console_access.log;
  error_log   /var/log/nginx/openshift-console_error.log;

location / {
  proxy_pass https://10.255.1.81:8443;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection 'upgrade';
  proxy_set_header Host $host;
  proxy_cache_bypass $http_upgrade;

  }
}
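
For the application subdomain a second, very similar server block forwards the routes to the infra node; a trimmed-down sketch, which assumes a wildcard certificate for the subdomain and uses 10.255.1.182, the origin-infra address from the hosts file above:

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name *.origin.paas.domain.com;

  ssl_certificate /etc/nginx/ssl/paas.domain.com-cert.pem;
  ssl_certificate_key /etc/nginx/ssl/paas.domain.com-key.pem;

  location / {
    # forward application traffic to the OpenShift router on the infra node
    proxy_pass https://10.255.1.182;
    proxy_set_header Host $host;
  }
}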

I will try to write more about OpenShift and Platform as a Service and how to deploy small applications like WordPress.

Have fun testing OpenShift and please share your feedback.
