Deploying OpenShift 3.11 Container Platform on AWS using Terraform

I have made a few changes to my Terraform configuration for OpenShift 3.11 on Amazon AWS. I have downsized the environment because I didn’t need that many nodes for a quick test setup. I have added CloudFlare DNS to automatically create CNAME records for the AWS load balancers in the DNS zone, and I have added an AWS S3 bucket for storing the Terraform backend state. You can find the new Terraform configuration in my Github repository: https://github.com/berndonline/openshift-terraform/tree/dev

From OpenShift 3.10 onwards the inventory variables have changed, so I modified the ansible-hosts template for the new configuration. You can see the changes in the hosts template: https://github.com/berndonline/openshift-terraform/blob/dev/helper_scripts/ansible-hosts.template.txt
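
One of the bigger inventory changes since OpenShift 3.10 is that node labels were replaced by predefined node groups. As a rough illustration, not an exact copy of my template, a 3.11 inventory contains entries along the following lines (the hostnames are placeholders, Terraform generates the real ones):

[OSEv3:vars]
ansible_user=centos
ansible_become=true
openshift_deployment_type=origin
openshift_release=v3.11

[nodes]
master01.example.com openshift_node_group_name='node-config-master'
infra01.example.com openshift_node_group_name='node-config-infra'
worker01.example.com openshift_node_group_name='node-config-compute'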

OpenShift 3.11 changes a few things and puts a focus on the new cluster console, which is pretty nice, and the release runs on Kubernetes 1.11. I recommend reading the release notes for the 3.11 release for more details: https://docs.openshift.com/container-platform/3.11/release_notes/ocp_3_11_release_notes.html

I don’t want to go into too much detail; just follow the steps below, starting with cloning my repository and checking out the dev branch:

git clone -b dev https://github.com/berndonline/openshift-terraform.git
cd ./openshift-terraform/
ssh-keygen -b 2048 -t rsa -f ./helper_scripts/id_rsa -q -N ""
chmod 600 ./helper_scripts/id_rsa

You need to modify cloudflare.tf and add your CloudFlare API credentials, otherwise just delete the file. The same goes for the S3 backend: you find the configuration in main.tf, and it can be removed if not needed.
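
For reference, a minimal S3 backend block looks roughly like this; the bucket, key and region below are placeholders, not the values from my main.tf:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "openshift/terraform.tfstate"
    region = "eu-west-1"
  }
}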

CloudFlare and Amazon AWS credentials can be added through environment variables:

export AWS_ACCESS_KEY_ID='<-YOUR-AWS-ACCESS-KEY->'
export AWS_SECRET_ACCESS_KEY='<-YOUR-AWS-SECRET-KEY->'
export TF_VAR_email='<-YOUR-CLOUDFLARE-EMAIL-ADDRESS->'
export TF_VAR_token='<-YOUR-CLOUDFLARE-TOKEN->'
export TF_VAR_domain='<-YOUR-CLOUDFLARE-DOMAIN->'
export TF_VAR_htpasswd='<-YOUR-OPENSHIFT-DEMO-USER-HTPASSWD->'

Run terraform init and apply to create the environment.

terraform init && terraform apply -auto-approve

Copy the ssh key and ansible-hosts file to the bastion host from where you need to run the Ansible OpenShift community playbooks.

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./inventory/ansible-hosts centos@$(terraform output bastion):/home/centos/ansible-hosts

I recommend waiting around five to ten minutes as the AWS cloud-init script prepares all the nodes and installs the latest patch level. Afterwards continue with prerequisites and deploy_cluster playbooks. You can connect to the bastion host and run the playbooks directly.

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/prerequisites.yml -i ~/ansible-hosts"
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/deploy_cluster.yml -i ~/ansible-hosts"

If for whatever reason the cluster deployment fails, you can run the uninstall playbook to bring the nodes back into a clean state, then start from the beginning and run deploy_cluster again.

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l centos $(terraform output bastion) -A "cd /openshift-ansible/ && ansible-playbook ./playbooks/adhoc/uninstall.yml -i ~/ansible-hosts"

Here are some screenshots of the new cluster console:

Let’s create a project and import my hello-openshift.yml build configuration:
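
On the command line this boils down to something like the following; the project name is just an example, and the file name matches the build configuration mentioned above:

oc new-project hello-openshift
oc create -f hello-openshift.yml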

Successfully completed the build and deployed the hello-openshift container:

My example hello openshift application:
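
Once the wildcard DNS record is in place you can also test the route from the command line; the hostname below is only an assumption based on my *.paas wildcard domain:

curl http://hello-openshift.paas.domain.com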

When you are finished with the testing, run terraform destroy.

terraform destroy -force 

Please share your feedback and leave a comment.

Terraform CloudFlare Provider Example

This is a short article on how to create DNS records in your CloudFlare DNS zone using Terraform. I have used this in my upcoming article about OpenShift 3.11 on AWS. You can check out the cloudflare.tf example in my Github repository: https://github.com/berndonline/openshift-terraform/blob/dev/cloudflare.tf

In the cloudflare_record configuration, the record values reference the DNS names of the AWS ALBs. This means Terraform deploys the AWS infrastructure first and afterwards creates the specified DNS records in the CloudFlare DNS zone.

provider "cloudflare" {
  email = "[email protected]"
  token = "***YOUR-API-TOKEN***"
}
variable "domain" {
  default = "domain.com"
}
resource "cloudflare_record" "console-paas" {
  domain  = "${var.domain}"
  name    = "console-paas"
  value   = "${aws_lb.master_alb.dns_name}"
  type    = "CNAME"
  proxied = false
}
resource "cloudflare_record" "wildcard-paas" {
  domain  = "${var.domain}"
  name    = "*.paas"
  value   = "${aws_lb.infra_alb.dns_name}"
  type    = "CNAME"
  proxied = false
}

If you verify this in the CloudFlare web console, you can see that Terraform created the two DNS records, pointing to the AWS ALB DNS names:
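
You can also check the records from the command line with dig; the hostname assumes the domain variable shown above:

dig +short console-paas.domain.com CNAME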

When you run terraform destroy the two DNS records will be automatically removed.

I recommend having a look at the great articles on the CloudFlare blog:

https://blog.cloudflare.com/getting-started-with-terraform-and-cloudflare-part-1/

https://blog.cloudflare.com/getting-started-with-terraform-and-cloudflare-part-2/

Automate Ansible AWX configuration using Tower-CLI

Some time has gone by since my article about Getting started with Ansible AWX (Open Source Tower version), and I wanted to continue focusing on AWX and show how to automate the configuration of an AWX Tower server.

Before we configure AWX we should install the tower-cli. You can find more information about the Tower CLI here: https://github.com/ansible/tower-cli. I also recommend having a look at the tower-cli documentation: https://tower-cli.readthedocs.io/en/latest/

sudo pip install ansible-tower-cli

The tower-cli is very useful when you want to monitor running jobs. The web console is not that great when it comes to large playbooks and is pretty slow at showing the running job state. See below for the basic configuration before you start using the tower-cli:

[email protected]:~$ tower-cli config host 94.130.51.22
Configuration updated successfully.
[email protected]:~$ tower-cli login admin
Password:
{
 "id": 1,
 "type": "o_auth2_access_token",
 "url": "/api/v2/tokens/1/",
 "created": "2018-09-15T17:41:23.942572Z",
 "modified": "2018-09-15T17:41:23.955795Z",
 "description": "Tower CLI",
 "user": 1,
 "refresh_token": null,
 "application": null,
 "expires": "3018-01-16T17:41:23.937872Z",
 "scope": "write"
}
Configuration updated successfully.
[email protected]:~$ 

But now let’s continue and show how we can use the tower-cli to configure and monitor Ansible AWX Tower.

Create a project:

tower-cli project create --name "My Project" --description "My project description" --organization "Default" --scm-type "git" --scm-url "https://github.com/ansible/ansible-tower-samples"

Create an inventory:

tower-cli inventory create --name "My Inventory" --organization "Default"

Add hosts to an inventory:

tower-cli host create --name "localhost" --inventory "My Inventory" --variables "ansible_connection: local"

Create credentials:

tower-cli credential create --name "My Credential" --credential-type "Machine" --user "admin"

Create a Project Job Template:

tower-cli job_template create --name "My Job Template" --project "My Project" --inventory "My Inventory" --job-type "run" --credential "My Credential" --playbook "hello_world.yml" --verbosity "default"

After we successfully created everything let’s now run the job template and monitor the output via the tower-cli:

tower-cli job launch --job-template "My Job Template"
tower-cli job monitor <ID>

Command line output:

[email protected]:~$ tower-cli job launch --job-template "My Job Template"
Resource changed.
== ============ =========================== ======= =======
id job_template           created           status  elapsed
== ============ =========================== ======= =======
26           15 2018-10-12T12:22:48.599748Z pending 0.0
== ============ =========================== ======= =======
[email protected]:~$ tower-cli job monitor 26
------Starting Standard Out Stream------


PLAY [Hello World Sample] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Hello Message] ***********************************************************
ok: [localhost] => {
    "msg": "Hello World!"
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0

------End of Standard Out Stream--------
Resource changed.
== ============ =========================== ========== =======
id job_template           created             status   elapsed
== ============ =========================== ========== =======
26           15 2018-10-12T12:22:48.599748Z successful 8.861
== ============ =========================== ========== =======
[email protected]:~$
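
Instead of looking up the job ID manually, the launch command can also take over the monitoring; at least in the tower-cli version I used there is a --monitor option which launches the job and streams the output in one go:

tower-cli job launch --job-template "My Job Template" --monitor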

With these tower-cli commands we can write a simple playbook using the Ansible shell module.

Playbook site.yml:

---
- hosts: localhost
  gather_facts: 'no'

  tasks:
    - name: Add tower project
      shell: |
        tower-cli project create \
        --name "My Project" \
        --description "My project description" \
        --organization "Default" \
        --scm-type "git" \
        --scm-url "https://github.com/ansible/ansible-tower-samples"

    - name: Add tower inventory
      shell: |
        tower-cli inventory create \
        --name "My Inventory" \
        --organization "Default"

    - name: Add host to inventory
      shell: |
        tower-cli host create \
        --name "localhost" \
        --inventory "My Inventory" \
        --variables "ansible_connection: local"
    
    - name: Add credential
      shell: |
        tower-cli credential create \
        --name "My Credential" \
        --credential-type "Machine" \
        --user "admin"
        
    - name: wait 15 seconds to pull project SCM content
      wait_for: timeout=15
      delegate_to: localhost
 
    - name: Add job template
      shell: |
        tower-cli job_template create \
        --name "My Job Template" \
        --project "My Project" \
        --inventory "My Inventory" \
        --job-type "run" \
        --credential "My Credential" \
        --playbook "hello_world.yml" \
        --verbosity "default"

Let’s run the playbook:

[email protected]:~/awx-provision$ ansible-playbook site.yml

PLAY [localhost] **************************************************************************************************************************************************

TASK [Add tower project] ******************************************************************************************************************************************
changed: [localhost]

TASK [Add tower inventory] ****************************************************************************************************************************************
changed: [localhost]

TASK [Add host to inventory] **************************************************************************************************************************************
changed: [localhost]

TASK [Add credential] *********************************************************************************************************************************************
changed: [localhost]

TASK [wait 15 seconds to pull project SCM content] ****************************************************************************************************************
ok: [localhost -> localhost]

TASK [Add job template] *******************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************************************************************************************
localhost : ok=6 changed=5 unreachable=0 failed=0

[email protected]:~/awx-provision$

If you like this article, please share your feedback and leave a comment.

Part one: Ansible URI module and Jinja2 templating

This article is about the Ansible URI module. I have recently spent a lot of time on automation for AVI software-defined load balancers and wanted to share some useful information about how to use Ansible to interact with REST APIs. Please check out my other articles about AVI Networks.

Let’s start with the playbook:

---
- hosts: controller
  gather_facts: false
  roles:
    - { role: "config" }

The config role needs the following folders:

config/
├── defaults    # Useful for default variables
├── tasks       # Includes Ansible tasks using the URI module
├── templates   # Jinja2 json templates
└── vars        # Variables to load json j2 templates

I will use defaults just as an example for variables which I use in the task and the json template.

Here’s the content of defaults/main.yml:

---
dns_servers:
  - 8.8.8.8
  - 8.8.4.4
dns_domain: domain.com
ntp_servers:
  - 0.uk.pool.ntp.org
  - 1.uk.pool.ntp.org
username: admin
password: demo
api_version: 17.2.11

Next comes the JSON Jinja2 template. The example below is the system configuration for AVI load balancers, but this can be any JSON content you want to push to a REST API. templates/systemconfiguration_json.j2:

{
  "dns_configuration": {
    {% if dns_domain is defined %}
    "search_domain": "{{ dns_domain }}"{% if dns_servers is defined %},{% endif %}
    {% endif %}
    {% if dns_servers is defined %}
    "server_list": [
      {% for item in dns_servers %}
      {
        "type": "V4",
        "addr": "{{ item }}"
      }{% if not loop.last %},{% endif %}
      {% endfor %}
    ]
    {% endif %}
  },
  "ntp_configuration": {
    {% if ntp_servers is defined %}
    "ntp_servers": [
      {% for item in ntp_servers %}
      {
        "server": {
          "type": "DNS",
          "addr": "{{ item }}"
        }
      }{% if not loop.last %},{% endif %}
      {% endfor %}
    ]
    {% endif %}
  },
  "portal_configuration": {
    "password_strength_check": true,
    "use_uuid_from_input": false,
    "redirect_to_https": true,
    "enable_clickjacking_protection": true,
    "enable_https": true,
    "disable_remote_cli_shell": false,
    "http_port": 80,
    "enable_http": true,
    "allow_basic_authentication": true
  }
}

After we have specified the default variables and created the j2 template, let’s continue and see how we load the JSON template into a single variable in vars/main.yml:

---
systemconfiguration_json: "{{ lookup('template', 'systemconfiguration_json.j2') }}"
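
If you want to double-check what the template renders to before pushing it to the API, a quick debug task like the following prints the variable; this is just a troubleshooting aid and not part of the original role:

- name: Show rendered system configuration
  debug:
    var: systemconfiguration_json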

The next step is the task itself using the Ansible URI module, tasks/main.yml:

---
- block:
  - name: Config | Systemconfiguration | Configure DNS, NTP and Portal settings
    uri:
      url: "https://{{ ansible_host }}/api/systemconfiguration"
      method: PUT
      user: "{{ username }}"
      password: "{{ password }}"
      return_content: yes
      body: "{{ systemconfiguration_json }}"
      force_basic_auth: yes
      validate_certs: false
      status_code: 200, 201
      timeout: 180
      headers:
        X-Avi-Version: "{{ api_version }}"
  when: '( inventory_hostname == groups["controller"][0] )'

I like to use blocks in my Ansible tasks because you can group your tasks and use a single WHEN statement when you have multiple similar tasks.
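
As a minimal sketch of that pattern, with placeholder task names and URLs, several tasks share a single when condition:

- block:
  - name: Config | Example | First API call
    uri:
      url: "https://{{ ansible_host }}/api/example1"
      method: GET
      validate_certs: false
  - name: Config | Example | Second API call
    uri:
      url: "https://{{ ansible_host }}/api/example2"
      method: GET
      validate_certs: false
  when: '( inventory_hostname == groups["controller"][0] )'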

I hope you find this article useful. Please try it out and let me know in the comments below if you have questions.

Deploying OpenShift 3.9 Container Platform using Terraform and Ansible on Amazon AWS

After my previous articles on OpenShift and Terraform I wanted to show how to create the necessary infrastructure and to deploy an OpenShift Container Platform in a more real-world scenario. I highly recommend reading my other posts about using Terraform to deploy an Amazon AWS VPC and AWS EC2 Instances and Load Balancers. Once the infrastructure is created we will use the Bastion Host to connect to the environment and deploy OpenShift Origin using Ansible.

I think this might be an interesting topic to show what tools like Terraform and Ansible can do together:

I will not go into detail about the configuration and will only show the output of deploying the infrastructure. Please check out my Github repository to see the detailed configuration: https://github.com/berndonline/openshift-terraform

Before we start you need to clone the repository and generate the ssh key used from the bastion host to access the OpenShift nodes:

git clone https://github.com/berndonline/openshift-terraform.git
cd ./openshift-terraform/
ssh-keygen -b 2048 -t rsa -f ./helper_scripts/id_rsa -q -N ""
chmod 600 ./helper_scripts/id_rsa

We are ready to create the infrastructure and run terraform apply:

[email protected]:~/openshift-terraform$ terraform apply

...

Plan: 56 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

Apply complete! Resources: 19 added, 0 changed, 16 destroyed.

Outputs:

bastion = ec2-34-244-225-35.eu-west-1.compute.amazonaws.com
openshift master = master-35563dddc8b2ea9c.elb.eu-west-1.amazonaws.com
openshift subdomain = infra-1994425986.eu-west-1.elb.amazonaws.com
[email protected]:~/openshift-terraform$

Terraform successfully creates the VPC, load balancers and all needed instances. Before we continue, wait 5 to 10 minutes because the cloud-init script takes a bit of time and all the instances reboot at the end.

Instances:

Security groups:

Target groups for the Master and the Infra load balancers:

Master and the Infra load balancers:

Terraform also automatically creates the inventory file for the OpenShift installation and adds the hostnames for master, infra and worker nodes to the correct inventory groups. The next step is to copy the private ssh key and the inventory file to the bastion host. I am using the terraform output command to get the public hostname from the bastion host:

scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./helper_scripts/id_rsa centos@$(terraform output bastion):/home/centos/.ssh/
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ./inventory/ansible-hosts centos@$(terraform output bastion):/home/centos/ansible-hosts
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l centos $(terraform output bastion)

On the bastion node, change to the /openshift-ansible/ folder and start running the prerequisites and deploy_cluster playbooks:

cd /openshift-ansible/
ansible-playbook ./playbooks/prerequisites.yml -i ~/ansible-hosts
ansible-playbook ./playbooks/deploy_cluster.yml -i ~/ansible-hosts

Here is the output from running the prerequisites playbook:

[[email protected] ~]$ cd /openshift-ansible/
[[email protected] openshift-ansible]$ ansible-playbook ./playbooks/prerequisites.yml -i ~/ansible-hosts

PLAY [Initialization Checkpoint Start] ****************************************************************************************************************************

TASK [Set install initialization 'In Progress'] *******************************************************************************************************************
Saturday 15 September 2018  11:04:50 +0000 (0:00:00.407)       0:00:00.407 ****
ok: [ip-10-0-1-237.eu-west-1.compute.internal]

PLAY [Populate config host groups] ********************************************************************************************************************************

TASK [Load group name mapping variables] **************************************************************************************************************************
Saturday 15 September 2018  11:04:50 +0000 (0:00:00.110)       0:00:00.517 ****
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.033)       0:00:00.551 ****
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.024)       0:00:00.575 ****
skipping: [localhost]

TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:04:51 +0000 (0:00:00.024)       0:00:00.599 ****
skipping: [localhost]

...

PLAY RECAP ********************************************************************************************************************************************************
ip-10-0-1-192.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-1-237.eu-west-1.compute.internal : ok=64   changed=15   unreachable=0    failed=0
ip-10-0-1-248.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-5-174.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-5-235.eu-west-1.compute.internal : ok=58   changed=14   unreachable=0    failed=0
ip-10-0-5-35.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-9-130.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
ip-10-0-9-51.eu-west-1.compute.internal : ok=58   changed=14   unreachable=0    failed=0
ip-10-0-9-85.eu-west-1.compute.internal : ok=56   changed=14   unreachable=0    failed=0
localhost                  : ok=11   changed=0    unreachable=0    failed=0


INSTALLER STATUS **************************************************************************************************************************************************
Initialization             : Complete (0:00:41)

[[email protected] openshift-ansible]$

Continue with the deploy cluster playbook:

[[email protected] openshift-ansible]$ ansible-playbook ./playbooks/deploy_cluster.yml -i ~/ansible-hosts

PLAY [Initialization Checkpoint Start] ****************************************************************************************************************************

TASK [Set install initialization 'In Progress'] *******************************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.102)       0:00:00.102 ****
ok: [ip-10-0-1-237.eu-west-1.compute.internal]

PLAY [Populate config host groups] ********************************************************************************************************************************

TASK [Load group name mapping variables] **************************************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.064)       0:00:00.167 ****
ok: [localhost]

TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.031)       0:00:00.198 ****
skipping: [localhost]

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********************************************************************************************
Saturday 15 September 2018  11:08:38 +0000 (0:00:00.026)       0:00:00.225 ****
skipping: [localhost]

...

PLAY RECAP ********************************************************************************************************************************************************
ip-10-0-1-192.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-1-237.eu-west-1.compute.internal : ok=591  changed=256  unreachable=0    failed=0
ip-10-0-1-248.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-5-174.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-5-235.eu-west-1.compute.internal : ok=325  changed=145  unreachable=0    failed=0
ip-10-0-5-35.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-9-130.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
ip-10-0-9-51.eu-west-1.compute.internal : ok=325  changed=145  unreachable=0    failed=0
ip-10-0-9-85.eu-west-1.compute.internal : ok=132  changed=57   unreachable=0    failed=0
localhost                  : ok=13   changed=0    unreachable=0    failed=0

INSTALLER STATUS **************************************************************************************************************************************************
Initialization             : Complete (0:00:55)
Health Check               : Complete (0:00:01)
etcd Install               : Complete (0:01:03)
Master Install             : Complete (0:05:17)
Master Additional Install  : Complete (0:00:26)
Node Install               : Complete (0:08:24)
Hosted Install             : Complete (0:00:57)
Web Console Install        : Complete (0:00:28)
Service Catalog Install    : Complete (0:01:19)

[[email protected] openshift-ansible]$

Once the deploy playbook finishes we have a working OpenShift cluster:

Login with username: demo and password: demo
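
From the command line that would be something like the following, using the master load balancer name from the Terraform output above; the exact port depends on how the master API listener is configured in your setup:

oc login https://master-35563dddc8b2ea9c.elb.eu-west-1.amazonaws.com:8443 -u demo -p demo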

You cannot access OpenShift routes via the Amazon-generated DNS name of the infra load balancer; this is not allowed. You need to create a wildcard DNS CNAME record like *.paas.domain.com and point it to the DNS name of the AWS infra load balancer.
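
In plain zone-file notation such a record would look roughly like this, using the infra load balancer name from the Terraform output above:

*.paas.domain.com.  300  IN  CNAME  infra-1994425986.eu-west-1.elb.amazonaws.com.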

Let’s continue with some basic cluster checks to see that the nodes are in Ready state:

[[email protected] ~]$ oc get nodes
NAME                                       STATUS    ROLES     AGE       VERSION
ip-10-0-1-192.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-1-237.eu-west-1.compute.internal   Ready     master    16m       v1.9.1+a0ce1bc657
ip-10-0-1-248.eu-west-1.compute.internal   Ready         11m       v1.9.1+a0ce1bc657
ip-10-0-5-174.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-5-235.eu-west-1.compute.internal   Ready     master    15m       v1.9.1+a0ce1bc657
ip-10-0-5-35.eu-west-1.compute.internal    Ready         11m       v1.9.1+a0ce1bc657
ip-10-0-9-130.eu-west-1.compute.internal   Ready     compute   11m       v1.9.1+a0ce1bc657
ip-10-0-9-51.eu-west-1.compute.internal    Ready     master    14m       v1.9.1+a0ce1bc657
ip-10-0-9-85.eu-west-1.compute.internal    Ready         11m       v1.9.1+a0ce1bc657
[[email protected] ~]$
[[email protected] ~]$ oc get projects
NAME                                DISPLAY NAME   STATUS
default                                            Active
kube-public                                        Active
kube-service-catalog                               Active
kube-system                                        Active
logging                                            Active
management-infra                                   Active
openshift                                          Active
openshift-ansible-service-broker                   Active
openshift-infra                                    Active
openshift-node                                     Active
openshift-template-service-broker                  Active
openshift-web-console                              Active
[[email protected] ~]$
[[email protected] ~]$ oc get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
docker-registry-1-8798r    1/1       Running   0          10m       10.128.2.2   ip-10-0-5-35.eu-west-1.compute.internal
registry-console-1-zh9m4   1/1       Running   0          10m       10.129.2.3   ip-10-0-9-85.eu-west-1.compute.internal
router-1-96zzf             1/1       Running   0          10m       10.0.9.85    ip-10-0-9-85.eu-west-1.compute.internal
router-1-nfh7h             1/1       Running   0          10m       10.0.1.248   ip-10-0-1-248.eu-west-1.compute.internal
router-1-pcs68             1/1       Running   0          10m       10.0.5.35    ip-10-0-5-35.eu-west-1.compute.internal
[[email protected] ~]$

At the end just destroy the infrastructure with terraform destroy:

[email protected]:~/openshift-terraform$ terraform destroy

...

Destroy complete! Resources: 56 destroyed.
[email protected]:~/openshift-terraform$

I will continue improving the configuration and I plan to use Jenkins to deploy the AWS infrastructure and OpenShift fully automatically.

Please let me know if you like the article or have questions in the comments below.