Why Productising Your Platform Is Critical for Engineering Success

It’s been a while since I’ve written anything publicly—like many in platform engineering, I’ve been head down, building, scaling and supporting. But after presenting at the OpenShift Commons Gathering alongside KubeCon 2025 in London, I felt it was time to share some of that journey—particularly the importance of product thinking in platform engineering.

In today’s cloud-native world, building an internal platform isn’t enough. If your platform isn’t treated like a product—with real users, feedback loops and continuous iteration—it will fail to gain adoption, cause friction and ultimately slow down delivery.

I’ve seen this firsthand. In 2020, I led the design and build of a multi-tenant Kubernetes platform—long before “platform as a product” was a common concept. What started as a response to infrastructure bottlenecks became a mature self-service platform that is now used globally by dozens of engineering teams.

This blog isn’t just about what we built—it’s about why thinking like a product team is the foundation of modern platform engineering success.

The Case for Product Thinking in Platform Engineering

Too often internal platforms are built with an infrastructure-first mindset: lots of YAML, few users and almost no UX. But modern engineering teams expect more:

  • APIs that just work

  • Documentation that’s up to date

  • Rapid self-service delivery

  • Immediate feedback

  • A clear path from idea to production

That’s why your platform needs to be compelling, self-service, and accessible—just like a great product.

How Success Looks in Practice

In 2020, my team and I set out to solve serious pain points:

  • 4–8 month compute lead times

  • TicketOps and manual change gates

  • External dependencies and missing knowledge

  • No clear self-service paths

  • Frequent config drift, duplication and lack of consistency

We didn’t want to patch these problems—we wanted to reimagine how a platform should work.

The result was our internal Platform Product, a multi-tenant OpenShift-based Kubernetes platform built as a product, not just infrastructure.

Here’s what made it successful:

1. Self-Service and GitOps by Default

Developers create and manage their own workloads without opening tickets. Platform features are exposed via Kubernetes-native APIs and Custom Resources. All requests are declarative and go through GitOps with automated validation.

2. Multi-Tenancy with Guard Rails

Each team is onboarded via a simple pull request that provisions their namespace and registers ownership. All changes are subject to automated checks using validating webhooks, object schema linting and RBAC enforcement.
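
As a sketch of what this looks like in practice (the API group and fields below are hypothetical, not our exact schema), a tenant onboarding pull request boils down to a single declarative Custom Resource:

apiVersion: platform.example.com/v1
kind: Tenant
metadata:
  name: team-payments
spec:
  owner: payments-engineering
  environments:
    - sandbox
    - production
  quota:
    cpu: "16"
    memory: 64Gi

Once merged, GitOps reconciles the resource, and the validating webhooks reject anything that falls outside the guard rails.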

3. Focus on Developer Experience

We designed the platform to be the path of least resistance:

  • Free sandbox environments

  • Pre-filled onboarding templates

  • Built-in best practices

  • Documentation and a developer portal

  • Persistent support channels and short feedback loops

4. Observability and Policy Everywhere

The product is highly secure (PCI-DSS compliant), fully observable and tightly integrated with cloud infrastructure. Each feature includes sensible defaults and field-tested configuration to help users succeed without needing deep Kubernetes expertise.

The Benefits of Productising Your Platform

Treating your platform like a product brings clear, measurable advantages:

Platform as Infrastructure   | Platform as a Product
-----------------------------|----------------------------------------
Ticket-driven operations     | Self-service automation
One-size-fits-all features   | Iterative roadmap based on user needs
Inconsistent experience      | Standardized onboarding & guardrails
Siloed knowledge             | Published docs & support channels
Siloed teams                 | Transparent ownership & feedback

In our case, this mindset has helped us:

  • Onboard tenants in minutes, not weeks

  • Cut incident volume due to misconfiguration

  • Empower teams to deploy faster and more safely

  • Scale the platform globally with a lean team

Final Thoughts: Platform Teams Need Product DNA

Success in platform engineering isn’t just about Kubernetes, pipelines or YAML. It’s about building something people want to use. That’s what product thinking brings to the table.

If you’re leading or contributing to an internal platform today, start thinking like a product team:

  • Understand your users deeply

  • Design for ease of use and autonomy

  • Invest in documentation, automation and support

  • Create feedback loops and iterate often

You’re not just shipping infrastructure—you’re building a product that enables every other product in your company. Treat it like one.

Deploy Amazon EC2 Autoscaling Group and AWS Load Balancers with Terraform

This is the next article about using Terraform, covering how to create an EC2 autoscaling group and the different load balancing options for EC2 instances. This setup builds on my previous blog post about using Terraform to deploy an AWS VPC, so please read that first. In my GitHub repository you will find all the needed Terraform files, ec2.tf and vpc.tf, to deploy the full environment.

EC2 resource overview:

Let’s start with the launch configuration and the autoscaling group. I am using eu-west-1 and a standard Ubuntu 16.04 AMI. The instances are created in the private subnets and don’t get a public IP address assigned, but they have internet access via the NAT gateways:

resource "aws_launch_configuration" "autoscale_launch" {
  image_id = "${lookup(var.aws_amis, var.aws_region)}"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.sec_web.id}"]
  key_name = "${aws_key_pair.auth.id}"
  user_data = <<-EOF
              #!/bin/bash
              sudo apt-get -y update
              sudo apt-get -y install nginx
              EOF
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "autoscale_group" {
  launch_configuration = "${aws_launch_configuration.autoscale_launch.id}"
  vpc_zone_identifier = ["${aws_subnet.PrivateSubnetA.id}","${aws_subnet.PrivateSubnetB.id}","${aws_subnet.PrivateSubnetC.id}"]
  load_balancers = ["${aws_elb.elb.name}"]
  min_size = 3
  max_size = 3
  tag {
    key = "Name"
    value = "autoscale"
    propagate_at_launch = true
  }
}
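
The image_id above is resolved from a map variable keyed by region. A minimal sketch of how such a map might look in variables.tf (the AMI ID is only a placeholder; the actual map is in the repository):

variable "aws_amis" {
  type        = "map"
  description = "Ubuntu 16.04 AMI per region"
  default = {
    "eu-west-1" = "ami-xxxxxxxx" # placeholder, look up a current Ubuntu 16.04 AMI for your region
  }
}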

I also created a few security groups to allow the traffic; please have a look at ec2.tf for more detail.

Autoscaling Group

Now the configuration for an AWS Elastic (Classic) Load Balancer:

resource "aws_elb" "elb" {
  name = "elb"
  security_groups = ["${aws_security_group.sec_lb.id}"]
  subnets            = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  cross_zone_load_balancing   = true
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:80/"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "80"
    instance_protocol = "http"
  }
}

Elastic Load Balancer (Classic LB)

Use the Application Load Balancer (ALB) for more advanced web load balancing; it supports only the HTTP and HTTPS protocols. You start by creating the ALB resource, then the target group, where you can define stickiness and health checks. The listener defines which protocol the ALB uses and assigns the target group. In the end you attach the target group to the autoscaling group:

resource "aws_lb" "alb" {  
  name            = "alb"  
  subnets         = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  security_groups = ["${aws_security_group.sec_lb.id}"]
  internal        = false 
  idle_timeout    = 60   
  tags {    
    Name    = "alb"    
  }   
}

resource "aws_lb_target_group" "alb_target_group" {  
  name     = "alb-target-group"  
  port     = "80"  
  protocol = "HTTP"  
  vpc_id   = "${aws_vpc.default.id}"   
  tags {    
    name = "alb_target_group"    
  }   
  stickiness {    
    type            = "lb_cookie"    
    cookie_duration = 1800    
    enabled         = true 
  }   
  health_check {    
    healthy_threshold   = 3    
    unhealthy_threshold = 10    
    timeout             = 5    
    interval            = 10    
    path                = "/"    
    port                = 80
  }
}

resource "aws_lb_listener" "alb_listener" {  
  load_balancer_arn = "${aws_lb.alb.arn}"  
  port              = 80  
  protocol          = "HTTP"
  
  default_action {    
    target_group_arn = "${aws_lb_target_group.alb_target_group.arn}"
    type             = "forward"  
  }
}

resource "aws_autoscaling_attachment" "alb_autoscale" {
  alb_target_group_arn   = "${aws_lb_target_group.alb_target_group.arn}"
  autoscaling_group_name = "${aws_autoscaling_group.autoscale_group.id}"
}

Application Load Balancer (ALB)

ALB Target Group

The Network Load Balancer (NLB) configuration is very similar to the ALB, except that it supports the TCP protocol, which you should only use when raw performance matters because of its limited health-check functionality:

resource "aws_lb" "nlb" {
  name               = "nlb"
  internal           = false
  load_balancer_type = "network"
  subnets            = ["${aws_subnet.PublicSubnetA.id}","${aws_subnet.PublicSubnetB.id}","${aws_subnet.PublicSubnetC.id}"]
  enable_cross_zone_load_balancing  = true
  tags {
    Name = "nlb"
  }
}

resource "aws_lb_target_group" "nlb_target_group" {  
  name     = "nlb-target-group"  
  port     = "80"  
  protocol = "TCP"  
  vpc_id   = "${aws_vpc.default.id}"   
  tags {    
    name = "nlb_target_group"    
  }     
}

resource "aws_lb_listener" "nlb_listener" {  
  load_balancer_arn = "${aws_lb.nlb.arn}"  
  port              = 80  
  protocol          = "TCP"
  
  default_action {    
    target_group_arn = "${aws_lb_target_group.nlb_target_group.arn}"
    type             = "forward"  
  }
}

resource "aws_autoscaling_attachment" "nlb_autoscale" {
  alb_target_group_arn   = "${aws_lb_target_group.nlb_target_group.arn}"
  autoscaling_group_name = "${aws_autoscaling_group.autoscale_group.id}"
}

Network Load Balancer (NLB)

NLB Target Group

Let’s run terraform apply:

berndonline@lab:~/aws-terraform$ terraform apply
data.aws_availability_zones.available: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_autoscaling_attachment.alb_autoscale
      id:                                          
      alb_target_group_arn:                        "${aws_lb_target_group.alb_target_group.arn}"
      autoscaling_group_name:                      "${aws_autoscaling_group.autoscale_group.id}"

  + aws_autoscaling_attachment.nlb_autoscale
      id:                                          
      alb_target_group_arn:                        "${aws_lb_target_group.nlb_target_group.arn}"
      autoscaling_group_name:                      "${aws_autoscaling_group.autoscale_group.id}"

...

Plan: 41 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

...

aws_lb.nlb: Creation complete after 2m53s (ID: arn:aws:elasticloadbalancing:eu-west-1:...:loadbalancer/net/nlb/235e69c61779b723)
aws_lb_listener.nlb_listener: Creating...
  arn:                               "" => ""
  default_action.#:                  "" => "1"
  default_action.0.target_group_arn: "" => "arn:aws:elasticloadbalancing:eu-west-1:552276840222:targetgroup/nlb-target-group/7b3c10cbdd411669"
  default_action.0.type:             "" => "forward"
  load_balancer_arn:                 "" => "arn:aws:elasticloadbalancing:eu-west-1:552276840222:loadbalancer/net/nlb/235e69c61779b723"
  port:                              "" => "80"
  protocol:                          "" => "TCP"
  ssl_policy:                        "" => ""
aws_lb_listener.nlb_listener: Creation complete after 0s (ID: arn:aws:elasticloadbalancing:eu-west-1:.../nlb/235e69c61779b723/dfde2530387b470f)

Apply complete! Resources: 41 added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = alb-1295224636.eu-west-1.elb.amazonaws.com
elb_dns_name = elb-611107604.eu-west-1.elb.amazonaws.com
nlb_dns_name = nlb-235e69c61779b723.elb.eu-west-1.amazonaws.com
berndonline@lab:~/aws-terraform$

Together with the VPC configuration from my previous article, this deploys the different load balancers and provides the DNS names as outputs, ready to use.
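
The DNS names come from output definitions along these lines (a sketch; see the repository for the exact files):

output "elb_dns_name" {
  value = "${aws_elb.elb.dns_name}"
}

output "alb_dns_name" {
  value = "${aws_lb.alb.dns_name}"
}

output "nlb_dns_name" {
  value = "${aws_lb.nlb.dns_name}"
}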

Over the coming weeks I will optimise the Terraform code and move some of the resource settings into the variables.tf file to make this more scalable.

If you like this article, please share your feedback and leave a comment.

Using HashiCorp Terraform to deploy Amazon AWS VPC

Before I start deploying the AWS VPC with HashiCorp’s Terraform, I want to explain the design of the Virtual Private Cloud. The main focus here is redundancy: if one Availability Zone (AZ) becomes unavailable, it should not interrupt traffic or cause outages in your network. The NAT gateways, for example, run per AZ, so you need to make sure these services are spread over multiple AZs.

AWS VPC network overview:

Before you start using Terraform you need to install the binary, and it is also very useful to install the AWS command line interface. Please don’t forget to configure the AWS CLI with your access key and secret key, as shown after the install commands below.

pip install awscli --upgrade --user
wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
unzip terraform_0.11.7_linux_amd64.zip
sudo mv terraform /usr/local/bin/
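
To register your credentials, run “aws configure”, which prompts for the access key, the secret key and your defaults:

aws configure
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: eu-west-1
Default output format [None]: json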

Terraform is a great product for creating infrastructure as code, and it is independent of any single cloud provider, so there is no need for a provider-specific tool like AWS CloudFormation. My repository with the Terraform files can be found here: https://github.com/berndonline/aws-terraform

Let’s start with the variables file, which defines the needed settings for deploying the VPC. Basically, you only need to change these variables to deploy the VPC to another AWS region:

...
variable "aws_region" {
  description = "AWS region to launch servers."
  default     = "eu-west-1"
}
...
variable "vpc_cidr" {
    default = "10.0.0.0/20"
  description = "the vpc cdir range"
}
variable "public_subnet_a" {
  default = "10.0.0.0/24"
  description = "Public subnet AZ A"
}
variable "public_subnet_b" {
  default = "10.0.4.0/24"
  description = "Public subnet AZ A"
}
variable "public_subnet_c" {
  default = "10.0.8.0/24"
  description = "Public subnet AZ A"
}
...

The vpc.tf file is the Terraform template which deploys the private and public subnets, the internet gateway, multiple NAT gateways and the different routing tables and adds the needed routes towards the internet:

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
    cidr_block = "${var.vpc_cidr}"
    enable_dns_support = true
    enable_dns_hostnames = true
    tags {
      Name = "VPC"
    }
}

resource "aws_subnet" "PublicSubnetA" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_a}"
  tags {
        Name = "Public Subnet A"
  }
 availability_zone = "${data.aws_availability_zones.available.names[0]}"
}
...
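
The per-AZ NAT gateway wiring is elided above. As a rough sketch (using the same resource names that appear in the plan output below), each AZ pairs an Elastic IP with a NAT gateway in the public subnet, and a private route table sends 0.0.0.0/0 through it:

resource "aws_eip" "natgw_a" {
  vpc = true
}

resource "aws_nat_gateway" "public_nat_a" {
  allocation_id = "${aws_eip.natgw_a.id}"
  subnet_id     = "${aws_subnet.PublicSubnetA.id}"
}

resource "aws_route_table" "private_route_a" {
  vpc_id = "${aws_vpc.default.id}"
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.public_nat_a.id}"
  }
  tags {
    Name = "Private Route A"
  }
}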

In the main.tf you define which provider to use:

# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

# Declare the data source
data "aws_availability_zones" "available" {}

Now let’s start deploying the environment. First you need to initialise Terraform with “terraform init”:

berndonline@lab:~/aws-terraform$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (1.25.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 1.25"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
berndonline@lab:~/aws-terraform$

Next, let’s do a dry run with “terraform plan” to see all the changes Terraform would apply:

berndonline@lab:~/aws-terraform$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.aws_availability_zones.available: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_eip.natgw_a
      id:                                          
      allocation_id:                               
      association_id:                              
      domain:                                      
      instance:                                    
      network_interface:                           
      private_ip:                                  
      public_ip:                                   
      vpc:                                         "true"

...

  + aws_vpc.default
      id:                                          
      assign_generated_ipv6_cidr_block:            "false"
      cidr_block:                                  "10.0.0.0/20"
      default_network_acl_id:                      
      default_route_table_id:                      
      default_security_group_id:                   
      dhcp_options_id:                             
      enable_classiclink:                          
      enable_classiclink_dns_support:              
      enable_dns_hostnames:                        "true"
      enable_dns_support:                          "true"
      instance_tenancy:                            "default"
      ipv6_association_id:                         
      ipv6_cidr_block:                             
      main_route_table_id:                         
      tags.%:                                      "1"
      tags.Name:                                   "VPC"


Plan: 27 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

berndonline@lab:~/aws-terraform$

Because nothing is deployed yet, Terraform will apply 27 changes. Let’s do this by running “terraform apply”. Terraform will check the state, ask you to confirm, and then apply the changes:

berndonline@lab:~/aws-terraform$ terraform apply
data.aws_availability_zones.available: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_eip.natgw_a
      id:                                          
      allocation_id:                               
      association_id:                              
      domain:                                      
      instance:                                    
      network_interface:                           
      private_ip:                                  
      public_ip:                                   
      vpc:                                         "true"

...

  + aws_vpc.default
      id:                                          
      assign_generated_ipv6_cidr_block:            "false"
      cidr_block:                                  "10.0.0.0/20"
      default_network_acl_id:                      
      default_route_table_id:                      
      default_security_group_id:                   
      dhcp_options_id:                             
      enable_classiclink:                          
      enable_classiclink_dns_support:              
      enable_dns_hostnames:                        "true"
      enable_dns_support:                          "true"
      instance_tenancy:                            "default"
      ipv6_association_id:                         
      ipv6_cidr_block:                             
      main_route_table_id:                         
      tags.%:                                      "1"
      tags.Name:                                   "VPC"


Plan: 27 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_eip.natgw_c: Creating...
  allocation_id:     "" => ""
  association_id:    "" => ""
  domain:            "" => ""
  instance:          "" => ""
  network_interface: "" => ""
  private_ip:        "" => ""
  public_ip:         "" => ""
  vpc:               "" => "true"
aws_eip.natgw_a: Creating...
  allocation_id:     "" => ""
  association_id:    "" => ""
  domain:            "" => ""
  instance:          "" => ""
  network_interface: "" => ""
  private_ip:        "" => ""
  public_ip:         "" => ""
  vpc:               "" => "true"

...

aws_route_table_association.PrivateSubnetB: Creation complete after 0s (ID: rtbassoc-174ba16c)
aws_nat_gateway.public_nat_c: Still creating... (1m40s elapsed)
aws_nat_gateway.public_nat_c: Still creating... (1m50s elapsed)
aws_nat_gateway.public_nat_c: Creation complete after 1m56s (ID: nat-093319a1fa62c3eda)
aws_route_table.private_route_c: Creating...
  propagating_vgws.#:                         "" => ""
  route.#:                                    "" => "1"
  route.4170986711.cidr_block:                "" => "0.0.0.0/0"
  route.4170986711.egress_only_gateway_id:    "" => ""
  route.4170986711.gateway_id:                "" => ""
  route.4170986711.instance_id:               "" => ""
  route.4170986711.ipv6_cidr_block:           "" => ""
  route.4170986711.nat_gateway_id:            "" => "nat-093319a1fa62c3eda"
  route.4170986711.network_interface_id:      "" => ""
  route.4170986711.vpc_peering_connection_id: "" => ""
  tags.%:                                     "" => "1"
  tags.Name:                                  "" => "Private Route C"
  vpc_id:                                     "" => "vpc-fdffb19b"
aws_route_table.private_route_c: Creation complete after 1s (ID: rtb-d64632af)
aws_route_table_association.PrivateSubnetC: Creating...
  route_table_id: "" => "rtb-d64632af"
  subnet_id:      "" => "subnet-17da194d"
aws_route_table_association.PrivateSubnetC: Creation complete after 1s (ID: rtbassoc-35749e4e)

Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
berndonline@lab:~/aws-terraform$

Terraform successfully applied all the changes, so let’s have a quick look in the AWS web console.

You can change the environment and run “terraform apply” again, and Terraform will deploy the changes you made. In my example below I didn’t change anything, so Terraform does nothing because the state it tracks already matches what I have defined in vpc.tf:

berndonline@lab:~/aws-terraform$ terraform apply
aws_eip.natgw_c: Refreshing state... (ID: eipalloc-7fa0eb42)
aws_vpc.default: Refreshing state... (ID: vpc-fdffb19b)
aws_eip.natgw_a: Refreshing state... (ID: eipalloc-3ca7ec01)
aws_eip.natgw_b: Refreshing state... (ID: eipalloc-e6bbf0db)
data.aws_availability_zones.available: Refreshing state...
aws_subnet.PublicSubnetC: Refreshing state... (ID: subnet-d6e4278c)
aws_subnet.PrivateSubnetC: Refreshing state... (ID: subnet-17da194d)
aws_subnet.PrivateSubnetA: Refreshing state... (ID: subnet-6ea62708)
aws_subnet.PublicSubnetA: Refreshing state... (ID: subnet-1ab0317c)
aws_network_acl.all: Refreshing state... (ID: acl-c75f9ebe)
aws_internet_gateway.gw: Refreshing state... (ID: igw-27652940)
aws_subnet.PrivateSubnetB: Refreshing state... (ID: subnet-ab59c8e3)
aws_subnet.PublicSubnetB: Refreshing state... (ID: subnet-4a51c002)
aws_route_table.public_route_b: Refreshing state... (ID: rtb-a45d29dd)
aws_route_table.public_route_a: Refreshing state... (ID: rtb-5b423622)
aws_route_table.public_route_c: Refreshing state... (ID: rtb-0453277d)
aws_nat_gateway.public_nat_b: Refreshing state... (ID: nat-0376fc652d362a3b1)
aws_nat_gateway.public_nat_a: Refreshing state... (ID: nat-073ed904d4cf2d30e)
aws_route_table_association.PublicSubnetA: Refreshing state... (ID: rtbassoc-b14ba1ca)
aws_route_table_association.PublicSubnetB: Refreshing state... (ID: rtbassoc-277d975c)
aws_route_table.private_route_a: Refreshing state... (ID: rtb-0745317e)
aws_route_table.private_route_b: Refreshing state... (ID: rtb-a15a2ed8)
aws_route_table_association.PrivateSubnetB: Refreshing state... (ID: rtbassoc-174ba16c)
aws_route_table_association.PrivateSubnetA: Refreshing state... (ID: rtbassoc-60759f1b)
aws_nat_gateway.public_nat_c: Refreshing state... (ID: nat-093319a1fa62c3eda)
aws_route_table_association.PublicSubnetC: Refreshing state... (ID: rtbassoc-307e944b)
aws_route_table.private_route_c: Refreshing state... (ID: rtb-d64632af)
aws_route_table_association.PrivateSubnetC: Refreshing state... (ID: rtbassoc-35749e4e)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
berndonline@lab:~/aws-terraform$

To remove the environment, run “terraform destroy”:

berndonline@lab:~/aws-terraform$ terraform destroy
aws_eip.natgw_c: Refreshing state... (ID: eipalloc-7fa0eb42)
data.aws_availability_zones.available: Refreshing state...
aws_eip.natgw_a: Refreshing state... (ID: eipalloc-3ca7ec01)
aws_vpc.default: Refreshing state... (ID: vpc-fdffb19b)
aws_eip.natgw_b: Refreshing state... (ID: eipalloc-e6bbf0db)

...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - aws_eip.natgw_a

  - aws_eip.natgw_b

  - aws_eip.natgw_c

...

Plan: 0 to add, 0 to change, 27 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_network_acl.all: Destroying... (ID: acl-c75f9ebe)
aws_route_table_association.PrivateSubnetA: Destroying... (ID: rtbassoc-60759f1b)
aws_route_table_association.PublicSubnetC: Destroying... (ID: rtbassoc-307e944b)
aws_route_table_association.PublicSubnetA: Destroying... (ID: rtbassoc-b14ba1ca)
aws_route_table_association.PublicSubnetB: Destroying... (ID: rtbassoc-277d975c)
aws_route_table_association.PrivateSubnetC: Destroying... (ID: rtbassoc-35749e4e)
aws_route_table_association.PrivateSubnetB: Destroying... (ID: rtbassoc-174ba16c)
aws_route_table_association.PrivateSubnetB: Destruction complete after 0s

...

aws_internet_gateway.gw: Destroying... (ID: igw-27652940)
aws_eip.natgw_c: Destroying... (ID: eipalloc-7fa0eb42)
aws_subnet.PrivateSubnetC: Destroying... (ID: subnet-17da194d)
aws_subnet.PrivateSubnetC: Destruction complete after 1s
aws_eip.natgw_c: Destruction complete after 1s
aws_internet_gateway.gw: Still destroying... (ID: igw-27652940, 10s elapsed)
aws_internet_gateway.gw: Destruction complete after 11s
aws_vpc.default: Destroying... (ID: vpc-fdffb19b)
aws_vpc.default: Destruction complete after 0s

Destroy complete! Resources: 27 destroyed.
berndonline@lab:~/aws-terraform$

I hope this article was informative and explains how to deploy a VPC with Terraform. In the coming weeks I will add additional features like deploying EC2 instances and load balancing.

Please share your feedback and leave a comment.

Ansible Interface Playbook (ios_config): Cisco interface config

Here is an Ansible playbook with three different examples of how to configure Cisco router interfaces:

1. Static IP address configuration in the playbook
2. Configuration rendered from a Jinja2 template, which can be driven dynamically with variables
3. A loop in the playbook that configures multiple interfaces

There are some disadvantages to working with templates. If you use commands like “no shutdown” to enable an interface, they are not shown in the running configuration, which means Ansible will assume the configuration does not match and execute the template again.
Another disadvantage of templates is that you cannot run “before” or “after” commands to remove existing configuration; all of this needs to be implemented in your Jinja2 template.

Here is the Ansible playbook:

- name: Cisco interface config
  connection: local
  hosts: all
  gather_facts: false
  vars:
    cli:
      username: "{{ username }}"
      password: "{{ password }}"
      host: "{{ device_ip }}"
  tasks:
    - name: configure IP address
      ios_config:
        before: 
          - default interface FastEthernet1/0
        lines: 
          - ip address 10.1.1.1 255.255.255.0
        after: 
          - no shutdown
        match: strict
        parents: interface FastEthernet1/0
        provider: "{{ cli }}"

    - name: configure IP out of template
      ios_config:
        src: "interfaces.j2"
        provider: "{{ cli }}"

    - name: configure IP with loop
      ios_config:
        provider: "{{ cli }}"
        before:
          - "default interface {{ item.interface }}"
        lines:
          - "ip address {{ item.address }} 255.255.255.0"
        after:
          - no shutdown
        parents: "interface {{ item.interface }}"
      with_items:
        - { interface: FastEthernet2/0, address: 10.3.3.3 }
        - { interface: FastEthernet2/1, address: 10.4.4.4 }
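
The second task sources its configuration from interfaces.j2. A minimal sketch of what such a template could look like (the variable structure here is illustrative, not the exact file):

{% for item in interfaces %}
default interface {{ item.interface }}
interface {{ item.interface }}
 ip address {{ item.address }} {{ item.netmask }}
 no shutdown
{% endfor %}

Note that, as mentioned above, the “default interface” and “no shutdown” lines have to live inside the template, because templated tasks cannot use the “before” or “after” options.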

Read my new posts about Ansible Playbook for Cisco ASAv Firewall Topology or Ansible Playbook for Cisco BGP Routing Topology.

Ansible Semaphore

I have spent a lot of time working with Ansible over the last weeks to automate the deployment of Cisco routers and Cumulus switches. (I am still waiting for Ansible 2.2 to support Cisco ASA devices.)
Ansible is a great tool, but if you have multiple YAML files and various roles it can get pretty messy, and it would be nice to have a central tool to trigger your tasks and structure your environment variables and inventories.

I found exactly this tool in Ansible Semaphore: https://github.com/ansible-semaphore/semaphore

The install is pretty easy, and Semaphore provides an API to trigger your tasks remotely.

You can create different projects and include your Ansible YAML files.

The source is a Git repository where your files are stored.

Here you define your environment variables.

Next comes the inventory definition.

Finally, you can execute your Ansible YAML files via the web UI or API.

Have fun playing around with Semaphore 🙂