Earlier this year, we announced Cloud Load Balancer support for Cloud Run. You might wonder: aren't Cloud Run services already load-balanced? Yes, every *.run.app endpoint load balances traffic between an autoscaling set of containers. However, with the Cloud Load Balancing integration for serverless platforms, you can now fine-tune lower levels of your networking stack. In this article, we will explain the use cases for such a setup and build an HTTPS load balancer from the ground up for Cloud Run using Terraform.

Why use a Load Balancer for Cloud Run?

Every Cloud Run service comes with a load-balanced *.run.app endpoint that is secured with HTTPS. Additionally, Cloud Run lets you map your custom domains to your services. However, if you want to customize other details of how your load balancing works, you need to provision a Cloud HTTP load balancer yourself.

Here are a few reasons to run your Cloud Run service behind a Cloud Load Balancer:

  • Serving static assets with a CDN, since Cloud CDN integrates with Cloud Load Balancing.
  • Serving traffic from multiple regions, since Cloud Run is a regional service, but you can provision a load balancer with a global anycast IP and route users to the closest available region.
  • Serving content from mixed backends; for example, your /static path can be served from a storage bucket while /api goes to a Kubernetes cluster.
  • Bringing your own TLS certificates, such as wildcard certificates you might have purchased.
  • Customizing networking settings, such as the TLS versions and ciphers supported.
  • Authenticating and enforcing authorization for specific users or groups with Cloud IAP (this doesn't work with Cloud Run yet, but stay tuned).
  • Configuring WAF or DDoS protection with Cloud Armor.

The list goes on; Cloud HTTP Load Balancing has quite a few features.

Why use Terraform for this?

The short answer is that a Cloud HTTP Load Balancer consists of many networking resources that you need to create and connect to one another. There's no single "load balancer" object in GCP APIs.

To understand the task ahead, let's take a look at the resources involved:
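The chain of resources looks roughly like this (a sketch only: these are the actual Terraform resource types, but all attributes and the wiring between the blocks are omitted here):

```hcl
# Rough shape of the resource chain, from the public IP down to Cloud Run:
resource "google_compute_global_address" "ip" {}                 # global static anycast IP

resource "google_compute_managed_ssl_certificate" "cert" {}      # Google-managed TLS certificate

resource "google_compute_global_forwarding_rule" "fwd" {}        # binds the IP on port 443 to the proxy

resource "google_compute_target_https_proxy" "proxy" {}          # terminates TLS; references cert + URL map

resource "google_compute_url_map" "urlmap" {}                    # routes requests to backend services

resource "google_compute_backend_service" "backend" {}           # backend wrapping the serverless NEG

resource "google_compute_region_network_endpoint_group" "neg" {} # serverless NEG pointing at the Cloud Run service
```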

As you might imagine, provisioning and connecting all of these resources just to accomplish a simple task like enabling CDN is very tedious.

You could write a bash script with the gcloud command-line tool to create these resources; however, it would be cumbersome to handle corner cases, such as a resource that already exists or was modified manually later. You would also need to write a cleanup script to delete everything you provisioned.

This is where Terraform shines. It lets you declaratively configure cloud resources and create/destroy your stack in different GCP projects efficiently with just a few commands.

Building a load balancer: The hard way

The goal of this article is to intentionally show you the hard way for each resource involved in creating a load balancer using Terraform configuration language.

We’ll start with a few Terraform variables:

  • var.name: used for naming the load balancer resources
  • var.project: GCP project ID
  • var.region: region to deploy the Cloud Run service
  • var.domain: a domain name for your managed SSL certificate

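These could be declared along the following lines (a minimal sketch; the descriptions are paraphrased from the list above and no defaults are assumed):

```hcl
variable "name" {
  description = "Name prefix for the load balancer resources"
  type        = string
}

variable "project" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "Region to deploy the Cloud Run service in"
  type        = string
}

variable "domain" {
  description = "Domain name for the managed SSL certificate"
  type        = string
}
```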
First, let’s define our Terraform providers:
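The original listing is not reproduced here, but a minimal provider configuration for this setup might look like the following (the need for google-beta is an assumption based on serverless NEG support at the time):

```hcl
provider "google" {
  project = var.project
  region  = var.region
}

# The beta provider is used for features that are not yet in the
# stable google provider, such as serverless network endpoint groups.
provider "google-beta" {
  project = var.project
  region  = var.region
}
```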


