If a managed instance group is full or inaccessible, the load balancer forwards traffic to another group with free capacity. This raises a question: what about users outside of those two regions? For example, which instance group would serve traffic coming from America?
To find an answer, let’s look at the following screenshot. It presents the configuration of an HTTP(S) load balancer named my-load-balancer in the Load balancing view of the Networking services section of the Google Cloud Console. The load balancer has been configured as an HTTP load balancer with a static global IP address:
Figure 9.35 – Global external HTTP(S) load balancer view
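A static global IP address such as the one shown in the frontend can be reserved ahead of time with the gcloud CLI. The following is a minimal sketch; the address name my-lb-ip is a hypothetical placeholder, not taken from the figures:

```shell
# Reserve a global static external IPv4 address for the load balancer frontend.
# "my-lb-ip" is a hypothetical name chosen for this example.
gcloud compute addresses create my-lb-ip --global --ip-version=IPV4

# Print the reserved address so it can be referenced in DNS or a forwarding rule.
gcloud compute addresses describe my-lb-ip --global --format="get(address)"
```

Reserving the address separately means the frontend IP survives even if the load balancer itself is deleted and recreated.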
On the backend side, there is a single backend service, my-backend-service, with two managed instance groups – sydney-mig, with instances in australia-southeast1, and waw-mig, with instances in europe-central2. During the load balancer setup, a health check called hc2 was configured to monitor the instances’ ability to accept new connections:
Figure 9.36 – The Backend section of the global external HTTP(S) load balancer view
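The backend configuration described above can also be built from the command line. This sketch reuses the resource names from the figures (hc2, my-backend-service, sydney-mig, waw-mig); the port and protocol values are assumptions, since the figures don’t show them:

```shell
# Create the health check that probes instances for readiness.
# Port 80 is an assumption for an HTTP backend.
gcloud compute health-checks create http hc2 --port=80 --global

# Create the global backend service and attach the health check.
gcloud compute backend-services create my-backend-service \
    --protocol=HTTP --health-checks=hc2 --global

# Attach both regional managed instance groups as backends.
gcloud compute backend-services add-backend my-backend-service \
    --instance-group=sydney-mig \
    --instance-group-region=australia-southeast1 --global

gcloud compute backend-services add-backend my-backend-service \
    --instance-group=waw-mig \
    --instance-group-region=europe-central2 --global
```

Because both instance groups hang off the same global backend service, the load balancer can shift traffic between regions whenever one group is full or failing its health check.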
Assuming the service is public and accessible from all over the globe, we can verify how the traffic flows to our backend in the MONITORING tab:
Figure 9.37 – The MONITORING tab of the global external HTTP(S) load balancer view
We can see that it originates from America, Asia, and Europe. The stream from Asia lands in sydney-mig, while the streams from Europe and America both land in waw-mig. In the latter case, waw-mig was selected as the instance group closest to users in America:
Figure 9.38 – The MONITORING tab of the global external HTTP(S) load balancer view showing traffic flowing to backend instances
Besides the global external HTTP(S) load balancer, other global services are available. The next section gives more details about the two remaining global load balancers: the SSL and TCP proxies.
Global external TCP/SSL proxies
A global external HTTP(S) load balancer works at Layer 7, balancing workloads across regions on ports 80 and 8080 for HTTP and port 443 for HTTPS. When an application uses plain TCP or SSL and runs on other ports, a TCP or SSL proxy can be used instead. These load balancers also use a single public IP address to reach backends globally, which minimizes latency between a user and a backend. Both support multi-regional distribution of traffic and integrate with Cloud Armor to protect their backends. The difference is that they don’t preserve a user’s IP address: SSL or TCP connections are terminated by the load balancer and then proxied to an available backend in the closest region. A TCP proxy should be used when an application speaks TCP and doesn’t need SSL offloading. An SSL proxy, in turn, offers SSL offloading, so backend instances don’t have to decrypt SSL traffic, saving CPU cycles that can be used to serve more users.
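The SSL proxy setup described above can be sketched with gcloud as follows. All resource names here (my-tcp-hc, my-ssl-backend, my-ssl-proxy, my-cert, my-ssl-rule) are hypothetical, and the ports are assumptions chosen for illustration:

```shell
# Health check for a non-HTTP backend; a TCP probe is used here as an example.
gcloud compute health-checks create tcp my-tcp-hc --port=443 --global

# Global backend service using the SSL protocol between proxy and clients.
gcloud compute backend-services create my-ssl-backend \
    --protocol=SSL --health-checks=my-tcp-hc --global

# The SSL proxy terminates TLS, so a certificate is attached to the proxy,
# not to the backend instances.
gcloud compute target-ssl-proxies create my-ssl-proxy \
    --backend-service=my-ssl-backend \
    --ssl-certificates=my-cert

# Global forwarding rule exposing the proxy on a single public IP and port.
gcloud compute forwarding-rules create my-ssl-rule \
    --global --target-ssl-proxy=my-ssl-proxy --ports=443
```

The key design point is visible in the third command: because the certificate lives on the proxy, backend instances receive already-decrypted traffic, which is exactly the SSL offloading benefit mentioned above.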