Setting up Applications on GKE

Wenbo Zong
Jan 15, 2023 · 4 min read


Ingress, service mesh, workload identity, Cloud SQL Auth proxy and more.

Disclaimer: Kubernetes (K8s) is a complex topic and I don’t claim to be an expert in it. This article focuses specifically on how to set up applications on Google Kubernetes Engine (GKE). Needless to say, there are many ways to achieve the same goal, and this is just my approach.

GKE, being Google’s K8s implementation, supports the standard K8s API and also adds Google-specific APIs to make it easier to integrate with other Google services. From an application developer’s standpoint, the interactions with the standard K8s API are relatively easy to handle. In contrast, it is the interactions with the Google-specific APIs that often give me a hard time. Specifically, I find the ingress (load balancer and beyond) and access to Google-managed resources the trickiest topics. Hence, in this article I’ll only highlight a few key learnings on these topics; for the curious, please check out the sample code in my GitHub repository.

This article is structured as follows:

  • Platform lock-in?
  • Ingress and communication within the GKE cluster
  • Access (and be accessed by) Google-managed resources

Avoid Platform Lock-in. Or really?

Developers hate platform lock-in, and K8s is no exception: we always try to use the standard K8s API and shun the platform-specific ones. However, my experience with GKE is that it’s practically impossible to avoid using platform-specific features. Even where it is possible, it means a lot of extra effort (sometimes hackish) and much more complex configuration, often ending up in uncharted territory. The gain (being platform-neutral) versus the cost (complexity, effort) is hard to justify, in my view.

Ingress and communication within the GKE cluster

Let’s start with a wishlist for the ingress and the internal traffic management:

  • HTTP to HTTPS redirect: Must-have
  • Domain redirect: Must-have
  • Fully managed SSL certificates: Must-have
  • Web application firewall: Strongly desired
  • Session affinity: Must-have
  • Custom HTTP headers: Nice to have
  • mutual TLS (mTLS) between pods/workloads: Nice to have

The list could of course be longer, but these are the items I am most concerned with. Now let’s map them to what is available on GKE and from the open-source community:

  • HTTP to HTTPS redirect: FrontendConfig (GKE) / Nginx Ingress
  • Domain redirect: not natively supported by the GKE ingress / Nginx Ingress
  • Fully managed SSL certificates: Google-managed certificates / cert-manager with Let’s Encrypt
  • Web application firewall: Cloud Armor / ModSecurity
  • Session affinity: BackendConfig (GKE) / Nginx Ingress annotations
  • Custom HTTP headers: BackendConfig (GKE) / Nginx Ingress annotations
  • mTLS between pods/workloads: Anthos Service Mesh / Istio

Apparently, if you really want to avoid platform lock-in, you can use the Nginx Ingress Controller. Regarding internal traffic management, I had trouble enabling Istio on GKE back in 2021, and later found out that Google deprecated Istio on GKE starting in 2022 and replaced it with Anthos Service Mesh (see here). If you really want service mesh features on GKE, then ASM should be the way to go.
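To make the Google-specific path concrete, here is a sketch of how two of the wishlist items (HTTP-to-HTTPS redirect, session affinity) plus Cloud Armor map onto GKE’s FrontendConfig and BackendConfig resources. The resource names and the Cloud Armor policy name are illustrative placeholders, not anything from my repo:

```yaml
# FrontendConfig: enables HTTP-to-HTTPS redirect on a GKE external ingress.
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: https-redirect        # placeholder name
spec:
  redirectToHttps:
    enabled: true
---
# BackendConfig: client-IP session affinity plus a Cloud Armor policy (WAF).
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-backendconfig     # placeholder name
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  securityPolicy:
    name: my-cloud-armor-policy  # must be an existing Cloud Armor policy
```

The FrontendConfig is attached to the Ingress with the annotation `networking.gke.io/v1beta1.FrontendConfig: "https-redirect"`, and the BackendConfig to the Service with `cloud.google.com/backend-config: '{"default": "app-backendconfig"}'`.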

PS: I really hope that the K8S Gateway API will unify ingress and service mesh and Google will implement it with native integrations with other Google-managed resources, such as Cloud Armor.
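For reference, the Gateway API shape looks roughly like this. This is a sketch only: the hostname and route names are illustrative, and `gke-l7-global-external-managed` is GKE’s managed external gateway class:

```yaml
# A Gateway backed by GKE's managed global external load balancer.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-gateway
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# An HTTPRoute sending traffic for a hostname to a backend Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: external-gateway
  hostnames:
    - "app.example.com"      # placeholder hostname
  rules:
    - backendRefs:
        - name: ${SVC_NAME}
          port: 80
```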

Access (and be accessed by) Google-managed resources

It is almost inevitable that our GKE applications need to access other Google-managed resources, e.g. Cloud Storage or Cloud SQL. The recommended way is Workload Identity, which basically involves creating a GCP (IAM) service account with the appropriate permissions, creating a K8s service account, and binding the two together.
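The K8s side of the binding is a service account annotated with the GCP service account’s email. The names below (`app-ksa`, `app-gsa`, `my-project`) are illustrative placeholders:

```yaml
# Kubernetes service account bound to a GCP service account via Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: default
  annotations:
    # Points at the GCP (IAM) service account that holds the actual permissions.
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com
```

On the GCP side, the GSA must grant `roles/iam.workloadIdentityUser` to the member `serviceAccount:my-project.svc.id.goog[default/app-ksa]`, and the pod spec must set `serviceAccountName: app-ksa`.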

Access Cloud SQL

For connecting to a Cloud SQL database, the recommended way is to run the Cloud SQL Auth proxy as a sidecar container alongside the application.
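A sketch of the sidecar pattern is below. The image tag, instance connection name (`my-project:asia-southeast1:my-db`), and service account name are placeholders; the app reaches the database via the proxy on localhost:

```yaml
# Deployment sketch: app container plus Cloud SQL Auth proxy sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: app-ksa   # KSA bound via Workload Identity to a GSA with roles/cloudsql.client
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest
          env:
            - name: DB_HOST
              value: "127.0.0.1"    # the proxy listens on localhost
            - name: DB_PORT
              value: "5432"
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
          command:
            - "/cloud_sql_proxy"
            - "-instances=my-project:asia-southeast1:my-db=tcp:5432"
          securityContext:
            runAsNonRoot: true
```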

Access the GKE application

Another issue is that you may need to access your GKE applications from outside the GKE cluster. For example, you may have a Cloud Function (in a different region) that needs to call your GKE application. In this case, you can create an internal LoadBalancer Service and annotate it to allow access from other Google Cloud regions.

apiVersion: v1
kind: Service
metadata:
  name: ${INTERNAL_LB_NAME}
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: ${SVC_NAME}
spec:
  type: LoadBalancer
  selector:
    app: ${SVC_NAME}
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP

Wrap up

The above is a quick recap of a few key learnings from setting up applications on GKE. Nothing beats code and practice. For the curious, please check out the two complete (albeit simple) samples in my GitHub repo.
