Overview of Google Kubernetes Engine (GKE): Management and Orchestration System

Google has a long track record of building services that make complex technology approachable. Continuing that trend, it now offers a managed way to automatically deploy, scale, and manage containerized applications with Kubernetes through its GKE (Google Kubernetes Engine) service. Some readers will already know Kubernetes well, while for others it may be entirely new.

So, to help you understand how Kubernetes works within GKE, this blog covers the basic overview, features, and various use cases of Google Kubernetes Engine.

What is Google Kubernetes Engine (GKE)?

Google Kubernetes Engine (GKE) is a managed platform for running Kubernetes, built by the same Google engineers who contribute to the Kubernetes (K8s) open-source project. With GKE, you can start with single-click cluster creation and scale up to 15,000 nodes, and you get support for a high-availability control plane through multi-zonal and regional clusters. GKE reduces operational overhead with industry-first four-way autoscaling, and it adds security through container image scanning and data encryption. Further, Google Kubernetes Engine (GKE) gives you options for the following (a minimal cluster-creation sketch follows the list):

  • Firstly, developing a wide variety of apps, with support for stateful, serverless, and application accelerators.
  • Secondly, using Kubernetes-native CI/CD tooling to secure and speed up every stage of the build-and-deploy life cycle.
  • Thirdly, selecting the release channel that suits your business needs.
  • Lastly, getting back time to focus on your applications with help from Google Site Reliability Engineers (SREs).
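
As a quick taste of that "single-click" experience from the command line, here is a minimal sketch of creating and connecting to a regional cluster. The cluster name and region are placeholders, and it assumes the gcloud CLI is installed and authenticated.

```bash
# Create a small regional (high-availability) cluster on a release channel.
gcloud container clusters create my-cluster \
    --region=us-central1 \
    --num-nodes=1 \
    --release-channel=regular

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials my-cluster --region=us-central1
```
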
Kubernetes: Overview

With the large-scale adoption of containers in organizations, Kubernetes, a container-centric management system, is used for deploying and operating containerized applications. Kubernetes is an open-source system for deploying, scaling, and managing containerized applications anywhere. It automates the operational tasks of container management with built-in commands for the following (a short kubectl walkthrough follows the list):

  • Firstly, deploying applications
  • Secondly, rolling out modifications to your applications
  • Thirdly, scaling your applications up and down to fit changing requirements
  • Lastly, monitoring your applications.
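
As a rough illustration of those built-in commands, the sketch below deploys, updates, scales, and inspects a hypothetical deployment named web (the image tags are arbitrary). It assumes kubectl is already pointed at a cluster.

```bash
# Hypothetical deployment "web"; assumes kubectl is configured for a cluster.
kubectl create deployment web --image=nginx:1.25    # deploy an application
kubectl set image deployment/web nginx=nginx:1.26   # roll out a change
kubectl rollout status deployment/web               # watch the rollout complete
kubectl scale deployment web --replicas=5           # scale up to meet demand
kubectl get pods -l app=web                         # monitor the running pods
```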

Use cases of Google Kubernetes Engine

1. Continuous delivery pipeline

GKE enables rapid application development and iteration by making it easy to deploy, update, and manage applications and services. You can set up GKE, Cloud Source Repositories, Cloud Build, and Spinnaker for Google Cloud to automatically build, test, and deploy an app. When the application code changes, the continuous delivery pipeline automatically rebuilds, retests, and redeploys the new version.
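As a minimal sketch of how such a pipeline can be wired up, the Cloud Build configuration below builds a container image and rolls it out to a GKE cluster on every push. The image name, deployment, cluster, and zone are placeholders, and a real pipeline would typically add test steps and Spinnaker-managed deployment stages. $PROJECT_ID and $SHORT_SHA are substitutions that Cloud Build fills in for builds started by a repository trigger.

```bash
# Hypothetical cloudbuild.yaml attached to a Cloud Build trigger on the repo.
cat > cloudbuild.yaml <<'EOF'
steps:
  # Build and push the container image.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  # Roll the new image out to the GKE cluster.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
EOF
```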

Image: Google Kubernetes Engine diagram for developers building a continuous delivery pipeline (Image Source: Google Cloud)

2. Migrating a two-tier application to GKE

Migrate for Anthos lets you move and convert existing workloads into containers on GKE. For example, you can migrate a two-tier LAMP stack application, both the application and database VMs, from VMware to GKE. You can then enhance security by making the database reachable only from the application container and by replacing SSH access with authenticated shell access through kubectl, as sketched below.
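
As a hedged sketch of that hardening step, the snippet below restricts the migrated database so only the application pods can reach it, and replaces SSH with kubectl-based shell access. The labels, port, and workload names are hypothetical, and the policy only takes effect on clusters with network policy enforcement enabled.

```bash
# Allow ingress to the database pods only from the application pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app-only
spec:
  podSelector:
    matchLabels:
      app: lamp-db          # hypothetical label on the database workload
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: lamp-app # hypothetical label on the application workload
      ports:
        - protocol: TCP
          port: 3306        # MySQL port in a LAMP stack
EOF

# Authenticated shell access through the Kubernetes API instead of SSH.
kubectl exec -it deployment/lamp-app -- /bin/bash
```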

Image: Diagram showing how to move and convert workloads into Google Kubernetes Engine (Image Source: Google Cloud)

Moving on, below are some of the top features that make Google Kubernetes Engine such a powerful service.

What are the features of GKE?

Some of the key features of Google Kubernetes Engine are:

1. Two modes of operation: Standard and Autopilot

GKE now has two operating modes:

  • Firstly, Standard mode is the original GKE experience that Google has been refining since the beginning. It offers complete control over the nodes, with the ability to fine-tune configuration and run custom administrative workloads.
  • Secondly, the all-new Autopilot mode is a completely managed, hands-off option that runs your whole cluster's infrastructure without any need for configuration or monitoring. Moreover, Autopilot's per-pod billing ensures that you only pay for the pods that are actually running.
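
For comparison with the Standard cluster created earlier, here is a minimal sketch of creating an Autopilot cluster. The name and region are placeholders, and GKE manages the nodes for you.

```bash
# Autopilot: no node pools to size or manage; billing is per running pod.
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1
```
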
2. Pod and cluster autoscaling

GKE was the first fully managed Kubernetes service on the market to provide support for the:

  • Kubernetes API
  • 4-way autoscaling
  • Release channels
  • Multi-cluster deployments.

Further, horizontal pod autoscaling can be based on CPU utilization or other metrics, vertical pod autoscaling adjusts each pod's CPU and memory requests, and cluster autoscaling works on a per-node-pool basis.
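
A minimal sketch of two of those dimensions, assuming a deployment named web and a cluster named my-cluster with a default-pool node pool:

```bash
# Horizontal pod autoscaling: keep average CPU near 70%, between 2 and 10 pods.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70

# Cluster autoscaling is configured per node pool.
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --node-pool=default-pool \
    --enable-autoscaling --min-nodes=1 --max-nodes=5
```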

3. Prebuilt Kubernetes applications and templates

With prebuilt deployment templates, you get access to enterprise-ready containerized solutions with portability, streamlined licensing, and unified billing. These are Google-built and commercial applications that help developers work more efficiently, and Google Cloud Marketplace also lets you deploy them on-premises or in third-party clouds.

4. Container native networking and security

GKE Sandbox adds a second layer of defense between containerized workloads on GKE for improved workload security. Moreover, the GKE clusters provide built-in support for Kubernetes Network Policy, which allows pod-level firewall rules for controlling traffic.
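
As a hedged sketch, the pod below opts into GKE Sandbox by selecting the gvisor RuntimeClass. It assumes the cluster has a node pool created with --sandbox type=gvisor, and the pod name and image are arbitrary.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # run the pod inside GKE Sandbox (gVisor)
  containers:
    - name: app
      image: nginx:1.25
EOF
```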

5. Migrating traditional workloads to GKE containers with ease

Traditional applications are easily modernized by migrating them off virtual machines and into native containers with Migrate for Anthos and GKE. This automated approach extracts the important application pieces from the VM, allowing you to simply insert those components into containers in Google Kubernetes Engine or Anthos clusters without the VM layers (such as the guest OS).

6. Identity and access management

You can control access in the cluster using your Google accounts and role permissions.
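
Cloud IAM governs who can administer the project and its clusters, while Kubernetes RBAC governs what identities can do inside a cluster. A minimal sketch of the RBAC half, granting a placeholder Google account read-only access to one namespace:

```bash
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-read-only
  namespace: default
subjects:
  - kind: User
    name: developer@example.com        # placeholder Google account
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                           # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
EOF
```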

7. Hybrid networking

You can reserve an IP address range for your cluster, allowing cluster IPs to coexist with private network IPs connected via Google Cloud VPN.
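
A hedged sketch of reserving those ranges at cluster-creation time with a VPC-native (alias IP) cluster; the CIDR blocks are placeholders chosen not to overlap the on-premises network reached over Cloud VPN.

```bash
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.56.0.0/14 \
    --services-ipv4-cidr=10.120.0.0/20
```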

8. Security and compliance

GKE is HIPAA and PCI DSS certified and is backed by a Google security team of over 750 specialists.

9. Integrated logging and monitoring

You can enable Cloud Logging and Cloud Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.
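
The same switches can be flipped from the command line. A minimal sketch with a recent gcloud CLI (the cluster name and zone are placeholders):

```bash
# Send system and workload logs to Cloud Logging, system metrics to Cloud Monitoring.
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM
```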

10. Cluster options

You can select clusters tailored to the:

  • Firstly, availability
  • Secondly, version stability
  • Thirdly, isolation
  • Lastly, pod traffic requirements of your workloads.
11. Auto upgrade and repair

You can automatically keep your cluster up to date with the latest Kubernetes release version. Further, if a node fails a health check while auto repair is enabled, GKE starts a repair process for that node.
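
Both settings are applied per node pool. A minimal sketch, assuming a cluster named my-cluster with a default-pool node pool:

```bash
gcloud container node-pools update default-pool \
    --cluster=my-cluster --zone=us-central1-a --enable-autoupgrade

gcloud container node-pools update default-pool \
    --cluster=my-cluster --zone=us-central1-a --enable-autorepair
```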

12. Resource limits

Kubernetes enables you to define how much CPU and memory (RAM) each container requires. This helps in better organizing workloads inside your cluster.
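
A minimal sketch of per-container requests and limits (the values are arbitrary):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: limited-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"       # the scheduler reserves a quarter of a vCPU
          memory: "128Mi"
        limits:
          cpu: "500m"       # throttled above half a vCPU
          memory: "256Mi"   # terminated if memory use exceeds this
EOF
```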

13. Stateful application support

In GKE, you can attach persistent storage to containers and even host complete databases.
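
A hedged sketch of what that can look like: a single-replica StatefulSet that hosts a small database and claims a persistent volume per replica. The names, image, and sizes are placeholders, and a real deployment would keep the password in a Secret.

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: change-me              # placeholder; use a Secret instead
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                     # one persistent disk per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
EOF
```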

14. GPU and TPU support

GKE provides full support for GPUs and TPUs, making it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.
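
A hedged sketch of adding a GPU node pool and scheduling a pod onto it. The GPU type, zone, and image are placeholders, and installing the NVIDIA drivers on the nodes is a separate step.

```bash
# Add a node pool with one NVIDIA T4 per node.
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --num-nodes=1

# A pod that requests one GPU.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # schedules the pod onto a GPU node
EOF
```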

15. Built-in dashboard

Cloud Console provides useful dashboards for the clusters and resources in your project. These dashboards allow you to look at, inspect, manage, and delete resources in your clusters.

16. Persistent disks support

Persistent disks store data redundantly for integrity, can be resized without interruption, and are automatically encrypted. Persistent disks are available in both HDD and SSD formats, and you can also take snapshots of a persistent disk and use them to create new persistent disks.
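
A minimal sketch of claiming an SSD-backed persistent disk, assuming the cluster has GKE's built-in premium-rwo StorageClass (the claim name and size are arbitrary):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: premium-rwo   # SSD persistent disk class on GKE
  resources:
    requests:
      storage: 50Gi
EOF
```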

17. Local SSD support

GKE provides local solid-state drive (SSD) block storage that is always encrypted. In comparison to persistent disks, local SSDs are physically attached to the server that runs the virtual machine instance, allowing very high input/output operations per second (IOPS) and very low latency.

18. Hybrid and multi-cloud support

Through Anthos GKE, you get the GKE experience in other environments with rapid, managed, simple installs and upgrades that have been validated by Google. GKE can operate both Windows Server and Linux nodes, with full support for both Linux and Windows workloads.

19. Serverless containers

You can run stateless, serverless containers with Cloud Run, which abstracts away all infrastructure management and scales them automatically.
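
A minimal sketch of deploying such a container with Cloud Run (the service name, image, and region are placeholders):

```bash
gcloud run deploy my-service \
    --image=gcr.io/my-project/my-app:latest \
    --region=us-central1 \
    --allow-unauthenticated
```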

Google Kubernetes Engine pricing

Google Kubernetes Engine (GKE) pricing covers both compute resources and cluster management, as outlined below.

1. Autopilot mode

After the free tier, Autopilot clusters are priced at $0.10 per hour per cluster, plus the CPU, memory, and ephemeral storage compute resources allocated for your Pods. The resource requests in each Pod's Kubernetes PodSpec determine what you are billed for. Autopilot resources are billed in one-second increments with no minimum duration. GKE also backs Autopilot with a financially backed Service Level Agreement (SLA) covering availability of the control plane and of Autopilot pods across multiple zones.

2. Standard mode

After the free tier, Standard mode incurs a cluster management fee of $0.10 per cluster per hour, regardless of cluster size or topology. GKE cluster management fees do not, however, apply to Anthos clusters. In Standard mode, GKE uses Compute Engine instances as worker nodes in the cluster, and you pay for each instance according to Compute Engine pricing until the nodes are deleted. Compute Engine resources are charged on a per-second basis, with a one-minute minimum usage cost.

3. Cluster management fee and free tier

The cluster management fee of $0.10 per cluster per hour applies to all GKE clusters, regardless of the mode of operation, cluster size, or topology. The GKE free tier, on the other hand, provides $74.40 in monthly credits per billing account, which you can apply to zonal and Autopilot clusters. If you only run one zonal or Autopilot cluster each month, this credit fully covers that cluster's management fee (a quick arithmetic check follows the list below). Unused free tier credits do not carry over and cannot be applied to other SKUs. Further, the following conditions apply to the cluster management fee:

  • Firstly, the fee is flat regardless of cluster size and topology, whether it is a
    • single-zone cluster
    • multi-zonal cluster
    • regional or Autopilot cluster.
  • Secondly, the fee does not apply to Anthos clusters.
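
As a quick arithmetic check on the free tier: $0.10 per hour over a 31-day month comes to $0.10 × 24 × 31 = $74.40, exactly the monthly credit, which is why the management fee of a single zonal or Autopilot cluster is covered in full.
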
4. Multi-Cluster Ingress

There is no additional charge for using Multi-Cluster Ingress on clusters that are part of Anthos on Google Cloud, because it is included with Anthos. If you have GKE clusters that are not licensed for Anthos, Multi-Cluster Ingress is billed at the standalone rate. The functionality is the same whether you use Multi-Cluster Ingress with an Anthos license or on its own.

Final Words

In this blog, we covered the Google Kubernetes Engine (GKE) overview, features, use cases, and pricing, with examples along the way. The service already benefits many leading companies by improving time to market for app development, transforming e-commerce infrastructure, speeding up response times, accommodating major traffic events seamlessly, and more. So, start learning GKE and take the next step toward more advanced technology.
