Understanding Working of DevOps on the GCP


Google Cloud has long provided solutions that help organizations and companies enhance their services, offering a wide range of benefits and cost-control options. Now, by promoting the organizational and cultural movement toward increasing software delivery velocity, improving service reliability, and building shared ownership among software stakeholders, GCP has embraced DevOps.

What is DevOps in GCP?

GCP DevOps helps you achieve elite performance in software development and delivery. According to DORA research, elite performers deploy 208 times more frequently and have lead times 106 times faster than low performers. DevOps also boosts software stability: high-performing teams recover from incidents faster and have a lower change failure rate. Finally, DevOps on Google Cloud builds in security, with high performers spending less time fixing security issues than low performers.

To better understand DevOps in GCP, this blog covers its features, products, and how DevOps works with GCP services.

How does DevOps work on GCP?

DevOps is an approach to software development and delivery that emphasizes collaboration, communication, and automation between development and operations teams. Google Cloud Platform (GCP) provides a variety of tools and services that support DevOps practices. Here’s an overview of how DevOps works on GCP:

  1. Continuous Integration and Delivery (CI/CD): GCP provides and integrates with various CI/CD tools such as Cloud Build, Jenkins, and GitLab. These tools help developers automate the build, test, and deployment processes for their applications, allowing faster and more reliable delivery of software.
  2. Containerization: GCP provides a managed container service called Google Kubernetes Engine (GKE), which allows developers to deploy and manage containerized applications. With GKE, developers can use tools like Docker and Kubernetes to build, package, and deploy their applications in a consistent and scalable manner.
  3. Infrastructure as Code (IaC): GCP provides infrastructure management tools such as Google Cloud Deployment Manager and Terraform, which allow developers to define their infrastructure as code. This makes it easy to manage and automate infrastructure changes, resulting in faster and more reliable deployments.
  4. Monitoring and Logging: GCP provides various tools for monitoring and logging, such as Cloud Monitoring, Cloud Logging, and Cloud Trace (formerly the Stackdriver suite). These tools give developers real-time insight into the health and performance of their applications, enabling them to quickly identify and resolve issues.
  5. Security: GCP provides a variety of security tools such as Cloud IAM, Cloud Security Scanner, and Cloud Armor. These tools help developers ensure that their applications and infrastructure are secure, and comply with industry and regulatory standards.
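As one concrete illustration of the Infrastructure as Code point above, here is a minimal Terraform sketch. The project ID, resource names, and zone are hypothetical placeholders, not values from this tutorial:

```hcl
# Hypothetical Terraform configuration: a Compute Engine instance declared as code.
provider "google" {
  project = "my-project-id"   # placeholder project ID
  region  = "us-central1"
}

resource "google_compute_instance" "web" {
  name         = "web-1"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Because the instance is described in a versioned file rather than created by hand, the same definition can be reviewed, reused, and applied repeatedly with identical results.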

In summary, DevOps on GCP involves using a combination of automation, containerization, infrastructure management, monitoring, and security tools to build, deploy, and manage applications in a faster, more reliable, and more secure manner.

GCP DevOps: Glossary

Here are some common terms and acronyms used in GCP DevOps:

  1. GCP: Google Cloud Platform, the cloud computing platform offered by Google.
  2. DevOps: A software development methodology that emphasizes collaboration and communication between development and operations teams to enable faster and more reliable software releases.
  3. CI/CD: Continuous Integration/Continuous Deployment, a DevOps practice of automating the software delivery process to ensure frequent and reliable software releases.
  4. Kubernetes: An open-source container orchestration platform used to automate the deployment, scaling, and management of containerized applications.
  5. Docker: A platform that allows developers to package their applications and dependencies into containers for deployment.
  6. Terraform: An open-source infrastructure-as-code tool used to provision and manage infrastructure on various cloud platforms, including GCP.
  7. YAML: A human-readable data serialization format commonly used for configuration files in DevOps.
  8. Git: A version control system used to manage source code and collaborate on software development projects.
  9. Monitoring: The practice of collecting, analyzing, and visualizing metrics and logs to monitor the performance and availability of software applications and infrastructure.
  10. Alerting: The practice of sending notifications to relevant stakeholders when certain conditions are met or exceeded, such as an application outage or a security breach.
  11. SRE: Site Reliability Engineering, a discipline that applies software engineering principles to operations to create scalable and reliable systems.
  12. IaC: Infrastructure as Code, a practice of defining and managing infrastructure through code instead of manual processes.
  13. API: Application Programming Interface, a set of protocols and tools used for building software applications and defining how different components interact with each other.
  14. IAM: Identity and Access Management, a practice of controlling access to resources and services based on user roles and permissions.
  15. SSL: Secure Sockets Layer, a protocol used to secure communication over the internet by encrypting data transmitted between servers and clients.

These are just a few of the many terms and acronyms you may encounter when working with GCP DevOps. It’s important to understand these terms to effectively communicate and collaborate with your team members.

Benefits of DevOps on Google Cloud Platform

DevOps on Google Cloud Platform (GCP) provides several benefits for organizations looking to improve their software delivery processes. Here are some of the key benefits of DevOps on GCP:

  1. Scalability: GCP provides a scalable infrastructure for running and managing applications. With Google Kubernetes Engine (GKE), developers can easily deploy and manage containerized applications, scaling them up or down based on demand. This makes it easy to handle sudden increases in traffic, ensuring that applications remain available and responsive.
  2. Automation: GCP provides a suite of automation tools, including Cloud Build and Cloud Deployment Manager, which allow developers to automate their build, test, and deployment processes. This reduces the likelihood of human error and speeds up the software delivery process.
  3. Flexibility: GCP supports a variety of programming languages and platforms, making it easy for developers to choose the tools and technologies that best fit their needs. GCP also supports open-source tools such as Kubernetes, which enables developers to avoid vendor lock-in.
  4. Collaboration: GCP provides a collaborative environment for development and operations teams to work together on software delivery. Tools like Cloud Source Repositories and Stackdriver provide a centralized location for sharing code and monitoring application performance.
  5. Security: GCP provides robust security features, including DDoS protection, identity and access management, and encryption. This helps ensure that applications and infrastructure are protected from security threats.
  6. Cost-effectiveness: GCP’s pay-as-you-go model allows organizations to only pay for the resources they use. This means that organizations can scale up or down as needed, without incurring unnecessary costs.

In summary, DevOps on GCP provides benefits such as scalability, automation, flexibility, collaboration, security, and cost-effectiveness, which can help organizations improve their software delivery processes and achieve their business objectives more efficiently.

GCP DevOps: Exam Resources

Here are some resources to help you prepare for the GCP DevOps exam:

  1. Official Exam Guide: The GCP DevOps exam has an official exam guide published by Google. This guide covers the exam objectives in detail and provides sample questions and answers.
  2. Online Courses: There are many online courses available that cover the topics tested in the GCP DevOps exam. Some popular platforms include Udemy, Coursera, and LinkedIn Learning.
  3. Practice Exams: Taking practice exams can help you to get familiar with the types of questions you may encounter on the GCP DevOps exam. Some examples include ExamTopics, Whizlabs, and Udemy Practice Tests.
  4. Documentation: Google Cloud Platform has extensive documentation on its website that covers all the services and features offered by the platform. Studying this documentation can help you to better understand the concepts and technologies used in GCP DevOps.
  5. Blogs and Communities: There are many blogs and communities dedicated to GCP DevOps that offer valuable insights and best practices.

What are the key capabilities of DevOps in GCP?

Google DevOps helps improve the technical and cultural capabilities that drive performance. Using these capabilities, you can improve the speed, stability, availability, and security of your software delivery. Google draws on DORA's State of DevOps research program, which has validated a number of technical, process, measurement, and cultural capabilities that drive higher software delivery and organizational performance. Some of these capabilities are:

1. Version control

Version control systems provide a logical way to organize files and coordinate their creation, controlled access, updating, and deletion across teams and organizations. Version control is closely tied to automation: automation and continuous integration depend on versioned files as the source of truth. Version control improves software delivery by tracking source code, test and deployment scripts, infrastructure and application configuration, and the many libraries and packages they depend on.
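As a minimal sketch (the repository and file names are hypothetical), deployment scripts and configuration can be versioned right alongside application code:

```shell
# Create a repository that versions config and deploy scripts with the code.
mkdir versioned-app && cd versioned-app
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo "region: us-central1" > config.yaml      # hypothetical app configuration
printf 'echo deploying...\n' > deploy.sh      # hypothetical deployment script
git add config.yaml deploy.sh
git commit -q -m "Track config and deploy scripts with the code"
git log --oneline                             # the change is now recorded and auditable
```

Everything that automation later consumes (scripts, configuration, dependencies) lives in the same history as the code it deploys.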

2. Trunk-based development

Trunk-based development has been shown to drive higher software delivery and organizational performance.

There are two main patterns for developer teams working together using version control:

  • The first is feature branches: a developer or a group of developers creates a branch, usually from the trunk, and works in isolation on that branch until the feature is complete. When the feature is ready, they merge the branch back into the trunk.
  • The second is trunk-based development: every developer divides their work into small batches and merges that work into the trunk at least once a day. Feature branches typically involve multiple developers and take days or even weeks of work; in contrast, branches in trunk-based development typically last no more than a few hours, with many developers merging their individual changes into the trunk frequently.
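The trunk-based pattern can be sketched with a few git commands (the repository and branch names are hypothetical; note that `git init -b` requires Git 2.28 or later):

```shell
# Trunk-based flow: a short-lived branch merged back within hours, not weeks.
mkdir tbd-demo && cd tbd-demo
git init -q -b main
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "trunk: initial state"
git checkout -q -b small-batch          # short-lived branch for one small change
git commit -q --allow-empty -m "one small, self-contained change"
git checkout -q main
git merge -q small-batch                # integrate into the trunk the same day
git branch -d small-batch               # branch is deleted once merged
git log --oneline                       # trunk now contains both commits
```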
3. Continuous integration

Software systems are complex, and even an apparently simple, self-contained change to a single file can have unintended effects on the overall system. When a large number of developers work on related systems, coordinating code updates is a hard problem, and changes from different developers can be incompatible. Continuous integration (CI) was created to address these problems. It follows the principle that if something is painful, you should do it more often. By creating rapid feedback loops and ensuring that developers work in small batches, CI enables teams to produce high-quality software, reduces the cost of ongoing software development and maintenance, and increases team productivity.
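The core mechanic is easy to sketch: on every small merge to the trunk, a CI system runs the steps in order and fails fast on the first error. The step commands below are hypothetical stand-ins for real test and build invocations:

```shell
# Minimal sketch of a CI stage: ordered steps, fail fast on the first error.
run_step() {
  echo "step: $1"
  shift
  "$@"              # run the actual command for this step
}
ci_pipeline() {
  run_step "unit tests" true &&     # stand-in for a real test command
  run_step "build package" true &&  # stand-in for a real build command
  echo "pipeline passed"
}
ci_pipeline
```

Because every small batch goes through the same loop, incompatible changes surface within minutes instead of at integration time weeks later.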

4. Deployment automation

Deployment automation allows you to deploy your software to testing and production environments with the push of a button. Automation is important for reducing the risk of production deployments. Moreover, it also provides fast feedback on the quality of your software by enabling teams to do comprehensive testing as soon as possible after changes. Further, an automated deployment process consists of the following inputs:

  • Firstly, packages, built by the continuous integration (CI) process.
  • Secondly, scripts for configuring the environment, deploying the packages, and performing a deployment test.
  • Lastly, environment-specific configuration information.
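The three inputs above can be combined in a deliberately simplified deploy function (all names are hypothetical stand-ins):

```shell
# Hypothetical automated deployment combining the three inputs above.
deploy() {
  pkg=$1    # package produced by the CI process
  env=$2    # selects the environment-specific configuration
  echo "configuring ${env} environment"     # environment-configuration scripts
  echo "deploying ${pkg} to ${env}"         # package deployment
  echo "running deployment test in ${env}"  # post-deploy verification
}
deploy app-1.0.tar.gz staging
```

The same function works unchanged for testing and production; only the environment argument differs, which is what makes push-button deployments low-risk.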
5. Continuous delivery

Continuous delivery is the ability to release changes of all kinds on demand quickly, safely, and sustainably. Teams that practice continuous delivery can release software and make changes to production in a low-risk way, at any time, without impacting users. You can apply the principles and practices of continuous delivery to any software context, including:

  • Firstly, updating services in a complex distributed system.
  • Secondly, upgrading mainframe software.
  • Thirdly, making infrastructure configuration and database schema changes.
  • Lastly, updating firmware automatically.
6. Empowering teams for choosing tools

When an organization allows teams to choose their own tools, it's essential to balance that freedom against the cost of acquiring and supporting the tools. The following are some ways you might empower teams to choose their own tools:

  • Firstly, establish a cross-team baseline: keep the shared toolset large and diverse enough to address the majority of your organization's needs.
  • Secondly, periodically review the tools, evaluating the baseline toolset for effectiveness.
  • Lastly, define a process for exceptions. That is, establish a clearly defined process for departing from the base toolset.
7. Test data management

Automated testing is a key element of modern software delivery practices. The ability to execute a comprehensive set of unit, integration, and system tests is important for verifying that your app or service behaves as expected and is safe to deploy to production. To ensure that your tests exercise realistic scenarios, it's critical to supply them with realistic data.
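One common approach is to generate realistic-looking but synthetic records rather than copying real user data into test environments. A hypothetical sketch:

```shell
# Hypothetical generator: synthetic user records for integration tests.
make_test_users() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo "user${i},user${i}@example.test,active"   # name,email,status
    i=$((i + 1))
  done
}
make_test_users 3 > users.csv   # seed file consumed by the tests
cat users.csv
```

Synthetic data keeps tests realistic without exposing production data to lower-trust environments.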

8. Shifting left on security

Security is a major priority and responsibility. According to DevOps research, high-performing teams spend less time correcting security issues than low-performing teams. By better integrating information security (InfoSec) objectives into daily work, teams can achieve higher levels of software delivery performance and build more secure systems. This idea is called shifting left, because security concerns are addressed earlier in the development cycle.

In software development, there are four phases:

  • Design
  • Develop
  • Test
  • Release

In a traditional software development cycle, testing starts only after development is complete. This typically means a team discovers significant problems, including architectural flaws, that are expensive to fix; after discovering the defects, developers must then track down the contributing factors and work out how to fix them.

Cloud Native Tools

Cloud Native is an approach to software development and deployment that emphasizes building applications with services and tools that are designed to run natively on cloud platforms. Here are some popular cloud native tools:

  1. Kubernetes: Kubernetes is an open-source container orchestration platform that is widely used in cloud native environments. It automates the deployment, scaling, and management of containerized applications, making it easier to manage and operate applications in a cloud environment.
  2. Istio: Istio is an open-source service mesh that provides traffic management, security, and observability for microservices in a cloud environment. It helps developers and operators to manage and operate a large number of microservices with ease.
  3. Prometheus: Prometheus is an open-source monitoring system that collects and stores metrics from a variety of sources. It provides a powerful query language and a flexible data model that makes it easy to monitor complex, distributed systems in a cloud environment.
  4. Fluentd: Fluentd is an open-source data collector that is used to aggregate and forward log data in a cloud environment. It provides a unified logging layer that makes it easy to collect, aggregate, and analyze logs from multiple sources.
  5. Helm: Helm is an open-source package manager for Kubernetes that makes it easy to deploy, manage, and upgrade applications in a cloud native environment. It provides a simple way to define, package, and deploy complex applications with multiple components.
  6. Envoy: Envoy is an open-source edge and service proxy that provides advanced load balancing, routing, and security features for microservices in a cloud environment. It can be used with Kubernetes and other container orchestration platforms to manage and secure network traffic.
  7. Open Policy Agent (OPA): OPA is an open-source policy engine that provides a unified way to enforce policies across cloud native applications. It can be used to define and enforce policies related to security, compliance, and governance.

In summary, cloud native tools are designed to help developers and operators manage and operate applications in a cloud environment. Popular cloud native tools include Kubernetes, Istio, Prometheus, Fluentd, Helm, Envoy, and Open Policy Agent.

DevOps products and integrations

Google Cloud allows you to create and deploy new cloud applications, store artifacts, and monitor app security and reliability. Its DevOps products and integrations include the following:

1. Cloud Build

This is for defining custom workflows for building, testing, and deploying across multiple environments.

2. Artifact Registry

This is for storing, managing, and securing your container images and language packages.

3. Binary Authorization

This makes sure that only trusted container images are deployed on Google Kubernetes Engine.

4. Tekton

This refers to an open-source framework for creating continuous integration and delivery (CI/CD) systems.

5. Spinnaker

This is an open-source, multi-cloud continuous delivery platform.

6. Operations Suite

This is for monitoring, troubleshooting, and improving infrastructure and app performance.

Above, we have covered the basic concepts and capabilities of DevOps in GCP. In the next section, we will get started with DevOps by using some of its use cases.

Getting Started with the use cases for DevOps

1. Exploring the Cloud Build

In this section, we will learn how to use Cloud Build to build a Docker image and push it to Artifact Registry, which provides a single location for managing private packages and Docker container images. First, build the image using a Dockerfile, the Docker configuration file. Then build the same image using a Cloud Build configuration file.

Preparing source files

We need some sample source code to package into a container image. Here, we will create a simple shell script and a Dockerfile (a text document containing the instructions Docker uses to create an image).

  • Firstly, open a terminal window.
  • Secondly, create a new directory named quickstart-docker and navigate into it:
mkdir quickstart-docker
cd quickstart-docker
  • Thirdly, create a file named quickstart.sh with the following contents:
echo "Hello, world! The time is $(date)."
  • Then, create a file named Dockerfile with the following contents:
FROM alpine
COPY quickstart.sh /
CMD ["/quickstart.sh"]
  • Lastly, run the command to make quickstart.sh executable:
chmod +x quickstart.sh
Creating a Docker repository in Artifact Registry
  • Firstly, create a new Docker repository named quickstart-docker-repo in the location us-central1 with the description "Docker repository":
gcloud artifacts repositories create quickstart-docker-repo --repository-format=docker \
    --location=us-central1 --description="Docker repository"
  • Then, confirm that your repository was created:
gcloud artifacts repositories list
Building using Dockerfile

Cloud Build lets you build a Docker image using just a Dockerfile, without a separate Cloud Build config file.

To build using a Dockerfile:

  • Firstly, get your Cloud project ID by running the command:
gcloud config get-value project
  • Secondly, replace project-id below with your Cloud project ID, and run the following command from the directory containing quickstart.sh and the Dockerfile:
gcloud builds submit --tag us-central1-docker.pkg.dev/project-id/quickstart-docker-repo/quickstart-image:tag1
  • After the build completes, you will see output similar to the following:
DONE
------------------------------------------------------------------------------------------------------------------------------------
ID                                    CREATE_TIME                DURATION  SOURCE   IMAGES     STATUS
545cb89c-f7a4-4652-8f63-579ac974be2e  2020-11-05T18:16:04+00:00  16S       gs://gcb-docs-project_cloudbuild/source/1604600163.528729-b70741b0f2d0449d8635aa22893258fe.tgz  us-central1-docker.pkg.dev/gcb-docs-project/quickstart-docker-repo/quickstart-image:tag1  SUCCESS
Building using a build config file

Here, we will use a Cloud Build config file to build the same Docker image. The build config file instructs Cloud Build to perform tasks according to your specifications.

  • Firstly, in the same directory that contains quickstart.sh and the Dockerfile, create a file named cloudbuild.yaml with the following contents. This file is your build config file; at build time, Cloud Build automatically replaces $PROJECT_ID with your project ID.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1', '.' ]
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1'
  • Secondly, begin the build by running the following command:
gcloud builds submit --config cloudbuild.yaml
  • After the build completes, you will see output similar to the following:
DONE
------------------------------------------------------------------------------------------------------------------------------------
ID                                    CREATE_TIME                DURATION  SOURCE          IMAGES          STATUS
046ddd31-3670-4771-9336-8919e7098b11  2020-11-05T18:24:02+00:00  15S       gs://gcb-docs-project_cloudbuild/source/1604600641.576884-8153be22c94d438aa86c78abf11403eb.tgz  us-central1-docker.pkg.dev/gcb-docs-project/quickstart-docker-repo/quickstart-image:tag1  SUCCESS
Viewing the build details
  • Firstly, in the Google Cloud Console open the Cloud Build page.
  • Secondly, select your project and click Open. Here, the Build history page will appear.
  • Thirdly, click on a particular build, and then, Build details page will appear.
  • Lastly, under Build Summary, click Build Artifacts to view the artifacts of your build.

2. Jenkins on Kubernetes Engine

Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines. Kubernetes Engine is a hosted version of Kubernetes, a powerful cluster manager and orchestration system for containers. For setting up a continuous delivery (CD) pipeline, deploying Jenkins on Kubernetes Engine offers important benefits over a standard VM-based deployment:

  • Firstly, when your build process uses containers, one virtual host can run jobs against different operating systems.
  • Secondly, Kubernetes Engine provides ephemeral build executors, allowing each build to run in a clean environment.
  • Lastly, Kubernetes Engine supports the Google global load balancer for routing web traffic to your instance.
Creating Jenkins services

Jenkins exposes two services that the cluster requires access to. Deploy these services separately so you can manage and name them individually.

  • Firstly, an externally exposed NodePort service on port 8080. This allows pods and external users to access the Jenkins user interface, and it can be load-balanced by an HTTP load balancer.
  • Secondly, an internal, private ClusterIP service on port 50000. Jenkins executors use this service to communicate with the Jenkins controller from inside the cluster.
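As a sketch, the two services above might be declared with manifests like these (the service names and the app: jenkins label are hypothetical; adjust them to match your deployment):

```yaml
# Hypothetical manifest: externally exposed NodePort service for the Jenkins UI.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
---
# Hypothetical manifest: internal ClusterIP service for executor communication.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
  - name: executors
    port: 50000
    targetPort: 50000
```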
Connecting to Jenkins

After creating the Jenkins pod, you can create a load-balancer endpoint to connect to it from outside Google Cloud. Use the following best practices:

  • Firstly, use a Kubernetes Ingress resource for an easy-to-configure L7 load balancer with SSL termination.
  • Secondly, provide SSL certificates to the load balancer using Kubernetes secrets: supply tls.crt and tls.key values, and reference them in your Ingress resource configuration.
Configuring Jenkins
  • Firstly, after connecting to Jenkins for the first time, it’s essential to secure Jenkins. For this, use the Jenkins standard security setup for a simple procedure that supports an internal user database. 
  • Secondly, install plugins. The following plugins enhance the interaction between Jenkins and Kubernetes Engine:
    • Firstly, the Kubernetes plugin. This allows the use of Kubernetes service accounts for authentication, and building labeled executor configurations with different base images. 
    • Secondly, the Google Authenticated Source plugin. This allows the use of service account credentials when accessing Cloud Platform services like Cloud Source Repositories.
Building Docker Images in Jenkins

Cloud Build can be used from inside Jenkins jobs to build Docker images without hosting your own Docker daemon. To do this, your Jenkins job must provide service account credentials that have been granted the cloudbuild.builds.editor role, for example via a Jenkins Pipeline file.

Kaniko is another option for users who want to build containers inside their clusters. It does not require a Docker daemon in order to build and push images to a remote registry, and it can also be used from Jenkins.

3. Monitoring a Compute Engine instance

In this, we will learn the process of monitoring a Compute Engine virtual machine (VM) instance with Cloud Monitoring.

Creating a Compute Engine instance
  • Firstly, go to Compute in the Cloud Console. Then, select Compute Engine.
  • Secondly, for creating a VM instance, click Create.
  • Thirdly, fill in the fields for your instance:
    • Firstly, enter lamp-1-vm in the Name field.
    • Secondly, select Small in the Machine type field.
    • Thirdly, make sure that the Boot disk is configured for Debian GNU/Linux.
    • Then, select both Allow HTTP traffic and Allow HTTPS traffic in the Firewall field.
  • After that, click Create.
  • Now, click SSH for opening a terminal to your instance, in the Connect column.
  • Then, update the package lists on your instance.
sudo apt-get update
  • Next, set up the Apache2 HTTP Server.
sudo apt-get install apache2 php7.0
  • Lastly, open your browser and connect to your Apache2 HTTP server by using the URL http://[External IP]. Then, replace [External IP] with the external IP address of your Compute Engine instance. 
Installing agents

The Cloud Monitoring and Logging agents move logs and metrics from your VM instance to Monitoring and Logging:

  • Firstly, move to the terminal connected to your VM instance or create a new one.
  • Secondly, install and begin the Cloud Monitoring agent:
  • Put in the package repository and update the package list:
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
  • Then, install the agent:
sudo apt-get install stackdriver-agent
  • Begin the agent:
sudo service stackdriver-agent start
  • Thirdly, install, configure, and begin the Cloud Logging agent:
  • Put in the package repository and update the package list:
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh
sudo apt-get update
  • Then, install the agent:
sudo apt-get install google-fluentd
  • After that, install the default agent configuration for ingesting structured data to Cloud Logging:
sudo apt-get install google-fluentd-catch-all-config-structured
  • Note that the Cloud Logging agent must be configured to ingest either structured or unstructured data.
  • Lastly, begin the agent:
sudo service google-fluentd start
Creating a dashboard and chart

Create a chart and a dashboard for displaying the metrics collected by Monitoring:

Creating a dashboard using the preview editor:

  • Firstly, go to the Monitoring page in the Google Cloud Console.
  • Secondly, click Create dashboard on the Dashboards Overview page.
  • Thirdly, to add a Line chart widget to the dashboard, drag its entry from the Widget library to the graph area.
  • Then, in the section titled What data do you want to view?, perform the following:
    • Firstly, for the resource type, select GCE VM Instance.
    • Secondly, enter CPU, and then choose CPU Utilization for the metric.
  • Lastly, for adding a second Line chart widget to the dashboard:
    • Firstly, click the dashboard toolbar to enable the Widget library
    • Then, select a plot type
    • After that, drag that entry from the Widget library to the graph area. 
Viewing your logs
  • Firstly, go to the dashboard that shows the chart of interest.
  • Secondly, click More on the chart, and then click View logs.
  • On the other hand, you can go to Logging, and then define the filter parameters:
    • Firstly, go to Logging in the Cloud Console.
    • Secondly, modify the Logs Viewer settings for seeing the logs you want:
      • Firstly, click the menu and select Clear filters. Then, go back to basic mode.
      • Secondly, select GCE VM Instance, lamp-1-vm in the first drop-down list.
      • Then, select syslog in the second drop-down list. And, click OK.
      • Lastly, leave the other fields with their default values. 

Final Words

By enabling high performance in software development and delivery, Google DevOps benefits many top organizations and businesses. DevOps on Google Cloud draws on DORA research capabilities that help customers improve their DevOps practices, continuously improve delivery, and boost software stability. So, if you are interested in learning DevOps best practices, explore the Google DevOps documentation and start learning with its resources and tutorials.

Enhance your developer skills and become Google Professional Cloud DevOps Engineer!

Google Professional Cloud DevOps Engineer   Practice Tests