Setting up a CI/CD Pipeline on GCP with Terraform


In this article, we will deploy a CI/CD pipeline as code, using Terraform to provision the pipeline infrastructure on Google Cloud Platform (GCP). Teams that work with automated CI/CD pipelines are more agile, deploy code changes more quickly, and continuously improve software quality.

CI/CD pipelines generally consist of stages for building, unit/integration testing, publishing, and deploying applications on multiple environments (e.g., dev, staging, prod). However, we will focus on a single environment in this article.

The complete source code and terraform files can be found in my GitHub repository: https://github.com/genekuo/pipeline.git

First, we need to complete the following prerequisites on GCP before moving on to the Terraform configuration in the next section.

Please clone the code from https://github.com/genekuo/pipeline.git and change to the infrastructure directory before running the following commands. We will generate the access.json file that we will need in the infrastructure directory.

  • Creating a GCP project as follows
PROJECT_ID=cd-pipeline # You can define your own project ID
gcloud projects create $PROJECT_ID
  • Linking the billing account to the project
BILLING_ACCOUNT=$(gcloud beta billing accounts list --filter open=true --uri | awk -F / '{print $NF}')
gcloud beta billing projects link $PROJECT_ID --billing-account $BILLING_ACCOUNT
  • Creating a service account in the project
SERVICE_ACCOUNT_NAME=pipeline
gcloud config set core/project $PROJECT_ID
gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME
  • Granting a project owner IAM role to the service account
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com --role roles/owner
  • Creating and downloading the access key for the service account
gcloud iam service-accounts keys create access.json --iam-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
  • Enabling the base services APIs
gcloud services enable cloudresourcemanager.googleapis.com serviceusage.googleapis.com

We will push our source code into one end of the pipeline, have it go through the stages and tasks defined in the pipeline, and finally deploy a Docker container on GCP to get feedback from users. The main GCP services we will use are as follows:

  • Cloud Source Repository: A Git source repository for source code version control.
  • Cloud Build: A CI/CD service for testing, building, releasing, and deploying code.
  • Container Registry: A service for storing container images.
  • Cloud Run: A service based on Knative that runs and manages serverless workloads, automatically scaling, load-balancing, and resolving DNS for containers. We will use Cloud Run to simplify this pipeline scenario.
  • Variables

We would like this code to be reusable, so we define some variables in variables.tf and provide values in terraform.tfvars.

variables.tf

variable "project_id" {
  description = "The GCP project id"
  type        = string
}

variable "region" {
  default     = "us-central1"
  description = "The GCP region"
  type        = string
}

variable "namespace" {
  description = "The namespace for resource naming"
  type        = string
}

terraform.tfvars

project_id = "cd-pipeline"
namespace = "pipeline"
region = "us-central1"

We then declare the Google provider in providers.tf and make use of variables defined above.

providers.tf

provider "google" {
  credentials = file("access.json")
  project     = var.project_id
  region      = var.region
}
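To keep runs reproducible, we could also pin the provider version in a terraform block. This is a sketch; the version constraint below is an assumption, so pick one that matches your setup:

```hcl
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      # Example constraint, not from the original article.
      version = "~> 4.0"
    }
  }
}
```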
  • google_project_service

We will use a Terraform resource called google_project_service to enable the APIs required for constructing the pipeline. In main.tf, we define a local variable listing the services to enable, and use the for_each meta-argument with that set of services to create a dynamic configuration.

main.tf

locals {
  services = [
    "sourcerepo.googleapis.com",
    "cloudbuild.googleapis.com",
    "run.googleapis.com",
    "iam.googleapis.com",
  ]
}
  • Resource Provisioners

We will use a creation-time provisioner and a destruction-time provisioner to coordinate enabling and disabling the list of services. This avoids the inconsistency where Terraform marks a service as "created" when we run terraform apply even though it needs more time before it is actually ready.

The following listing defines two local-exec provisioners: one waits 60 seconds after enabling each Google project service, and the other waits 15 seconds before each service is disabled during destruction.

main.tf

resource "google_project_service" "enabled_service" {
  for_each = toset(local.services)
  project  = var.project_id
  service  = each.key

  provisioner "local-exec" {
    command = "sleep 60"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sleep 15"
  }
}
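As an aside, an alternative to the sleep provisioners is the time_sleep resource from the hashicorp/time provider, which models the delay as a first-class resource. A minimal sketch (not part of the original listing):

```hcl
# Requires the hashicorp/time provider.
resource "time_sleep" "wait_for_services" {
  depends_on = [google_project_service.enabled_service]

  # Wait 60 seconds after the services are enabled before
  # dependent resources are created.
  create_duration = "60s"
}
```

Resources that need the APIs ready would then depend on time_sleep.wait_for_services instead of the services directly.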
  • Pipeline

For the construction of the pipeline, we will configure the following resources: google_sourcerepo_repository, google_cloudbuild_trigger, and google_cloud_run_service. These resources need to depend on the previously enabled google_project_service resources.

  • google_sourcerepo_repository

main.tf

resource "google_sourcerepo_repository" "repo" {
  depends_on = [
    google_project_service.enabled_service["sourcerepo.googleapis.com"]
  ]
  name = "${var.namespace}-repo"
}
  • google_cloudbuild_trigger

We will then configure the pipeline steps to be triggered by a push to the master branch of the source repository created by the google_sourcerepo_repository block defined previously.

These steps consist of building/testing source code, building Docker images using Dockerfile, publishing the image to the container registry, and deploying a container onto Cloud Run.

main.tf

resource "google_cloudbuild_trigger" "trigger" {
  depends_on = [
    google_project_service.enabled_service["cloudbuild.googleapis.com"]
  ]

  trigger_template {
    branch_name = "master"
    repo_name   = google_sourcerepo_repository.repo.name
  }

  build {
    step {
      name = "gcr.io/cloud-builders/go"
      args = ["test"]
      env  = ["PROJECT_ROOT=${var.namespace}"]
    }
    step {
      name = "gcr.io/cloud-builders/docker"
      args = ["build", "-t", local.image, "."]
    }
    step {
      name = "gcr.io/cloud-builders/docker"
      args = ["push", local.image]
    }
    step {
      name = "gcr.io/cloud-builders/gcloud"
      args = ["run", "deploy", google_cloud_run_service.service.name, "--image", local.image, "--region", var.region, "--platform", "managed", "-q"]
    }
  }
}
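Note that the build steps reference local.image, which is not defined in the locals block shown earlier. A plausible definition would look like the following; the gcr.io path is an assumption, so adjust it to match your registry:

```hcl
locals {
  # Hypothetical image tag; assumes Container Registry in the current project.
  image = "gcr.io/${var.project_id}/${var.namespace}"
}
```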
  • google_project_iam_member

We will need to give the Cloud Build service account the run.admin and iam.serviceAccountUser roles to allow Cloud Build to deploy services on Cloud Run after google_cloudbuild_trigger is created.

main.tf

# Look up the project number used in the Cloud Build service account email
data "google_project" "project" {
}

resource "google_project_iam_member" "cloudbuild_roles" {
  depends_on = [google_cloudbuild_trigger.trigger]
  for_each   = toset(["roles/run.admin", "roles/iam.serviceAccountUser"])
  project    = var.project_id
  role       = each.key
  member     = "serviceAccount:${data.google_project.project.number}@cloudbuild.gserviceaccount.com"
}
  • google_cloud_run_service

We will configure the Cloud Run service and specify a default hello image to run as a container. This prevents an error when running terraform apply later, since we do not have our own image yet.

main.tf

resource "google_cloud_run_service" "service" {
  depends_on = [
    google_project_service.enabled_service["run.googleapis.com"]
  ]
  name     = var.namespace
  location = var.region

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"
      }
    }
  }
}
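One caveat: once the pipeline starts deploying new revisions, a subsequent terraform apply would try to roll the service back to the hello image defined above. A possible workaround, sketched here and not part of the original listing, is to tell Terraform to ignore changes to the template:

```hcl
resource "google_cloud_run_service" "service" {
  # ... same configuration as above ...

  # Ignore revisions that Cloud Build deploys outside of Terraform,
  # so terraform apply does not revert the image.
  lifecycle {
    ignore_changes = [template]
  }
}
```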
  • google_cloud_run_service_iam_policy

To enable user access to the web service after the pipeline deploys a container on the Cloud Run service, we will define an IAM policy that grants all users the run.invoker role and attach to the Cloud Run service created previously.

data "google_iam_policy" "admin" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "policy" {
  location    = var.region
  project     = var.project_id
  service     = google_cloud_run_service.service.name
  policy_data = data.google_iam_policy.admin.policy_data
}
  • Output

We will finally need to output the URLs of the source repository and the Cloud Run service. The URLs will be used later when we demonstrate how we interact with the source repository with code changes that trigger the pipeline steps, and also allow us to access the deployed web service.

outputs.tf

output "urls" {
  value = {
    repo = google_sourcerepo_repository.repo.url
    app  = google_cloud_run_service.service.status[0].url
  }
}

After we define the infrastructure for the pipeline in main.tf, we can run the following Terraform commands from the infrastructure directory to build the pipeline on GCP.

terraform init
terraform plan
terraform apply -auto-approve

The sample source code we are using is in the service directory, which contains main.go, main_test.go, Dockerfile, go.mod, and go.sum. It is a simple greeting web service that responds "Hello World!" when we access it with curl using the urls.app URL from outputs.tf.

We will trigger and run our CI/CD pipeline set up earlier in the event of source code changes and verify the web service deployed on the Cloud Run service.

From the service directory, we will run the following Git commands to initiate the CI/CD pipeline for the first time.

git init
git add -A
git commit -m "initial commit"
git config --global credential.https://source.developers.google.com.helper gcloud.sh
git remote add google <value from urls.repo attribute from outputs.tf previously>
gcloud auth login
git push --all google

When the pipeline completes, you can use the following command with the application URL (from the urls.app output attribute defined in outputs.tf) to verify that the CI/CD scenario is successful.

curl <value of the urls.app attribute>

We can now go to Cloud Build Dashboard on the GCP console to view the build result from our pipeline and click to view Build History and see more detailed results from each step.

We designed the CI/CD pipeline infrastructure using Terraform features such as providers, resource provisioners, and HCL configuration.

We also made use of several GCP services, such as Cloud Source Repositories, Cloud Build, and Cloud Run, together with IAM policies and permissions, to automate the CI/CD pipeline and run the application as a container.

More advanced CI/CD pipelines can meet additional requirements and cover more sophisticated scenarios; this one serves as a basic pipeline that can be extended further.

Thanks for reading.
