Single-Cluster Reference Architecture for Production Purposes

⚠️ Gitpod Self-hosted has been replaced with Gitpod Dedicated, a self-hosted, single-tenant managed service that runs in your private cloud account but is managed by us.
Try out Gitpod Dedicated.

Status: Alpha
Intended for: Continuous usage of Gitpod at a company-wide scale in a reliable way by leveraging popular cloud provider services such as S3 and RDS.
Limitations:
  • This is bound to a single cluster. Deploying in several regions currently requires setting up several Gitpod installations.
  • Creates external dependencies for Gitpod components (object storage, registry, database).
  • This is not highly available and requires downtime to upgrade (high availability requires a governed workspace cluster, which is beyond the scope of this reference architecture).
Terraform:
  • Example Terraform configuration for GCP
  • Example Terraform configuration for AWS
Cost Estimates: High-level cost estimates*:
  • GCP
  • AWS

This guide describes a single-cluster reference architecture for Gitpod aimed at production environments: continuous deployments of Gitpod used in anger by your engineers. It consists of a Kubernetes cluster, cert-manager, external MySQL database, external OCI image registry, and external object storage. It includes instructions on how to set up this reference architecture on the officially supported cloud providers.

This reference architecture can be used as a blueprint for your Gitpod installation: Start with this reference architecture and adapt it to your needs. The reference architecture as described in this guide is what Gitpod supports, and is used to test against every self-hosted Gitpod release.

To use Gitpod, you also need a Git source code management system (SCM) like GitLab, GitHub, or Bitbucket. You will find the supported SCMs in the product compatibility matrix; setting up your own SCM is beyond the scope of this guide. However, you can simply use the cloud versions of GitLab, GitHub, or Bitbucket, as well as an existing installation in your corporate network.

Overview

Reference Architecture Overview

The diagram above gives an overview of the reference architecture. Starting from the user’s workstation, access is provided using a layer 4 (L4) load balancer. An internal proxy distributes this traffic within Gitpod.

The cluster-external components are accessed by a specific set of components as shown in the diagram. The external components are:

  • MySQL database
  • Source Control Management (SCM), e.g. GitLab, GitHub, GitHub Enterprise, Bitbucket, or Bitbucket Server
  • Object Storage, e.g. Google Cloud Storage or Amazon S3
  • OCI Image Registry, e.g. Google Artifact Registry
    Note: This registry is used by Gitpod to cache images and to store images it builds on behalf of users. This is not the registry that contains the images of Gitpod’s services.

In addition, the diagram indicates the different node pools within the cluster. Notice that we separate any user workloads from Gitpod’s services (except for ws-daemon). In this reference architecture, we create two node pools: the services node pool (upper half in the diagram) and the workspaces node pool (lower half in the diagram).

Cloud Provider Preparations

You need to prepare your workstation and your cloud provider (e.g. creating a project and preparing service accounts) to be able to replicate this reference architecture.

Independent of the cloud provider you are using, you need to have kubectl installed on your workstation and configured to access your cluster after creation.
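
As a quick sanity check on your workstation (the cluster context will only exist once the cluster below has been created), you can run:

# Verify kubectl is installed and see which cluster it currently points at
kubectl version --client
kubectl config current-context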

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

To deploy Gitpod on Google Kubernetes Engine (GKE) on the Google Cloud Platform (GCP), you need to create and configure a project for your installation. In this guide, we give examples of how to create the needed resources with the command-line tool gcloud. To follow these examples, make sure you have installed the gcloud CLI and are logged in to your Google Cloud account. You can also use the GCP Console or the API instead; in that case, please refer to the linked Google docs.

First, create a GCP project and enable billing (you have to enable billing to enable GKE). You can freely choose a name for your project (hereinafter referred to by the environment variable PROJECT_NAME). You also need the billing account ID (referred to as BILLING_ACCOUNT). To see available IDs, run gcloud alpha billing accounts list.

PROJECT_NAME=gitpod
gcloud projects create "${PROJECT_NAME}" --set-as-default

BILLING_ACCOUNT=0X0X0X-0X0X0X-0X0X0X
gcloud alpha billing projects link "${PROJECT_NAME}" \
    --billing-account "${BILLING_ACCOUNT}"

You can verify that the proper project has been set as default with this command:

gcloud config get-value project

After creating your project, you need to enable the following services in it:

Services
  • cloudbilling.googleapis.com (Google Billing API): billing is required to set up a GKE cluster.
  • containerregistry.googleapis.com (Container Registry): lets Gitpod push workspace images to that registry.
  • iam.googleapis.com (Identity and Access Management (IAM) API): needed to create and use service accounts for the setup.
  • compute.googleapis.com (Compute Engine API): Compute Engine provides the virtual machines (VMs) for the Kubernetes cluster.
  • container.googleapis.com (Kubernetes Engine API): the Kubernetes engine is where we will deploy Gitpod.
  • dns.googleapis.com (Cloud DNS API): Cloud DNS is used in this reference architecture for domain name resolution.
  • sqladmin.googleapis.com (Cloud SQL Admin API): Cloud SQL for MySQL is used as the database service in this reference architecture.

Run these commands to enable the services:

gcloud services enable cloudbilling.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable dns.googleapis.com
gcloud services enable sqladmin.googleapis.com

Now, you are prepared to create your Kubernetes cluster.

Kubernetes Cluster

The heart of this reference architecture is a Kubernetes cluster where all Gitpod components are deployed to. This cluster consists of three node pools:

  1. Services Node Pool: The Gitpod “app” with all its services is deployed to these nodes. These services provide the users with the dashboard and manage the provisioning of workspaces.
  2. Regular Workspaces Node Pool: Gitpod deploys the actual workspaces (where the actual developer work happens) to these nodes.
  3. Headless Workspaces Node Pool: Gitpod deploys the image-build and prebuild workspaces (whose build work generally demands more CPU and disk) to these nodes.

Gitpod services, headless, and regular workspaces have vastly differing resource and isolation requirements. These workloads are separated onto different node pools to provide a better quality of service and security guarantees.

You need to assign the following labels to the node pools to enforce that the Gitpod components are scheduled to the proper node pools:

Node Pool Labels
  • Services Node Pool: gitpod.io/workload_meta=true, gitpod.io/workload_ide=true, gitpod.io/workload_workspace_services=true
  • Regular Workspace Node Pool: gitpod.io/workload_workspace_regular=true
  • Headless Workspace Node Pool: gitpod.io/workload_workspace_headless=true
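
The cloud-provider commands below set these labels at node pool creation time. If you manage your node pools by other means, a minimal sketch of applying the labels to a single services node with kubectl (the node name is a placeholder):

# Label an existing node so Gitpod schedules its services onto it
kubectl label node <node-name> \
    gitpod.io/workload_meta=true \
    gitpod.io/workload_ide=true \
    gitpod.io/workload_workspace_services=true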

The following table gives an overview of the node types for the different cloud providers that are used by this reference architecture.

Node Pool                       GCP              AWS           Azure
Services Node Pool              n2d-standard-4   m6i.xlarge    Standard_D4_v4
Regular Workspace Node Pool     n2d-standard-16  m6i.4xlarge   Standard_D16_v4
Headless Workspace Node Pool    n2d-standard-16  m6i.4xlarge   Standard_D16_v4
Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

First, we create a service account for the cluster. The service account needs to have the following roles:

Roles
roles/storage.admin
roles/logging.logWriter
roles/monitoring.metricWriter
roles/container.admin

Run the following commands to create the service account:

GKE_SA=gitpod-gke
GKE_SA_EMAIL="${GKE_SA}"@"${PROJECT_NAME}".iam.gserviceaccount.com
gcloud iam service-accounts create "${GKE_SA}" --display-name "${GKE_SA}"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/storage.admin"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/monitoring.metricWriter"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" --member serviceAccount:"${GKE_SA_EMAIL}" --role="roles/container.admin"

After that, we create a Kubernetes cluster.

Image Type: UBUNTU_CONTAINERD
Machine Type: e2-standard-2
Cluster Version: choose the latest from the regular channel
Enable: Autoscaling, Autorepair, IP Alias, Network Policy
Disable: Autoupgrade
Metadata: disable-legacy-endpoints=true
Create Subnetwork: gitpod-${CLUSTER_NAME}
Max Pods per Node: 110
Default Max Pods per Node: 110
Min Nodes: 0
Max Nodes: 1
Addons: HorizontalPodAutoscaling, NodeLocalDNS, NetworkPolicy
Region: choose your region and zones
CLUSTER_NAME=gitpod
REGION=us-central1
GKE_VERSION=1.22.12-gke.1200

gcloud container clusters \
    create "${CLUSTER_NAME}" \
    --disk-type="pd-ssd" --disk-size="50GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="e2-standard-2" \
    --cluster-version="${GKE_VERSION}" \
    --region="${REGION}" \
    --service-account "${GKE_SA_EMAIL}" \
    --num-nodes=1 \
    --no-enable-basic-auth \
    --enable-autoscaling \
    --enable-autorepair \
    --no-enable-autoupgrade \
    --enable-ip-alias \
    --enable-network-policy \
    --create-subnetwork name="gitpod-${CLUSTER_NAME}" \
    --metadata=disable-legacy-endpoints=true \
    --max-pods-per-node=110 \
    --default-max-pods-per-node=110 \
    --min-nodes=0 \
    --max-nodes=1 \
    --addons=HorizontalPodAutoscaling,NodeLocalDNS,NetworkPolicy

Unfortunately, you cannot create a cluster without the default node pool. Since we need custom node pools, you have to remove the default one.

gcloud --quiet container node-pools delete default-pool \
    --cluster="${CLUSTER_NAME}" --region="${REGION}"

Now, we are creating a node pool for the Gitpod services.

Image Type: UBUNTU_CONTAINERD
Machine Type: n2d-standard-4
Enable: Autoscaling, Autorepair, IP Alias, Network Policy
Disable: Autoupgrade
Metadata: disable-legacy-endpoints=true
Create Subnetwork: gitpod-${CLUSTER_NAME}
Number of Nodes: 1
Min Nodes: 1
Max Nodes: 4
Max Pods per Node: 110
Scopes: gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region: choose your region and zones
Node Labels: gitpod.io/workload_meta=true, gitpod.io/workload_ide=true, gitpod.io/workload_workspace_services=true
gcloud container node-pools \
    create "workload-services" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="100GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-4" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_meta=true,gitpod.io/workload_ide=true,gitpod.io/workload_workspace_services=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=4 \
    --region="${REGION}"

We are also creating a node pool for the Gitpod regular workspaces.

Image Type: UBUNTU_CONTAINERD
Machine Type: n2d-standard-16
Enable: Autoscaling, Autorepair, IP Alias, Network Policy
Disable: Autoupgrade
Metadata: disable-legacy-endpoints=true
Create Subnetwork: gitpod-${CLUSTER_NAME}
Number of Nodes: 1
Min Nodes: 1
Max Nodes: 50
Max Pods per Node: 110
Scopes: gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region: choose your region and zones
Node Labels: gitpod.io/workload_workspace_regular=true
gcloud container node-pools \
    create "workload-regular-workspaces" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="512GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-16" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_workspace_regular=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=50 \
    --region="${REGION}"

We are also creating a node pool for the Gitpod headless workspaces.

Image Type: UBUNTU_CONTAINERD
Machine Type: n2d-standard-16
Enable: Autoscaling, Autorepair, IP Alias, Network Policy
Disable: Autoupgrade
Metadata: disable-legacy-endpoints=true
Create Subnetwork: gitpod-${CLUSTER_NAME}
Number of Nodes: 1
Min Nodes: 1
Max Nodes: 50
Max Pods per Node: 110
Scopes: gke-default, https://www.googleapis.com/auth/ndev.clouddns.readwrite
Region: choose your region and zones
Node Labels: gitpod.io/workload_workspace_headless=true
gcloud container node-pools \
    create "workload-headless-workspaces" \
    --cluster="${CLUSTER_NAME}" \
    --disk-type="pd-ssd" \
    --disk-size="512GB" \
    --image-type="UBUNTU_CONTAINERD" \
    --machine-type="n2d-standard-16" \
    --num-nodes=1 \
    --no-enable-autoupgrade \
    --enable-autorepair \
    --enable-autoscaling \
    --metadata disable-legacy-endpoints=true \
    --scopes="gke-default,https://www.googleapis.com/auth/ndev.clouddns.readwrite" \
    --node-labels="gitpod.io/workload_workspace_headless=true" \
    --max-pods-per-node=110 \
    --min-nodes=1 \
    --max-nodes=50 \
    --region="${REGION}"

Now, you can connect kubectl to your newly created cluster.

gcloud container clusters get-credentials --region="${REGION}" "${CLUSTER_NAME}"

After that, you need to create cluster role bindings to allow the current user to create new RBAC rules.

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user="$(gcloud config get-value core/account)"
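
You can verify that your user may now manage RBAC resources:

# Should print "yes" once the binding is in place
kubectl auth can-i create clusterroles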

Networking

For each Gitpod installation, you need a domain. In this guide, we use gitpod.example.com as a placeholder for your domain. Gitpod also uses different subdomains for some components as well as dynamically for the running workspaces. That’s why you need to configure your DNS server and your TLS certificates for your Gitpod domain with the following wildcards:

gitpod.example.com
*.gitpod.example.com
*.ws.gitpod.example.com
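
Once DNS is configured (see the External DNS section below), you can verify that these names resolve to your load balancer, for example with dig (foo is an arbitrary test subdomain):

# Both queries should return the load balancer address
dig +short gitpod.example.com
dig +short foo.ws.gitpod.example.com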

Cluster ports

The entry point for all traffic is the proxy component which has a service of type LoadBalancer that allows inbound traffic on ports 80 (HTTP) and 443 (HTTPS) as well as port 22 (SSH access to the workspaces).

SSH access is required to work with desktop IDEs, such as VS Code Desktop and JetBrains via JetBrains Gateway. To enable SSH, your load balancer needs to be capable of working with L4 protocols.
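
After Gitpod is installed, you can look up the external address the proxy service received; a sketch, assuming Gitpod was installed into the namespace gitpod:

# Print the external IP assigned to the proxy LoadBalancer service
kubectl get service proxy --namespace gitpod \
    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'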

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

In this guide, we use load balancing through a standalone network endpoint group (NEG). For this, the Gitpod proxy service will get the following annotation by default:

cloud.google.com/neg: '{"exposed_ports": {"80":{},"443": {}}}'

For Gitpod, we support Calico as the CNI only. Make sure that you DO NOT use GKE Dataplane V2; that is, do not add the --enable-dataplane-v2 flag during cluster creation.

External DNS

You also need to configure your DNS server. If you have your own DNS server for your domain, make sure the domain with all wildcards points to your load balancer.

Creating a dedicated DNS zone is recommended when using cert-manager or External DNS, but it is not required. A pre-existing DNS zone may be used as long as the cert-manager and/or External DNS services are authorized to manage DNS records within that zone. If you are providing your own TLS certificates and will manually create A records pointing to Gitpod’s public load balancer IP addresses, then creating a zone is unnecessary.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

In this reference architecture, we use Google Cloud DNS for domain name resolution. To automatically configure Cloud DNS, we use External DNS for Kubernetes.

First, we need a service account with the role roles/dns.admin. This service account is used by External DNS to manage DNS records and by cert-manager to alter DNS records for DNS-01 resolution.

DNS_SA=gitpod-dns01-solver
DNS_SA_EMAIL="${DNS_SA}"@"${PROJECT_NAME}".iam.gserviceaccount.com
gcloud iam service-accounts create "${DNS_SA}" --display-name "${DNS_SA}"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
    --member serviceAccount:"${DNS_SA_EMAIL}" --role="roles/dns.admin"

Save the service account key to the file ./dns-credentials.json:

gcloud iam service-accounts keys create --iam-account "${DNS_SA_EMAIL}" \
    ./dns-credentials.json

After that, we create a managed zone.

DOMAIN=gitpod.example.com
gcloud dns managed-zones create "${CLUSTER_NAME}" \
    --dns-name "${DOMAIN}." \
    --description "Automatically managed zone by kubernetes.io/external-dns"

Now we are ready to install External DNS. Please refer to the External DNS GKE docs.

Example of how to install External DNS with Helm:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade \
    --atomic \
    --cleanup-on-fail \
    --create-namespace \
    --install \
    --namespace external-dns \
    --reset-values \
    --set provider=google \
    --set google.project="${PROJECT_NAME}" \
    --set logFormat=json \
    --set google.serviceAccountSecretKey=dns-credentials.json \
    --wait \
    external-dns \
    bitnami/external-dns

Depending on your existing DNS setup, you most probably need to delegate your domain to the nameservers of your Cloud DNS zone. Run the following command to list the nameservers used by your Cloud DNS setup:

gcloud dns managed-zones describe ${CLUSTER_NAME} --format json | jq '.nameServers'

cert-manager

Gitpod uses TLS to secure external traffic bound for Gitpod as well as to identify, authorize, and secure internal traffic between Gitpod’s internal components. While you can provide your own TLS certificate for securing external connections to Gitpod, cert-manager is required to generate internal TLS certificates.

Refer to the cert-manager DNS01 docs for more information.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

Example of how to install cert-manager on GCP:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade \
    --atomic \
    --cleanup-on-fail \
    --create-namespace \
    --install \
    --namespace cert-manager \
    --reset-values \
    --set installCRDs=true \
    --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
    --wait \
    cert-manager \
    jetstack/cert-manager

TLS certificate

In this reference architecture, we use cert-manager to also create TLS certificates for the Gitpod domain. Since we need wildcard certificates for the subdomains, you must use the DNS-01 challenge.

Using a certificate issued by Let’s Encrypt is recommended as it minimizes overhead involving TLS certificates and managing CA certificate trust, but it is not required. If you already have TLS certificates for your Gitpod installation with suitable DNS names, you can skip this step and use your own certificates during the installation.
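
In that case, you can provide your certificate to the cluster as a Kubernetes TLS secret; a minimal sketch, assuming your certificate chain and private key live in the (hypothetical) files ./fullchain.pem and ./privkey.pem — check the installation guide for the exact secret name the installer expects:

# Store your own certificate and key as a TLS secret
kubectl create secret tls https-certificates \
    --cert=./fullchain.pem \
    --key=./privkey.pem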

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

Now, we are configuring Google Cloud DNS for the DNS-01 challenge. For this, we need to create a secret that contains the key for the DNS service account:

CLOUD_DNS_SECRET=clouddns-dns01-solver
kubectl create secret generic "${CLOUD_DNS_SECRET}" \
    --namespace=cert-manager \
    --from-file=key.json="./dns-credentials.json"

After that, we are telling cert-manager which service account it should use:

kubectl annotate serviceaccount --namespace=cert-manager cert-manager \
    --overwrite "iam.gke.io/gcp-service-account=${DNS_SA_EMAIL}"

The next step is to create an issuer. In this guide, we create a cluster issuer. Create a file issuer.yaml like this:

# Replace $LETSENCRYPT_EMAIL with your email and $PROJECT_NAME with your GCP project name
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
    name: gitpod-issuer
spec:
    acme:
        email: $LETSENCRYPT_EMAIL
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
            name: issuer-account-key
        solvers:
            - dns01:
                  cloudDNS:
                      project: $PROJECT_NAME

… and run:

kubectl apply -f issuer.yaml
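
You can check that the cluster issuer has registered successfully with Let’s Encrypt (the READY column should show True):

# Inspect the issuer's status and registration events
kubectl get clusterissuer gitpod-issuer
kubectl describe clusterissuer gitpod-issuer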

Object Storage

Gitpod uses object storage to store blob data. This includes workspace backups that are created when a workspace stops and are used to restore state upon restart. Different user settings like IDE preferences are also stored this way.

This reference architecture uses managed object storage commonly offered by all cloud providers.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

For each Gitpod user, a dedicated bucket is created at runtime. For this reason, Gitpod needs the rights to create buckets in the object storage. Create a service account that has the following roles:

Roles
roles/storage.admin
roles/storage.objectAdmin
OBJECT_STORAGE_SA=gitpod-storage
OBJECT_STORAGE_SA_EMAIL="${OBJECT_STORAGE_SA}"@"${PROJECT_NAME}".iam.gserviceaccount.com
gcloud iam service-accounts create "${OBJECT_STORAGE_SA}" --display-name "${OBJECT_STORAGE_SA}"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
    --member serviceAccount:"${OBJECT_STORAGE_SA_EMAIL}" --role="roles/storage.admin"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
    --member serviceAccount:"${OBJECT_STORAGE_SA_EMAIL}" --role="roles/storage.objectAdmin"

Save the service account key to the file ./gs-credentials.json:

gcloud iam service-accounts keys create --iam-account "${OBJECT_STORAGE_SA_EMAIL}" \
    ./gs-credentials.json
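
As a read-only sanity check, you can list the roles granted to this service account:

# Show all roles bound to the object storage service account
gcloud projects get-iam-policy "${PROJECT_NAME}" \
    --flatten="bindings[].members" \
    --filter="bindings.members:${OBJECT_STORAGE_SA_EMAIL}" \
    --format="value(bindings.role)"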

OCI Image Registry

Kubernetes clusters pull their components from an image registry. In Gitpod, image registries are used for three different purposes:

  1. Pulling the actual Gitpod software (components like server, image-builder, etc.).
  2. Pulling base images for workspaces. This is either the default workspace-full image or the image configured in the repo’s .gitpod.yml or .gitpod.Dockerfile.
  3. Pushing individual workspace images that are built for workspaces during workspace start, for example custom images defined in a .gitpod.Dockerfile in the repo. These images are pulled by Kubernetes after the image build to provision the workspace. This is the only case where Gitpod needs write access to push images.

In this reference architecture, we use a different registry for each of these three purposes. The Gitpod images (1) are pulled from a public Google Container Registry we provide. The workspace base image (2) is pulled from Docker Hub (or from the location set in the Dockerfile of the corresponding repo). For the individual workspace images (3), we create an image registry provided by the cloud provider in use. You could also configure Gitpod to use the same registry for all three cases, which is particularly useful for air-gapped installations where you only have access to an internal image registry.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

By enabling the service containerregistry.googleapis.com (see above), your project provides you with an OCI image registry. As credentials, Gitpod uses the object storage service account key that we will create below. Therefore, no further action is needed to use this registry with Gitpod.
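
Once you have created that key (./gs-credentials.json, see the Object Storage section), you can verify that it works as registry credentials; a sketch using the documented _json_key login mechanism for gcr.io (requires Docker on your workstation):

# Authenticate against the project's registry with the service account key
docker login -u _json_key --password-stdin https://gcr.io < ./gs-credentials.json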

Database

Gitpod uses a relational database management system to store structural data. Gitpod supports MySQL. The database is a central component in Gitpod where all metadata about users and workspaces as well as settings of the Gitpod instance (such as auth providers) are stored. That makes the database a critical component. In case of a database outage, you will not be able to log in, use the Gitpod dashboard, or start workspaces.

In this reference architecture, we use managed MySQL databases provided by the cloud providers.

Gitpod requires your database instance to have a database named gitpod in it.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

As a relational database, we create a Google Cloud SQL instance with MySQL 5.7. Use the following commands to create the database instance:

MYSQL_INSTANCE_NAME=gitpod-mysql
gcloud sql instances create "${MYSQL_INSTANCE_NAME}" \
    --database-version=MYSQL_5_7 \
    --storage-size=20 \
    --storage-auto-increase \
    --tier=db-n1-standard-2 \
    --region="${REGION}" \
    --replica-type=FAILOVER \
    --enable-bin-log

gcloud sql instances patch "${MYSQL_INSTANCE_NAME}" --database-flags \
            explicit_defaults_for_timestamp=off

After that, we create the database named gitpod as well as a dedicated Gitpod database user with a random password.

gcloud sql databases create gitpod --instance="${MYSQL_INSTANCE_NAME}"

MYSQL_GITPOD_USERNAME=gitpod
MYSQL_GITPOD_PASSWORD=$(openssl rand -base64 20)
gcloud sql users create "${MYSQL_GITPOD_USERNAME}" \
    --instance="${MYSQL_INSTANCE_NAME}" \
    --password="${MYSQL_GITPOD_PASSWORD}"

Finally, you need to create a service account that has the roles/cloudsql.client role:

MYSQL_SA=gitpod-mysql
MYSQL_SA_EMAIL="${MYSQL_SA}"@"${PROJECT_NAME}".iam.gserviceaccount.com
gcloud iam service-accounts create "${MYSQL_SA}" --display-name "${MYSQL_SA}"
gcloud projects add-iam-policy-binding "${PROJECT_NAME}" \
    --member serviceAccount:"${MYSQL_SA_EMAIL}" --role="roles/cloudsql.client"

Save the service account key to the file ./mysql-credentials.json:

gcloud iam service-accounts keys create --iam-account "${MYSQL_SA_EMAIL}" \
    ./mysql-credentials.json
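
To test connectivity, you can run the Cloud SQL Auth Proxy locally with this key and connect with a MySQL client; a sketch assuming the (v1) cloud_sql_proxy binary and the mysql client are installed on your workstation:

# Tunnel the Cloud SQL instance to localhost:3306 in the background
cloud_sql_proxy -instances="${PROJECT_NAME}:${REGION}:${MYSQL_INSTANCE_NAME}"=tcp:3306 \
    -credential_file=./mysql-credentials.json &
# Connect to the gitpod database with the user created above
mysql --host=127.0.0.1 --port=3306 \
    --user="${MYSQL_GITPOD_USERNAME}" --password="${MYSQL_GITPOD_PASSWORD}" gitpod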

Install Gitpod

Congratulations. You have set up your cluster. Now, you are ready to install Gitpod. Follow the instructions in the installation guide.

Cloud provider specific instructions
  • GCP
  • AWS
  • Azure

If you followed this guide’s steps to create your infrastructure on GCP, use the following config settings for your Gitpod installation:

General settings
Domain name: value of ${DOMAIN}

Un-select the in-cluster container registry checkbox.

Container registry
In-cluster: no
Container registry URL: gcr.io/${PROJECT_NAME}/gitpod (replace ${PROJECT_NAME} with your GCP project name)
Container registry server: gcr.io
Container registry username: _json_key
Container registry password: content of the file ./gs-credentials.json (remove line breaks, e.g. with jq -c . ./gs-credentials.json)

Un-select the in-cluster MySQL checkbox.

Database
In-cluster: no
Google Cloud SQL Proxy: yes
CloudSQL connection name: ${PROJECT_NAME}:${REGION}:${MYSQL_INSTANCE_NAME} (replace the variables with their actual values)
Username: value of ${MYSQL_GITPOD_USERNAME}
Password: value of ${MYSQL_GITPOD_PASSWORD}
GCP service account key: upload the file ./mysql-credentials.json

Select GCP as object storage provider.

Object storage
Storage provider: GCP
Storage region: value of ${REGION}
Project ID: value of ${PROJECT_NAME}
Service account key: upload the file ./gs-credentials.json

Keep cert-manager selected for the TLS certificates options.

TLS certificates
Self-signed TLS certificate: no
cert-manager: yes
Issuer name: gitpod-issuer
Issuer type: select “cluster issuer”
