
How to Install Temporal.io



Introduction

Kubernetes

GKE

In this section we focus on installing to Kubernetes, specifically on GKE. If you install to another platform the methodology is the same, but the precise commands will differ.

Set Some Variables

Define variables for the project, cluster, Cloud SQL instance, and namespace we will install into.

export GPROJECT=jax-cube-prd-ctrl-01
export CLUSTER=cube-prd
export CLOUD_SQL_INSTANCE=mpd-metasoft-analysis-plotting-database-02
export TEMPORAL_NAME=mpd-temporal
export NAMESPACE=temporal
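As a sanity check, note that the Cloud SQL proxy used later expects an instance connection name of the form PROJECT:REGION:INSTANCE. It can be assembled from the variables above (us-east1 is an assumption, matching the cluster zone used below):

```shell
export GPROJECT=jax-cube-prd-ctrl-01
export CLOUD_SQL_INSTANCE=mpd-metasoft-analysis-plotting-database-02

# PROJECT:REGION:INSTANCE, as passed to cloud_sql_proxy's -instances flag.
export INSTANCE_CONNECTION="$GPROJECT:us-east1:$CLOUD_SQL_INSTANCE"
echo "$INSTANCE_CONNECTION"
```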

Ensure Connection to Tools

  1. Install the gcloud tool from google: https://cloud.google.com/sdk/docs/install
  2. Ensure you are connected to the right google project and kubernetes cluster.
# Make sure you are on the right project
gcloud init

# Docker (may not be needed if you have docker tooling)
gcloud auth configure-docker

# kubectl
gcloud components install kubectl
gcloud container clusters get-credentials $CLUSTER --zone us-east1-b --project $GPROJECT

# namespace
kubectl create namespace $NAMESPACE

Installing Temporal

To install Temporal, see https://github.com/temporalio/helm-charts. However, we are going to make some changes to the default installation: we will not use the Cassandra database running inside the cluster, but instead keep our data in Cloud SQL. This is more durable, we have seen issues running Temporal long-term with the default Helm chart, and Temporal.io are also clear on their web page that the bundled database is not intended for production use.

Prepare the Helm installation, but do not install it yet.

git clone https://github.com/temporalio/helm-charts
cd helm-charts
helm dependencies update


Create a secret

The Cloud SQL proxy sidecar needs Google credentials in order to reach Cloud SQL. Store them as a Kubernetes secret:

kubectl -n $NAMESPACE create secret generic gcs-key --from-file=service-account.json
Where service-account.json is your credential file, which can be generated in the Google console under IAM & Admin -> Service Accounts -> Keys (create a JSON key). The service account needs the Cloud SQL Client role so the proxy can connect.

Cloud SQL Proxy

Edit the helm chart's values/values.cloudsqlproxy.yaml to use our secret and our Cloud SQL instance. Example:

server:
  sidecarContainers:
    - name: cloud-sql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.17
      command:
        - "/cloud_sql_proxy"
        # instances is your database id! : instances=PROJECT:us-east1:DATABASE_INSTANCE
        - "-instances=jax-cube-prd-ctrl-01:us-east1:mpd-metasoft-analysis-plotting-database-02=tcp:5432"
        - "-credential_file=/var/secrets/service-account.json"
      securityContext:
        runAsNonRoot: true
      volumeMounts:
        - mountPath: /var/secrets
          name: gcs-key
          readOnly: true

  additionalVolumes:
    - name: gcs-key
      secret:
        secretName: gcs-key

Cloud SQL as Database

See https://github.com/temporalio/helm-charts section: Install with your own PostgreSQL, we use postgresql from cloud sql.
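For reference, the shape of the values/values.postgresql.yaml file edited later is roughly as below. This is a sketch based on the chart's server.config.persistence layout, not the authoritative file: the host is 127.0.0.1 because the Cloud SQL proxy sidecar listens locally, and the user and password are placeholders you must replace.

```yaml
server:
  config:
    persistence:
      default:
        driver: "sql"
        sql:
          driver: "postgres"
          host: "127.0.0.1"      # the Cloud SQL proxy sidecar listens locally
          port: 5432
          database: "temporal"
          user: "temporal"       # placeholder: your Cloud SQL user
          password: "changeme"   # placeholder: never commit real credentials
      visibility:
        driver: "sql"
        sql:
          driver: "postgres"
          host: "127.0.0.1"
          port: 5432
          database: "temporal_visibility"
          user: "temporal"       # placeholder
          password: "changeme"   # placeholder
```

Other toggles (e.g. disabling the in-cluster Cassandra) live in the same values files; follow the chart README for the exact keys in your chart version.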

Temporal's tables must be created in the database before installation, which requires a connection to the Cloud SQL PostgreSQL instance:

  1. Redirect Cloud SQL through localhost using: cloud_sql_proxy -instances=$GPROJECT:us-east1:$CLOUD_SQL_INSTANCE=tcp:5432

  2. Compile the temporal-sql-tool by checking out https://github.com/temporalio/temporal and using make. You must have a recent Go installed (brew install go worked for me). Once compiled you will have the schema-creation tools for the tables.

  3. Create the databases using localhost as the hostname, because step 1 redirected the instance locally. Follow https://github.com/temporalio/helm-charts, section: Install with your own PostgreSQL.

  4. Check that the databases 'temporal' and 'temporal_visibility' now exist in your Cloud SQL instance.
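The database-creation commands look roughly like the following. This is a sketch based on the temporal-sql-tool usage documented in the helm-charts README; flag names and schema paths vary between Temporal versions, and the user and password are placeholders:

```shell
# Assumes cloud_sql_proxy is forwarding localhost:5432 to Cloud SQL.
export SQL_HOST=localhost SQL_PORT=5432 SQL_USER=postgres SQL_PASSWORD=changeme

# Main persistence database.
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD create --db temporal
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD --db temporal setup-schema -v 0.0
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD --db temporal update-schema -d ./schema/postgresql/v96/temporal/versioned

# Visibility database.
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD create --db temporal_visibility
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD --db temporal_visibility setup-schema -v 0.0
./temporal-sql-tool --plugin postgres --ep $SQL_HOST -p $SQL_PORT -u $SQL_USER --pw $SQL_PASSWORD --db temporal_visibility update-schema -d ./schema/postgresql/v96/visibility/versioned
```

Cross-check the exact invocations against the helm-charts README for the chart version you cloned.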

Installing Temporal.io

  1. Make sure the values/values.postgresql.yaml file has been edited with your database user and password (do not check this file into git!).
  2. Install tctl command for configuring Temporal.
    brew install tctl
    
  3. Install rest of temporal using cloud sql in postgres mode as follows:

    # Full / normal installation (more expensive per month)
    helm install -f values/values.cloudsqlproxy.yaml -f values/values.postgresql.yaml --namespace $NAMESPACE $TEMPORAL_NAME . --timeout 900s

    # Smaller and less expensive (no Elasticsearch)
    helm install -f values/values.cloudsqlproxy.yaml -f values/values.postgresql.yaml --namespace $NAMESPACE $TEMPORAL_NAME . --timeout 900s \
        --set elasticsearch.enabled=false

    # Minimal and least expensive
    helm install -f values/values.cloudsqlproxy.yaml -f values/values.postgresql.yaml --namespace $NAMESPACE $TEMPORAL_NAME . --timeout 900s \
        --set server.replicaCount=1 \
        --set prometheus.enabled=false \
        --set grafana.enabled=false \
        --set elasticsearch.enabled=false
    
    # Forward the Temporal frontend port to localhost, e.g. (the pod name below
    # is an example; find yours with: kubectl -n $NAMESPACE get pods)
    kubectl -n $NAMESPACE port-forward $TEMPORAL_NAME-frontend-7bf5bfbcc7-wpd7k 7233:7233
    
    # Create default namespace using this connection
    # Set variables
    export TEMPORAL_CLI_ADDRESS=localhost:7233
    export TEMPORAL_CLI_NAMESPACE=default
    
    # Register a namespace with a retention period in days
    tctl --ns <YOUR_NS> namespace register -rd 30
    # e.g. namespaces 'default', 'prd' and 'dev' are usually created.
    
  4. Check that the workers and services have started in your namespace, e.g. kubectl --namespace=$NAMESPACE get pods -l "app.kubernetes.io/instance=$TEMPORAL_NAME"

Exposing the web with a balancer

Optionally the Temporal Web UI can be exposed, either publicly or internally, by deleting the default service and re-exposing it as a load balancer.

kubectl -n $NAMESPACE delete service $TEMPORAL_NAME-web
kubectl -n $NAMESPACE expose deployment $TEMPORAL_NAME-web --port=8088 --target-port=8088 --type=LoadBalancer
You can also make the web internal only by following the procedure below.

Exposing frontend between clusters

It is not necessary to run one Temporal per application; each instance is an expensive ($200+/month) addition to most products. Instead, the frontend can be redeployed behind an internal load balancer and shared between clusters.

kubectl -n $NAMESPACE delete service $TEMPORAL_NAME-frontend
kubectl -n $NAMESPACE expose deployment $TEMPORAL_NAME-frontend  --port=7233 --target-port=7233 --type=LoadBalancer
THIS WILL CREATE A PUBLIC load balancer. Since we want an internal one, edit the service YAML and add this annotation under metadata.annotations:

networking.gke.io/load-balancer-type: "Internal"
Also delete the external IP configuration from the YAML. If done correctly, the service's IP address reports as an internal one, reachable between clusters.
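A minimal sketch of what the edited frontend service might look like. The selector labels are assumptions based on the helm chart's standard app.kubernetes.io labels; verify the real ones with kubectl -n $NAMESPACE get svc $TEMPORAL_NAME-frontend -o yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mpd-temporal-frontend
  namespace: temporal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # GKE-specific annotation
spec:
  type: LoadBalancer
  ports:
    - port: 7233
      targetPort: 7233
      protocol: TCP
  selector:                                  # assumed labels; check your chart
    app.kubernetes.io/instance: mpd-temporal
    app.kubernetes.io/name: temporal
    app.kubernetes.io/component: frontend
```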

Uninstalling Temporal.io

# Uninstall
helm uninstall --namespace $NAMESPACE $TEMPORAL_NAME