Creating a Deployment

This page will walk you through the process of creating a Kubernetes deployment for your application.

Introduction

Deployment Conceptual Understanding

At the core of Kubernetes is the concept of a "deployment." This is a declarative model that allows developers to specify their application's desired state, such as which images to run, the required number of instances (replicas), and the strategy for updating those instances. Kubernetes constantly works to maintain this desired state, managing the lifecycle of containers across a cluster of machines.

YAML and Declarative Resources

In Kubernetes, the configuration of workloads and services is managed through declarative resources, which allow developers to specify their desired state for the system using human-readable YAML (YAML Ain't Markup Language) files.

This declarative approach means that instead of providing a set of instructions on how to achieve a certain state, developers define what the end state should look like, and Kubernetes takes care of the rest, ensuring that the actual state matches the desired state.

YAML files are used to define everything from pods, deployments, and services to more complex orchestrations such as volumes and networking policies. These files are then applied to the cluster using the Kubernetes CLI tool, kubectl, which communicates with the cluster's API server to update the current state of resources to match the desired state outlined in the YAML files.
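As a sketch of that workflow, the commands below apply a manifest and then inspect the resulting state. The file and resource names are placeholders; substitute your own.

```shell
# Apply the desired state described in a YAML file;
# the API server reconciles the cluster toward it
kubectl apply -f deployment.yaml

# Inspect the current state of the resources kubectl created
kubectl get deployments
kubectl describe deployment <deployment-name>
```

Note that kubectl apply is idempotent: running it again with an unchanged file is a no-op, and running it after editing the file updates only what differs.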

File Organization

Just as source code is conventionally placed in a src directory, the conventional place for Kubernetes resources is a deploy/k8s directory.

This is an important convention that allows developers and reliability engineers to easily find and manage the resources that define their application.

The YAML files that define Kubernetes resources are typically organized into a directory structure that reflects the structure of the application. For example, a simple application might have a directory structure like this:

deploy/k8s/
        ├── deployment.yaml
        ├── service.yaml
        ├── ingress.yaml
        ├── configmap.yaml
        ├── secret.yaml
        ├── volume.yaml
        └── networkpolicy.yaml

This structure makes it easy to manage and deploy the application, as well as to understand the relationships between different resources.

Creating a Deployment

To create a deployment, you will need to create a YAML file that defines the deployment resource. This file will specify the desired state of the deployment, including the container image to use, the number of replicas, and the strategy for updating the deployment.

Example Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-knowledgebase
  labels:
    app: devops-knowledgebase
spec:
  selector:
    matchLabels:
      app: devops-knowledgebase
  template:
    metadata:
      labels:
        app: devops-knowledgebase
    spec:
      volumes:
        - name: devops-nginx-conf
          configMap:
            name: devops-nginx-conf
            items:
              - key: default.conf
                path: default.conf
      containers:
        - name: devops-knowledgebase
          image: devops-knowledge-base-nginx
          ports:
            - containerPort: 80
          volumeMounts:
            # Mount the ConfigMap-backed volume declared above so nginx
            # picks up default.conf from its standard config directory
            - name: devops-nginx-conf
              mountPath: /etc/nginx/conf.d
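The example above relies on Kubernetes defaults for the replica count (1) and the update strategy. Fields like the following could be added under spec to make both explicit; the values here are illustrative:

```yaml
spec:
  replicas: 2                  # run two instances of the pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # allow at most one extra pod during an update
      maxUnavailable: 0        # never drop below the desired replica count
```

With maxUnavailable set to 0, Kubernetes starts a new pod and waits for it to become ready before terminating an old one, so capacity never dips during a rollout.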

Environment Overlays

In a typical development workflow, you will have a set of base Kubernetes resources that define your application. These resources will be the same across all environments, but you will need to make small changes to them to adapt them to the specific requirements of each environment. For example, you might need to change the number of replicas, or the name of a secret, or the URL of a service.

To manage these small changes, you can use a tool called kustomize, which is built into kubectl. Kustomize allows you to define a set of "overlays" that can be applied on top of your base resources to make small changes to them. This allows you to keep your base resources in a single location, and then define a set of overlays that can be applied to them to adapt them to different environments.
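Because kustomize is built into kubectl, an overlay can be rendered or applied directly. The overlay path below uses the example directory names from this project's deploy tree:

```shell
# Preview the fully rendered manifests for an overlay (makes no cluster changes)
kubectl kustomize deploy/k8s/overlays/jax-cluster-dev-10--dev

# Apply the overlay (base resources plus environment patches) to the cluster
kubectl apply -k deploy/k8s/overlays/jax-cluster-dev-10--dev
```

The -k flag tells kubectl to build the kustomization in the given directory rather than apply a raw file with -f.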

To see an example of this, check out the deploy directory of this project (the devops-knowledgebase).

Note the use of the kustomization.yaml file to define overlay usage.

File Organization (update)

When using overlays to manage your environments in Kubernetes, the standard organizational structure is to add two additional sub-directories to the deploy/k8s directory: base and overlays.

The deploy/k8s/base directory will hold your base definitions, and the deploy/k8s/overlays directory will hold your overlay definitions, with each environment having its own sub-directory.

deploy/k8s/
        ├── base/
        │   ├── kustomization.yaml
        │   ├── deployment.yaml
        │   ├── service.yaml
        │   ├── ingress.yaml
        │   ├── configmap.yaml
        │   ├── secret.yaml
        │   ├── volume.yaml
        │   └── networkpolicy.yaml
        └── overlays/
            ├── jax-cluster-dev-10--dev/
            │   ├── kustomization.yaml
            │   └── configmap.yaml
            └── jax-cluster-prod-10--prod/
                ├── kustomization.yaml
                ├── ingress.yaml
                └── configmap.yaml

This structure keeps the shared base definitions in one place and isolates each environment's differences in its own overlay directory.
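The base directory's kustomization.yaml simply enumerates the resources that make up the application. A sketch matching the tree above:

```yaml
# deploy/k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - configmap.yaml
  - secret.yaml
  - volume.yaml
  - networkpolicy.yaml
```

Running kustomize against this directory alone produces the unmodified base manifests; the overlays add their changes on top.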

Note: each environment overlay's subdirectory is explicitly named after the exact environment that overlay is meant to deploy to. This convention is important for clarity and consistency. You should use the naming convention $CLUSTER_NAME--$ENVIRONMENT.
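An overlay's kustomization.yaml pulls in the base and layers its environment-specific patches on top. A sketch for the dev overlay from the tree above (the patch file contents are whatever differs in that environment):

```yaml
# deploy/k8s/overlays/jax-cluster-dev-10--dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # inherit every resource defined in the base
patches:
  - path: configmap.yaml  # strategic-merge patch with dev-specific values
```

The configmap.yaml patch only needs to repeat the metadata that identifies the target resource plus the fields being changed; everything else is inherited from the base.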

Skaffold: Tying It All Together