Load Balancers and Storage

How to create a load balancer

When you create a service, you can optionally create a load balancer to distribute service traffic among the nodes assigned to that service. The key fields in a load balancer's configuration are the type of service being created and the ports on which the load balancer listens.

Consider the following configuration file, nginx_lb.yaml, which defines a deployment (kind: Deployment) for the nginx app, followed by a service definition that uses type: LoadBalancer to balance http traffic on port 80 for the nginx app.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

The first part of the configuration file defines an Nginx deployment, requesting that it be hosted on 3 pods running the nginx:1.7.9 image, and that the containers accept traffic on port 80.

The second part of the configuration file defines the Nginx service, which uses type LoadBalancer to balance Nginx traffic on port 80 amongst the available pods.

To create the deployment and service defined in nginx_lb.yaml while connected to your Kubernetes cluster, enter the command:

$ kubectl apply -f nginx_lb.yaml

This command outputs the following upon successful creation of the deployment and the load balancer:

deployment "my-nginx" created
service "my-nginx-svc" created

The load balancer may take a few minutes to go from a pending state to being fully operational. You can view the current state of your cluster by entering kubectl get all; the output looks similar to the following:

$ kubectl get all
NAME                                  READY     STATUS    RESTARTS   AGE
po/my-nginx-431080787-0m4m8           1/1       Running   0          3m
po/my-nginx-431080787-hqqcr           1/1       Running   0          3m
po/my-nginx-431080787-n8125           1/1       Running   0          3m

NAME               CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
svc/kubernetes     203.0.113.1     <none>           443/TCP        3d
svc/my-nginx-svc   203.0.113.7     192.0.2.22       80:30269/TCP   3m

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx           3         3         3            3           3m

NAME                            DESIRED   CURRENT   READY     AGE
rs/my-nginx-431080787           3         3         3         3m

The above output shows that the my-nginx deployment is running on 3 pods (the po/my-nginx entries), that the load balancer is running (svc/my-nginx-svc), and that the load balancer has an external IP address (192.0.2.22) that clients can use to connect to the app deployed on the pods.

How to create a load balancer that supports SSL

You can create a load balancer with SSL termination, allowing https traffic to an app to be distributed among the nodes in a cluster. This example provides a walkthrough of the configuration and creation of a load balancer with SSL support.

Consider the following configuration file, nginx-demo-svc-ssl.yaml, which defines an Nginx deployment and exposes it via a load balancer that serves http on port 80 and https on port 443. This sample creates an Oracle Cloud Infrastructure load balancer by defining a service with a type of LoadBalancer (type: LoadBalancer).

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80

The load balancer's annotations are of particular importance. The oci-load-balancer-ssl-ports annotation defines the ports on which to support https traffic. You can declare multiple SSL ports by using a comma-separated list as the annotation's value. For example, setting the value to "443,3000" supports SSL on ports 443 and 3000.
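As a sketch, the annotations for terminating SSL on two ports might look like the following (port 3000 is hypothetical here, shown only to illustrate the list syntax; this example's service uses port 443 alone):

```yaml
metadata:
  name: nginx-service
  annotations:
    # Terminate SSL on both 443 and a hypothetical second port, 3000.
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443,3000"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
```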

The required TLS secret, ssl-certificate-secret, needs to be created in Kubernetes. This example creates and uses a self-signed certificate. However, in a production environment, the most common scenario is to use a public certificate that's been signed by a certificate authority.

The following command creates a self-signed certificate, tls.crt, with its corresponding key, tls.key:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
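Before storing the certificate as a secret, you can sanity-check its contents with openssl. The following sketch repeats the generation command from above and then prints the certificate's subject, which should show the CN and O values passed via -subj:

```shell
# Generate the self-signed certificate and key (same command as above).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Print the certificate's subject to confirm the CN before uploading it.
openssl x509 -in tls.crt -noout -subject
```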

Now that you've created the certificate, store both it and its key as a secret in Kubernetes. The name of the secret must match the name given in the oci-load-balancer-tls-secret annotation in the load balancer's definition. Use the following command to create a TLS secret in Kubernetes, whose key and certificate values are set by --key and --cert, respectively.

$ kubectl create secret tls ssl-certificate-secret --key tls.key --cert tls.crt

You must create the Kubernetes secret before you can create the service, since the service references the secret in its definition. Create the service using the following command:

$ kubectl create -f nginx-demo-svc-ssl.yaml

Watch the service and wait for a public IP address (EXTERNAL-IP) to be assigned to the Nginx service (nginx-service). This is the load balancer IP to use to connect to the service.

$ kubectl get svc --watch
NAME            CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx-service   192.0.2.1      198.51.100.1     80:30274/TCP   5m

The load balancer is now running, which means the service can be accessed over either http or https, as the following commands demonstrate:

$ curl http://198.51.100.1
$ curl --insecure https://198.51.100.1

The --insecure flag is required when accessing the service over https because this example uses a self-signed certificate. Do not use this flag in a production environment, where the public certificate is signed by a certificate authority.

Note: When a cluster is deleted, a load balancer that was dynamically created for a service is not removed. Before deleting a cluster, delete the service; this causes the cloud provider to remove the load balancer. The syntax for this command is:

kubectl delete svc SERVICE_NAME

For example, to delete the service from example above, enter the command:

kubectl delete svc nginx-service

How to create a persistent volume claim

Storage in a container's root file system is ephemeral: it can disappear when the container is deleted and recreated. To provide a durable location for data and prevent it from being lost, you can create and use persistent volumes to store data outside of containers.

You can define and apply a persistent volume claim to your cluster, which in turn creates a persistent volume that's bound to the claim. The persistent volume is backed by a block storage volume in the underlying IaaS provider, so your data remains intact regardless of whether the containers that the storage is connected to are terminated.

A persistent volume claim (PVC) is a request for storage, similar to how a pod requests compute resources. A PVC provides an abstraction layer over the underlying storage. For example, an administrator could create a number of static persistent volumes (PVs) that can later be bound to one or more persistent volume claims. This is analogous to an administrator creating cluster nodes to which pods are later assigned. If none of the static persistent volumes matches a PVC request, the cluster may attempt to dynamically create a PV that matches the request. This example uses the latter approach: it assumes that the cluster administrator has not created any suitable PVs, so the PVCs dynamically create the PVs.
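For illustration only, a statically provisioned PV that an administrator might create to satisfy such a claim could look like the following sketch. The name is hypothetical, and the provider-specific volume source is omitted, since it depends on the storage backend:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-example        # hypothetical name
  labels:
    oci-availability-domain: "US-ASHBURN-AD-1"
spec:
  storageClassName: "oci"
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  # The volume source (provider-specific, for example a flexVolume or
  # csi block) is omitted from this sketch.
```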

The following YAML defines two PVCs that each request 50 gigabytes of persistent storage (storage: 50Gi). You use the names of the PVCs (for example, mysqlclaim) when specifying which claims to use as the volumes of a deployment.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqlclaim
spec:
  storageClassName: "oci"
  selector:
    matchLabels:
      oci-availability-domain: "US-ASHBURN-AD-1"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpressclaim
spec:
  storageClassName: "oci"
  selector:
    matchLabels:
      oci-availability-domain: "US-ASHBURN-AD-2"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Enter the following command to create the PVCs from the YAML file:

$ kubectl create -f https://raw.githubusercontent.com/wercker/oke_examples/master/kubernetes_examples/persistent_volume_claims.yaml
persistentvolumeclaim "mysqlclaim" created
persistentvolumeclaim "wordpressclaim" created

You can verify that the PVCs have been created and bound to persistent volumes by calling kubectl get pvc:

$ kubectl get pvc
NAME             STATUS    VOLUME                                                         CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mysqlclaim       Bound     abyhqljrerxpanjto7b5zlxjesy4aedghc5c52f5v43xcrymo77ktdl6ibjq   50Gi       RWO           oci            4m
wordpressclaim   Bound     abyhqljt3rzldcclootxn7yrfgv36s7rnggcobennjohevykqpitzkinspka   50Gi       RWO           oci            4m

You reference these claims when creating other objects, such as deployments. For example, the following deployment definition uses the mysqlclaim PVC as the mysql-persistent-storage volume, which pods hosting the deployment mount at /var/lib/mysql.

# MySQL Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysqlclaim
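Note that this deployment reads MYSQL_ROOT_PASSWORD from a Kubernetes secret named mysql with a password key, which must exist before the pods can start. A minimal sketch of such a secret follows; the password value is a placeholder, not part of the original example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
stringData:
  password: change-me        # placeholder; use a real password
```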