How to create a load balancer
When you create a service, you can optionally create a load balancer to distribute service traffic among the nodes assigned to that service. The key fields in the configuration of a load balancer are the type of service being created and the ports on which the load balancer listens.
Consider the following configuration file, nginx_lb.yaml, which defines a deployment (kind: Deployment) for the nginx app, followed by a service definition that uses type LoadBalancer to balance HTTP traffic on port 80 for the nginx app:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
The first part of the configuration file defines an Nginx deployment, requesting that it be hosted on 3 pods running the nginx:1.7.9 image, and that the containers accept traffic on port 80.
The second part of the configuration file defines the Nginx service, which uses type LoadBalancer to balance Nginx traffic on port 80 amongst the available pods.
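When the service's port and the container's port differ, the service can map between them with targetPort. The following is an illustrative sketch only, not part of the example above; the 8080 value is an assumption for illustration:

```yaml
# Illustrative sketch: a LoadBalancer service that listens on port 80
# and forwards traffic to containers listening on port 8080.
# The targetPort value here is an assumption for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80          # port the load balancer listens on
    targetPort: 8080  # port the containers accept traffic on
  selector:
    app: nginx
```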
To create the deployment and service defined in nginx_lb.yaml while connected to your Kubernetes cluster, enter the command:
$ kubectl apply -f nginx_lb.yaml
This command outputs the following upon successful creation of the deployment and the load balancer:
deployment "my-nginx" created
service "my-nginx-svc" created
The load balancer may take a few minutes to go from a pending state to being fully operational. You can view the current state of your cluster by entering kubectl get all, where your output looks similar to the following:
$ kubectl get all
NAME                          READY     STATUS    RESTARTS   AGE
po/my-nginx-431080787-0m4m8   1/1       Running   0          3m
po/my-nginx-431080787-hqqcr   1/1       Running   0          3m
po/my-nginx-431080787-n8125   1/1       Running   0          3m

NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes     203.0.113.1   <none>        443/TCP        3d
svc/my-nginx-svc   203.0.113.7   192.0.2.22    80:30269/TCP   3m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx   3         3         3            3           3m

NAME                      DESIRED   CURRENT   READY     AGE
rs/my-nginx-431080787     3         3         3         3m
The above output shows that the my-nginx deployment is running on 3 pods (the po/my-nginx entries), that the load balancer is running (svc/my-nginx-svc), and that the load balancer has an external IP (192.0.2.22) that clients can use to connect to the app deployed on the pods.
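Once the external IP is assigned, you can check the deployment from any machine with network access to the load balancer. A minimal check, substituting the EXTERNAL-IP reported for your own service for the documentation address shown here:

```shell
# Fetch the nginx welcome page through the load balancer.
# Replace 192.0.2.22 with the EXTERNAL-IP from your kubectl get all output.
curl http://192.0.2.22/
```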
How to create a persistent volume claim
Storage in a container's root file system is ephemeral, and can disappear when the container is deleted and re-created. To provide a durable location to store data and prevent it from being lost, you can create and use persistent volumes to store data outside of containers.
You can define and apply a persistent volume claim to your cluster, which in turn creates a persistent volume that's bound to the claim. The persistent volume is a block storage volume in the underlying IaaS provider that offers durable, persistent storage, so your data remains intact regardless of whether the containers that the storage is connected to are terminated.
A persistent volume claim (PVC) is a request for storage, similar to how a pod requests compute resources. A PVC provides an abstraction layer over the underlying storage. For example, an administrator could create a number of static persistent volumes (PVs) that can later be bound to one or more persistent volume claims. This is analogous to an administrator creating cluster nodes to which pods are later assigned. If none of the static persistent volumes matches the user's PVC request, the cluster may attempt to dynamically provision a PV that matches the request. This example uses the latter approach, and it assumes that the cluster administrator has not created any suitable PVs that match the PVC request—meaning that the PVCs will dynamically create the PVs for this example.
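For contrast, a statically provisioned PV (the approach this example does not use) would be defined by the administrator ahead of time. The following is only a rough sketch; the volume name and hostPath backing are illustrative assumptions, not something this example requires:

```yaml
# Illustrative sketch of a statically provisioned persistent volume.
# The name and hostPath here are assumptions for illustration only;
# this example relies on dynamic provisioning instead.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "oracle-bmc"
  hostPath:
    path: /tmp/static-pv-example   # backing store for illustration only
```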
The following YAML defines two PVCs that each request 50 gigabytes of persistent storage (storage: 50Gi). You use the names of the PVCs (for example, mysqlclaim) when defining which claims to use as the volumes of a deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqlclaim
spec:
  storageClassName: "oracle-bmc"
  selector:
    matchLabels:
      bmc-availability-domain: "US-ASHBURN-AD-1"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpressclaim
spec:
  storageClassName: "oracle-bmc"
  selector:
    matchLabels:
      bmc-availability-domain: "US-ASHBURN-AD-2"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
Enter the following command to create the PVC from the YAML file:
$ kubectl create -f https://raw.githubusercontent.com/wercker/oke_examples/master/kubernetes_examples/persistent_volume_claims.yaml
persistentvolumeclaim "mysqlclaim" created
persistentvolumeclaim "wordpressclaim" created
You can verify that the PVCs have been created and bound to persistent volumes by calling kubectl get pvc:
$ kubectl get pvc
NAME             STATUS    VOLUME                                                         CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mysqlclaim       Bound     abyhqljrerxpanjto7b5zlxjesy4aedghc5c52f5v43xcrymo77ktdl6ibjq   50Gi       RWO           oracle-bmc     4m
wordpressclaim   Bound     abyhqljt3rzldcclootxn7yrfgv36s7rnggcobennjohevykqpitzkinspka   50Gi       RWO           oracle-bmc     4m
You can use these persistent volumes when creating other objects, such as deployments. For example, the following deployment definition instructs the system to use the mysqlclaim PVC as the mysql-persistent-storage volume, which is mounted by pods hosting the deployment as /var/lib/mysql.
#MySQL Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysqlclaim
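Note that this deployment reads the MySQL root password from a secret named mysql (the secretKeyRef entry), so that secret must exist in the cluster before the deployment starts. One way to create it; the password value here is an illustrative placeholder:

```shell
# Create the secret referenced by the deployment's secretKeyRef.
# Replace the placeholder password with a value of your own.
kubectl create secret generic mysql --from-literal=password=YOUR_PASSWORD
```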