In today’s IT world, data is everything, and in the era of DevOps, requirements change frequently. Suppose your application is running on one Kubernetes cluster and, for some reason, you need to migrate its data to another Kubernetes cluster running in a different availability zone, without losing any of it.

In this blog, we will discuss how to migrate a PVC across availability zones in Kubernetes.

What are PV, PVC, and Storage Volume?

A Persistent Volume (PV) is a piece of storage in the cluster with a lifecycle independent of any individual pod that uses it: even if the pod is deleted, the data in the persistent volume remains intact. A pod requests the volume through a PVC (Persistent Volume Claim), which is a request to provision persistent storage of a specific type and configuration.

A storage volume is a virtual disk that provides persistent block storage for instances in a cloud service.

Migrating PVC across availability zones

If you want to clone or migrate a PVC (Persistent Volume Claim) backed by a storage volume, the first thing you need to know is which PVC is attached to the pod you want to clone or migrate. To find out, run:

kubectl get po <pod-name> -o yaml
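
In the pod’s YAML output, the claim name appears in the `volumes` section. A trimmed excerpt of what that typically looks like (the names here are illustrative, not from a real cluster):

```yaml
# Excerpt from `kubectl get po <pod-name> -o yaml`
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc   # this is the PVC attached to the pod
```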

After this, check the `claimName` in the `volumes` section of the output and note down the name of the PV (Persistent Volume) to which the PVC is bound. For this, use the command given below:

kubectl get pvc <pvc-name>

Now describe the PV and note down the volume ID from which the PV was created. This is the volume ID from the cloud provider. To find the region/availability zone in which the volume was created, check the labels and annotations on the PV by describing it:

kubectl describe pv <pv-name>
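
If you prefer not to scan the full describe output, you can pull these two values directly with jsonpath. This is a sketch that assumes an in-tree AWS EBS volume and the well-known zone label; adjust the field path for your cloud provider:

```shell
# Print the cloud volume ID backing the PV (field path assumes an in-tree EBS volume)
kubectl get pv <pv-name> -o jsonpath='{.spec.awsElasticBlockStore.volumeID}'

# Print the availability zone, recorded as a well-known label on the PV
# (dots in the label key are escaped with backslashes in jsonpath)
kubectl get pv <pv-name> -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'
```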

Now you know which cloud volume backs the pod. Create a snapshot of it, then create a new volume from that snapshot in the region or availability zone of your choice. Attach a label to the new volume exactly as given - KubernetesCluster: <clustername> - and note down the ID of the volume you just created in the cloud. You are halfway there.
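On AWS, for example, the snapshot-and-copy steps above could look roughly like this. The volume and snapshot IDs below are placeholders, and the target zone is an example; substitute your own values:

```shell
# Snapshot the source EBS volume (the ID noted from the PV in the previous step)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "PVC migration snapshot"

# Create a new volume from that snapshot in the target availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1b

# Tag the new volume so the target cluster is allowed to attach it
aws ec2 create-tags --resources vol-0fedcba9876543210 \
  --tags Key=KubernetesCluster,Value=<clustername>
```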

Now you can create a PV from the new volume: just apply the YAML below with the changes required for your case. For the equivalent YAML for your cloud provider, refer to the Kubernetes persistent volumes documentation.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  labels:
    type: persistent-volume
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: <YOUR STORAGE VOLUME ID>
    fsType: ext4 # change this to whichever filesystem type you want, e.g. xfs

Once the PV is created, you can create a new PVC bound to it. For this you can use the given YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  labels:
    type: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      type: persistent-volume # must match the labels set on the PV you made earlier

And finally, once you have created the PVC, you can attach it to an existing pod or to a new one. To do this, edit the pod’s configuration file: reference the claim in the `spec.volumes` section, then mount the volume by providing a `mountPath` in the container’s `volumeMounts` section, as in the example pod configuration below.

apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  volumes:
    - name: my-pv
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv
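
Putting it all together, applying the three manifests and verifying the binding might look like this (the file names are assumptions; use whatever you saved the manifests as):

```shell
# Apply the PV, PVC, and pod manifests shown above
kubectl apply -f pv.yaml -f pvc.yaml -f pod.yaml

# Verify that the claim shows STATUS "Bound" before relying on the data
kubectl get pvc my-pvc
kubectl get pv my-pv
```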


Congratulations! We have successfully migrated a PVC from one Kubernetes cluster to another running in a different availability zone. Hope you found the blog interesting and learned something. If you face any difficulties, feel free to join our Discord server or comment down below. We would be happy to answer your queries.