The mentioned error can be caused for multiple reasons; below are a few of the cases I encountered.

The persistentvolume-controller failed to find a PV whose capacity is equal to or higher than the value specified in the PVC. If PV capacity >= PVC capacity, the PVC should be bound to the PV. If not, we'll get the "unbound immediate PersistentVolumeClaims" error at the pod level, and "no volume plugin matched name" when describing the PVC.

For example, if only one PV is created (or the others were deleted):

```
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                      STORAGECLASS    REASON   AGE
mongo-local-pv   50Gi       RWO            Retain           Bound    default/mongo-persistent-storage-mongo-0   local-storage            106m
```

We can see that some workloads (Pods or StatefulSets) will be stuck on Pending:

```
$ kubectl get pvc
NAME                               STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    AGE
mongo-persistent-storage-mongo-0   Bound     mongo-local-pv   50Gi       RWO            local-storage   80m
mongo-persistent-storage-mongo-1   Pending                                              local-storage   45m
```

If the scheduler fails to match a node to the PV, we'll get the mentioned error on the pending resources. When using local volumes, the nodeAffinity of the PV is required and its value should be the name of an existing node in the cluster:

```
apiVersion: v1
...
node-which-doesnt-exists # <- Will lead to the error
```

Another cause: old PVs with the same name but a different configuration already exist on the cluster, and the new PVC is bound according to them. When working with local volumes, the administrator must manually clean up and set up the local volume again each time before reuse (*).

(*) The local static provisioner was created to help with this part of the PV lifecycle.
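A full local PV manifest along these lines might look like the following sketch. The path, capacity, and storage class name are illustrative, and the node name deliberately reproduces the failure mode described in this post — the value under `values` must be the name of a node that actually exists in the cluster:

```yaml
# Sketch of a local PersistentVolume; path and sizes are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mongo   # hypothetical path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-which-doesnt-exists # <- Will lead to the error
```

Replacing the bogus hostname with a real node name (as listed by `kubectl get nodes`) lets the scheduler match the PV to a node.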
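As a toy illustration of the capacity rule above — a PVC can only bind to a PV whose capacity is equal to or higher than the request — the predicate can be sketched as below. This is not the persistentvolume-controller's actual matching code; the quantity parser only handles the binary suffixes used in this post's examples.

```python
# Sketch of the PV/PVC capacity predicate. Not the real controller logic;
# parse_quantity handles only a small subset of Kubernetes quantity syntax.

UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_quantity(q: str) -> int:
    """Parse a simple Kubernetes quantity (e.g. '50Gi') into bytes."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count, no suffix

def capacity_satisfies(pv_capacity: str, pvc_request: str) -> bool:
    """True if a PV of pv_capacity can satisfy a PVC requesting pvc_request."""
    return parse_quantity(pv_capacity) >= parse_quantity(pvc_request)

print(capacity_satisfies("50Gi", "50Gi"))   # True  -> PVC can bind
print(capacity_satisfies("50Gi", "100Gi"))  # False -> PVC stays Pending
```

If no existing PV satisfies the predicate, the claim stays Pending, which matches the `kubectl get pvc` output shown earlier.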