
Deploy NFS StorageClass for Kubernetes

peterlee · Aug 27, 2019

This post shows how to deploy an NFS-backed StorageClass for Kubernetes.

See the previous post: How to start a Kubernetes cluster with kubeadm.

The private Kubernetes cluster from that post cannot store any data or state persistently: a pod may be re-created on a different node, and whatever it wrote locally is lost. I chose NFS as the storage option, so the first step is to set up an NFS server.

Install an NFS server on CentOS

This is another Linux machine on the same local network, with IP 192.168.2.104.

$ sudo yum install nfs-utils
$ sudo mkdir -p /opt/nfs

# export /opt/nfs to 192.168.2.1/24 with read/write access
# add this line to /etc/exports
/opt/nfs 192.168.2.1/24(rw,sync,no_root_squash,no_all_squash)

$ sudo systemctl restart nfs-server
$ sudo systemctl enable nfs-server
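If firewalld is running on the NFS host, the NFS-related services also have to be allowed through, otherwise clients cannot reach the server. A minimal sketch, assuming the stock firewalld service definitions:

$ sudo firewall-cmd --permanent --add-service=nfs
$ sudo firewall-cmd --permanent --add-service=rpc-bind
$ sudo firewall-cmd --permanent --add-service=mountd
$ sudo firewall-cmd --reload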

Test that the NFS server is ready:

$ showmount -e 192.168.2.104
Export list for 192.168.2.104:
/opt/nfs 192.168.2.1/24

Warning: this exports the share read/write, with no_root_squash, to every host on 192.168.2.1/24, so any machine on the internal network can access it.
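Every Kubernetes node that will mount the share also needs the NFS client utilities, and it is worth checking the export by hand before involving Kubernetes. A quick sanity check from any worker node (/mnt is just a temporary test mount point):

# on each Kubernetes node
$ sudo yum install nfs-utils

# temporary test mount from a worker node
$ sudo mount -t nfs 192.168.2.104:/opt/nfs /mnt
$ touch /mnt/hello && ls /mnt
$ sudo umount /mnt

With the export reachable from the nodes, the next step is to deploy the nfs-client-provisioner and two StorageClasses backed by it: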

# nfs-storage.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage
            - name: NFS_SERVER
              value: 192.168.2.104
            - name: NFS_PATH
              value: /opt/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.104
            path: /opt/nfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: nfs-storage
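Apply the manifest and check that the provisioner pod starts (the label below comes from the Deployment above; the provisioner cannot actually create volumes until the RBAC rules later in this post are applied):

$ kubectl apply -f nfs-storage.yml
$ kubectl get pods -l app=nfs-client-provisioner
$ kubectl get storageclass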

You also need to mark one StorageClass as the default:

$  kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

See the Kubernetes docs: Changing the default StorageClass.
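After patching, the default class is flagged in kubectl output. It should look roughly like this (class names from the manifest above; the exact columns vary by kubectl version):

$ kubectl get storageclass
NAME                  PROVISIONER   AGE
default (default)     nfs-storage   5m
managed-nfs-storage   nfs-storage   5m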

You also need to set up RBAC for nfs-client-provisioner; clusters created with kubeadm enable RBAC by default.

# nfs-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
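
Finally, apply the RBAC manifest and verify dynamic provisioning end to end with a small test claim (the claim name and size below are arbitrary):

$ kubectl apply -f nfs-rbac.yaml

# test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

$ kubectl apply -f test-claim.yaml
$ kubectl get pvc test-claim

Once the claim shows STATUS Bound, the provisioner has created a PersistentVolume and a matching subdirectory under /opt/nfs on the NFS server.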