From f76d6ea2a94d5be796501f673ead91545dd154b6 Mon Sep 17 00:00:00 2001
From: Earl Warren
Date: Sat, 5 Oct 2024 12:37:54 +0200
Subject: [PATCH 1/6] use and create an NFS-backed PVC

---
 README.md | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/README.md b/README.md
index 66f6724..1e3b162 100644
--- a/README.md
+++ b/README.md
@@ -354,6 +354,10 @@ The IP address ends with the same number as the hardware (hetzner02 => .2).
 
 The vSwitch on VLAN 4000 is for DRBD exclusively
 
+#### vSwitch NFS
+
+The vSwitch on VLAN 4001 is for NFS
+
 #### vSwitch k8s
 
 The vSwitch on VLAN 4002 is for the k8s control plane
@@ -818,6 +822,27 @@ curl -fL https://get.k3s.io | sh -s - server --cluster-init --disable=servicelb
 curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -
 ```
 
+Define the nfs storage class.
+
+```sh
+$ cat nfs.yml
+apiVersion: helm.cattle.io/v1
+kind: HelmChart
+metadata:
+  name: nfs
+  namespace: default
+spec:
+  chart: nfs-subdir-external-provisioner
+  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
+  targetNamespace: default
+  set:
+    nfs.server: 10.53.101.5
+    nfs.path: /precious
+    storageClass.name: nfs
+$ kubectl apply --server-side=true -f nfs.yml
+$ kubectl get storageclass nfs
+```
+
 ### k8s second server node
 
 The token is found on the first node in the `/var/lib/rancher/k3s/server/token` file.
@@ -831,6 +856,63 @@ curl -fL https://get.k3s.io | sh -s - server --token $token --server https://$ma
 curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -
 ```
 
+### k8s NFS storage creation
+
+Create the directory to be used, with the expected ownership (assuming `/precious` is the directory exported via NFS).
+
+```sh
+sudo mkdir /precious/forgejo-data
+sudo chown 1000:1000 /precious/forgejo-data
+```
+
+Define the `forgejo-data` pvc.
+
+```sh
+$ cat pv.yml
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: forgejo-data
+spec:
+  capacity:
+    storage: 20Gi
+  nfs:
+    server: 10.53.101.5
+    path: /precious/forgejo-data
+  accessModes:
+    - ReadWriteMany
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: nfs
+  mountOptions:
+    - noatime
+  volumeMode: Filesystem
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: forgejo-data
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 20Gi
+  volumeName: forgejo-data
+  storageClassName: nfs
+  volumeMode: Filesystem
+$ kubectl apply --server-side=true -f pv.yml
+```
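+
+As a quick sanity check (optional; only names already defined above are
+used), both objects should report a `Bound` status shortly after they
+are created:
+
+```sh
+$ kubectl get pv forgejo-data
+$ kubectl get pvc forgejo-data
+```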
+
+[Instruct the forgejo pod](https://code.forgejo.org/forgejo-helm/forgejo-helm#persistence) to use the `forgejo-data` pvc.
+
+```yaml
+persistence:
+  enabled: true
+  create: false
+  claimName: forgejo-data
+```
+
 ### k8s dedicated etcd node
 
 [dedicated etcd node](https://docs.k3s.io/installation/server-roles#dedicated-etcd-nodes)

From 40513d541ad0725adea8244ade1996130f28edd5 Mon Sep 17 00:00:00 2001
From: Earl Warren
Date: Sun, 6 Oct 2024 13:46:36 +0200
Subject: [PATCH 2/6] nfs: define a root so that nfsv4 is used instead of nfsv3

---
 README.md | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 1e3b162..21fc5a5 100644
--- a/README.md
+++ b/README.md
@@ -758,8 +758,11 @@ lxc-helpers.sh lxc_install_lxc_inside 10.47.3 fc11
 
 ```sh
 sudo apt install nfs-kernel-server nfs-common
-echo /precious 10.53.101.0/255.255.255.0(rw,no_root_squash,subtree_check) | sudo tee -a /etc/exports
-sudo exportfs -a
+cat <<EOF | sudo tee /etc/exports
+/precious 10.53.101.0/255.255.255.0(fsid=0,rw,no_root_squash,subtree_check,sync)
+/precious/k8s 10.53.101.0/255.255.255.0(rw,no_root_squash,subtree_check,sync)
+EOF
+sudo exportfs -va
 ```

From: Earl Warren
Date: Wed, 9 Oct 2024 12:01:37 +0300
Subject: [PATCH 3/6] nfs mounts must not be sync

Using sync is a 10x performance hit and it is reasonable to export
NFS async. Just like with a locally mounted disk there is a risk of
data loss, but since the server still honors explicit sync requests,
the application stays in control when it matters.

An application (a database, git, even forgejo) would be badly
designed if, for instance, it returned success on a write operation
without issuing a sync.
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 21fc5a5..7697219 100644
--- a/README.md
+++ b/README.md
@@ -759,8 +759,8 @@ lxc-helpers.sh lxc_install_lxc_inside 10.47.3 fc11
 
 ```sh
 sudo apt install nfs-kernel-server nfs-common
 cat <<EOF | sudo tee /etc/exports
-/precious 10.53.101.0/255.255.255.0(fsid=0,rw,no_root_squash,subtree_check,sync)
-/precious/k8s 10.53.101.0/255.255.255.0(rw,no_root_squash,subtree_check,sync)
+/precious 10.53.101.0/255.255.255.0(fsid=0,rw,no_root_squash,subtree_check,async)
+/precious/k8s 10.53.101.0/255.255.255.0(rw,no_root_squash,subtree_check,async)
 EOF
 sudo exportfs -va
 ```

From: Earl Warren
Date: Fri, 11 Oct 2024 15:56:41 +0300
Subject: [PATCH 4/6] allow 10.0.0.0/8 in the firewall

---
 README.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/README.md b/README.md
index 7697219..5f63078 100644
--- a/README.md
+++ b/README.md
@@ -70,6 +70,13 @@ https://robot.hetzner.com/server
 to only allow incoming ssh, http, https requests for both IPv4 &
 IPv6. Each server has a "Firewall" tab that provides control over
 this firewall.
 
+The firewall applies to the VLAN too. The 10.0.0.0/8 source address
+must therefore be allowed explicitly for IPv4, and it must be the
+first rule of the firewall.
+
+A firewall template named "k8s" was defined so that the same rules
+can be shared between multiple k8s nodes.
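+
+Once the rule is in place, connectivity over the VLAN can be checked
+from any node, for instance against the NFS server used elsewhere in
+this document (a sketch; 10.53.101.5 and 2049 are the NFS server and
+port, and `nc` may need to be installed first):
+
+```sh
+ping -c 1 10.53.101.5
+nc -zv 10.53.101.5 2049
+```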
+
 ## nftables
 
 ```sh

From 644faf989e8d5f458c4129aff537b5fdf368dd72 Mon Sep 17 00:00:00 2001
From: Earl Warren
Date: Thu, 17 Oct 2024 14:23:19 +0200
Subject: [PATCH 5/6] force nfs version 4

It is equivalent to -t nfs4, except that there is no way to specify
the filesystem type on the PersistentVolume mount, so it has to be
done via the mount options.
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 5f63078..ff96053 100644
--- a/README.md
+++ b/README.md
@@ -896,6 +896,7 @@ spec:
   storageClassName: nfs
   mountOptions:
     - noatime
+    - nfsvers=4.2
   volumeMode: Filesystem
 ---
 apiVersion: v1

From 8f0c9c17b99dace9776d3299fd6de427c0cb633c Mon Sep 17 00:00:00 2001
From: Earl Warren
Date: Thu, 17 Oct 2024 14:55:46 +0200
Subject: [PATCH 6/6] re-order sections for resetting the cluster so that they
 can be applied in order

---
 README.md | 158 +++++++++++++++++++++++++++---------------------------
 1 file changed, 79 insertions(+), 79 deletions(-)

diff --git a/README.md b/README.md
index ff96053..2b0790f 100644
--- a/README.md
+++ b/README.md
@@ -832,27 +832,6 @@ curl -fL https://get.k3s.io | sh -s - server --cluster-init --disable=servicelb
 curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -
 ```
 
-Define the nfs storage class.
-
-```sh
-$ cat nfs.yml
-apiVersion: helm.cattle.io/v1
-kind: HelmChart
-metadata:
-  name: nfs
-  namespace: default
-spec:
-  chart: nfs-subdir-external-provisioner
-  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
-  targetNamespace: default
-  set:
-    nfs.server: 10.53.101.5
-    nfs.path: /k8s
-    storageClass.name: nfs
-$ kubectl apply --server-side=true -f nfs.yml
-$ kubectl get storageclass nfs
-```
-
 ### k8s second server node
 
 The token is found on the first node in the `/var/lib/rancher/k3s/server/token` file.
@@ -866,64 +845,6 @@ curl -fL https://get.k3s.io | sh -s - server --token $token --server https://$ma
 curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -
 ```
 
-### k8s NFS storage creation
-
-Create the directory to be used, with the expected ownership (assuming `/k8s` is the directory exported via NFS).
-
-```sh
-sudo mkdir /precious/k8s/forgejo-data
-sudo chown 1000:1000 /precious/k8s/forgejo-data
-```
-
-Define the `forgejo-data` pvc.
-
-```sh
-$ cat pv.yml
----
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: forgejo-data
-spec:
-  capacity:
-    storage: 20Gi
-  nfs:
-    server: 10.53.101.5
-    path: /k8s/forgejo-data
-  accessModes:
-    - ReadWriteMany
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: nfs
-  mountOptions:
-    - noatime
-    - nfsvers=4.2
-  volumeMode: Filesystem
----
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: forgejo-data
-spec:
-  accessModes:
-    - ReadWriteMany
-  resources:
-    requests:
-      storage: 20Gi
-  volumeName: forgejo-data
-  storageClassName: nfs
-  volumeMode: Filesystem
-$ kubectl apply --server-side=true -f pv.yml
-```
-
-[Instruct the forgejo pod](https://code.forgejo.org/forgejo-helm/forgejo-helm#persistence) to use the `forgejo-data` pvc.
-
-```yaml
-persistence:
-  enabled: true
-  create: false
-  claimName: forgejo-data
-```
-
 ### k8s dedicated etcd node
 
 [dedicated etcd node](https://docs.k3s.io/installation/server-roles#dedicated-etcd-nodes)
@@ -1050,6 +971,85 @@ service:
   port: 2222
 ```
 
+Define the nfs storage class.
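+It is backed by the
+[nfs-subdir-external-provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
+chart, declared as a `HelmChart` resource that k3s reconciles on its
+own, so no manual `helm install` is needed.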
+
+```sh
+$ cat nfs.yml
+apiVersion: helm.cattle.io/v1
+kind: HelmChart
+metadata:
+  name: nfs
+  namespace: default
+spec:
+  chart: nfs-subdir-external-provisioner
+  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
+  targetNamespace: default
+  set:
+    nfs.server: 10.53.101.5
+    nfs.path: /k8s
+    storageClass.name: nfs
+$ kubectl apply --server-side=true -f nfs.yml
+$ sleep 120 ; kubectl get storageclass nfs
+```
+
+### k8s NFS storage creation
+
+Create the directory to be used, with the expected ownership (assuming `/k8s` is the directory exported via NFS).
+
+```sh
+sudo mkdir /precious/k8s/forgejo-data
+sudo chown 1000:1000 /precious/k8s/forgejo-data
+```
+
+Define the `forgejo-data` pvc.
+
+```sh
+$ cat pv.yml
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: forgejo-data
+spec:
+  capacity:
+    storage: 20Gi
+  nfs:
+    server: 10.53.101.5
+    path: /k8s/forgejo-data
+  accessModes:
+    - ReadWriteMany
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: nfs
+  mountOptions:
+    - noatime
+    - nfsvers=4.2
+  volumeMode: Filesystem
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: forgejo-data
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 20Gi
+  volumeName: forgejo-data
+  storageClassName: nfs
+  volumeMode: Filesystem
+$ kubectl apply --server-side=true -f pv.yml
+```
+
+[Instruct the forgejo pod](https://code.forgejo.org/forgejo-helm/forgejo-helm#persistence) to use the `forgejo-data` pvc.
+
+```yaml
+persistence:
+  enabled: true
+  create: false
+  claimName: forgejo-data
+```
+
 ## Uberspace
 
 The website https://forgejo.org is hosted at