
install a Forgejo instance in the k8s cluster

Earl Warren 2024-10-20 11:36:57 +02:00
parent e75adbbb82
commit 2f652df670
3 changed files with 56 additions and 55 deletions


@@ -6,6 +6,7 @@ There is a [dedicated chatroom](https://matrix.to/#/#forgejo-ci:matrix.org). A m
 - Setting up a new [K8S/DRBD/NFS k8s node](k8s.md)
 - Setting up a new [LXC/DRBD Host](lxc.md)
 - Managing services with a [LXC/DRBD/nginx stack](drbd-nginx-lxc.md)
+- Installing a [Forgejo instance in the K8S cluster](k8s-forgejo.md)
 - Installing a [Forgejo runner in an LXC container](runner-lxc.md)
 - Managing the [Octopuce host](octopuce.md)

k8s-forgejo.md Normal file

@@ -0,0 +1,48 @@
## Forgejo
The [forgejo](https://code.forgejo.org/forgejo-helm/forgejo-helm) Helm chart is configured in [ingress](https://code.forgejo.org/forgejo-helm/forgejo-helm#ingress) so that the reverse proxy (`traefik`) routes the domain and the ACME issuer (`cert-manager`) obtains a certificate, and in [service](https://code.forgejo.org/forgejo-helm/forgejo-helm#service) so that the `ssh` port is bound to the desired IPs of the load balancer (`metallb`).
```yaml
ingress:
  enabled: true
  annotations:
    # https://cert-manager.io/docs/usage/ingress/#supported-annotations
    # https://github.com/cert-manager/cert-manager/issues/2239
    cert-manager.io/cluster-issuer: letsencrypt-http
    cert-manager.io/private-key-algorithm: ECDSA
    cert-manager.io/private-key-size: "384"
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  tls:
    - hosts:
        - t1.forgejo.org
      secretName: tls-forgejo-t1-ingress-http
  hosts:
    - host: t1.forgejo.org
      paths:
        - path: /
          pathType: Prefix

service:
  http:
    type: ClusterIP
    ipFamilyPolicy: PreferDualStack
    port: 3000
  ssh:
    type: LoadBalancer
    annotations:
      metallb.universe.tf/loadBalancerIPs: 188.40.16.47,2a01:4f8:fff2:48::2
      metallb.universe.tf/allow-shared-ip: "key-to-share-failover"
    ipFamilyPolicy: PreferDualStack
    port: 2222
```
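With these values saved in a local file (`values.yml` here, a name chosen for illustration), deploying the chart could look like the following sketch; the OCI registry path and the `forgejo` namespace are assumptions to check against the [forgejo-helm](https://code.forgejo.org/forgejo-helm/forgejo-helm) README.
```bash
# Sketch: install or upgrade the forgejo-helm chart with the values above.
# The OCI registry path and the forgejo namespace are assumptions.
helm upgrade --install forgejo \
  oci://code.forgejo.org/forgejo-helm/forgejo \
  --namespace forgejo --create-namespace \
  -f values.yml
```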
[Instruct the forgejo pod](https://code.forgejo.org/forgejo-helm/forgejo-helm#persistence) to use the `forgejo-data` pvc.
```yaml
persistence:
  enabled: true
  create: false
  claimName: forgejo-data
```
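A quick check that the claim is bound and actually referenced by the pod (a sketch; the `app.kubernetes.io/name=forgejo` label is assumed to follow the chart's standard labels):
```bash
# Sketch: the PVC should be Bound and referenced by the Forgejo pod.
kubectl get pvc forgejo-data
kubectl get pods -l app.kubernetes.io/name=forgejo \
  -o jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}'
```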

k8s.md

@@ -98,46 +98,7 @@ For the first node `./setup.sh setup_k8s`. For nodes joining the cluster `./setu
 - NFS storage class
 `./setup.sh setup_k8s_nfs`
-## Forgejo
-[forgejo](https://code.forgejo.org/forgejo-helm/forgejo-helm) configuration in [ingress](https://code.forgejo.org/forgejo-helm/forgejo-helm#ingress) for the reverse proxy (`traefik`) to route the domain and for the ACME issuer (`cert-manager`) to obtain a certificate. And in [service](https://code.forgejo.org/forgejo-helm/forgejo-helm#service) for the `ssh` port to be bound to the desired IPs of the load balancer (`metallb`).
-```
-ingress:
-  enabled: true
-  annotations:
-    # https://cert-manager.io/docs/usage/ingress/#supported-annotations
-    # https://github.com/cert-manager/cert-manager/issues/2239
-    cert-manager.io/cluster-issuer: letsencrypt-http
-    cert-manager.io/private-key-algorithm: ECDSA
-    cert-manager.io/private-key-size: "384"
-    kubernetes.io/ingress.class: traefik
-    traefik.ingress.kubernetes.io/router.entrypoints: websecure
-  tls:
-    - hosts:
-        - t1.forgejo.org
-      secretName: tls-forgejo-t1-ingress-http
-  hosts:
-    - host: t1.forgejo.org
-      paths:
-        - path: /
-          pathType: Prefix
-service:
-  http:
-    type: ClusterIP
-    ipFamilyPolicy: PreferDualStack
-    port: 3000
-  ssh:
-    type: LoadBalancer
-    annotations:
-      metallb.universe.tf/loadBalancerIPs: 188.40.16.47,2a01:4f8:fff2:48::2
-      metallb.universe.tf/allow-shared-ip: "key-to-share-failover"
-    ipFamilyPolicy: PreferDualStack
-    port: 2222
-```
-# K8S NFS storage creation
+## K8S NFS storage creation
 Define the 20GB `forgejo-data` pvc owned by user id 1000.
@@ -145,35 +106,26 @@ Define the 20GB `forgejo-data` pvc owned by user id 1000.
 ./setup.sh setup_k8s_pvc forgejo-data 20Gi 1000
 ```
-[Instruct the forgejo pod](https://code.forgejo.org/forgejo-helm/forgejo-helm#persistence) to use the `forgejo-data` pvc.
-```yaml
-persistence:
-  enabled: true
-  create: false
-  claimName: forgejo-data
-```
-Disaster recovery and maintenance
-# When a machine or disk is scheduled for replacement.
+# Disaster recovery and maintenance
+## When a machine or disk is scheduled for replacement.
 * `kubectl drain hetzner05` # evacuate all the pods out of the node to be shut down
 * `kubectl taint nodes hetzner05 key1=value1:NoSchedule` # prevent any pod from being created there (metallb speaker won't be drained, for instance)
 * `kubectl delete node hetzner05` # let the cluster know it no longer exists so a new one by the same name can replace it
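As a sanity check that the drain completed, a sketch using the node name from the example above:
```bash
# Sketch: list what is still scheduled on the drained node.
# DaemonSet pods (e.g. the metallb speaker) are expected to remain.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=hetzner05
```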
-# Routing the failover IP
+## Routing the failover IP
 When the machine to which the failover IP (failover.forgejo.org) is routed is unavailable or about to be shut down, go to the [Hetzner server panel](https://robot.hetzner.com/server), open the IPs tab, and change the route of the failover IP to another node. All nodes are configured with the failover IP; there is nothing else to do.
-# Manual boot operations
-## On the machine that runs the NFS server
+## Manual boot operations
+### On the machine that runs the NFS server
 * `sudo drbdadm primary r1` # Switch the DRBD to primary
 * `sudo mount /precious` # DRBD volume shared via NFS
 * `sudo ip addr add 10.53.101.100/24 dev enp5s0.4001` # add NFS server IP
-## On the other machines
+### On the other machines
 * `sudo ip addr del 10.53.101.100/24 dev enp5s0.4001` # remove NFS server IP
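To verify the takeover, a sketch assuming the DRBD resource, mount point, and IP used above (`showmount` ships with the NFS client utilities):
```bash
# Sketch: confirm DRBD is primary, the volume is mounted,
# and the NFS server answers on the shared IP.
sudo drbdadm status r1
mount | grep /precious
showmount -e 10.53.101.100
```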