Compare commits


60 commits

Author SHA1 Message Date
d58209ef93
Add reverseproxy for alerts.pub.solar 2024-04-27 01:37:03 +02:00
a98cfc82e5
Autoformat dns.tf 2024-04-27 00:26:52 +02:00
a66c6ada59
Add dns entry 2024-04-27 00:26:14 +02:00
8e66bea9c8
Add alertmanager config 2024-04-27 00:12:32 +02:00
505d0f34ea
Merge pull request 'nachtigall: synapse security update' (#153) from chore/synapse-security-update into main
Reviewed-on: pub-solar/infra#153
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-26 20:48:19 +00:00
ddc5c65bf7
chore: bump flake inputs
• Updated input 'home-manager':
    'github:nix-community/home-manager/d6bb9f934f2870e5cbc5b94c79e9db22246141ff?narHash=sha256-dA82pOMQNnCJMAsPG7AXG35VmCSMZsJHTFlTHizpKWQ%3D' (2024-04-06)
  → 'github:nix-community/home-manager/86853e31dc1b62c6eeed11c667e8cdd0285d4411?narHash=sha256-Xn2r0Jv95TswvPlvamCC46wwNo8ALjRCMBJbGykdhcM%3D' (2024-04-25)
• Updated input 'nix-darwin':
    'github:lnl7/nix-darwin/9e7c20ffd056e406ddd0276ee9d89f09c5e5f4ed?narHash=sha256-olEWxacm1xZhAtpq%2BZkEyQgR4zgfE7ddpNtZNvubi3g%3D' (2024-04-19)
  → 'github:lnl7/nix-darwin/230a197063de9287128e2c68a7a4b0cd7d0b50a7?narHash=sha256-lc75rgRQLdp4Dzogv5cfqOg6qYc5Rp83oedF2t0kDp8%3D' (2024-04-24)
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/bc194f70731cc5d2b046a6c1b3b15f170f05999c?narHash=sha256-YguPZpiejgzLEcO36/SZULjJQ55iWcjAmf3lYiyV1Fo%3D' (2024-04-19)
  → 'github:nixos/nixpkgs/dd37924974b9202f8226ed5d74a252a9785aedf8?narHash=sha256-fFE3M0vCoiSwCX02z8VF58jXFRj9enYUSTqjyHAjrds%3D' (2024-04-24)
• Updated input 'unstable':
    'github:nixos/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv%2Bl0FhvfDMiyCmhoRbNB%2B0SeInZkbk%3D' (2024-04-19)
  → 'github:nixos/nixpkgs/572af610f6151fd41c212f897c71f7056e3fb518?narHash=sha256-cfh1hi%2B6muQMbi9acOlju3V1gl8BEaZBXBR9jQfQi4U%3D' (2024-04-23)
2024-04-25 19:21:05 +02:00
a11255b433
matrix-appservice-irc: remove unneeded syscall override
PR was merged and backported:
https://github.com/NixOS/nixpkgs/pull/271740
2024-04-25 12:37:58 +02:00
d62b6cda92
Merge pull request 'ci: update forgejo runner to fix cache' (#152) from ci/update-forgejo-runner into main
Reviewed-on: pub-solar/infra#152
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-23 18:18:39 +00:00
c580fe0fbb
ci: prevent flake inputs from GC as well 2024-04-23 19:10:20 +02:00
60aef1d038
ci: prevent nix garbage collection 2024-04-23 16:00:16 +02:00
fa9ce9d435
gitea-actions-runner: don't run as systemd DynamicUser
to enable usage of cache outside of /var/lib/private
2024-04-23 15:42:33 +02:00
9541e5029e
flora-6: move forgejo-runner cache directory to /data 2024-04-23 15:12:11 +02:00
c4d0d34807
ci: revert cache-nix-action to version 4.0.3 2024-04-23 15:12:06 +02:00
d5fe65b60d
ci: disable cachix daemon, spams logs with
[2024-04-22 23:46:26][Info] Skipping /nix/store/w2zp8k8yy2avv5r92w0cpq9aixkir2sp-LocalSettings.php
...
2024-04-23 15:11:59 +02:00
0e7dc95250
ci: remove broken purge config from check workflow 2024-04-23 01:42:04 +02:00
c86e22b292
ci: update forgejo-runner to version 3.4.1
https://github.com/NixOS/nixpkgs/pull/301383
2024-04-23 00:38:53 +02:00
4992819742
Merge pull request 'set pruneOpts for restic backups to daily 7, weekly 4, monthly 3' (#151) from feature/restic-backup-retention into main
Reviewed-on: pub-solar/infra#151
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
Reviewed-by: teutat3s <teutat3s@noreply.git.pub.solar>
2024-04-22 19:38:21 +00:00
a9411d05a8
set pruneOpts for restic backups to daily 7, weekly 4, monthly 3 2024-04-22 20:06:49 +02:00
e8530caf1d
Merge pull request 'ci: update nix-quick-install-action, cache-nix-action, cachix-action' (#150) from chore-update-ci into main
Reviewed-on: pub-solar/infra#150
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-22 15:19:36 +00:00
7c492e7391
Merge pull request 'chore: forgejo security update, update matrix-synapse et al.' (#149) from chore-update-flake into main
Reviewed-on: pub-solar/infra#149
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-22 15:18:33 +00:00
a0c6f0dc08
ci: fix cache-nix-action, use new config syntax 2024-04-21 20:17:03 +02:00
46c7c9ecb1
ci: update nix-quick-install-action, cache-nix-action,
cachix-action
2024-04-21 19:58:58 +02:00
fb4004e9f0
chore: update flake inputs
• Updated input 'nix-darwin':
    'github:lnl7/nix-darwin/36524adc31566655f2f4d55ad6b875fb5c1a4083?narHash=sha256-sXcesZWKXFlEQ8oyGHnfk4xc9f2Ip0X/%2BYZOq3sKviI%3D' (2024-03-30)
  → 'github:lnl7/nix-darwin/9e7c20ffd056e406ddd0276ee9d89f09c5e5f4ed?narHash=sha256-olEWxacm1xZhAtpq%2BZkEyQgR4zgfE7ddpNtZNvubi3g%3D' (2024-04-19)
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/90055d5e616bd943795d38808c94dbf0dd35abe8?narHash=sha256-ZEfGB3YCBVggvk0BQIqVY7J8XF/9jxQ68fCca6nib%2B8%3D' (2024-04-13)
  → 'github:nixos/nixpkgs/bc194f70731cc5d2b046a6c1b3b15f170f05999c?narHash=sha256-YguPZpiejgzLEcO36/SZULjJQ55iWcjAmf3lYiyV1Fo%3D' (2024-04-19)
• Updated input 'unstable':
    'github:nixos/nixpkgs/cfd6b5fc90b15709b780a5a1619695a88505a176?narHash=sha256-WKm9CvgCldeIVvRz87iOMi8CFVB1apJlkUT4GGvA0iM%3D' (2024-04-12)
  → 'github:nixos/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv%2Bl0FhvfDMiyCmhoRbNB%2B0SeInZkbk%3D' (2024-04-19)
2024-04-21 19:28:02 +02:00
3030b0f84d
Merge pull request 'flora-6: add wg-ssh to ignored systemd-wait-online interfaces' (#148) from flora-6/fix-network-wait-online into main
Reviewed-on: pub-solar/infra#148
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-14 21:53:33 +00:00
c07d24f6a7
flora-6: add wg-ssh to ignored interfaces
for systemd-wait-online to start successfully
2024-04-14 23:22:53 +02:00
0f297c4711
Merge pull request 'chore: security update PHP, update element-web, misc updates' (#147) from chore-update-flake into main
Reviewed-on: pub-solar/infra#147
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-14 20:29:39 +00:00
679d9b236f
Merge pull request 'nginx: set worker_processes to number of CPU cores' (#146) from feat/nginx-tuning into main
Reviewed-on: pub-solar/infra#146
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-14 20:22:08 +00:00
78d5e5a4f0
chore: update flake inputs
❯ nix store diff-closures $OLD_CLOSURE $NEW_CLOSURE
cpupower: 6.1.84 → 6.1.86
element-web: 1.11.63 → 1.11.64, +148.0 KiB
element-web-wrapped: 1.11.63 → 1.11.64
initrd-linux: 6.1.84 → 6.1.86
linux: 6.1.84, 6.1.84-modules → 6.1.86, 6.1.86-modules, +24.3 KiB
linux-firmware: 20240312 → 20240410, +493.3 KiB
nixos-system-nachtigall: 23.11.20240410.b2cf36f → 23.11.20240413.90055d5
owncast: 0.1.2 → 0.1.3, -376.1 KiB
php: 8.2.17 → 8.2.18
php-bcmath: 8.2.17 → 8.2.18
php-bz2: 8.2.17 → 8.2.18
php-calendar: 8.2.17 → 8.2.18
php-ctype: 8.2.17 → 8.2.18
php-curl: 8.2.17 → 8.2.18
php-dom: 8.2.17 → 8.2.18
php-exif: 8.2.17 → 8.2.18
php-extra-init: 8.2.17.ini → 8.2.18.ini
php-fileinfo: 8.2.17 → 8.2.18
php-filter: 8.2.17 → 8.2.18
php-ftp: 8.2.17 → 8.2.18
php-gd: 8.2.17 → 8.2.18
php-gettext: 8.2.17 → 8.2.18
php-gmp: 8.2.17 → 8.2.18
php-iconv: 8.2.17 → 8.2.18
php-imap: 8.2.17 → 8.2.18
php-intl: 8.2.17 → 8.2.18
php-ldap: 8.2.17 → 8.2.18
php-mbstring: 8.2.17 → 8.2.18
php-mysqli: 8.2.17 → 8.2.18
php-mysqlnd: 8.2.17 → 8.2.18
php-opcache: 8.2.17 → 8.2.18
php-openssl: 8.2.17 → 8.2.18
php-pcntl: 8.2.17 → 8.2.18
php-pdo: 8.2.17 → 8.2.18
php-pdo_mysql: 8.2.17 → 8.2.18
php-pdo_odbc: 8.2.17 → 8.2.18
php-pdo_pgsql: 8.2.17 → 8.2.18
php-pdo_sqlite: 8.2.17 → 8.2.18
php-pgsql: 8.2.17 → 8.2.18
php-posix: 8.2.17 → 8.2.18
php-readline: 8.2.17 → 8.2.18
php-session: 8.2.17 → 8.2.18
php-simplexml: 8.2.17 → 8.2.18
php-soap: 8.2.17 → 8.2.18
php-sockets: 8.2.17 → 8.2.18
php-sodium: 8.2.17 → 8.2.18
php-sqlite3: 8.2.17 → 8.2.18
php-sysvsem: 8.2.17 → 8.2.18
php-tokenizer: 8.2.17 → 8.2.18
php-with-extensions: 8.2.17 → 8.2.18
php-xmlreader: 8.2.17 → 8.2.18
php-xmlwriter: 8.2.17 → 8.2.18
php-zip: 8.2.17 → 8.2.18
php-zlib: 8.2.17 → 8.2.18
searxng: ∅ → 0-unstable-2024-03-08, +15337.5 KiB
searxng-unstable: 2023-10-31 → ∅, -14965.6 KiB
source: +470.3 KiB
uwsgi: 2.0.23 → 2.0.24
zfs-kernel: 2.2.3-6.1.84 → 2.2.3-6.1.86
2024-04-14 22:09:37 +02:00
c768203bed
nginx: set worker_processes to number of CPU cores
and set worker_connections to 1024

https://nginx.org/en/docs/ngx_core_module.html#worker_processes
https://nginx.org/en/docs/ngx_core_module.html#worker_connections
2024-04-14 17:39:56 +02:00
b0c466869e
Merge pull request 'wireguard: use IP addresses for wireguard endpoints' (#145) from fix/use-ip-for-wireguard into main
Reviewed-on: pub-solar/infra#145
Reviewed-by: teutat3s <teutat3s@noreply.git.pub.solar>
2024-04-12 20:40:39 +00:00
b6a54efd9a
fix: add comment with hostnames to wireguard peers 2024-04-12 22:36:17 +02:00
7e145040cc
wireguard: use IP addresses for wireguard endpoints
Otherwise the hostnames written to the /etc/hosts file are already
pointing at the wireguard IP-addresses, so they can never connect.
2024-04-12 22:31:28 +02:00
9d94b888ae
Merge pull request 'networking: add wireguard hosts to /etc/hosts' (#144) from wireguard/add-etc-hosts into main
Reviewed-on: pub-solar/infra#144
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-12 19:54:09 +00:00
8a9fe3b8fe
chore: update flake inputs
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/d272ca50d1f7424fbfcd1e6f1c9e01d92f6da167' (2024-04-08)
  → 'github:nixos/nixpkgs/b2cf36f43f9ef2ded5711b30b1f393ac423d8f72' (2024-04-10)
• Updated input 'unstable':
    'github:nixos/nixpkgs/4cba8b53da471aea2ab2b0c1f30a81e7c451f4b6' (2024-04-08)
  → 'github:nixos/nixpkgs/1042fd8b148a9105f3c0aca3a6177fd1d9360ba5' (2024-04-10)
2024-04-12 19:54:09 +00:00
8743ea7b0c
networking: add wireguard hosts to /etc/hosts
Also re-enable DNSSEC, it's reported fixed in systemd-resolved
2024-04-12 19:54:09 +00:00
8743b50f7f
Merge pull request 'forgejo: also reroute ssh traffic for ipv6' (#139) from forgejo/reroute-ssh-ipv6 into main
Reviewed-on: pub-solar/infra#139
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-12 19:38:15 +00:00
316ba9ef53
forgejo: also reroute ssh traffic for ipv6 2024-04-12 19:38:15 +00:00
afca75441c
Merge pull request 'forgejo: enable repo search (indexer), save login cookie for 365 days' (#142) from feat/forgejo-enable-search into main
Reviewed-on: pub-solar/infra#142
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-06 16:07:42 +00:00
9698c47530
Merge pull request 'mastodon: clean media older than 7 days' (#143) from mastodon/auto-clean-7-days into main
Reviewed-on: pub-solar/infra#143
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-06 16:07:34 +00:00
ccb029dde3
Merge pull request 'wireguard: add ryzensun to teutat3s' hosts' (#141) from wireguard/add-ryzensun-host into main
Reviewed-on: pub-solar/infra#141
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-06 16:07:21 +00:00
41e4d3427c
mastodon: clean media older than 7 days
Currently we keep everything for 30 days, which is about 180GB
2024-04-05 23:50:04 +02:00
16e9d476cb
Merge pull request 'docs: include notes regarding rollback in deploy docs, misc updates' (#140) from docs/update-deployment-docs into main
Reviewed-on: pub-solar/infra#140
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-05 21:39:46 +00:00
3caf085d0b
wireguard: add ryzensun to teutat3s' hosts 2024-04-05 23:32:59 +02:00
c5159dd66d
forgejo: enable repo search (indexer), save login
cookie for 365 days instead of default 7 days.
Caveat for the repo indexer is that repository size on disk will grow
by factor of 6. Forgejo repositories currently use 4.7GB on disk, with
3.3GB being a nixpkgs fork.
2024-04-05 23:29:49 +02:00
b27f8c1380
docs: include notes regarding rollback in deploy
docs, misc updates
2024-04-05 23:03:43 +02:00
76ca43142a
Merge pull request 'forgejo: make SSH keys declarative' (#138) from forgejo/ssh-keys-declarative into main
Reviewed-on: pub-solar/infra#138
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-05 19:35:55 +00:00
16c6aa3b61
forgejo: make SSH keys declarative 2024-04-05 19:35:55 +00:00
315cbf5813
Merge pull request 'fix(nextcloud): define a maintenance window' (#135) from chore/nextcloud-config-maintenance-window into main
Reviewed-on: pub-solar/infra#135
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-05 18:41:17 +00:00
9191729f5c
Merge pull request 'nachtigall: forgejo: update firewall settings' (#137) from fix/git-forgejo-open-service-port-in-firewall into main
Reviewed-on: pub-solar/infra#137
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-05 16:51:36 +00:00
b6b8d69852
nachtigall: forgejo: update firewall settings 2024-04-05 18:39:43 +02:00
4380c3b0ab
Merge pull request 'forgejo: use iptables routing instead of ssh patch' (#136) from fix/forgejo-ssh-again into main
Reviewed-on: pub-solar/infra#136
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-05 15:26:10 +00:00
e618b9f9c2
forgejo: use iptables routing instead of ssh patch 2024-04-05 17:00:28 +02:00
ae0c90e4f8
Merge pull request 'forgejo: allow multiple host addresses for SSH' (#133) from fix/forgejo-multi-host into main
Reviewed-on: pub-solar/infra#133
Reviewed-by: teutat3s <teutat3s@noreply.git.pub.solar>
2024-04-05 14:27:03 +00:00
d7c9333ff4
forgejo: allow multiple host addresses for SSH 2024-04-05 14:26:56 +00:00
18a62b8d35
fix(nextcloud): define a maintenance window for
resource intensive background jobs. Docs:
https://docs.nextcloud.com/server/28/admin_manual/configuration_server/background_jobs_configuration.html

> A value of 1 e.g. will only run these background jobs between 01:00am
UTC and 05:00am UTC
2024-04-05 16:23:16 +02:00
9ec77e2a30
Update flake.nix (#134)
Update deploy node settings with wireguard ips

Reviewed-on: pub-solar/infra#134
Reviewed-by: Akshay Mankar <axeman@noreply.git.pub.solar>
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
2024-04-05 14:11:42 +00:00
1bcb8bb7e0
Merge pull request 'admins: Add axeman's wireguard device' (#132) from axeman-wireguard into main
Reviewed-on: pub-solar/infra#132
Reviewed-by: b12f <b12f@noreply.git.pub.solar>
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-05 13:41:43 +00:00
cf1e6f8134
admins: Add axeman's wireguard device 2024-04-05 15:41:21 +02:00
83e293016f
Merge pull request 'docs: explain admin access and secrets' (#130) from docs/admin-access into main
Reviewed-on: pub-solar/infra#130
Reviewed-by: Hendrik Sokolowski <hensoko@noreply.git.pub.solar>
2024-04-05 12:56:51 +00:00
91a2b66134
docs: explain admin access and secrets 2024-04-05 12:56:51 +00:00
29 changed files with 668 additions and 117 deletions


@@ -10,7 +10,7 @@ jobs:
- name: Check out repository code
uses: https://code.forgejo.org/actions/checkout@v4
- uses: https://github.com/nixbuild/nix-quick-install-action@v26
- uses: https://github.com/nixbuild/nix-quick-install-action@v27
with:
load_nixConfig: false
nix_conf: |
@@ -24,7 +24,7 @@ jobs:
echo "hash=$(md5sum flake.lock | awk '{print $1}')" >> $GITHUB_OUTPUT
- name: Restore and cache Nix store
uses: https://github.com/nix-community/cache-nix-action@v4
uses: https://github.com/nix-community/cache-nix-action@v4.0.3
id: nix-store-cache
with:
key: cache-${{ runner.os }}-nix-store-${{ steps.flake-lock-hash.outputs.hash }}
@@ -35,16 +35,37 @@ jobs:
gc-max-store-size-linux: 10000000000
purge-caches: true
purge-keys: cache-${{ runner.os }}-nix-store-
purge-key: cache-${{ runner.os }}-nix-store-
purge-created: true
purge-created-max-age: 42
- name: Prepare cachix
uses: https://github.com/cachix/cachix-action@v12
uses: https://github.com/cachix/cachix-action@v14
with:
name: pub-solar
authToken: '${{ secrets.CACHIX_AUTH_TOKEN }}'
useDaemon: false
- name: Run flake checks
run: |
# Prevent cache garbage collection by creating GC roots
for target in $(nix flake show --json --all-systems | jq '
.["nixosConfigurations"] |
to_entries[] |
.key
' | tr -d '"'
); do
nix --print-build-logs --verbose --accept-flake-config --access-tokens '' \
build --out-link ./result-$target ".#nixosConfigurations.${target}.config.system.build.toplevel"
done
nix --print-build-logs --verbose --accept-flake-config --access-tokens '' flake check
# Add GC roots for flake inputs, too
# https://github.com/NixOS/nix/issues/4250#issuecomment-1146878407
mkdir --parents "$NIX_USER_PROFILE_DIR"
gc_root_prefix="$NIX_USER_PROFILE_DIR"/infra-flake-
echo "Adding gcroots flake inputs with prefix $gc_root_prefix ..."
nix flake archive --json 2>/dev/null | jq --raw-output '.inputs | to_entries[] | "ln --force --symbolic --no-target-directory "+.value.path+" \"'"$gc_root_prefix"'"+.key+"\""' | while read -r line; do
eval "$line"
done


@@ -0,0 +1,37 @@
# Administrative access
People with admin access to the infrastructure are added to [`logins/admins.nix`](../logins/admins.nix). This is an attrset with the following structure:
```
{
<username> = {
sshPubKeys = {
<name> = <pubkey-string>;
};
wireguardDevices = [
{
publicKey = <pubkey-string>;
allowedIPs = [ "10.7.6.<ip-address>/32" "fd00:fae:fae:fae:fae:<ip-address>::/96" ];
}
];
secretEncryptionKeys = {
<name> = <encryption-key-string>;
};
};
}
```
# SSH Access
SSH is not reachable from the open internet. Instead, SSH port 22 is protected by a wireguard VPN network. Thus, to get root access on the servers, at least two pieces of information have to be added to the admins config:
1. **SSH public key**: self-explanatory. Add your public key to your user attrset under `sshPubKeys`.
2. **Wireguard device**: each wireguard device has two parts: the public key and the IP addresses it should have in the wireguard network. The pub.solar wireguard network is spaced under `10.7.6.0/24` and `fd00:fae:fae:fae:fae::/80`. To add your device, it's best to choose a free number between 200 and 255 and use that in both the ipv4 and ipv6 ranges: `10.7.6.<ip-address>/32` `fd00:fae:fae:fae:fae:<ip-address>::/96`. For more information on how to generate keypairs, see [the NixOS Wireguard docs](https://nixos.wiki/wiki/WireGuard#Generate_keypair).
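Putting both pieces together with the encryption keys, a complete entry following the structure above could look like this. This is a hypothetical sketch: the username `example-user`, all key strings, and the chosen number `207` are placeholders, not real values from this repository:
```
{
  # Hypothetical admin entry; all key material below is a placeholder.
  example-user = {
    sshPubKeys = {
      laptop = "ssh-ed25519 AAAAC3...example laptop";
    };
    wireguardDevices = [
      {
        publicKey = "wGexamplePublicKey...=";
        # Free number chosen from the 200-255 range: 207
        allowedIPs = [ "10.7.6.207/32" "fd00:fae:fae:fae:fae:207::/96" ];
      }
    ];
    secretEncryptionKeys = {
      laptop = "age1examplekey...";
    };
  };
}
```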
# Secret encryption
Deployment secrets are added to the repository in encrypted files. To be able to work with these encrypted files, your public key(s) will have to be added to your user attrset under `secretEncryptionKeys`.
See also the docs on [working with secrets](./secrets.md).


@@ -1,20 +1,32 @@
# Deploying new versions
We use [deploy-rs](https://github.com/serokell/deploy-rs) to deploy changes. Currently this process is not automated, so configuration changes will have to be manually deployed.
We use [deploy-rs](https://github.com/serokell/deploy-rs) to deploy changes.
Currently this process is not automated, so configuration changes will have to
be manually deployed.
To deploy, make sure you have a [working development shell](./development-shell.md). Then, run `deploy-rs` with the hostname of the server you want to deploy:
To deploy, make sure you have a [working development shell](./development-shell.md).
Then, run `deploy-rs` with the hostname of the server you want to deploy:
For nachtigall.pub.solar:
```
deploy '.#nachtigall'
deploy --targets '.#nachtigall' --magic-rollback false --auto-rollback false
```
For flora-6.pub.solar:
```
deploy '.#flora-6'
deploy --targets '.#flora-6' --magic-rollback false --auto-rollback false
```
You'll need to have SSH Access to the boxes to be able to do this.
Usually we skip all rollback functionality, but if you want to deploy a change
that might lock you out, e.g. to SSH, it might make sense to set these to `true`.
### SSH access
Ensure your SSH public key is in place [here](./public-keys/admins.nix) and was deployed by someone with access.
To skip flake checks, e.g. because you already ran them manually before
deployment, add the flag `--skip-checks` at the end of the command.
`--dry-activate` can be used to only put all files in place without switching,
to enable switching to the new config quickly at a later moment.
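As a sketch, the flags described above can be combined in one cautious invocation (target name as in the examples above; whether your deploy-rs version accepts boolean flag values in this form may vary):
```
deploy --targets '.#nachtigall' \
  --magic-rollback false --auto-rollback false \
  --skip-checks \
  --dry-activate
```
This stages the new system closure on the host without switching to it, so the actual switch can be done quickly at a later moment.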
You'll need to have SSH Access to the boxes to be able to run `deploy`.
### Getting SSH access
See [administrative-access.md](./administrative-access.md).


@@ -1 +1,5 @@
# Working with secrets
Secrets are handled with [agenix](https://github.com/ryantm/agenix). To be able to view secrets, your public key will have to be added to the admins config. See [Administrative Access](./administrative-access.md) on how to do this.
For a comprehensive tutorial, see [the agenix repository](https://github.com/ryantm/agenix?tab=readme-ov-file#tutorial).
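A sketch of the day-to-day workflow (the secret file name is a hypothetical example): editing decrypts the secret to a temporary file in `$EDITOR` and re-encrypts it on save; after public keys change in the admins config, all secrets need to be re-encrypted for the new set of recipients:
```
# Edit (or create) a secret; re-encrypted on save
agenix -e secrets/example-secret.age

# Rekey all secrets after the recipient keys changed
agenix -r
```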


@@ -1,3 +0,0 @@
# SSH Access
SSH Access is granted by adding a public key to [`public-keys/admins.nix`](../public-keys/admins.nix). This change will then have to be deployed to all hosts by an existing key. The keys will also grant access to the initrd SSH Server to enable remote unlock.

flake.lock (generated, 24 changed lines)

@@ -180,11 +180,11 @@
]
},
"locked": {
"lastModified": 1710888565,
"narHash": "sha256-s9Hi4RHhc6yut4EcYD50sZWRDKsugBJHSbON8KFwoTw=",
"lastModified": 1714043624,
"narHash": "sha256-Xn2r0Jv95TswvPlvamCC46wwNo8ALjRCMBJbGykdhcM=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "f33900124c23c4eca5831b9b5eb32ea5894375ce",
"rev": "86853e31dc1b62c6eeed11c667e8cdd0285d4411",
"type": "github"
},
"original": {
@@ -224,11 +224,11 @@
]
},
"locked": {
"lastModified": 1711763326,
"narHash": "sha256-sXcesZWKXFlEQ8oyGHnfk4xc9f2Ip0X/+YZOq3sKviI=",
"lastModified": 1713946171,
"narHash": "sha256-lc75rgRQLdp4Dzogv5cfqOg6qYc5Rp83oedF2t0kDp8=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "36524adc31566655f2f4d55ad6b875fb5c1a4083",
"rev": "230a197063de9287128e2c68a7a4b0cd7d0b50a7",
"type": "github"
},
"original": {
@@ -255,11 +255,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1712168706,
"narHash": "sha256-XP24tOobf6GGElMd0ux90FEBalUtw6NkBSVh/RlA6ik=",
"lastModified": 1713995372,
"narHash": "sha256-fFE3M0vCoiSwCX02z8VF58jXFRj9enYUSTqjyHAjrds=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "1487bdea619e4a7a53a4590c475deabb5a9d1bfb",
"rev": "dd37924974b9202f8226ed5d74a252a9785aedf8",
"type": "github"
},
"original": {
@@ -405,11 +405,11 @@
},
"unstable": {
"locked": {
"lastModified": 1712163089,
"narHash": "sha256-Um+8kTIrC19vD4/lUCN9/cU9kcOsD1O1m+axJqQPyMM=",
"lastModified": 1713895582,
"narHash": "sha256-cfh1hi+6muQMbi9acOlju3V1gl8BEaZBXBR9jQfQi4U=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "fd281bd6b7d3e32ddfa399853946f782553163b5",
"rev": "572af610f6151fd41c212f897c71f7056e3fb518",
"type": "github"
},
"original": {


@@ -89,14 +89,12 @@
deploy.nodes = self.lib.deploy.mkDeployNodes self.nixosConfigurations {
nachtigall = {
# hostname is set in hosts/nachtigall/networking.nix
hostname = "10.7.6.1";
sshUser = username;
};
flora-6 = {
hostname = "flora-6.pub.solar";
hostname = "10.7.6.2";
sshUser = username;
# Example
#sshOpts = [ "-p" "19999" ];
};
};
};


@@ -0,0 +1,260 @@
{ lib }:
let
# docker's filesystems disappear quickly, leading to false positives
deviceFilter = ''path!~"^(/var/lib/docker|/nix/store).*"'';
in
lib.mapAttrsToList
(name: opts: {
alert = name;
expr = opts.condition;
for = opts.time or "2m";
labels = { };
annotations.description = opts.description;
})
({
# prometheus_too_many_restarts = {
# condition = ''changes(process_start_time_seconds{job=~"prometheus|alertmanager"}[15m]) > 2'';
# description = "Prometheus has restarted more than twice in the last 15 minutes. It might be crashlooping.";
# };
# alert_manager_config_not_synced = {
# condition = ''count(count_values("config_hash", alertmanager_config_hash)) > 1'';
# description = "Configurations of AlertManager cluster instances are out of sync.";
# };
#alert_manager_e2e_dead_man_switch = {
# condition = "vector(1)";
# description = "Prometheus DeadManSwitch is an always-firing alert. It's used as an end-to-end test of Prometheus through the Alertmanager.";
#};
# prometheus_not_connected_to_alertmanager = {
# condition = "prometheus_notifications_alertmanagers_discovered < 1";
# description = "Prometheus cannot connect the alertmanager\n VALUE = {{ $value }}\n LABELS = {{ $labels }}";
# };
# prometheus_rule_evaluation_failures = {
# condition = "increase(prometheus_rule_evaluation_failures_total[3m]) > 0";
# description = "Prometheus encountered {{ $value }} rule evaluation failures, leading to potentially ignored alerts.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}";
# };
# prometheus_template_expansion_failures = {
# condition = "increase(prometheus_template_text_expansion_failures_total[3m]) > 0";
# time = "0m";
# description = "Prometheus encountered {{ $value }} template text expansion failures\n VALUE = {{ $value }}\n LABELS = {{ $labels }}";
# };
# promtail_file_lagging = {
# condition = ''abs(promtail_file_bytes_total - promtail_read_bytes_total) > 1e6'';
# time = "15m";
# description = ''{{ $labels.instance }} {{ $labels.job }} {{ $labels.path }} has been lagging by more than 1MB for more than 15m.'';
# };
filesystem_full_80percent = {
condition = ''
100 - ((node_filesystem_avail_bytes{fstype!="rootfs",mountpoint="/"} * 100) / node_filesystem_size_bytes{fstype!="rootfs",mountpoint="/"}) > 80'';
time = "10m";
description =
"{{$labels.instance}} device {{$labels.device}} on {{$labels.mountpoint}} got less than 20% space left on its filesystem.";
};
# filesystem_inodes_full = {
# condition = ''disk_inodes_free / disk_inodes_total < 0.10'';
# time = "10m";
# description = "{{$labels.instance}} device {{$labels.device}} on {{$labels.mountpoint}} got less than 10% inodes left on its filesystem.";
# };
# daily_task_not_run = {
# # give 6 hours grace period
# condition = ''time() - task_last_run{state="ok",frequency="daily"} > (24 + 6) * 60 * 60'';
# description = "{{$labels.instance}}: {{$labels.name}} was not run in the last 24h";
# };
# daily_task_failed = {
# condition = ''task_last_run{state="fail"}'';
# description = "{{$labels.instance}}: {{$labels.name}} failed to run";
# };
# } // (lib.genAttrs [
# "borgbackup-turingmachine"
# "borgbackup-eve"
# "borgbackup-datastore"
# ]
# (name: {
# condition = ''absent_over_time(task_last_run{name="${name}"}[1d])'';
# description = "status of ${name} is unknown: no data for a day";
# }))
# // {
# borgbackup_matchbox_not_run = {
# # give 6 hours grace period
# condition = ''time() - task_last_run{state="ok",frequency="daily",name="borgbackup-matchbox"} > 7 * 24 * 60 * 60'';
# description = "{{$labels.instance}}: {{$labels.name}} was not run in the last week";
# };
# borgbackup_matchbox = {
# condition = ''absent_over_time(task_last_run{name="borgbackup-matchbox"}[7d])'';
# description = "status of borgbackup-matchbox is unknown: no data for a week";
# };
# homeassistant = {
# condition = ''
# homeassistant_entity_available{domain="persistent_notification", entity!="persistent_notification.http_login"} >= 0'';
# description =
# "homeassistant notification {{$labels.entity}} ({{$labels.friendly_name}}): {{$value}}";
# };
swap_using_20percent = {
condition =
"node_memory_SwapTotal_bytes - (node_memory_SwapCached_bytes + node_memory_SwapFree_bytes) > node_memory_SwapTotal_bytes * 0.2";
time = "30m";
description =
"{{$labels.instance}} is using 20% of its swap space for at least 30 minutes.";
};
systemd_service_failed = {
condition = ''node_systemd_unit_state{state="failed"} == 1'';
description =
"{{$labels.instance}} failed to (re)start service {{$labels.name}}.";
};
restic_backup_too_old = {
condition = ''(time() - restic_snapshots_latest_time)/(60*60) > 24'';
description = "{{$labels.instance}} not backed up for more than 24 hours. ({{$value}})";
};
host_down = {
condition = ''up{job="node-stats", instance!~"ahorn.wireguard:9100|kartoffel.wireguard:9100|mega.wireguard:9100"} == 0'';
description = "{{$labels.instance}} is down!";
};
# service_not_running = {
# condition = ''systemd_units_active_code{name=~"teamspeak3-server.service|tt-rss.service", sub!="running"}'';
# description = "{{$labels.instance}} should have a running {{$labels.name}}.";
# };
ram_using_90percent = {
condition =
"node_memory_Buffers_bytes + node_memory_MemFree_bytes + node_memory_Cached_bytes < node_memory_MemTotal_bytes * 0.1";
time = "1h";
description =
"{{$labels.instance}} is using at least 90% of its RAM for at least 1 hour.";
};
cpu_using_90percent = {
condition = ''
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) >= 90'';
time = "10m";
description =
"{{$labels.instance}} is running with cpu usage > 90% for at least 10 minutes: {{$value}}";
};
reboot = {
condition = "node_boot_time_seconds < 300";
description = "{{$labels.instance}} just rebooted.";
};
uptime = {
condition = "(time() - node_boot_time_seconds ) / (60*60*24) > 30";
description =
"Uptime monster: {{$labels.instance}} has been up for more than 30 days.";
};
flake_nixpkgs_outdated = {
condition = ''
(time() - flake_input_last_modified{input="nixpkgs"}) / (60*60*24) > 30'';
description =
"Nixpkgs outdated: Nixpkgs on {{$labels.instance}} has not been updated in 30 days";
};
/* ping = {
condition = "ping_result_code{type!='mobile'} != 0";
description = "{{$labels.url}}: ping from {{$labels.instance}} has failed!";
};
ping_high_latency = {
condition = "ping_average_response_ms{type!='mobile'} > 5000";
description = "{{$labels.instance}}: ping probe from {{$labels.source}} is encountering high latency!";
};
*/
http_status = {
condition = ''
probe_http_status_code{instance!~"https://megaclan3000.de"} != 200'';
description =
"http request failed from {{$labels.instance}}: {{$labels.result}}!";
};
/* http_match_failed = {
condition = "http_response_response_string_match == 0";
description = "{{$labels.server}} : http body not as expected; status code: {{$labels.status_code}}!";
};
dns_query = {
condition = "dns_query_result_code != 0";
description = "{{$labels.domain}} : could retrieve A record {{$labels.instance}} from server {{$labels.server}}: {{$labels.result}}!";
};
secure_dns_query = {
condition = "secure_dns_state != 0";
description = "{{$labels.domain}} : could retrieve A record {{$labels.instance}} from server {{$labels.server}}: {{$labels.result}} for protocol {{$labels.protocol}}!";
};
connection_failed = {
condition = "net_response_result_code != 0";
description = "{{$labels.server}}: connection to {{$labels.port}}({{$labels.protocol}}) failed from {{$labels.instance}}";
};
healthchecks = {
condition = "hc_check_up == 0";
description = "{{$labels.instance}}: healtcheck {{$labels.job}} fails!";
};
*/
cert_expiry = {
condition = "(probe_ssl_earliest_cert_expiry - time())/(3600*24) < 30";
description =
"{{$labels.instance}}: The TLS certificate will expire in less than 30 days: {{$value}}s";
};
# ignore devices that disabled S.M.A.R.T (example if attached via USB)
# smart_errors = {
# condition = ''smart_device_health_ok{enabled!="Disabled"} != 1'';
# description =
# "{{$labels.instance}}: S.M.A.R.T reports: {{$labels.device}} ({{$labels.model}}) has errors.";
# };
oom_kills = {
condition = "increase(node_vmstat_oom_kill[5m]) > 0";
description = "{{$labels.instance}}: OOM kill detected";
};
/* unusual_disk_read_latency = {
condition =
"rate(diskio_read_time[1m]) / rate(diskio_reads[1m]) > 0.1 and rate(diskio_reads[1m]) > 0";
description = ''
{{$labels.instance}}: Disk latency is growing (read operations > 100ms)
'';
};
unusual_disk_write_latency = {
condition =
"rate(diskio_write_time[1m]) / rate(diskio_write[1m]) > 0.1 and rate(diskio_write[1m]) > 0";
description = ''
{{$labels.instance}}: Disk latency is growing (write operations > 100ms)
'';
};
*/
host_memory_under_memory_pressure = {
condition = "rate(node_vmstat_pgmajfault[1m]) > 1000";
description =
"{{$labels.instance}}: The node is under heavy memory pressure. High rate of major page faults: {{$value}}";
};
# ext4_errors = {
# condition = "ext4_errors_value > 0";
# description =
# "{{$labels.instance}}: ext4 has reported {{$value}} I/O errors: check /sys/fs/ext4/*/errors_count";
# };
# alerts_silences_changed = {
# condition = ''abs(delta(alertmanager_silences{state="active"}[1h])) >= 1'';
# description =
# "alertmanager: number of active silences has changed: {{$value}}";
# };
})
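The `condition`/`description` pairs above are not yet in Prometheus's native rule schema; `alert-rules.nix` has to map them into `alert`/`expr`/`annotations` entries before `builtins.toJSON` serializes them. A minimal sketch of such a transform, assuming a helper that is not shown in this diff:

```nix
# Sketch only: maps { condition, description } pairs like the ones above
# into Prometheus alerting-rule attrsets. Not code from this repository.
{ lib }:
let
  # One example entry, copied from the rule set above.
  rules = {
    oom_kills = {
      condition = "increase(node_vmstat_oom_kill[5m]) > 0";
      description = "{{$labels.instance}}: OOM kill detected";
    };
  };
in
lib.mapAttrsToList (name: rule: {
  alert = name;
  expr = rule.condition;
  annotations.description = rule.description;
}) rules
```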

@@ -37,6 +37,14 @@
reverse_proxy :${toString config.services.loki.configuration.server.http_listen_port}
'';
};
"alerts.pub.solar" = {
logFormat = lib.mkForce ''
output discard
'';
extraConfig = ''
reverse_proxy 10.7.6.2:${toString config.services.prometheus.alertmanager.port}
'';
};
"grafana.pub.solar" = {
logFormat = lib.mkForce ''
output discard

@@ -78,6 +78,7 @@
extraOptions = [
"--network=drone-net"
"--pull=always"
"--add-host=nachtigall.pub.solar:10.7.6.1"
];
environment = {
DRONE_GITEA_SERVER = "https://git.pub.solar";
@@ -101,6 +102,7 @@
extraOptions = [
"--network=drone-net"
"--pull=always"
"--add-host=nachtigall.pub.solar:10.7.6.1"
];
environment = {
DRONE_RPC_HOST = "ci.pub.solar";

@@ -13,16 +13,43 @@
# Needed for the docker runner to communicate with the act_runner cache
networking.firewall.trustedInterfaces = [ "br-+" ];
users.users.gitea-runner = {
home = "/var/lib/gitea-runner/flora-6";
useDefaultShell = true;
group = "gitea-runner";
isSystemUser = true;
};
users.groups.gitea-runner = {};
systemd.services."gitea-runner-flora\\x2d6".serviceConfig = {
DynamicUser = lib.mkForce false;
};
systemd.tmpfiles.rules = [
"d '/data/gitea-actions-runner' 0750 gitea-runner gitea-runner - -"
"d '/var/lib/gitea-runner' 0750 gitea-runner gitea-runner - -"
];
# forgejo actions runner
# https://forgejo.org/docs/latest/admin/actions/
# https://docs.gitea.com/usage/actions/quickstart
services.gitea-actions-runner = {
package = pkgs.forgejo-runner;
instances."flora-6" = {
enable = true;
name = config.networking.hostName;
url = "https://git.pub.solar";
tokenFile = config.age.secrets.forgejo-actions-runner-token.path;
settings = {
cache = {
enabled = true;
dir = "/data/gitea-actions-runner/actcache";
host = "";
port = 0;
external_server = "";
};
};
labels = [
# provide a debian 12 bookworm base with Node.js for actions
"debian-latest:docker://git.pub.solar/pub-solar/actions-base-image:20-bookworm"

@@ -65,5 +65,50 @@
}];
}
];
ruleFiles = [
(pkgs.writeText "prometheus-rules.yml" (builtins.toJSON {
groups = [{
name = "alerting-rules";
rules = import ./alert-rules.nix { inherit lib; };
}];
}))
];
alertmanagers = [{ static_configs = [{ targets = [ "localhost:9093" ]; }]; }];
alertmanager = {
enable = true;
# port = 9093; # Default
webExternalUrl = "https://alerts.pub.solar"; # TODO use a proper url?
# environmentFile = "${config.age.secrets.nachtigall-alertmanager-envfile.path}";
configuration = {
route = {
receiver = "all";
group_by = [ "instance" ];
group_wait = "30s";
group_interval = "2m";
repeat_interval = "24h";
};
receivers = [{
name = "all";
# Email config documentation: https://prometheus.io/docs/alerting/latest/configuration/#email_config
email_configs = [{
send_resolved = true;
to = "TODO";
from = "alerts@pub.solar";
smarthost = "TODO";
auth_username = "TODO";
auth_password_file = "${config.age.secrets.nachtigall-alertmanager-smtp-password.path}";
require_tls = true;
}];
# TODO:
# For matrix notifications, look into: https://github.com/pinpox/matrix-hook and add a webhook
# webhook_configs = [ { url = "http://127.0.0.1:11000/alert"; } ];
}];
};
};
};
}
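The commented-out `webhook_configs` line above already points at the matrix-hook approach; one hedged way the TODO could be completed, assuming a matrix-hook instance listening on 127.0.0.1:11000 (the URL from the comment):

```nix
# Sketch for the TODO above; the webhook endpoint is an assumption taken
# from the commented-out URL, not a running service in this repo.
services.prometheus.alertmanager.configuration.receivers = [{
  name = "all";
  webhook_configs = [{
    url = "http://127.0.0.1:11000/alert";
    send_resolved = true;
  }];
}];
```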

@@ -35,6 +35,7 @@ in
#systemd.services."systemd-networkd".environment.SYSTEMD_LOG_LEVEL = "debug";
systemd.network.wait-online.ignoredInterfaces = [
"docker0"
"wg-ssh"
];
# List services that you want to enable:

@@ -18,12 +18,23 @@
];
privateKeyFile = config.age.secrets.wg-private-key.path;
peers = flake.self.logins.admins.wireguardDevices ++ [
{ # nachtigall.pub.solar
endpoint = "138.201.80.102:51820";
publicKey = "qzNywKY9RvqTnDO8eLik75/SHveaSk9OObilDzv+xkk=";
allowedIPs = [ "10.7.6.1/32" "fd00:fae:fae:fae:fae:1::/96" ];
}
];
};
};
services.openssh.listenAddresses = [
{
addr = "10.7.6.2";
port = 22;
}
{
addr = "[fd00:fae:fae:fae:fae:2::]";
port = 22;
}
];
}

@@ -16,6 +16,19 @@
owner = "gitea";
};
age.secrets.forgejo-ssh-private-key = {
file = "${flake.self}/secrets/forgejo-ssh-private-key.age";
mode = "600";
owner = "gitea";
path = "/etc/forgejo/ssh/id_forgejo";
};
environment.etc."forgejo/ssh/id_forgejo.pub" = {
text = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCkPjvF2tZ2lZtkXed6lBvaPUpsNrI5kHlCNEf4LyFtgFXHoUL8UD3Bz9Fn1S+SDkdBMw/SumjvUf7TEGqQqzmFbG7+nWdWg2L00VdN8Kp8W+kKPBByJrzjDUIGhIMt7obaZnlSAVO5Cdqc1Q6bA9POLjSHIBxSD3QUs2pjUCkciNcEtL93easuXnlMwoYa217n5sA8n+BZmOJAcmA/UxYvKsqYlpJxa44m8JgMTy+5L08i/zkx9/FwniOcKcLedxmjZfV8raitDy34LslT2nBNG4I+em7qhKhSScn/cfyPvARiK71pk/rTx9mxBEjcGAkp3+hiA3Nyms0h/qTUh8yGyhbOn8hiro34HEKswXDN1HRfseyyZ4TqOoIC07F53x4OliYA0B+QbvwOemTX2XAWHfU4xEYrIhR46o3Eu5ooOM9HZLLYzIzKjsj/rpuKalFZ+9IeT/PJ/DrbgOEBlJGTu4XucEYXSiIvWB7G9WXij7TXKYbsRAFho9jw+9UZWklFAh9dcUKlX9YxafxOrw9DhJK620hblHLY9wPPFCbZVXDGfqdtn+ncRReMAw6N3VYqxMgnxd+OC52SMsSUi9VaL26i2UvEBwNYuim8GDnVabu/ciQLHMgifBONuF9sKD58ee5nnKgtYLDy9zU86aHBU78Ijew+WhYitO7qejMHMQ==";
mode = "600";
user = "gitea";
};
services.nginx.virtualHosts."git.pub.solar" = {
enableACME = true;
forceSSL = true;
@@ -41,11 +54,17 @@
users.groups.gitea = {};
# Expose SSH port only for forgejo SSH
networking.firewall.interfaces.enp35s0.allowedTCPPorts = [ 2223 ];
networking.firewall.extraCommands = ''
iptables -t nat -i enp35s0 -I PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2223
ip6tables -t nat -i enp35s0 -I PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2223
'';
services.forgejo = {
enable = true;
user = "gitea";
group = "gitea";
package = pkgs.forgejo;
database = {
type = "postgres";
passwordFile = config.age.secrets.forgejo-database-password.path;
@@ -63,6 +82,9 @@
DOMAIN = "git.pub.solar";
HTTP_ADDR = "127.0.0.1";
HTTP_PORT = 3000;
START_SSH_SERVER = true;
SSH_LISTEN_PORT = 2223;
SSH_SERVER_HOST_KEYS = "${config.age.secrets."forgejo-ssh-private-key".path}";
};
log.LEVEL = "Warn";
@@ -111,6 +133,19 @@
# the value of DEFAULT_ACTIONS_URL is prepended to it.
DEFAULT_ACTIONS_URL = "https://code.forgejo.org";
};
# https://forgejo.org/docs/next/admin/recommendations/#securitylogin_remember_days
security = {
LOGIN_REMEMBER_DAYS = 365;
};
# https://forgejo.org/docs/next/admin/config-cheat-sheet/#indexer-indexer
indexer = {
REPO_INDEXER_ENABLED = true;
REPO_INDEXER_PATH = "indexers/repos.bleve";
MAX_FILE_SIZE = 1048576;
REPO_INDEXER_EXCLUDE = "resources/bin/**";
};
};
};
@@ -155,6 +190,11 @@
backupCleanupCommand = ''
rm /tmp/forgejo-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
services.restic.backups.forgejo-storagebox = {
@@ -174,5 +214,10 @@
backupCleanupCommand = ''
rm /tmp/forgejo-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}
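The same three-line `pruneOpts` list is added to every restic backup in this change set. A sketch of how it could be defined once and reused (the exact backup names besides `forgejo-storagebox` are assumptions, since the first backup's name sits outside this hunk):

```nix
# Sketch, not code from the repo: factor the repeated pruneOpts out.
let
  defaultPruneOpts = [
    "--keep-daily 7"
    "--keep-weekly 4"
    "--keep-monthly 3"
  ];
in
{
  # Backup names are illustrative; only forgejo-storagebox appears above.
  services.restic.backups.forgejo-storagebox.pruneOpts = defaultPruneOpts;
  services.restic.backups.forgejo-droppie.pruneOpts = defaultPruneOpts;
}
```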

@@ -64,6 +64,11 @@
backupCleanupCommand = ''
rm /tmp/keycloak-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
services.restic.backups.keycloak-storagebox = {
@@ -82,5 +87,10 @@
backupCleanupCommand = ''
rm /tmp/keycloak-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}

@@ -94,6 +94,11 @@
initialize = true;
passwordFile = config.age.secrets."restic-repo-droppie".path;
repository = "sftp:yule@droppie.b12f.io:/media/internal/pub.solar";
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
services.restic.backups.mailman-storagebox = {
@@ -109,5 +114,10 @@
initialize = true;
passwordFile = config.age.secrets."restic-repo-storagebox".path;
repository = "sftp:u377325@u377325.your-storagebox.de:/backups";
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}

@@ -61,6 +61,9 @@
passwordFile = "/run/agenix/mastodon-smtp-password";
fromAddress = "mastodon-notifications@pub.solar";
};
mediaAutoRemove = {
olderThanDays = 7;
};
extraEnvFiles = [
"/run/agenix/mastodon-extra-env-secrets"
];
@@ -111,6 +114,11 @@
backupCleanupCommand = ''
rm /tmp/mastodon-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
services.restic.backups.mastodon-storagebox = {
@@ -129,5 +137,10 @@
backupCleanupCommand = ''
rm /tmp/mastodon-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}

@@ -13,11 +13,6 @@ let
synapseClientPort = "${toString listenerWithClient.port}";
in
{
systemd.services.matrix-appservice-irc.serviceConfig.SystemCallFilter = lib.mkForce [
"@system-service @pkey"
"~@privileged @resources"
"@chown"
];
services.matrix-appservice-irc = {
enable = true;
localpart = "irc_bot";

@@ -312,5 +312,10 @@ in
backupCleanupCommand = ''
rm /tmp/matrix-synapse-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}

@@ -97,6 +97,7 @@
integrity.check.disabled = false;
updater.release.channel = "stable";
loglevel = 0;
maintenance_window_start = "1";
# maintenance = false;
app_install_overwrite = [
"pdfdraw"
@@ -149,6 +150,11 @@
backupCleanupCommand = ''
rm /tmp/nextcloud-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
services.restic.backups.nextcloud-storagebox = {
@@ -168,5 +174,10 @@
backupCleanupCommand = ''
rm /tmp/nextcloud-backup.sql
'';
pruneOpts = [
"--keep-daily 7"
"--keep-weekly 4"
"--keep-monthly 3"
];
};
}

@@ -24,6 +24,13 @@ in
# https://my.f5.com/manage/s/article/K51798430
proxy_headers_hash_bucket_size 128;
'';
appendConfig = ''
# Number of CPU cores
worker_processes 8;
'';
eventsConfig = ''
worker_connections 1024;
'';
};
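The `worker_processes 8;` line above hardcodes the core count (per its comment). nginx can also size its worker pool itself, which survives a move to different hardware:

```nix
# Alternative sketch: "auto" makes nginx match the available CPU cores
# without hardcoding the count.
services.nginx.appendConfig = ''
  worker_processes auto;
'';
```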
security.acme = {

@@ -18,12 +18,23 @@
];
privateKeyFile = config.age.secrets.wg-private-key.path;
peers = flake.self.logins.admins.wireguardDevices ++ [
{ # flora-6.pub.solar
endpoint = "80.71.153.210:51820";
publicKey = "jtSR5G2P/nm9s8WrVc26Xc/SQLupRxyXE+5eIeqlsTU=";
allowedIPs = [ "10.7.6.2/32" "fd00:fae:fae:fae:fae:2::/96" ];
}
];
};
};
services.openssh.listenAddresses = [
{
addr = "10.7.6.1";
port = 22;
}
{
addr = "[fd00:fae:fae:fae:fae:1::]";
port = 22;
}
];
}

@@ -5,6 +5,14 @@
};
secretEncryptionKeys = sshPubKeys;
wireguardDevices = [
{
# tuxnix
publicKey = "fTvULvdsc92binFaBV+uWwFi33bi8InShcaPnoxUZEA=";
allowedIPs = [ "10.7.6.203/32" "fd00:fae:fae:fae:fae:203::/96" ];
}
];
};
b12f = rec {
@@ -55,6 +63,10 @@
publicKey = "3UrVLQrwXnPAVXPiTAd7eM3fZYxnFSYgKAGpNMUwnUk=";
allowedIPs = [ "10.7.6.201/32" "fd00:fae:fae:fae:fae:201::/96" ];
}
{ # ryzensun
publicKey = "oVF2/s7eIxyVjtG0MhKPx5SZ1JllZg+ZFVF2eVYtPGo=";
allowedIPs = [ "10.7.6.204/32" "fd00:fae:fae:fae:fae:204::/96" ];
}
];
};
}

@@ -2,6 +2,11 @@
# Don't expose SSH via public interfaces
networking.firewall.interfaces.wg-ssh.allowedTCPPorts = [ 22 ];
networking.hosts = {
"10.7.6.1" = ["nachtigall.pub.solar"];
"10.7.6.2" = ["flora-6.pub.solar"];
};
services.openssh = {
enable = true;
openFirewall = lib.mkDefault false;
@@ -31,14 +36,11 @@
services.resolved = {
enable = true;
# DNSSEC=false because of random SERVFAIL responses with Greenbaum DNS
# when using allow-downgrade, see https://github.com/systemd/systemd/issues/10579
extraConfig = ''
DNS=193.110.81.0#dns0.eu 185.253.5.0#dns0.eu 2a0f:fc80::#dns0.eu 2a0f:fc81::#dns0.eu 9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
FallbackDNS=5.1.66.255#dot.ffmuc.net 185.150.99.255#dot.ffmuc.net 2001:678:e68:f000::#dot.ffmuc.net 2001:678:ed0:f000::#dot.ffmuc.net
Domains=~.
DNSOverTLS=yes
DNSSEC=false
'';
};
}

@@ -13,6 +13,7 @@
};
in
{
forgejo-runner = unstable.forgejo-runner;
element-themes = prev.callPackage ./pkgs/element-themes { inherit (inputs) element-themes; };
})
];

Binary file not shown.

@@ -33,6 +33,7 @@ in
"forgejo-actions-runner-token.age".publicKeys = flora6Keys ++ adminKeys;
"forgejo-database-password.age".publicKeys = nachtigallKeys ++ adminKeys;
"forgejo-mailer-password.age".publicKeys = nachtigallKeys ++ adminKeys;
"forgejo-ssh-private-key.age".publicKeys = nachtigallKeys ++ adminKeys;
"matrix-mautrix-telegram-env-file.age".publicKeys = nachtigallKeys ++ adminKeys;
"matrix-synapse-signing-key.age".publicKeys = nachtigallKeys ++ adminKeys;

@@ -1,181 +1,186 @@
# https://registry.terraform.io/providers/namecheap/namecheap/latest/docs
resource "namecheap_domain_records" "pub-solar" {
  domain     = "pub.solar"
  mode       = "OVERWRITE"
  email_type = "MX"
  record {
    hostname = "flora-6"
    type     = "A"
    address  = "80.71.153.210"
  }
  record {
    hostname = "auth"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "ci"
    type     = "A"
    address  = "80.71.153.210"
  }
  record {
    hostname = "alerts"
    type     = "CNAME"
    address  = "flora-6.pub.solar."
  }
  record {
    hostname = "git"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "stream"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "list"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "obs-portal"
    type     = "A"
    address  = "80.71.153.210"
  }
  record {
    hostname = "vpn"
    type     = "A"
    address  = "80.71.153.210"
  }
  record {
    hostname = "cache"
    type     = "A"
    address  = "95.217.225.160"
  }
  record {
    hostname = "factorio"
    type     = "A"
    address  = "80.244.242.2"
  }
  record {
    hostname = "collabora"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "@"
    type     = "ALIAS"
    address  = "nachtigall.pub.solar."
    ttl      = 300
  }
  record {
    hostname = "chat"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "cloud"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "turn"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "grafana"
    type     = "A"
    address  = "80.71.153.210"
  }
  record {
    hostname = "hpb"
    type     = "A"
    address  = "80.71.153.239"
  }
  record {
    hostname = "files"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "search"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "wiki"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "mastodon"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "matrix"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "tmate"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "www"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
  record {
    hostname = "@"
    type     = "TXT"
    address  = "v=spf1 include:spf.greenbaum.zone a:list.pub.solar ~all"
  }
  record {
    hostname = "list"
    type     = "TXT"
    address  = "v=spf1 a:list.pub.solar ?all"
  }
  record {
    hostname = "_dmarc"
    type     = "TXT"
    address  = "v=DMARC1; p=reject;"
  }
  record {
    hostname = "_dmarc.list"
    type     = "TXT"
    address  = "v=DMARC1; p=reject;"
  }
  record {
    hostname = "modoboa._domainkey"
    type     = "TXT"
    address  = "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx/EqLMpk0MyL1aQ0JVG44ypTRbZBVA13MFjEntxAvowaWtq1smRbnEwTTKgqUOrUyaM4dVmli1dedne4mk/ncqRAm02KuhtTY+5wXfhTKK53EhqehbKwH+Qvzb12983Qwdau/QTHiFHwXHufMaSsCvd9CRWCp9q68Q7noQqndJeLHT6L0eECd2Zk3ZxJuh+Fxdb7+Kw68Tf6z13Rs+MU01qLM7x0jmSQHa4cv2pk+7NTGMBRp6fVskfbqev5nFkZWJ7rhXEbP9Eukd/L3ro/ubs1quWJotG02gPRKE8fgkm1Ytlws1/pnqpuvKXQS1HzBEP1X2ExezJMzQ1SnZCigQIDAQAB"
  }
  record {
    hostname = "@"
    type     = "MX"
    address  = "mail.greenbaum.zone."
    mx_pref  = "0"
  }
  record {
    hostname = "list"
    type     = "MX"
    address  = "list.pub.solar."
    mx_pref  = "0"
  }
  record {
    hostname = "nachtigall"
    type     = "A"
    address  = "138.201.80.102"
  }
  record {
    hostname = "nachtigall"
    type     = "AAAA"
    address  = "2a01:4f8:172:1c25::1"
  }
  record {
    hostname = "matrix.test"
    type     = "CNAME"
    address  = "nachtigall.pub.solar."
  }
# SRV records can only be changed via NameCheap Web UI
# add comment