Merge branch 'master' into pub.solar

teutat3s 2021-10-08 01:29:20 +02:00
commit 646fd386ac
Signed by: teutat3s
GPG key ID: 18DAE600A6BBE705
48 changed files with 327 additions and 183 deletions


@@ -26,7 +26,7 @@ The following repositories allow you to copy and use this setup:
 Updates to this section are trailed here:
-[GoMatrixHosting Matrix Docker Ansible Deploy](https://gitlab.com/GoMatrixHosting/gomatrixhosting-matrix-docker-ansible-deploy)
+[GoMatrixHosting Matrix Docker Ansible Deploy](https://gitlab.com/GoMatrixHosting/matrix-docker-ansible-deploy)

 ## Do I need an AWX setup to use this? How do I configure it?


@@ -108,6 +108,9 @@ matrix_nginx_proxy_container_federation_host_bind_port: '127.0.0.1:8449'
 # Since we don't obtain any certificates (`matrix_ssl_retrieval_method: none` above), it won't work by default.
 # An alternative is to tweak some of: `matrix_coturn_tls_enabled`, `matrix_coturn_tls_cert_path` and `matrix_coturn_tls_key_path`.
 matrix_coturn_enabled: false
+
+# Trust the reverse proxy to send the correct `X-Forwarded-Proto` header as it is handling the SSL connection.
+matrix_nginx_proxy_trust_forwarded_proto: true
 ```

 With this, nginx would still be in use, but it would not bother with anything SSL related or with taking up public ports.
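For context, the upstream server that terminates SSL in such a setup might look roughly like the sketch below. This is a hypothetical host-level nginx vhost; the hostname, certificate paths and the `127.0.0.1:8080` target are illustrative, not values from the playbook:

```nginx
# Hypothetical host-level nginx that terminates TLS and forwards plain HTTP
# to the matrix-nginx-proxy container exposed on 127.0.0.1:8080.
server {
    listen 443 ssl;
    server_name matrix.example.com;

    ssl_certificate     /etc/ssl/matrix.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/matrix.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        # This is the header that matrix_nginx_proxy_trust_forwarded_proto
        # tells the playbook's nginx to trust instead of its own $scheme.
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```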


@@ -56,8 +56,40 @@ Name | Description
 `matrix_nginx_proxy_proxy_synapse_metrics`|Set this to `true` to make matrix-nginx-proxy expose the Synapse metrics at `https://matrix.DOMAIN/_synapse/metrics`
 `matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_enabled`|Set this to `true` to password-protect (using HTTP Basic Auth) `https://matrix.DOMAIN/_synapse/metrics` (the username is always `prometheus`, the password is defined in `matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key`)
 `matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key`|Set this to a password to use for HTTP Basic Auth for protecting `https://matrix.DOMAIN/_synapse/metrics` (the username is always `prometheus` - it's not configurable)
-`matrix_server_fqn_grafana`|Use this variable to override the domain at which the Grafana web user-interface is at (defaults to `stats.DOMAIN`).
+`matrix_server_fqn_grafana`|Use this variable to override the domain at which the Grafana web user-interface is at (defaults to `stats.DOMAIN`)
+
+### Collecting system and Postgres metrics to an external Prometheus server (advanced)
+
+When you enable Prometheus and Grafana via the playbook, it also shows general system stats (via node-exporter) and Postgres stats (via postgres-exporter). If you are instead collecting your metrics on an external Prometheus server, you can follow this advanced configuration example to export these stats as well.
+
+It would be possible to use `matrix_prometheus_node_exporter_container_http_host_bind_port` etc., but that is not always the best choice, for example because your server is on a public network.
+
+Use the following variables in addition to the ones mentioned above:
+
+Name | Description
+-----|----------
+`matrix_nginx_proxy_proxy_grafana_enabled`|Set this to `true` to make the stats subdomain (`matrix_server_fqn_grafana`) available via the Nginx proxy
+`matrix_ssl_additional_domains_to_obtain_certificates_for`|Add `"{{ matrix_server_fqn_grafana }}"` to this list to have Let's Encrypt fetch a certificate for the stats subdomain
+`matrix_prometheus_node_exporter_enabled`|Set this to `true` to enable the node exporter (general system stats)
+`matrix_prometheus_postgres_exporter_enabled`|Set this to `true` to enable the Postgres exporter
+`matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks`|Add locations to this list depending on which of the above exporters you enabled (see below)
+
+```nginx
+matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks:
+  - 'location /node-exporter/ {
+       resolver 127.0.0.11 valid=5s;
+       proxy_pass http://matrix-prometheus-node-exporter:9100/;
+       auth_basic "protected";
+       auth_basic_user_file /nginx-data/matrix-synapse-metrics-htpasswd;
+     }'
+  - 'location /postgres-exporter/ {
+       resolver 127.0.0.11 valid=5s;
+       proxy_pass http://matrix-prometheus-postgres-exporter:9187/;
+       auth_basic "protected";
+       auth_basic_user_file /nginx-data/matrix-synapse-metrics-htpasswd;
+     }'
+```
+
+You can customize the `location`s to your liking; just point your Prometheus at them later (e.g. `stats.DOMAIN/node-exporter/metrics`). Nginx is very picky about the `proxy_pass` syntax: take care to follow the example closely, noting the trailing slash and the absence of variables. postgres-exporter uses the nonstandard port 9187.
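For illustration, the matching scrape configuration on the external Prometheus server could look like the sketch below. The `stats.example.com` target, the job names and the password are placeholders; the password is the plaintext counterpart of the htpasswd entry:

```yaml
# prometheus.yml on the external server - a sketch matching the locations above.
scrape_configs:
  - job_name: matrix-node-exporter
    metrics_path: /node-exporter/metrics
    scheme: https
    basic_auth:
      username: prometheus        # the username is fixed by the playbook
      password: mysecurepw        # plaintext password behind the htpasswd hash
    static_configs:
      - targets: ['stats.example.com']
  - job_name: matrix-postgres-exporter
    metrics_path: /postgres-exporter/metrics
    scheme: https
    basic_auth:
      username: prometheus
      password: mysecurepw
    static_configs:
      - targets: ['stats.example.com']
```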
 ## More information


@@ -60,7 +60,7 @@ ALTER TABLE public.application_services_state OWNER TO synapse_user;
 It can be worked around by changing the username to `synapse`, for example by using `sed`:
 ```Shell
-$ sed -i "s/synapse_user/synapse/g" homeserver.sql"
+$ sed -i "s/synapse_user/synapse/g" homeserver.sql
 ```
 This uses sed to perform an 'in-place' (`-i`) replacement globally (`/g`), searching for `synapse_user` and replacing it with `synapse` (`s/synapse_user/synapse`). If your database username was different, change `synapse_user` to that username instead.
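To sanity-check the result afterwards (an illustrative step, not part of the original guide):

```Shell
# Count lines still containing the old username - should print 0 after the rewrite.
$ grep -c "synapse_user" homeserver.sql
```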


@@ -22,6 +22,7 @@ List of roles where self-building the Docker image is currently possible:
 - `matrix-mailer`
 - `matrix-bridge-appservice-irc`
 - `matrix-bridge-appservice-slack`
+- `matrix-bridge-appservice-webhooks`
 - `matrix-bridge-mautrix-facebook`
 - `matrix-bridge-mautrix-hangouts`
 - `matrix-bridge-mautrix-telegram`


@@ -104,6 +104,8 @@ matrix_appservice_discord_database_password: "{{ matrix_synapse_macaroon_secret_
 # We don't enable bridges by default.
 matrix_appservice_webhooks_enabled: false
+matrix_appservice_webhooks_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
+
 # Normally, matrix-nginx-proxy is enabled and nginx can reach matrix-appservice-webhooks over the container network.
 # If matrix-nginx-proxy is not enabled, or you otherwise have a need for it, you can expose
 # matrix-appservice-webhooks' client-server port to the local host.


@@ -24,14 +24,6 @@
     mode: '0660'
   tags: use-survey

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-  tags: use-survey
-
 - name: Recreate 'Backup Server' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -49,8 +41,8 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   tags: use-survey
@@ -90,6 +82,15 @@
   command: borgmatic -c /root/.config/borgmatic/config_2.yaml
   when: matrix_awx_backup_enabled|bool

+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
 - name: Set boolean value to exit playbook
   set_fact:
     end_playbook: true


@@ -0,0 +1,10 @@
+- name: Create an AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: present
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_master_token }}"
+  register: awx_session_token
+  no_log: True
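Taken together with the deletion tasks elsewhere in this commit, the session token follows a create / use / revoke pattern. A condensed sketch (the `include_tasks` wiring and the job template name here are illustrative; the playbook actually wires these files in via its `main.yml`):

```yaml
# Hypothetical condensed lifecycle of the AWX session token.
- include_tasks: create_session_token.yml   # registers awx_session_token

- name: Call any awx.awx module with the short-lived token
  awx.awx.tower_job_launch:
    job_template: "Example - Deploy/Update a Server"
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"

- include_tasks: delete_session_token.yml   # revokes the token by its id
```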


@@ -23,6 +23,15 @@
     /usr/local/bin/matrix-synapse-register-user {{ new_username | quote }} {{ new_password | quote }} {{ admin_bool }}
   register: cmd

+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
 - name: Result
   debug: msg="{{ cmd.stdout }}"


@@ -77,13 +77,6 @@
     mode: '0660'
   when: customise_base_domain_website is undefined

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Website + Access Export' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -101,8 +94,8 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: customise_base_domain_website is defined
@@ -123,8 +116,8 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: customise_base_domain_website is undefined


@@ -0,0 +1,9 @@
+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"


@@ -24,6 +24,15 @@
     units: days
     unique: yes

+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
 - name: Set boolean value to exit playbook
   set_fact:
     end_playbook: true


@@ -9,3 +9,7 @@
     file: '/var/lib/awx/projects/hosting/hosting_vars.yml'
   no_log: True

+- name: Include AWX master token from awx_tokens.yml
+  include_vars:
+    file: /var/lib/awx/projects/hosting/awx_tokens.yml
+  no_log: True


@@ -17,6 +17,15 @@
   tags:
     - always

+# Create AWX session token
+- include_tasks:
+    file: "create_session_token.yml"
+    apply:
+      tags: always
+  when: run_setup|bool and matrix_awx_enabled|bool
+  tags:
+    - always
+
 # Perform a backup of the server
 - include_tasks:
     file: "backup_server.yml"
@@ -25,7 +34,7 @@
   when: run_setup|bool and matrix_awx_enabled|bool
   tags:
     - backup-server

 # Perform an export of the server
 - include_tasks:
     file: "export_server.yml"
@@ -62,6 +71,15 @@
   tags:
     - purge-database

+# Rotate SSH key if called
+- include_tasks:
+    file: "rotate_ssh.yml"
+    apply:
+      tags: rotate-ssh
+  when: run_setup|bool and matrix_awx_enabled|bool
+  tags:
+    - rotate-ssh
+
 # Import configs, media repo from /chroot/backup import
 - include_tasks:
     file: "import_awx.yml"
@@ -179,6 +197,15 @@
   tags:
     - setup-synapse-admin

+# Delete AWX session token
+- include_tasks:
+    file: "delete_session_token.yml"
+    apply:
+      tags: always
+  when: run_setup|bool and matrix_awx_enabled|bool
+  tags:
+    - always
+
 # Load newly formed matrix variables from AWX volume
 - include_tasks:
     file: "load_matrix_variables.yml"


@@ -5,18 +5,18 @@
     name: dateutils
     state: latest

-- name: Ensure dateutils, curl and jq intalled on target machine
+- name: Include vars in matrix_vars.yml
+  include_vars:
+    file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
+  no_log: True
+
+- name: Ensure curl and jq installed on target machine
   apt:
     pkg:
       - curl
       - jq
     state: present

-- name: Include vars in matrix_vars.yml
-  include_vars:
-    file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
-  no_log: True
-
 - name: Collect before shrink size of Synapse database
   shell: du -sh /matrix/postgres/data
   register: db_size_before_stat
@@ -144,13 +144,6 @@
   loop: "{{ room_list_state_events.splitlines() | flatten(levels=1) }}"
   when: purge_mode.find("Number of events [slower]") != -1

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Adjust 'Deploy/Update a Server' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -165,8 +158,8 @@
     credential: "{{ member_id }} - AWX SSH Key"
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
@@ -175,8 +168,8 @@
   awx.awx.tower_job_launch:
     job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
     wait: yes
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
@@ -194,8 +187,8 @@
     credential: "{{ member_id }} - AWX SSH Key"
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
@@ -231,8 +224,8 @@
     credential: "{{ member_id }} - AWX SSH Key"
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("Perform final shrink") != -1)
@@ -241,8 +234,8 @@
   awx.awx.tower_job_launch:
     job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
     wait: yes
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("Perform final shrink") != -1)
@@ -260,8 +253,8 @@
     credential: "{{ member_id }} - AWX SSH Key"
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes
   when: (purge_mode.find("Perform final shrink") != -1)
@@ -308,6 +301,15 @@
     msg: "{{ db_size_after_stat.stdout.split('\n') }}"
   when: (db_size_after_stat is defined) and (purge_mode.find("Perform final shrink") != -1)

+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
 - name: Set boolean value to exit playbook
   set_fact:
     end_playbook: true


@@ -1,5 +1,5 @@
-- name: Ensure dateutils and curl is installed in AWX
+- name: Ensure dateutils is installed in AWX
   delegate_to: 127.0.0.1
   yum:
     name: dateutils
@@ -90,6 +90,15 @@
     msg: "{{ remote_media_size_after.stdout.split('\n') }}"
   when: matrix_purge_media_type == "Remote Media"

+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
 - name: Set boolean value to exit playbook
   set_fact:
     end_playbook: true


@@ -5,4 +5,3 @@
     path: "/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml"
     regexp: 'matrix_synapse_use_presence'
     replace: 'matrix_synapse_presence_enabled'


@@ -0,0 +1,24 @@
+- name: Set the new authorized key taken from file
+  authorized_key:
+    user: root
+    state: present
+    exclusive: yes
+    key: "{{ lookup('file', '/var/lib/awx/projects/hosting/client_public.key') }}"
+
+- name: Delete the AWX session token for executing modules
+  awx.awx.tower_token:
+    description: 'AWX Session Token'
+    scope: "write"
+    state: absent
+    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
+
+- name: Set boolean value to exit playbook
+  set_fact:
+    end_playbook: true
+
+- name: End playbook if this task list is called.
+  meta: end_play
+  when: end_playbook is defined and end_playbook|bool


@@ -218,13 +218,6 @@
 - debug:
     msg: "matrix_corporal_matrix_registration_shared_secret: {{ matrix_corporal_matrix_registration_shared_secret }}"

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Corporal (Advanced)' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -242,6 +235,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -82,13 +82,6 @@
     dest: '/matrix/awx/configure_dimension.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Dimension' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -106,6 +99,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -40,13 +40,6 @@
     dest: '/matrix/awx/configure_element.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Element' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -64,6 +57,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -21,13 +21,6 @@
     dest: '/matrix/awx/configure_element_subdomain.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Element Subdomain' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -44,6 +37,6 @@
     survey_spec: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_element_subdomain.json') }}"
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -22,13 +22,6 @@
     dest: '/matrix/awx/configure_jitsi.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Jitsi' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -46,6 +39,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -79,13 +79,6 @@
     dest: '/matrix/awx/configure_ma1sd.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure ma1sd (Advanced)' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -103,7 +96,7 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -21,13 +21,6 @@
     dest: '/matrix/awx/configure_email_relay.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Email Relay' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -45,6 +38,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -200,13 +200,6 @@
     dest: '/matrix/awx/configure_synapse.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Synapse' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -224,6 +217,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -21,13 +21,6 @@
     dest: '/matrix/awx/configure_synapse_admin.json'
     mode: '0660'

-- name: Collect AWX admin token the hard way!
-  delegate_to: 127.0.0.1
-  shell: |
-    curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
-  register: tower_token
-  no_log: True
-
 - name: Recreate 'Configure Synapse Admin' job template
   delegate_to: 127.0.0.1
   awx.awx.tower_job_template:
@@ -45,6 +38,6 @@
     become_enabled: yes
     state: present
     verbosity: 1
-    tower_host: "https://{{ tower_host }}"
-    tower_oauthtoken: "{{ tower_token.stdout }}"
+    tower_host: "https://{{ awx_host }}"
+    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
     validate_certs: yes


@@ -83,8 +83,8 @@ matrix_host_command_openssl: "/usr/bin/env openssl"
 matrix_host_command_systemctl: "/usr/bin/env systemctl"
 matrix_host_command_sh: "/usr/bin/env sh"

-matrix_ntpd_package: "{{ 'systemd-timesyncd' if ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7' else 'ntp' }}"
-matrix_ntpd_service: "{{ 'systemd-timesyncd' if ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7' else ('ntpd' if ansible_os_family == 'RedHat' or ansible_distribution == 'Archlinux' else 'ntp') }}"
+matrix_ntpd_package: "{{ 'systemd-timesyncd' if (ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7') or (ansible_distribution == 'Ubuntu' and ansible_distribution_major_version > '18') else 'ntp' }}"
+matrix_ntpd_service: "{{ 'systemd-timesyncd' if (ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7') or (ansible_distribution == 'Ubuntu' and ansible_distribution_major_version > '18') or ansible_distribution == 'Archlinux' else ('ntpd' if ansible_os_family == 'RedHat' else 'ntp') }}"

 matrix_homeserver_url: "https://{{ matrix_server_fqn_matrix }}"


@@ -4,7 +4,6 @@
   pacman:
     name:
       - python-docker
-      - "{{ matrix_ntpd_package }}"
       # TODO This needs to be verified. Which version do we need?
      - fuse3
       - python-dnspython


@@ -3,13 +3,20 @@
 matrix_appservice_webhooks_enabled: true

+matrix_appservice_webhooks_container_image_self_build: false
+matrix_appservice_webhooks_container_image_self_build_repo: "https://github.com/turt2live/matrix-appservice-webhooks"
+matrix_appservice_webhooks_container_image_self_build_repo_version: "{{ 'master' if matrix_appservice_webhooks_version == 'latest' else matrix_appservice_webhooks_version }}"
+matrix_appservice_webhooks_container_image_self_build_repo_dockerfile_path: "Dockerfile"
+
 matrix_appservice_webhooks_version: latest
-matrix_appservice_webhooks_docker_image: "{{ matrix_container_global_registry_prefix }}turt2live/matrix-appservice-webhooks:{{ matrix_appservice_webhooks_version }}"
+matrix_appservice_webhooks_docker_image: "{{ matrix_appservice_webhooks_docker_image_name_prefix }}turt2live/matrix-appservice-webhooks:{{ matrix_appservice_webhooks_version }}"
+matrix_appservice_webhooks_docker_image_name_prefix: "{{ 'localhost/' if matrix_appservice_webhooks_container_image_self_build else matrix_container_global_registry_prefix }}"
 matrix_appservice_webhooks_docker_image_force_pull: "{{ matrix_appservice_webhooks_docker_image.endswith(':latest') }}"

 matrix_appservice_webhooks_base_path: "{{ matrix_base_data_path }}/appservice-webhooks"
 matrix_appservice_webhooks_config_path: "{{ matrix_appservice_webhooks_base_path }}/config"
 matrix_appservice_webhooks_data_path: "{{ matrix_appservice_webhooks_base_path }}/data"
+matrix_appservice_webhooks_docker_src_files_path: "{{ matrix_appservice_webhooks_base_path }}/docker-src"

 # If nginx-proxy is disabled, the bridge itself expects its endpoint to be on its own domain (e.g. "localhost:6789")
 matrix_appservice_webhooks_public_endpoint: /appservice-webhooks


@@ -1,23 +1,47 @@
 ---

+- name: Ensure AppService webhooks paths exist
+  file:
+    path: "{{ item.path }}"
+    state: directory
+    mode: 0750
+    owner: "{{ matrix_user_username }}"
+    group: "{{ matrix_user_groupname }}"
+  with_items:
+    - { path: "{{ matrix_appservice_webhooks_base_path }}", when: true }
+    - { path: "{{ matrix_appservice_webhooks_config_path }}", when: true }
+    - { path: "{{ matrix_appservice_webhooks_data_path }}", when: true }
+    - { path: "{{ matrix_appservice_webhooks_docker_src_files_path }}", when: "{{ matrix_appservice_webhooks_container_image_self_build }}"}
+  when: "item.when|bool"
+
 - name: Ensure Appservice webhooks image is pulled
   docker_image:
     name: "{{ matrix_appservice_webhooks_docker_image }}"
     source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
     force_source: "{{ matrix_appservice_webhooks_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
     force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_appservice_webhooks_docker_image_force_pull }}"
+  when: "not matrix_appservice_webhooks_container_image_self_build|bool"

-- name: Ensure AppService webhooks paths exist
-  file:
-    path: "{{ item }}"
-    state: directory
-    mode: 0750
-    owner: "{{ matrix_user_username }}"
-    group: "{{ matrix_user_groupname }}"
-  with_items:
-    - "{{ matrix_appservice_webhooks_base_path }}"
-    - "{{ matrix_appservice_webhooks_config_path }}"
-    - "{{ matrix_appservice_webhooks_data_path }}"
+- block:
+    - name: Ensure Appservice webhooks repository is present on self-build
+      git:
+        repo: "{{ matrix_appservice_webhooks_container_image_self_build_repo }}"
+        dest: "{{ matrix_appservice_webhooks_docker_src_files_path }}"
+        version: "{{ matrix_appservice_webhooks_container_image_self_build_repo_version }}"
+        force: "yes"
+      register: matrix_appservice_webhooks_git_pull_results
+
+    - name: Ensure Appservice webhooks Docker image is built
+      docker_image:
+        name: "{{ matrix_appservice_webhooks_docker_image }}"
+        source: build
+        force_source: "{{ matrix_appservice_webhooks_git_pull_results.changed if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
+        force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_appservice_webhooks_git_pull_results.changed }}"
+        build:
+          dockerfile: "{{ matrix_appservice_webhooks_container_image_self_build_repo_dockerfile_path }}"
+          path: "{{ matrix_appservice_webhooks_docker_src_files_path }}"
+          pull: yes
+  when: "matrix_appservice_webhooks_container_image_self_build|bool"

 - name: Ensure Matrix Appservice webhooks config is installed
   copy:


@@ -3,7 +3,7 @@
 matrix_beeper_linkedin_enabled: true

-matrix_beeper_linkedin_version: v0.5.0
+matrix_beeper_linkedin_version: v0.5.1
 # See: https://gitlab.com/beeper/linkedin/container_registry
 matrix_beeper_linkedin_docker_image: "registry.gitlab.com/beeper/linkedin:{{ matrix_beeper_linkedin_version }}-amd64"
 matrix_beeper_linkedin_docker_image_force_pull: "{{ matrix_beeper_linkedin_docker_image.endswith(':latest-amd64') }}"


@@ -3,7 +3,7 @@
 matrix_heisenbridge_enabled: true

-matrix_heisenbridge_version: 1.1.1
+matrix_heisenbridge_version: 1.2.1
 matrix_heisenbridge_docker_image: "{{ matrix_container_global_registry_prefix }}hif1/heisenbridge:{{ matrix_heisenbridge_version }}"
 matrix_heisenbridge_docker_image_force_pull: "{{ matrix_heisenbridge_docker_image.endswith(':latest') }}"


@@ -3,7 +3,7 @@ matrix_client_element_enabled: true
 matrix_client_element_container_image_self_build: false
 matrix_client_element_container_image_self_build_repo: "https://github.com/vector-im/riot-web.git"

-matrix_client_element_version: v1.8.5
+matrix_client_element_version: v1.9.0
 matrix_client_element_docker_image: "{{ matrix_client_element_docker_image_name_prefix }}vectorim/element-web:{{ matrix_client_element_version }}"
 matrix_client_element_docker_image_name_prefix: "{{ 'localhost/' if matrix_client_element_container_image_self_build else matrix_container_global_registry_prefix }}"
 matrix_client_element_docker_image_force_pull: "{{ matrix_client_element_docker_image.endswith(':latest') }}"


@@ -15,7 +15,7 @@
 - name: Generate Etherpad proxying configuration for matrix-nginx-proxy
   set_fact:
     matrix_etherpad_matrix_nginx_proxy_configuration: |
-      rewrite ^{{ matrix_etherpad_public_endpoint }}$ $scheme://$server_name{{ matrix_etherpad_public_endpoint }}/ permanent;
+      rewrite ^{{ matrix_etherpad_public_endpoint }}$ {{ matrix_nginx_proxy_x_forwarded_proto_value }}://$server_name{{ matrix_etherpad_public_endpoint }}/ permanent;

       location {{ matrix_etherpad_public_endpoint }}/ {
       {% if matrix_nginx_proxy_enabled|default(False) %}
@@ -27,7 +27,7 @@
         proxy_http_version 1.1; # recommended with keepalive connections
         proxy_pass_header Server;
         proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-Proto $scheme; # for EP to set secure cookie flag when https is used
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }}; # for EP to set secure cookie flag when https is used

         # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $connection_upgrade;


@@ -40,6 +40,12 @@ matrix_nginx_proxy_container_extra_arguments: []
 # - services are served directly from the HTTP vhost
 matrix_nginx_proxy_https_enabled: true

+# Controls whether matrix-nginx-proxy trusts an upstream server's X-Forwarded-Proto header
+#
+# Required if you disable HTTPS for the container (see `matrix_nginx_proxy_https_enabled`) and have an upstream server handle it instead.
+matrix_nginx_proxy_trust_forwarded_proto: false
+matrix_nginx_proxy_x_forwarded_proto_value: "{{ '$http_x_forwarded_proto' if matrix_nginx_proxy_trust_forwarded_proto else '$scheme' }}"
+
 # Controls whether the matrix-nginx-proxy container exposes its HTTP port (tcp/8080 in the container).
 #
 # Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:80"), or empty string to not expose.
@@ -177,6 +183,10 @@ matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container: "127.0.0.1:809
 # Controls whether proxying for metrics (`/_synapse/metrics`) should be done (on the matrix domain)
 matrix_nginx_proxy_proxy_synapse_metrics: false
 matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_enabled: false
+# The following value will be written verbatim to the htpasswd file that stores the password for nginx to check against and needs to be encoded appropriately.
+# Read the manpage at `man 1 htpasswd` to learn more, then encrypt your password, and paste the encrypted value here.
+# e.g. `htpasswd -c mypass.htpasswd prometheus` and enter `mysecurepw` when prompted yields `prometheus:$apr1$wZhqsn.U$7LC3kMmjUbjNAZjyMyvYv/`
+# The part after `prometheus:` is needed here, e.g. matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key: "$apr1$wZhqsn.U$7LC3kMmjUbjNAZjyMyvYv/"
 matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key: ""

 # The addresses where the Matrix Client API is.
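To illustrate the comment above, generating such a value might look like this (the filename and password are placeholders):

```Shell
# Create an htpasswd entry for the fixed user 'prometheus' (prompts for a password).
$ htpasswd -c mypass.htpasswd prometheus
# Everything after "prometheus:" in the resulting file is the value to paste
# into matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key.
$ cat mypass.htpasswd
```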
@@ -426,7 +436,7 @@ matrix_ssl_additional_domains_to_obtain_certificates_for: []
 # Controls whether to obtain production or staging certificates from Let's Encrypt.
 matrix_ssl_lets_encrypt_staging: false
-matrix_ssl_lets_encrypt_certbot_docker_image: "{{ matrix_container_global_registry_prefix }}certbot/certbot:{{ matrix_ssl_architecture }}-v1.19.0"
+matrix_ssl_lets_encrypt_certbot_docker_image: "{{ matrix_container_global_registry_prefix }}certbot/certbot:{{ matrix_ssl_architecture }}-v1.20.0"
 matrix_ssl_lets_encrypt_certbot_docker_image_force_pull: "{{ matrix_ssl_lets_encrypt_certbot_docker_image.endswith(':latest') }}"
 matrix_ssl_lets_encrypt_certbot_standalone_http_port: 2402
 matrix_ssl_lets_encrypt_support_email: ~


@@ -88,7 +88,7 @@ server {
 {% if matrix_nginx_proxy_ocsp_stapling_enabled %}
     ssl_stapling on;
     ssl_stapling_verify on;
-    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_element_hostname }}/chain.pem;
+    ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_hydrogen_hostname }}/chain.pem;
 {% endif %}

 {% if matrix_nginx_proxy_ssl_session_tickets_off %}


@@ -20,13 +20,13 @@
 {% if matrix_nginx_proxy_floc_optout_enabled %}
     add_header Permissions-Policy interest-cohort=() always;
 {% endif %}

 {% if matrix_nginx_proxy_hsts_preload_enabled %}
     add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
 {% else %}
     add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
 {% endif %}

     add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";

     location /.well-known/matrix {
@@ -59,7 +59,7 @@
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};
     }
 {% endif %}
@@ -77,7 +77,7 @@
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};
     }
 {% endif %}
@@ -112,7 +112,7 @@
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};
     }
 {% endif %}
@@ -137,7 +137,7 @@
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};

         client_body_buffer_size 25M;
         client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb }}M;
@@ -152,7 +152,7 @@
     #}
     location ~* ^/$ {
 {% if matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain %}
-        return 302 $scheme://{{ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain }}$request_uri;
+        return 302 {{ matrix_nginx_proxy_x_forwarded_proto_value }}://{{ matrix_nginx_proxy_proxy_matrix_client_redirect_root_uri_to_domain }}$request_uri;
 {% else %}
         rewrite ^/$ /_matrix/static/ last;
 {% endif %}
@@ -215,12 +215,12 @@ server {
     ssl_stapling_verify on;
     ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_matrix_hostname }}/chain.pem;
 {% endif %}

 {% if matrix_nginx_proxy_ssl_session_tickets_off %}
     ssl_session_tickets off;
 {% endif %}

     ssl_session_cache {{ matrix_nginx_proxy_ssl_session_cache }};
     ssl_session_timeout {{ matrix_nginx_proxy_ssl_session_timeout }};

     {{ render_vhost_directives() }}
 }
@@ -262,7 +262,7 @@ server {
     ssl_stapling_verify on;
     ssl_trusted_certificate {{ matrix_nginx_proxy_proxy_matrix_federation_api_ssl_trusted_certificate }};
 {% endif %}

 {% if matrix_nginx_proxy_ssl_session_tickets_off %}
     ssl_session_tickets off;
 {% endif %}
@@ -283,7 +283,7 @@ server {
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};

         client_body_buffer_size 25M;
         client_max_body_size {{ matrix_nginx_proxy_proxy_matrix_federation_api_client_max_body_size_mb }}M;


@@ -71,7 +71,7 @@
         proxy_set_header Connection "upgrade";
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};

         tcp_nodelay on;
     }
 {% endmacro %}
@@ -128,7 +128,7 @@ server {
     ssl_stapling_verify on;
     ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_jitsi_hostname }}/chain.pem;
 {% endif %}

 {% if matrix_nginx_proxy_ssl_session_tickets_off %}
     ssl_session_tickets off;
 {% endif %}


@@ -29,7 +29,7 @@
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;
-        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Proto {{ matrix_nginx_proxy_x_forwarded_proto_value }};
     }
 {% endmacro %}
@@ -85,7 +85,7 @@ server {
     ssl_stapling_verify on;
     ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_sygnal_hostname }}/chain.pem;
 {% endif %}

 {% if matrix_nginx_proxy_ssl_session_tickets_off %}
     ssl_session_tickets off;
 {% endif %}


@@ -22,7 +22,8 @@ matrix_postgres_docker_image_v10: "{{ matrix_container_global_registry_prefix }}
 matrix_postgres_docker_image_v11: "{{ matrix_container_global_registry_prefix }}postgres:11.13{{ matrix_postgres_docker_image_suffix }}"
 matrix_postgres_docker_image_v12: "{{ matrix_container_global_registry_prefix }}postgres:12.8{{ matrix_postgres_docker_image_suffix }}"
 matrix_postgres_docker_image_v13: "{{ matrix_container_global_registry_prefix }}postgres:13.4{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_latest: "{{ matrix_postgres_docker_image_v13 }}"
+matrix_postgres_docker_image_v14: "{{ matrix_container_global_registry_prefix }}postgres:14.0{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_latest: "{{ matrix_postgres_docker_image_v14 }}"
 # This variable is assigned at runtime. Overriding its value has no effect.
 matrix_postgres_docker_image_to_use: '{{ matrix_postgres_docker_image_latest }}'
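Note: bumping `latest` to Postgres 14.0 only affects new installations; existing servers keep the major version detected in their data directory. Anyone wanting to keep provisioning 13.x for now could presumably override the default in their vars.yml along these lines (hypothetical usage, reusing the variables defined above):

```yaml
# Hypothetical vars.yml override: keep provisioning Postgres 13.x
# even though the playbook's default "latest" image now points to 14.0.
matrix_postgres_docker_image_latest: "{{ matrix_postgres_docker_image_v13 }}"
```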

View file

@@ -54,3 +54,8 @@
   set_fact:
     matrix_postgres_detected_version_corresponding_docker_image: "{{ matrix_postgres_docker_image_v12 }}"
   when: "matrix_postgres_detected_version == '12' or matrix_postgres_detected_version.startswith('12.')"
+
+- name: Determine corresponding Docker image to detected version (use 13.x, if detected)
+  set_fact:
+    matrix_postgres_detected_version_corresponding_docker_image: "{{ matrix_postgres_docker_image_v13 }}"
+  when: "matrix_postgres_detected_version == '13' or matrix_postgres_detected_version.startswith('13.')"
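The new 13.x task completes the detected-version-to-image mapping up to the previous major release. Once servers provisioned with 14.0 exist, a matching task would presumably follow the same pattern (a sketch, not part of this diff):

```yaml
# Hypothetical follow-up task, mirroring the 10.x-13.x mappings above.
- name: Determine corresponding Docker image to detected version (use 14.x, if detected)
  set_fact:
    matrix_postgres_detected_version_corresponding_docker_image: "{{ matrix_postgres_docker_image_v14 }}"
  when: "matrix_postgres_detected_version == '14' or matrix_postgres_detected_version.startswith('14.')"
```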

View file

@@ -22,8 +22,8 @@
 - name: Generate matrix-registration proxying configuration for matrix-nginx-proxy
   set_fact:
     matrix_registration_matrix_nginx_proxy_configuration: |
-      rewrite ^{{ matrix_registration_public_endpoint }}$ $scheme://$server_name{{ matrix_registration_public_endpoint }}/ permanent;
-      rewrite ^{{ matrix_registration_public_endpoint }}/$ $scheme://$server_name{{ matrix_registration_public_endpoint }}/register redirect;
+      rewrite ^{{ matrix_registration_public_endpoint }}$ {{ matrix_nginx_proxy_x_forwarded_proto_value }}://$server_name{{ matrix_registration_public_endpoint }}/ permanent;
+      rewrite ^{{ matrix_registration_public_endpoint }}/$ {{ matrix_nginx_proxy_x_forwarded_proto_value }}://$server_name{{ matrix_registration_public_endpoint }}/register redirect;
       location ~ ^{{ matrix_registration_public_endpoint }}/(.*) {
       {% if matrix_nginx_proxy_enabled|default(False) %}
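For illustration, with a hypothetical `matrix_registration_public_endpoint` of `/matrix-registration` and the forwarded-proto value rendering as `$http_x_forwarded_proto` (i.e. trusting the upstream proxy), the generated fact would expand roughly to the following (assumed rendering, shown as the same YAML block scalar the task builds):

```yaml
# Assumed rendered output for the hypothetical endpoint /matrix-registration.
matrix_registration_matrix_nginx_proxy_configuration: |
  rewrite ^/matrix-registration$ $http_x_forwarded_proto://$server_name/matrix-registration/ permanent;
  rewrite ^/matrix-registration/$ $http_x_forwarded_proto://$server_name/matrix-registration/register redirect;
```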

View file

@@ -22,7 +22,7 @@
 - name: Generate Synapse Admin proxying configuration for matrix-nginx-proxy
   set_fact:
     matrix_synapse_admin_matrix_nginx_proxy_configuration: |
-      rewrite ^{{ matrix_synapse_admin_public_endpoint }}$ $scheme://$server_name{{ matrix_synapse_admin_public_endpoint }}/ permanent;
+      rewrite ^{{ matrix_synapse_admin_public_endpoint }}$ {{ matrix_nginx_proxy_x_forwarded_proto_value }}://$server_name{{ matrix_synapse_admin_public_endpoint }}/ permanent;
       location ~ ^{{ matrix_synapse_admin_public_endpoint }}/(.*) {
       {% if matrix_nginx_proxy_enabled|default(False) %}

View file

@@ -15,8 +15,8 @@ matrix_synapse_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_cont
 # amd64 gets released first.
 # arm32 relies on self-building, so the same version can be built immediately.
 # arm64 users need to wait for a prebuilt image to become available.
-matrix_synapse_version: v1.42.0
-matrix_synapse_version_arm64: v1.42.0
+matrix_synapse_version: v1.44.0
+matrix_synapse_version_arm64: v1.44.0
 matrix_synapse_docker_image_tag: "{{ matrix_synapse_version if matrix_architecture in ['arm32', 'amd64'] else matrix_synapse_version_arm64 }}"
 matrix_synapse_docker_image_force_pull: "{{ matrix_synapse_docker_image.endswith(':latest') }}"
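Per the comments above, arm64 images for a fresh release can lag behind the version bump. Hosts that don't want to wait could presumably self-build instead (hypothetical vars.yml snippet; `matrix_synapse_container_image_self_build` is assumed from the `localhost/` prefix logic visible in the hunk header, not confirmed by this diff):

```yaml
# Hypothetical override: build the bumped release locally on arm64
# instead of waiting for a prebuilt image to be published.
matrix_synapse_container_image_self_build: true
matrix_synapse_version_arm64: v1.44.0
```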

View file

@@ -357,6 +357,24 @@ update_user_directory: false
 daemonize: false
 {% endif %}
+
+# Connection settings for the manhole
+#
+manhole_settings:
+  # The username for the manhole. This defaults to 'matrix'.
+  #
+  #username: manhole
+
+  # The password for the manhole. This defaults to 'rabbithole'.
+  #
+  #password: mypassword
+
+  # The private and public SSH key pair used to encrypt the manhole traffic.
+  # If these are left unset, then hardcoded and non-secret keys are used,
+  # which could allow traffic to be intercepted if sent over a public network.
+  #
+  #ssh_priv_key_path: /data/id_rsa
+  #ssh_pub_key_path: /data/id_rsa.pub
 # Forward extremities can build up in a room due to networking delays between
 # homeservers. Once this happens in a large room, calculation of the state of
 # that room can become quite expensive. To mitigate this, once the number of
@@ -2258,7 +2276,7 @@ password_config:
 #
 #require_lowercase: true
-# Whether a password must contain at least one lowercase letter.
+# Whether a password must contain at least one uppercase letter.
 # Defaults to 'false'.
 #
 #require_uppercase: true
@@ -2594,12 +2612,16 @@ user_directory:
 #enabled: false
 # Defines whether to search all users visible to your HS when searching
-# the user directory, rather than limiting to users visible in public
-# rooms. Defaults to false.
+# the user directory. If false, search results will only contain users
+# visible in public rooms and users sharing a room with the requester.
+# Defaults to false.
 #
-# If you set it true, you'll have to rebuild the user_directory search
-# indexes, see:
-# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
+# NB. If you set this to true, and the last time the user_directory search
+# indexes were (re)built was before Synapse 1.44, you'll have to
+# rebuild the indexes in order to search through all known users.
+# These indexes are built the first time Synapse starts; admins can
+# manually trigger a rebuild following the instructions at
+# https://matrix-org.github.io/synapse/latest/user_directory.html
 #
 # Uncomment to return search results containing all known users, even if that
 # user does not share a room with the requester.
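The reworded `user_directory` comments describe Synapse's `search_all_users` option. A minimal homeserver.yaml fragment enabling it would look roughly like this (a sketch based on the comments above; remember the index-rebuild caveat for pre-1.44 indexes):

```yaml
user_directory:
  enabled: true
  # Return search results for all known users, not just users sharing
  # a room with the requester.
  search_all_users: true
```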

View file

@@ -32,6 +32,8 @@ matrix_synapse_workers_generic_worker_endpoints:
 - ^/_matrix/federation/v1/user/devices/
 - ^/_matrix/federation/v1/get_groups_publicised$
 - ^/_matrix/key/v2/query
+- ^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
+- ^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/
 # Inbound federation transaction request
 - ^/_matrix/federation/v1/send/
@@ -43,6 +45,9 @@ matrix_synapse_workers_generic_worker_endpoints:
 - ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
 - ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
 - ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
+- ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
+- ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
+- ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
 - ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
 - ^/_matrix/client/(api/v1|r0|unstable)/devices$
 - ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
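These regexes list endpoints (here extended with the MSC2946 spaces/hierarchy routes) that can be offloaded to Synapse generic workers. Taking advantage of them might look like this (hypothetical vars.yml snippet; variable names assumed from this playbook's worker support, not shown in this diff):

```yaml
# Hypothetical vars.yml snippet: enable workers so that requests matching
# the generic-worker endpoint patterns above are routed off the main process.
matrix_synapse_workers_enabled: true
matrix_synapse_workers_preset: one-of-each
```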

View file

@@ -56,4 +56,4 @@
 - matrix-aux
 - matrix-postgres-backup
 - matrix-prometheus-postgres-exporter
 - matrix-common-after