Merge branch 'master' into pub.solar

This commit is contained in:
teutat3s 2021-11-23 14:35:44 +01:00
commit 8eefd29ec9
Signed by: teutat3s
GPG key ID: 18DAE600A6BBE705
125 changed files with 510 additions and 245 deletions

View file

@@ -1,3 +1,12 @@
# 2021-11-11
## Dropped support for Postgres v9.6
Postgres v9.6 reached its end of life today, so the playbook will refuse to run for you if you're still on that version.
Synapse still supports v9.6 (for now), but we're retiring support for it early, to avoid having to maintain support for so many Postgres versions. Users that are still on Postgres v9.6 can easily [upgrade Postgres](docs/maintenance-postgres.md#upgrading-postgresql) via the playbook.
# 2021-10-23
## Hangouts bridge no longer updated, superseded by a Googlechat bridge
@@ -244,6 +253,8 @@ The fact that we've renamed Synapse's database from `homeserver` to `synapse` (i
## (Breaking Change) The mautrix-facebook bridge now requires a Postgres database
**Update from 2021-11-15**: SQLite support has been re-added to the mautrix-facebook bridge in [v0.3.2](https://github.com/mautrix/facebook/releases/tag/v0.3.2). You can ignore this changelog entry.
A new version of the [mautrix-facebook](https://github.com/tulir/mautrix-facebook) bridge has been released. It's a full rewrite of its backend and the bridge now requires Postgres. New versions of the bridge can no longer run on SQLite.
**TLDR**: if you're NOT using an [external Postgres server](docs/configuring-playbook-external-postgres.md) and have NOT forcefully kept the bridge on SQLite during [The big move to all-on-Postgres (potentially dangerous)](#the-big-move-to-all-on-postgres-potentially-dangerous), you will be automatically upgraded without manual intervention. All you need to do is send a `login` message to the Facebook bridge bot again.

View file

@@ -8,8 +8,25 @@ Use the following playbook configuration:
```yaml
matrix_mautrix_whatsapp_enabled: true
```
The WhatsApp multi-device beta is required. With multi-device, it is enough for the WhatsApp phone app to connect to the Internet once every two weeks.
## Enable backfilling history
This requires a server with MSC2716 support, which is currently an experimental feature in synapse.
Note that as of Synapse 1.46, there are still some bugs with the implementation, especially if using event persistence workers.
Use the following playbook configuration:
```yaml
matrix_synapse_configuration_extension_yaml: |
  experimental_features:
    msc2716_enabled: true
```
Then enable backfilling in the bridge's configuration:
```yaml
matrix_mautrix_whatsapp_configuration_extension_yaml: |
  bridge:
    history_sync:
      backfill: true
```
## Set up Double Puppeting

View file

@@ -37,6 +37,7 @@ matrix_synapse_ext_password_provider_rest_auth_endpoint: "http://matrix-corporal
matrix_corporal_enabled: true
# See below for an example of how to use a locally-stored static policy
matrix_corporal_policy_provider_config: |
  {
    "Type": "http",
@@ -74,10 +75,48 @@ Matrix Corporal operates with a specific Matrix user on your server.
By default, it's `matrix-corporal` (controllable by the `matrix_corporal_reconciliation_user_id_local_part` setting, see above).
No matter what Matrix user id you configure to run it with, make sure that:
- the Matrix Corporal user is created by [registering it](registering-users.md) **with administrator privileges**. Use a password you remember, as you'll need to log in from time to time to create or join rooms
- the Matrix Corporal user is joined and has Admin/Moderator-level access to any rooms you want it to manage
### Using a locally-stored static policy
If you'd like to use a [static policy file](https://github.com/devture/matrix-corporal/blob/master/docs/policy-providers.md#static-file-pull-style-policy-provider), you can use a configuration like this:
```yaml
matrix_corporal_policy_provider_config: |
  {
    "Type": "static_file",
    "Path": "/etc/matrix-corporal/policy.json"
  }

# Modify the policy below as you see fit
matrix_aux_file_definitions:
  - dest: "{{ matrix_corporal_config_dir_path }}/policy.json"
    content: |
      {
        "schemaVersion": 1,
        "identificationStamp": "stamp-1",
        "flags": {
          "allowCustomUserDisplayNames": false,
          "allowCustomUserAvatars": false,
          "forbidRoomCreation": false,
          "forbidEncryptedRoomCreation": true,
          "forbidUnencryptedRoomCreation": false,
          "allowCustomPassthroughUserPasswords": true,
          "allowUnauthenticatedPasswordResets": false,
          "allow3pidLogin": false
        },
        "managedCommunityIds": [],
        "managedRoomIds": [],
        "users": []
      }
```
To learn more about the policy configuration format, see the matrix-corporal documentation on [policy](https://github.com/devture/matrix-corporal/blob/master/docs/policy.md).
Each time you update the policy in your `vars.yml` file, you'd need to re-run the playbook and restart matrix-corporal (`--tags=setup-all,start` or `--tags=setup-aux-files,setup-corporal,start`).
## Matrix Corporal files

View file

@@ -19,9 +19,9 @@ matrix_container_global_registry_prefix: "docker.io/"
matrix_identity_server_url: "{{ ('https://' + matrix_server_fqn_matrix) if matrix_ma1sd_enabled else None }}"
# If Synapse workers are enabled and matrix-nginx-proxy is disabled, certain APIs may not work over 'http://matrix-synapse:{{ matrix_synapse_container_client_api_port }}'.
# This is because we explicitly disable them for the main Synapse process.
matrix_homeserver_container_url: "{{ 'http://matrix-nginx-proxy:12080' if matrix_nginx_proxy_enabled else 'http://matrix-synapse:' + matrix_synapse_container_client_api_port|string }}"
######################################################################
#
@@ -113,6 +113,7 @@ matrix_appservice_webhooks_container_http_host_bind_port: "{{ '' if matrix_nginx
matrix_appservice_webhooks_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'webhook.as.token') | to_uuid }}"
matrix_appservice_webhooks_homeserver_url: "http://matrix-synapse:{{ matrix_synapse_container_client_api_port }}"
matrix_appservice_webhooks_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'webhook.hs.token') | to_uuid }}"
matrix_appservice_webhooks_id_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'webhook.id.token') | to_uuid }}"
@@ -151,6 +152,7 @@ matrix_appservice_slack_container_http_host_bind_port: "{{ '' if matrix_nginx_pr
matrix_appservice_slack_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'slack.as.token') | to_uuid }}"
matrix_appservice_slack_homeserver_url: "http://matrix-synapse:{{ matrix_synapse_container_client_api_port }}"
matrix_appservice_slack_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'slack.hs.token') | to_uuid }}"
matrix_appservice_slack_id_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'slack.id.token') | to_uuid }}"
@@ -567,6 +569,7 @@ matrix_sms_bridge_systemd_required_services_list: |
matrix_sms_bridge_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'sms.as.token') | to_uuid }}"
matrix_sms_bridge_homeserver_port: "{{ matrix_synapse_container_client_api_port }}"
matrix_sms_bridge_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'sms.hs.token') | to_uuid }}"
######################################################################
@@ -1047,6 +1050,8 @@ matrix_dimension_enabled: false
# the Dimension HTTP port to the local host.
matrix_dimension_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:8184' }}"
matrix_dimension_homeserver_federationUrl: "http://matrix-synapse:{{ matrix_synapse_container_federation_api_plain_port|string }}"
matrix_integration_manager_rest_url: "{{ matrix_dimension_integrations_rest_url if matrix_dimension_enabled else None }}"
matrix_integration_manager_ui_url: "{{ matrix_dimension_integrations_ui_url if matrix_dimension_enabled else None }}"
@@ -1212,7 +1217,8 @@ matrix_ma1sd_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
# Normally, matrix-nginx-proxy is enabled and nginx can reach ma1sd over the container network.
# If matrix-nginx-proxy is not enabled, or you otherwise have a need for it, you can expose
# ma1sd's web-server port.
matrix_ma1sd_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:' + matrix_ma1sd_container_port|string }}"
# We enable Synapse integration via its Postgres database by default.
# When using another Identity store, you might wish to disable this and define
@@ -1294,8 +1300,8 @@ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_with_container: "matrix-corpor
matrix_nginx_proxy_proxy_matrix_corporal_api_addr_sans_container: "127.0.0.1:41081"
matrix_nginx_proxy_proxy_matrix_identity_api_enabled: "{{ matrix_ma1sd_enabled }}"
matrix_nginx_proxy_proxy_matrix_identity_api_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_container_port }}"
matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_container_port }}"
# By default, we do TLS termination for the Matrix Federation API (port 8448) at matrix-nginx-proxy.
# Unless this is handled there OR Synapse's federation listener port is disabled, we'll reverse-proxy.
@@ -1306,6 +1312,12 @@ matrix_nginx_proxy_proxy_matrix_federation_api_addr_sans_container: "127.0.0.1:1
# Settings controlling matrix-synapse-proxy.conf
matrix_nginx_proxy_proxy_synapse_enabled: "{{ matrix_synapse_enabled }}"
matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container: "matrix-synapse:{{ matrix_synapse_container_client_api_port }}"
matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container: "127.0.0.1:{{ matrix_synapse_container_client_api_port }}"
matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container: "matrix-synapse:{{ matrix_synapse_container_federation_api_plain_port|string }}"
matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container: "localhost:{{ matrix_synapse_container_federation_api_plain_port|string }}"
# When matrix-nginx-proxy is disabled, the actual port number that the vhost uses may begin to matter.
matrix_nginx_proxy_proxy_matrix_federation_port: "{{ matrix_federation_public_port }}"
@@ -1709,18 +1721,18 @@ matrix_synapse_container_image_self_build: "{{ matrix_architecture not in ['arm6
# When ma1sd is enabled, we can use it to validate email addresses and phone numbers.
# Synapse can validate email addresses by itself as well, but it's probably not what we want by default when we have an identity server.
matrix_synapse_account_threepid_delegates_email: "{{ 'http://matrix-ma1sd:' + matrix_ma1sd_container_port|string if matrix_ma1sd_enabled else '' }}"
matrix_synapse_account_threepid_delegates_msisdn: "{{ 'http://matrix-ma1sd:' + matrix_ma1sd_container_port|string if matrix_ma1sd_enabled else '' }}"
# Normally, matrix-nginx-proxy is enabled and nginx can reach Synapse over the container network.
# If matrix-nginx-proxy is not enabled, or you otherwise have a need for it,
# you can expose Synapse's ports to the host.
#
# For exposing the Matrix Client API's port (plain HTTP) to the local host.
matrix_synapse_container_client_api_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:' + matrix_synapse_container_client_api_port|string }}"
#
# For exposing the Matrix Federation API's plain port (plain HTTP) to the local host.
matrix_synapse_container_federation_api_plain_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:' + matrix_synapse_container_federation_api_plain_port|string }}"
#
# For exposing the Matrix Federation API's TLS port (HTTPS) to the internet on all network interfaces.
matrix_synapse_container_federation_api_tls_host_bind_port: "{{ matrix_federation_public_port if (matrix_synapse_federation_enabled and matrix_synapse_tls_federation_listener_enabled) else '' }}"
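The derivations above rely on the port variables introduced by this change. For reference, a minimal sketch of the assumed role defaults they fall back to (the exact values live in the Synapse and ma1sd roles; the ma1sd port in particular is an assumption here):

```yaml
# Assumed role defaults that the group_vars derivations above build on.
matrix_synapse_container_client_api_port: 8008
matrix_synapse_container_federation_api_plain_port: 8048
# Assumption: ma1sd's container port, formerly referenced as matrix_ma1sd_default_port.
matrix_ma1sd_container_port: 8090
```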

View file

@@ -5,10 +5,11 @@ import json
janitor_token = sys.argv[1]
synapse_container_ip = sys.argv[2]
synapse_container_port = sys.argv[3]
# collect total amount of rooms
rooms_raw_url = 'http://' + synapse_container_ip + ':' + synapse_container_port + '/_synapse/admin/v1/rooms'
rooms_raw_header = {'Authorization': 'Bearer ' + janitor_token}
rooms_raw = requests.get(rooms_raw_url, headers=rooms_raw_header)
rooms_raw_python = json.loads(rooms_raw.text)
@@ -19,7 +20,7 @@ total_rooms = rooms_raw_python["total_rooms"]
room_list_file = open("/tmp/room_list_complete.json", "w")
for i in range(0, total_rooms, 100):
    rooms_inc_url = 'http://' + synapse_container_ip + ':' + synapse_container_port + '/_synapse/admin/v1/rooms?from=' + str(i)
    rooms_inc = requests.get(rooms_inc_url, headers=rooms_raw_header)
    room_list_file.write(rooms_inc.text)

View file

@@ -2,9 +2,9 @@
- name: Collect entire room list into stdout
  shell: |
    curl -X GET --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/rooms?from={{ item }}'
  register: awx_rooms_output

- name: Print stdout to file
  delegate_to: 127.0.0.1
  shell: |

View file

@@ -2,11 +2,11 @@
- name: Purge all rooms with more than N events
  shell: |
    curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ awx_purge_epoche_time.stdout }}000 }' "{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
  register: awx_purge_command

- name: Print output of purge command
  debug:
    msg: "{{ awx_purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe

View file

@@ -31,7 +31,7 @@
- name: Collect access token for janitor user
  shell: |
    curl -X POST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ awx_janitor_user_password }}"}' "{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_matrix/client/r0/login" | jq '.access_token'
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
  register: awx_janitors_token
  no_log: True
@@ -47,7 +47,7 @@
- name: Run build_room_list.py script
  shell: |
    runuser -u matrix -- python3 /usr/local/bin/matrix_build_room_list.py {{ awx_janitors_token.stdout[1:-1] }} {{ awx_synapse_container_ip.stdout }} {{ matrix_synapse_container_client_api_port }}
  register: awx_rooms_total
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
@@ -69,7 +69,7 @@
  shell: |
    jq 'try .rooms[] | select(.joined_local_members == 0) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_no_local_users.txt
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)

- name: Count number of rooms with no local users
  delegate_to: 127.0.0.1
  shell: |
@@ -84,7 +84,7 @@
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)

- name: Purge all rooms with no local users
  include_tasks: purge_database_no_local.yml
  loop: "{{ awx_room_list_no_local_users.splitlines() | flatten(levels=1) }}"
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
@@ -116,7 +116,7 @@
  no_log: True

- name: Purge all rooms with more than N users
  include_tasks: purge_database_users.yml
  loop: "{{ awx_room_list_joined_members.splitlines() | flatten(levels=1) }}"
  when: awx_purge_mode.find("Number of users [slower]") != -1
@@ -141,7 +141,7 @@
  no_log: True

- name: Purge all rooms with more than N events
  include_tasks: purge_database_events.yml
  loop: "{{ awx_room_list_state_events.splitlines() | flatten(levels=1) }}"
  when: awx_purge_mode.find("Number of events [slower]") != -1
@@ -171,7 +171,7 @@
    wait: yes
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
    validate_certs: yes
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1) or (awx_purge_mode.find("Skip purging rooms [faster]") != -1)

- name: Revert 'Deploy/Update a Server' job template
@@ -237,7 +237,7 @@
    wait: yes
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
    validate_certs: yes
  when: (awx_purge_mode.find("Perform final shrink") != -1)

- name: Revert 'Deploy/Update a Server' job template
@@ -272,7 +272,7 @@
  when: (awx_purge_mode.find("Perform final shrink") != -1)
  no_log: True

- name: Print total number of rooms processed
  debug:
    msg: '{{ awx_rooms_total.stdout }}'
  when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)

View file

@@ -2,11 +2,11 @@
- name: Purge all rooms with no local users
  shell: |
    curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "room_id": {{ item }} }' '{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/purge_room'
  register: awx_purge_command

- name: Print output of purge command
  debug:
    msg: "{{ awx_purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe

View file

@@ -2,11 +2,11 @@
- name: Purge all rooms with more than N users
  shell: |
    curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ awx_purge_epoche_time.stdout }}000 }' "{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
  register: awx_purge_command

- name: Print output of purge command
  debug:
    msg: "{{ awx_purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe

View file

@@ -7,11 +7,11 @@
- name: Purge local media to specific date
  shell: |
    curl -X POST --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" '{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/media/matrix.{{ matrix_domain }}/delete?before_ts={{ awx_epoche_time.stdout }}000'
  register: awx_purge_command

- name: Print output of purge command
  debug:
    msg: "{{ awx_purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe

View file

@@ -9,7 +9,7 @@
  include_vars:
    file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
  no_log: True

- name: Ensure curl and jq installed on target machine
  apt:
    pkg:
@@ -23,7 +23,7 @@
- name: Collect access token for janitor user
  shell: |
    curl -XPOST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ awx_janitor_user_password }}"}' "{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_matrix/client/r0/login" | jq '.access_token'
  register: awx_janitors_token
  no_log: True
@@ -31,7 +31,7 @@
  delegate_to: 127.0.0.1
  shell: "dateseq {{ matrix_purge_from_date }} {{ matrix_purge_to_date }}"
  register: awx_purge_dates

- name: Calculate initial size of local media repository
  shell: du -sh /matrix/synapse/storage/media-store/local*
  register: awx_local_media_size_before
@@ -47,12 +47,12 @@
  no_log: True

- name: Purge local media with loop
  include_tasks: purge_media_local.yml
  loop: "{{ awx_purge_dates.stdout_lines | flatten(levels=1) }}"
  when: awx_purge_media_type == "Local Media"

- name: Purge remote media with loop
  include_tasks: purge_media_remote.yml
  loop: "{{ awx_purge_dates.stdout_lines | flatten(levels=1) }}"
  when: awx_purge_media_type == "Remote Media"

View file

@@ -7,11 +7,11 @@
- name: Purge remote media to specific date
  shell: |
    curl -X POST --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" '{{ awx_synapse_container_ip.stdout }}:{{ matrix_synapse_container_client_api_port }}/_synapse/admin/v1/purge_media_cache?before_ts={{ awx_epoche_time.stdout }}000'
  register: awx_purge_command

- name: Print output of purge command
  debug:
    msg: "{{ awx_purge_command.stdout }}"

- name: Pause for 5 seconds to let Synapse breathe

View file

@@ -30,7 +30,7 @@
  insertafter: '# Synapse Extension Start'
  with_dict:
    'matrix_synapse_awx_password_provider_rest_auth_enabled': 'true'
    'matrix_synapse_awx_password_provider_rest_auth_endpoint': '"http://matrix-ma1sd:{{ matrix_ma1sd_container_port }}"'
  when: awx_matrix_ma1sd_auth_store == 'LDAP/AD'

- name: Remove entire ma1sd configuration extension

View file

@@ -91,7 +91,7 @@ matrix_homeserver_url: "https://{{ matrix_server_fqn_matrix }}"
# Specifies where the homeserver is on the container network.
# Where this is depends on whether there's a reverse-proxy in front of it, etc.
# This likely gets overridden elsewhere.
matrix_homeserver_container_url: ""

matrix_identity_server_url: ~

View file

@@ -0,0 +1,9 @@
---

- name: Fail if required Matrix Base settings not defined
  fail:
    msg: >-
      You need to define a required configuration setting (`{{ item }}`) for using this playbook.
  when: "vars[item] == ''"
  with_items:
    - "matrix_homeserver_container_url"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-bot-go-neb
    state: stopped
    enabled: no
    daemon_reload: yes
  register: stopping_result
  when: "matrix_bot_go_neb_service_stat.stat.exists|bool"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-bot-matrix-reminder-bot
    state: stopped
    enabled: no
    daemon_reload: yes
  register: stopping_result
  when: "matrix_bot_matrix_reminder_bot_service_stat.stat.exists|bool"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-bot-mjolnir
    state: stopped
    enabled: no
    daemon_reload: yes
  register: stopping_result
  when: "matrix_bot_mjolnir_service_stat.stat.exists|bool"

View file

@@ -54,6 +54,7 @@
  service:
    name: matrix-appservice-discord
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_appservice_discord_stat_db.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-appservice-discord
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_appservice_discord_service_stat.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-appservice-irc
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_appservice_irc_service_stat.stat.exists"

View file

@@ -33,7 +33,7 @@ matrix_appservice_slack_slack_port: 9003
matrix_appservice_slack_container_http_host_bind_port: ''
matrix_appservice_slack_homeserver_media_url: "{{ matrix_server_fqn_matrix }}"
matrix_appservice_slack_homeserver_url: ""
matrix_appservice_slack_homeserver_domain: "{{ matrix_domain }}"
matrix_appservice_slack_appservice_url: 'http://matrix-appservice-slack'
@@ -82,7 +82,7 @@ matrix_appservice_slack_configuration_extension_yaml: |
  # Optional
  #matrix_admin_room: "!aBcDeF:matrix.org"
  #homeserver:
  #  url: http://localhost:{{ matrix_synapse_container_client_api_port }}
  #  server_name: my.server
  # Optional
  #tls:

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-appservice-slack
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_appservice_slack_service_stat.stat.exists"

View file

@@ -8,5 +8,6 @@
  with_items:
    - "matrix_appservice_slack_control_room_id"
    - "matrix_appservice_slack_appservice_token"
    - "matrix_appservice_slack_homeserver_url"
    - "matrix_appservice_slack_homeserver_token"
    - "matrix_appservice_slack_id_token"

View file

@@ -36,7 +36,7 @@ matrix_appservice_webhooks_matrix_port: 6789
matrix_appservice_webhooks_container_http_host_bind_port: ''
matrix_appservice_webhooks_homeserver_media_url: "{{ matrix_server_fqn_matrix }}"
matrix_appservice_webhooks_homeserver_url: ""
matrix_appservice_webhooks_homeserver_domain: "{{ matrix_domain }}"
matrix_appservice_webhooks_appservice_url: 'http://matrix-appservice-webhooks'

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-appservice-webhooks
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_appservice_webhooks_service_stat.stat.exists"

View file

@@ -7,6 +7,7 @@
  when: "vars[item] == ''"
  with_items:
    - "matrix_appservice_webhooks_appservice_token"
    - "matrix_appservice_webhooks_homeserver_url"
    - "matrix_appservice_webhooks_homeserver_token"
    - "matrix_appservice_webhooks_id_token"
    - "matrix_appservice_webhooks_api_secret"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-beeper-linkedin
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_beeper_linkedin_service_stat.stat.exists"

View file

@@ -3,7 +3,7 @@
matrix_heisenbridge_enabled: true
matrix_heisenbridge_version: 1.7.1
matrix_heisenbridge_docker_image: "{{ matrix_container_global_registry_prefix }}hif1/heisenbridge:{{ matrix_heisenbridge_version }}"
matrix_heisenbridge_docker_image_force_pull: "{{ matrix_heisenbridge_docker_image.endswith(':latest') }}"
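Since the image tag is templated from `matrix_heisenbridge_version`, the version can be pinned from `vars.yml` if the new release misbehaves; a hypothetical example:

```yaml
# Hypothetical pin back to the previously shipped release.
matrix_heisenbridge_version: 1.5.0
```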

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-heisenbridge
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_heisenbridge_service_stat.stat.exists"

View file

@@ -6,7 +6,7 @@ matrix_mautrix_facebook_enabled: true
matrix_mautrix_facebook_container_image_self_build: false
matrix_mautrix_facebook_container_image_self_build_repo: "https://mau.dev/mautrix/facebook.git"
matrix_mautrix_facebook_version: v0.3.2
matrix_mautrix_facebook_docker_image: "{{ matrix_mautrix_facebook_docker_image_name_prefix }}mautrix/facebook:{{ matrix_mautrix_facebook_version }}"
matrix_mautrix_facebook_docker_image_name_prefix: "{{ 'localhost/' if matrix_mautrix_facebook_container_image_self_build else 'dock.mau.dev/' }}"
matrix_mautrix_facebook_docker_image_force_pull: "{{ matrix_mautrix_facebook_docker_image.endswith(':latest') }}"

View file

@@ -86,6 +86,7 @@
  service:
    name: matrix-mautrix-facebook
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_mautrix_facebook_stat_database.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-mautrix-facebook
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_facebook_service_stat.stat.exists"

View file

@@ -10,22 +10,14 @@
    - "matrix_mautrix_facebook_homeserver_token"

- block:
    - name: Inject warning if on an old SQLite-supporting version
      set_fact:
        matrix_playbook_runtime_results: |
          {{
            matrix_playbook_runtime_results|default([])
            +
            [
              "NOTE: Your mautrix-facebook bridge is still on SQLite and on the last version that supported it, before support was dropped. Support has been subsequently re-added in v0.3.2, so we advise you to upgrade (by removing your `matrix_mautrix_facebook_docker_image` definition from vars.yml)"
            ]
          }}
      when: "matrix_mautrix_facebook_database_engine == 'sqlite' and matrix_mautrix_facebook_docker_image.endswith(':da1b4ec596e334325a1589e70829dea46e73064b')"

View file

@@ -85,6 +85,7 @@
  service:
    name: matrix-mautrix-googlechat
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_mautrix_googlechat_stat_database.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-mautrix-googlechat
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_googlechat_service_stat.stat.exists"

View file

@@ -85,6 +85,7 @@
  service:
    name: matrix-mautrix-hangouts
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_mautrix_hangouts_stat_database.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-mautrix-hangouts
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_hangouts_service_stat.stat.exists"

View file

@@ -8,6 +8,7 @@
  service:
    name: matrix-mautrix-instagram
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_instagram_service_stat.stat.exists"

View file

@@ -10,6 +10,7 @@
  service:
    name: matrix-mautrix-signal-daemon
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_signal_daemon_service_stat.stat.exists"
@@ -29,6 +30,7 @@
  service:
    name: matrix-mautrix-signal
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_signal_service_stat.stat.exists"

View file

@@ -107,6 +107,7 @@
  service:
    name: matrix-mautrix-telegram
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_mautrix_telegram_stat_database.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-mautrix-telegram
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_telegram_service_stat.stat.exists"

View file

@@ -36,7 +36,6 @@ matrix_mautrix_whatsapp_homeserver_token: ''
matrix_mautrix_whatsapp_appservice_bot_username: whatsappbot

# Database-related configuration fields.
#
# To use SQLite, stick to these defaults.
@@ -71,9 +70,14 @@ matrix_mautrix_whatsapp_appservice_database_uri: "{{
  }[matrix_mautrix_whatsapp_database_engine]
}}"

# Can be set to enable automatic double-puppeting via Shared Secret Auth (https://github.com/devture/matrix-synapse-shared-secret-auth).
matrix_mautrix_whatsapp_login_shared_secret: ''
matrix_mautrix_whatsapp_bridge_login_shared_secret_map:
  "{{ {matrix_mautrix_whatsapp_homeserver_domain: matrix_mautrix_whatsapp_login_shared_secret} if matrix_mautrix_whatsapp_login_shared_secret else {} }}"

# Servers to always allow double puppeting from
matrix_mautrix_whatsapp_bridge_double_puppet_server_map:
  "{{ matrix_mautrix_whatsapp_homeserver_domain }}": "{{ matrix_mautrix_whatsapp_homeserver_address }}"

# Default mautrix-whatsapp configuration template which covers the generic use case.
# You can customize it by controlling the various variables inside it.
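A minimal sketch of wiring the shared secret so that double puppeting is enabled automatically, assuming Shared Secret Auth is set up elsewhere in the playbook (the variable name on the right is the one the playbook's Shared Secret Auth role is assumed to use; adjust if yours differs):

```yaml
# Illustrative: reuse the playbook-wide shared secret for the WhatsApp bridge.
matrix_mautrix_whatsapp_login_shared_secret: "{{ matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret }}"
```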

View file

@@ -93,6 +93,7 @@
  service:
    name: matrix-mautrix-whatsapp
    state: stopped
    enabled: no
    daemon_reload: yes
  failed_when: false
  when: "matrix_mautrix_whatsapp_stat_database.stat.exists"

View file

@@ -9,6 +9,7 @@
  service:
    name: matrix-mautrix-whatsapp
    state: stopped
    enabled: no
    daemon_reload: yes
  when: "matrix_mautrix_whatsapp_service_stat.stat.exists"

View file

@@ -7,15 +7,17 @@ homeserver:
    domain: {{ matrix_mautrix_whatsapp_homeserver_domain }}

# Application service host/registration related details.
# Changing these values requires regeneration of the registration.

    # The URL to push real-time bridge status to.
    # If set, the bridge will make POST requests to this URL whenever a user's whatsapp connection state changes.
    # The bridge will use the appservice as_token to authorize requests.
    status_endpoint: "null"

appservice:
    # The address that the homeserver can use to connect to this appservice.
    address: {{ matrix_mautrix_whatsapp_appservice_address }}
    # The hostname and port where this appservice should listen.
    hostname: 0.0.0.0
    port: 8080

    # Database config.
    database:
        # The database type. "sqlite3" and "postgres" are supported.
@@ -27,10 +29,6 @@ appservice:
        # Maximum number of connections. Mostly relevant for Postgres.
        max_open_conns: 20
        max_idle_conns: 2

    # The unique ID of this appservice.
    id: whatsapp
    # Appservice bot details.
@@ -41,7 +39,6 @@ appservice:
    # to leave display name/avatar as-is.
    displayname: WhatsApp bridge bot
    avatar: mxc://maunium.net/NeXNQarUbrlYBiPCpprYsRqr
    # Authentication tokens for AS <-> HS communication. Autogenerated; do not modify.
    as_token: "{{ matrix_mautrix_whatsapp_appservice_token }}"
    hs_token: "{{ matrix_mautrix_whatsapp_homeserver_token }}"
@@ -51,79 +48,137 @@ bridge:
# Localpart template of MXIDs for WhatsApp users.
# {{ '{{.}}' }} is replaced with the phone number of the WhatsApp user.
username_template: "{{ 'whatsapp_{{.}}' }}"
-# Displayname template for WhatsApp users.
-# {{ '{{.Notify}}' }} - nickname set by the WhatsApp user
-# {{ '{{.Jid}}' }} - phone number (international format)
-# The following variables are also available, but will cause problems on multi-user instances:
-# {{ '{{.Name}}' }} - display name from contact list
-# {{ '{{.Short}}' }} - short display name from contact list
-displayname_template: "{{ '{{if .Notify}}{{.Notify}}{{else}}{{.Jid}}{{end}} (WA)' }}"
-# WhatsApp connection timeout in seconds.
-connection_timeout: 20
-# Maximum number of times to retry connecting on connection error.
-max_connection_attempts: 3
-# Number of seconds to wait between connection attempts.
-# Negative numbers are exponential backoff: -connection_retry_delay + 1 + 2^attempts
-connection_retry_delay: -1
-# Whether or not the bridge should send a notice to the user's management room when it retries connecting.
-# If false, it will only report when it stops retrying.
-report_connection_retry: true
-# Maximum number of seconds to wait for chats to be sent at startup.
-# If this is too low and you have lots of chats, it could cause backfilling to fail.
-chat_list_wait: 30
-# Maximum number of seconds to wait to sync portals before force unlocking message processing.
-# If this is too low and you have lots of chats, it could cause backfilling to fail.
-portal_sync_wait: 600
-# Whether or not to send call start/end notices to Matrix.
-call_notices:
-  start: true
-  end: true
-# Number of chats to sync for new users.
-initial_chat_sync_count: 10
-# Number of old messages to fill when creating new portal rooms.
-initial_history_fill_count: 20
-# Maximum number of chats to sync when recovering from downtime.
-# Set to -1 to sync all new chats during downtime.
-recovery_chat_sync_limit: -1
-# Whether or not to sync history when recovering from downtime.
-recovery_history_backfill: true
-# Maximum number of seconds since last message in chat to skip
-# syncing the chat in any case. This setting will take priority
-# over both recovery_chat_sync_limit and initial_chat_sync_count.
-# Default is 3 days = 259200 seconds
-sync_max_chat_age: 259200
-# Whether or not to sync with custom puppets to receive EDUs that
-# are not normally sent to appservices.
+displayname_template: "{{ '{{if .PushName}}{{.PushName}}{{else if .BusinessName}}{{.BusinessName}}{{else}}{{.JID}}{{end}} (WA)' }}"
+# Should the bridge send a read receipt from the bridge bot when a message has been sent to WhatsApp?
+delivery_receipts: false
+# Should incoming calls send a message to the Matrix room?
+call_start_notices: true
+# Should another user's cryptographic identity changing send a message to Matrix?
+identity_change_notices: false
+# Should a "reactions not yet supported" warning be sent to the Matrix room when a user reacts to a message?
+reaction_notices: true
+portal_message_buffer: 128
+# Settings for handling history sync payloads. These settings only apply right after login,
+# because the phone only sends the history sync data once, and there's no way to re-request it
+# (other than logging out and back in again).
+history_sync:
+  # Should the bridge create portals for chats in the history sync payload?
+  create_portals: true
+  # Maximum age of chats in seconds to create portals for. Set to 0 to create portals for all chats in sync payload.
+  max_age: 604800
+  # Enable backfilling history sync payloads from WhatsApp using batch sending?
+  # This requires a server with MSC2716 support, which is currently an experimental feature in synapse.
+  # It can be enabled by setting experimental_features -> msc2716_enabled to true in homeserver.yaml.
+  # Note that as of Synapse 1.46, there are still some bugs with the implementation, especially if using event persistence workers.
+  backfill: false
+  # Use double puppets for backfilling?
+  # In order to use this, the double puppets must be in the appservice's user ID namespace
+  # (because the bridge can't use the double puppet access token with batch sending).
+  # This only affects double puppets on the local server, double puppets on other servers will never be used.
+  # Doesn't work out of box with this playbook
+  double_puppet_backfill: false
+  # Should the bridge request a full sync from the phone when logging in?
+  # This bumps the size of history syncs from 3 months to 1 year.
+  request_full_sync: false
+user_avatar_sync: true
+# Should Matrix users leaving groups be bridged to WhatsApp?
+bridge_matrix_leave: true
+# Should the bridge sync with double puppeting to receive EDUs that aren't normally sent to appservices.
sync_with_custom_puppets: true
-# Shared secret for https://github.com/devture/matrix-synapse-shared-secret-auth
-#
-# If set, custom puppets will be enabled automatically for local users
-# instead of users having to find an access token and run `login-matrix`
-# manually.
-login_shared_secret: {{ matrix_mautrix_whatsapp_login_shared_secret|to_json }}
+# Should the bridge update the m.direct account data event when double puppeting is enabled.
+# Note that updating the m.direct event is not atomic (except with mautrix-asmux)
+# and is therefore prone to race conditions.
+sync_direct_chat_list: false
+# When double puppeting is enabled, users can use `!wa toggle` to change whether
+# presence and read receipts are bridged. These settings set the default values.
+# Existing users won't be affected when these are changed.
+default_bridge_receipts: true
+default_bridge_presence: true
+# Servers to always allow double puppeting from
+double_puppet_server_map:
+  "{{ matrix_mautrix_whatsapp_homeserver_domain }}": {{ matrix_mautrix_whatsapp_homeserver_address }}
+# Allow using double puppeting from any server with a valid client .well-known file.
+double_puppet_allow_discovery: false
+# Shared secrets for https://github.com/devture/matrix-synapse-shared-secret-auth
+#
+# If set, double puppeting will be enabled automatically for local users
+# instead of users having to find an access token and run `login-matrix`
+# manually.
+login_shared_secret_map: {{ matrix_mautrix_whatsapp_bridge_login_shared_secret_map|to_json }}
-# Whether or not to invite own WhatsApp user's Matrix puppet into private
-# chat portals when backfilling if needed.
-# This always uses the default puppet instead of custom puppets due to
-# rate limits and timestamp massaging.
-invite_own_puppet_for_backfilling: true
-# Whether or not to explicitly set the avatar and room name for private
-# chat portal rooms. This can be useful if the previous field works fine,
-# but causes room avatar/name bugs.
+# Should the bridge explicitly set the avatar and room name for private chat portal rooms?
private_chat_portal_meta: false
+# Should Matrix m.notice-type messages be bridged?
+bridge_notices: true
+# Set this to true to tell the bridge to re-send m.bridge events to all rooms on the next run.
+# This field will automatically be changed back to false after it, except if the config file is not writable.
+resend_bridge_info: false
+# When using double puppeting, should muted chats be muted in Matrix?
+mute_bridging: false
+# When using double puppeting, should archived chats be moved to a specific tag in Matrix?
+# Note that WhatsApp unarchives chats when a message is received, which will also be mirrored to Matrix.
+# This can be set to a tag (e.g. m.lowpriority), or null to disable.
+archive_tag: null
+# Same as above, but for pinned chats. The favorite tag is called m.favourite
+pinned_tag: null
+# Should mute status and tags only be bridged when the portal room is created?
+tag_only_on_create: true
+# Should WhatsApp status messages be bridged into a Matrix room?
+# Disabling this won't affect already created status broadcast rooms.
+enable_status_broadcast: true
+# Should the status broadcast room be muted and moved into low priority by default?
+# This is only applied when creating the room, the user can unmute/untag it later.
+mute_status_broadcast: true
+# Should the bridge use thumbnails from WhatsApp?
+# They're disabled by default due to very low resolution.
+whatsapp_thumbnail: false
# Allow invite permission for user. User can invite any bots to room with whatsapp
# users (private chat and groups)
allow_user_invite: false
+# Whether or not created rooms should have federation enabled.
+# If false, created portal rooms will never be federated.
+federate_rooms: true
# The prefix for commands. Only required in non-management rooms.
command_prefix: "!wa"
+# Messages sent upon joining a management room.
+# Markdown is supported. The defaults are listed below.
+management_room_text:
+  # Sent when joining a room.
+  welcome: "Hello, I'm a WhatsApp bridge bot."
+  # Sent when joining a management room and the user is already logged in.
+  welcome_connected: "Use `help` for help."
+  # Sent when joining a management room and the user is not logged in.
+  welcome_unconnected: "Use `help` for help or `login` to log in."
+  # Optional extra text sent when joining a management room.
+  additional_help: ""
+# End-to-bridge encryption support options.
+#
+# See https://docs.mau.fi/bridges/general/end-to-bridge-encryption.html for more info.
+encryption:
+  # Allow encryption, work in group chat rooms with e2ee enabled
+  allow: false
+  # Default to encryption, force-enable encryption in all portals the bridge creates
+  # This will cause the bridge bot to be in private chats for the encryption to work properly.
+  # It is recommended to also set private_chat_portal_meta to true when using this.
+  default: false
+  # Options for automatic key sharing.
+  key_sharing:
+    # Enable key sharing? If enabled, key requests for rooms where users are in will be fulfilled.
+    # You must use a client that supports requesting keys from other users to use this feature.
+    allow: false
+    # Require the requesting device to have a valid cross-signing signature?
+    # This doesn't require that the bridge has verified the device, only that the user has verified it.
+    # Not yet implemented.
+    require_cross_signing: false
+    # Require devices to be verified by the bridge?
+    # Verification by the bridge is not yet implemented.
+    require_verification: true
# Permissions for using the bridge.
# Permitted values:
+# relay - Talk through the relaybot (if enabled), no access otherwise
# user - Access to use the bridge to chat with a WhatsApp account.
# admin - User level and some additional administration tools
# Permitted keys:
@@ -133,15 +188,13 @@ bridge:
permissions:
  "{{ matrix_mautrix_whatsapp_homeserver_domain }}": user
-relaybot:
-  # Whether or not relaybot support is enabled.
-  enabled: false
-  # The management room for the bot. This is where all status notifications are posted and
-  # in this room, you can use `!wa <command>` instead of `!wa relaybot <command>`. Omitting
-  # the command prefix completely like in user management rooms is not possible.
-  management: '!foo:example.com'
-  # List of users to invite to all created rooms that include the relaybot.
-  invites: []
+# Settings for relay mode
+relay:
+  # Whether relay mode should be allowed. If allowed, `!wa set-relay` can be used to turn any
+  # authenticated user into a relaybot for that chat.
+  enabled: false
+  # Should only admins be allowed to set themselves as relay users?
+  admin_only: true
  # The formats to use when sending messages to WhatsApp via the relaybot.
  message_formats:
    m.text: "<b>{{ '{{ .Sender.Displayname }}' }}</b>: {{ '{{ .Message }}' }}"
@@ -152,6 +205,7 @@ bridge:
    m.audio: "<b>{{ '{{ .Sender.Displayname }}' }}</b>: sent an audio file"
    m.video: "<b>{{ '{{ .Sender.Displayname }}' }}</b>: sent a video"
    m.location: "<b>{{ '{{ .Sender.Displayname }}' }}</b>: sent a location"
# Logging config.
logging:
# The directory for log files. Will be created if not found.
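Editor's note: operators are not meant to edit this generated config.yaml directly; the playbook merges overrides into it. A minimal sketch, assuming the playbook's `matrix_mautrix_whatsapp_configuration_extension_yaml` variable and using a placeholder MXID, of how the new bridge options above could be tweaked from `vars.yml`:

```yaml
# Sketch only: "@alice:example.com" is a placeholder, not a value from this diff.
matrix_mautrix_whatsapp_configuration_extension_yaml: |
  bridge:
    # Skip creating the WhatsApp status broadcast room.
    enable_status_broadcast: false
    permissions:
      # Grant the bridge's admin tools to a specific local user.
      "@alice:example.com": admin
```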
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-discord
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_discord_service_stat.stat.exists"
@@ -31,6 +31,7 @@
service:
name: matrix-mx-puppet-groupme
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_groupme_stat_database.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-groupme
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_groupme_service_stat.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-instagram
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_instagram_service_stat.stat.exists"
@@ -31,6 +31,7 @@
service:
name: matrix-mx-puppet-skype
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_skype_stat_database.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-skype
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_skype_service_stat.stat.exists"
@@ -31,6 +31,7 @@
service:
name: matrix-mx-puppet-slack
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_slack_stat_database.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-slack
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_slack_service_stat.stat.exists"
@@ -31,6 +31,7 @@
service:
name: matrix-mx-puppet-steam
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_steam_stat_database.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-steam
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_steam_service_stat.stat.exists"
@@ -31,6 +31,7 @@
service:
name: matrix-mx-puppet-twitter
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_mx_puppet_twitter_stat_database.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-mx-puppet-twitter
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mx_puppet_twitter_service_stat.stat.exists"
@@ -26,7 +26,7 @@ matrix_sms_bridge_systemd_wanted_services_list: []
matrix_sms_bridge_appservice_url: 'http://matrix-sms-bridge:8080'
matrix_sms_bridge_homeserver_hostname: 'matrix-synapse'
-matrix_sms_bridge_homeserver_port: '8008'
+matrix_sms_bridge_homeserver_port: ""
matrix_sms_bridge_homserver_domain: "{{ matrix_domain }}"
matrix_sms_bridge_default_room: ''
@@ -9,6 +9,7 @@
service:
name: matrix-sms-bridge
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_sms_bridge_service_stat.stat.exists"
@@ -16,4 +17,4 @@
file:
path: "{{ matrix_systemd_path }}/matrix-sms-bridge.service"
state: absent
when: "matrix_sms_bridge_service_stat.stat.exists"
@@ -7,6 +7,7 @@
when: "vars[item] == ''"
with_items:
- "matrix_sms_bridge_appservice_token"
+- "matrix_sms_bridge_homeserver_port"
- "matrix_sms_bridge_homeserver_token"
- "matrix_sms_bridge_default_region"
- "matrix_sms_bridge_default_timezone"
@@ -7,7 +7,7 @@ matrix_client_element_container_image_self_build_repo: "https://github.com/vecto
# - https://github.com/vector-im/element-web/issues/19544
matrix_client_element_container_image_self_build_low_memory_system_patch_enabled: "{{ ansible_memtotal_mb < 4096 }}"
-matrix_client_element_version: v1.9.4
+matrix_client_element_version: v1.9.5
matrix_client_element_docker_image: "{{ matrix_client_element_docker_image_name_prefix }}vectorim/element-web:{{ matrix_client_element_version }}"
matrix_client_element_docker_image_name_prefix: "{{ 'localhost/' if matrix_client_element_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_client_element_docker_image_force_pull: "{{ matrix_client_element_docker_image.endswith(':latest') }}"
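Since the Element version is a single role default, pinning or rolling back is a one-line override in `vars.yml`, e.g.:

```yaml
# Pin Element Web to an explicit release instead of following the role default.
matrix_client_element_version: v1.9.5
```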
@@ -10,6 +10,7 @@
service:
name: matrix-riot-web
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_client_element_enabled|bool and matrix_client_riot_web_service_stat.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-client-element
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_client_element_service_stat.stat.exists|bool"
@@ -9,6 +9,7 @@
service:
name: matrix-client-hydrogen
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_client_hydrogen_service_stat.stat.exists|bool"
@@ -22,7 +22,7 @@ matrix_corporal_container_extra_arguments: []
# List of systemd services that matrix-corporal.service depends on
matrix_corporal_systemd_required_services_list: ['docker.service']
-matrix_corporal_version: 2.1.2
+matrix_corporal_version: 2.2.1
matrix_corporal_docker_image: "{{ matrix_corporal_docker_image_name_prefix }}devture/matrix-corporal:{{ matrix_corporal_docker_image_tag }}"
matrix_corporal_docker_image_name_prefix: "{{ 'localhost/' if matrix_corporal_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_corporal_docker_image_tag: "{{ matrix_corporal_version }}" # for backward-compatibility
@@ -36,7 +36,7 @@ matrix_corporal_var_dir_path: "{{ matrix_corporal_base_path }}/var"
matrix_corporal_matrix_homeserver_domain_name: "{{ matrix_domain }}"
-# Controls where matrix-corporal can reach your Synapse server (e.g. "http://matrix-synapse:8008").
+# Controls where matrix-corporal can reach your Synapse server (e.g. "http://matrix-synapse:{{ matrix_synapse_container_client_api_port }}").
# If Synapse runs on the same machine, you may need to add its service to `matrix_corporal_systemd_required_services_list`.
matrix_corporal_matrix_homeserver_api_endpoint: ""
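`matrix_corporal_matrix_homeserver_api_endpoint` has no default, so enabling matrix-corporal means pointing it at the homeserver explicitly, along the lines of the comment's example (the port below is assumed to be Synapse's customary client API port):

```yaml
# Sketch following the comment above; adjust host/port to your Synapse setup.
matrix_corporal_matrix_homeserver_api_endpoint: "http://matrix-synapse:8008"
```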
@@ -83,6 +83,7 @@
service:
name: matrix-corporal
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_corporal_enabled|bool and matrix_corporal_service_stat.stat.exists"
@@ -10,6 +10,7 @@
service:
name: matrix-coturn
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_coturn_service_stat.stat.exists|bool"
@@ -17,6 +18,7 @@
service:
name: matrix-coturn
state: stopped
+enabled: no
daemon_reload: yes
failed_when: false
when: "matrix_coturn_service_stat.stat.exists|bool"
@@ -39,7 +39,7 @@ matrix_dimension_integrations_rest_url: "https://{{ matrix_server_fqn_dimension
matrix_dimension_integrations_widgets_urls: ["https://{{ matrix_server_fqn_dimension }}/widgets"]
matrix_dimension_integrations_jitsi_widget_url: "https://{{ matrix_server_fqn_dimension }}/widgets/jitsi"
-matrix_dimension_homeserver_federationUrl: "http://matrix-synapse:8048"
+matrix_dimension_homeserver_federationUrl: ""
# Database-related configuration fields.
@@ -9,6 +9,7 @@
service:
name: matrix-dimension
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_dimension_service_stat.stat.exists|bool"
@@ -9,6 +9,7 @@
service:
name: matrix-dynamic-dns
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_dynamic_dns_service_stat.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-email2matrix
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_email2matrix_service_stat.stat.exists|bool"
@@ -9,6 +9,7 @@
service:
name: matrix-etherpad
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_etherpad_service_stat.stat.exists|bool"
@@ -93,6 +93,7 @@
service:
name: matrix-grafana
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_grafana_enabled|bool and matrix_grafana_service_stat.stat.exists"
@@ -68,6 +68,7 @@
service:
name: matrix-jitsi-jicofo
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_jitsi_enabled|bool and matrix_jitsi_jicofo_service_stat.stat.exists"
@@ -68,6 +68,7 @@
service:
name: matrix-jitsi-jvb
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_jitsi_enabled|bool and matrix_jitsi_jvb_service_stat.stat.exists"
@@ -59,6 +59,7 @@
service:
name: matrix-jitsi-prosody
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_jitsi_enabled|bool and matrix_jitsi_prosody_service_stat.stat.exists"
@@ -69,6 +69,7 @@
service:
name: matrix-jitsi-web
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_jitsi_enabled|bool and matrix_jitsi_web_service_stat.stat.exists"
@@ -19,8 +19,8 @@ matrix_ma1sd_docker_src_files_path: "{{ matrix_ma1sd_base_path }}/docker-src/ma1
matrix_ma1sd_config_path: "{{ matrix_ma1sd_base_path }}/config"
matrix_ma1sd_data_path: "{{ matrix_ma1sd_base_path }}/data"
-matrix_ma1sd_default_port: 8090
+matrix_ma1sd_container_port: 8090
-# Controls whether the matrix-ma1sd container exposes its HTTP port (tcp/{{ matrix_ma1sd_default_port }} in the container).
+# Controls whether the matrix-ma1sd container exposes its HTTP port (tcp/{{ matrix_ma1sd_container_port }} in the container).
#
# Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:8090"), or empty string to not expose.
matrix_ma1sd_container_http_host_bind_port: ''
@@ -83,7 +83,7 @@ matrix_ma1sd_threepid_medium_email_connectors_smtp_password: ""
# so that ma1sd can rewrite the original URL to one that would reach the homeserver.
matrix_ma1sd_dns_overwrite_enabled: false
matrix_ma1sd_dns_overwrite_homeserver_client_name: "{{ matrix_server_fqn_matrix }}"
-matrix_ma1sd_dns_overwrite_homeserver_client_value: "http://matrix-synapse:8008"
+matrix_ma1sd_dns_overwrite_homeserver_client_value: ""
# Override the default session templates
# To use this, fill in the template variables with the full desired template as a multi-line YAML variable
@@ -23,6 +23,7 @@
service:
name: matrix-mxisd
state: stopped
+enabled: no
daemon_reload: yes
when: "matrix_mxisd_service_stat.stat.exists"
@@ -9,6 +9,7 @@
service:
name: matrix-ma1sd
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "matrix_ma1sd_service_stat.stat.exists|bool"
@@ -46,6 +46,7 @@
when: "vars[item] == ''"
with_items:
- "matrix_ma1sd_threepid_medium_email_connectors_smtp_host"
+- "matrix_ma1sd_dns_overwrite_homeserver_client_value"
- name: (Deprecation) Catch and report renamed ma1sd variables
fail:
@@ -56,6 +57,7 @@
with_items:
- {'old': 'matrix_ma1sd_container_expose_port', 'new': '<superseded by matrix_ma1sd_container_http_host_bind_port>'}
- {'old': 'matrix_ma1sd_threepid_medium_email_custom_unbind_fraudulent_template', 'new': 'matrix_ma1sd_threepid_medium_email_custom_session_unbind_notification_template'}
+- {'old': 'matrix_ma1sd_default_port', 'new': 'matrix_ma1sd_container_port'}
- name: (Deprecation) Catch and report mxisd variables
fail:
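Anyone who had overridden the old ma1sd port variable will now be stopped by this deprecation check; the fix is a mechanical rename in `vars.yml` (8090 is the default shown earlier in this diff):

```yaml
# Old, now rejected by the deprecation check:
# matrix_ma1sd_default_port: 8090
# New name:
matrix_ma1sd_container_port: 8090
```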
@@ -26,7 +26,7 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-ma1sd \
--tmpfs=/tmp:rw,exec,nosuid,size=10m \
--network={{ matrix_docker_network }} \
{% if matrix_ma1sd_container_http_host_bind_port %}
--p {{ matrix_ma1sd_container_http_host_bind_port }}:{{ matrix_ma1sd_default_port }} \
+-p {{ matrix_ma1sd_container_http_host_bind_port }}:{{ matrix_ma1sd_container_port }} \
{% endif %}
{% if matrix_ma1sd_verbose_logging %}
-e MA1SD_LOG_LEVEL=debug \
@@ -7,7 +7,7 @@ matrix_mailer_container_image_self_build_repository_url: "https://github.com/dev
matrix_mailer_container_image_self_build_src_files_path: "{{ matrix_mailer_base_path }}/docker-src"
matrix_mailer_container_image_self_build_version: "{{ matrix_mailer_docker_image.split(':')[1] }}"
-matrix_mailer_version: 4.94.2-r0-4
+matrix_mailer_version: 4.94.2-r0-5
matrix_mailer_docker_image: "{{ matrix_mailer_docker_image_name_prefix }}devture/exim-relay:{{ matrix_mailer_version }}"
matrix_mailer_docker_image_name_prefix: "{{ 'localhost/' if matrix_mailer_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_mailer_docker_image_force_pull: "{{ matrix_mailer_docker_image.endswith(':latest') }}"
@@ -79,6 +79,7 @@
service:
name: matrix-mailer
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_mailer_enabled|bool and matrix_mailer_service_stat.stat.exists"
@@ -1,5 +1,5 @@
matrix_nginx_proxy_enabled: true
-matrix_nginx_proxy_version: 1.21.3-alpine
+matrix_nginx_proxy_version: 1.21.4-alpine
# We use an official nginx image, which we fix-up to run unprivileged.
# An alternative would be an `nginxinc/nginx-unprivileged` image, but
@@ -115,9 +115,10 @@ matrix_nginx_proxy_proxy_riot_compat_redirect_hostname: "riot.{{ matrix_domain }
matrix_nginx_proxy_proxy_synapse_enabled: false
matrix_nginx_proxy_proxy_synapse_hostname: "matrix-nginx-proxy"
matrix_nginx_proxy_proxy_synapse_federation_api_enabled: "{{ matrix_nginx_proxy_proxy_matrix_federation_api_enabled }}"
# The addresses where the Federation API is, when using Synapse.
-matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container: "matrix-synapse:8048"
+matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container: ""
-matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container: "localhost:8048"
+matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container: ""
# Controls whether proxying the Element domain should be done.
matrix_nginx_proxy_proxy_element_enabled: false
@@ -165,20 +166,20 @@ matrix_nginx_proxy_proxy_matrix_corporal_api_addr_sans_container: "127.0.0.1:410
# This can be used to forward the API endpoint to another service, augmenting the functionality of Synapse's own User Directory Search.
# To learn more, see: https://github.com/ma1uta/ma1sd/blob/master/docs/features/directory.md
matrix_nginx_proxy_proxy_matrix_user_directory_search_enabled: false
-matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_container_port }}"
-matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_user_directory_search_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_container_port }}"
# Controls whether proxying for 3PID-based registration (`/_matrix/client/r0/register/(email|msisdn)/requestToken`) should be done (on the matrix domain).
# This allows another service to control registrations involving 3PIDs.
# To learn more, see: https://github.com/ma1uta/ma1sd/blob/master/docs/features/registration.md
matrix_nginx_proxy_proxy_matrix_3pid_registration_enabled: false
-matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_container_port }}"
-matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_3pid_registration_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_container_port }}"
# Controls whether proxying for the Identity API (`/_matrix/identity`) should be done (on the matrix domain)
matrix_nginx_proxy_proxy_matrix_identity_api_enabled: false
-matrix_nginx_proxy_proxy_matrix_identity_api_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_identity_api_addr_with_container: "matrix-ma1sd:{{ matrix_ma1sd_container_port }}"
-matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_default_port }}"
+matrix_nginx_proxy_proxy_matrix_identity_api_addr_sans_container: "127.0.0.1:{{ matrix_ma1sd_container_port }}"
# Controls whether proxying for metrics (`/_synapse/metrics`) should be done (on the matrix domain)
matrix_nginx_proxy_proxy_synapse_metrics: false
@@ -196,8 +197,8 @@ matrix_nginx_proxy_proxy_matrix_client_api_addr_with_container: "matrix-nginx-pr
matrix_nginx_proxy_proxy_matrix_client_api_addr_sans_container: "127.0.0.1:12080"
# The addresses where the Matrix Client API is, when using Synapse.
-matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container: "matrix-synapse:8008"
+matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container: ""
-matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container: "127.0.0.1:8008"
+matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container: ""
# This needs to be equal or higher than the maximum upload size accepted by Synapse.
matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb: 50
@@ -437,7 +438,7 @@ matrix_ssl_additional_domains_to_obtain_certificates_for: []
# Controls whether to obtain production or staging certificates from Let's Encrypt.
matrix_ssl_lets_encrypt_staging: false
-matrix_ssl_lets_encrypt_certbot_docker_image: "{{ matrix_container_global_registry_prefix }}certbot/certbot:{{ matrix_ssl_architecture }}-v1.20.0"
+matrix_ssl_lets_encrypt_certbot_docker_image: "{{ matrix_container_global_registry_prefix }}certbot/certbot:{{ matrix_ssl_architecture }}-v1.21.0"
matrix_ssl_lets_encrypt_certbot_docker_image_force_pull: "{{ matrix_ssl_lets_encrypt_certbot_docker_image.endswith(':latest') }}"
matrix_ssl_lets_encrypt_certbot_standalone_http_port: 2402
matrix_ssl_lets_encrypt_support_email: ~
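The Synapse client/federation upstream addresses lose their hard-coded defaults here (and become required by a validation hunk further down). Reproducing the previous behaviour by hand would presumably look like this sketch, using the pre-change values only to illustrate the expected `host:port` format; the playbook presumably derives them elsewhere:

```yaml
# Sketch: these are the old defaults, shown for format only.
matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container: "matrix-synapse:8008"
matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container: "127.0.0.1:8008"
matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container: "matrix-synapse:8048"
matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container: "localhost:8048"
```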
@@ -193,6 +193,7 @@
service:
name: matrix-nginx-proxy
state: stopped
+enabled: no
daemon_reload: yes
register: stopping_result
when: "not matrix_nginx_proxy_enabled|bool and matrix_nginx_proxy_service_stat.stat.exists"
@@ -43,5 +43,9 @@
msg: "The `{{ item }}` variable must be defined and have a non-null value"
with_items:
- "matrix_ssl_lets_encrypt_support_email"
+- "matrix_nginx_proxy_proxy_synapse_federation_api_addr_sans_container"
+- "matrix_nginx_proxy_proxy_synapse_federation_api_addr_with_container"
+- "matrix_nginx_proxy_proxy_synapse_client_api_addr_with_container"
+- "matrix_nginx_proxy_proxy_synapse_client_api_addr_sans_container"
when: "vars[item] == '' or vars[item] is none"
when: "matrix_ssl_retrieval_method == 'lets-encrypt'"
@@ -40,6 +40,7 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_base_domain_hostname }};
server_tokens off;
@@ -33,6 +33,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_bot_go_neb_hostname }};
server_tokens off;
@@ -83,7 +85,7 @@ server {
ssl_stapling_verify on;
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_bot_go_neb_hostname }}/chain.pem;
{% endif %}
{% if matrix_nginx_proxy_ssl_session_tickets_off %}
ssl_session_tickets off;
{% endif %}
@@ -41,6 +41,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_element_hostname }};
@@ -39,6 +39,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_hydrogen_hostname }};
@@ -36,6 +36,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_dimension_hostname }};
server_tokens off;
@@ -86,7 +88,7 @@ server {
ssl_stapling_verify on;
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_dimension_hostname }}/chain.pem;
{% endif %}
{% if matrix_nginx_proxy_ssl_session_tickets_off %}
ssl_session_tickets off;
{% endif %}
@@ -161,6 +161,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_matrix_hostname }};
server_tokens off;
@@ -43,6 +43,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_grafana_hostname }};
@@ -94,12 +96,12 @@ server {
ssl_stapling_verify on;
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_grafana_hostname }}/chain.pem;
{% endif %}
{% if matrix_nginx_proxy_ssl_session_tickets_off %}
ssl_session_tickets off;
{% endif %}
ssl_session_cache {{ matrix_nginx_proxy_ssl_session_cache }};
ssl_session_timeout {{ matrix_nginx_proxy_ssl_session_timeout }};
{{ render_vhost_directives() }}
}
@@ -78,6 +78,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_jitsi_hostname }};
server_tokens off;
@@ -28,6 +28,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_riot_compat_redirect_hostname }};
@@ -35,6 +35,8 @@
server {
listen {{ 8080 if matrix_nginx_proxy_enabled else 80 }};
+listen [::]:{{ 8080 if matrix_nginx_proxy_enabled else 80 }};
server_name {{ matrix_nginx_proxy_proxy_sygnal_hostname }};
server_tokens off;
@@ -120,7 +120,7 @@ server {
{% endfor %}
{% if matrix_nginx_proxy_synapse_presence_disabled %}
# FIXME: keep in sync with synapse workers documentation manually
-location ~ ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status {
+location ~ ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status {
proxy_pass http://frontend_proxy_upstream$request_uri;
proxy_set_header Host $host;
}
Some files were not shown because too many files have changed in this diff.