Merge branch 'master' into pub.solar
Commit 2531e82d5b

CHANGELOG.md (44 lines changed)

@ -1,3 +1,47 @@

# 2022-10-14

## synapse-s3-storage-provider support

**`synapse-s3-storage-provider` support is very new and still relatively untested. Using it may cause data loss.**

You can now store your Synapse media repository files on Amazon S3 (or another S3-compatible object store) using [synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider) - a media provider for Synapse (Python module), which should work faster and more reliably than our previous [Goofys](docs/configuring-playbook-s3-goofys.md) implementation (Goofys will continue to work).

This is not just for initial installations. Users with existing files (stored in the local filesystem) can also migrate their files to `synapse-s3-storage-provider`.

To get started, see our [Storing Synapse media files on Amazon S3 with synapse-s3-storage-provider](docs/configuring-playbook-synapse-s3-storage-provider.md) documentation.


## Synapse container image customization support

We now support customizing the Synapse container image by adding additional build steps to its [`Dockerfile`](https://docs.docker.com/engine/reference/builder/).

Our [synapse-s3-storage-provider support](#synapse-s3-storage-provider-support) is actually built on this. When `s3-storage-provider` is enabled, we automatically add the build steps needed to install its Python module into the Synapse image.

Besides these auto-added build steps (for components supported by the playbook), we also let you inject your own custom build steps using configuration like this:

```yaml
matrix_synapse_container_image_customizations_enabled: true

matrix_synapse_container_image_customizations_dockerfile_body_custom: |
  RUN echo 'This is a custom step for building the customized Docker image for Synapse.'
  RUN echo 'You can override matrix_synapse_container_image_customizations_dockerfile_body_custom to add your own steps.'
  RUN echo 'You do NOT need to include a FROM clause yourself.'
```

People who needed to customize Synapse previously had to fork the git repository, make their changes to the `Dockerfile` there, point the playbook to the new repository (`matrix_synapse_container_image_self_build_repo`) and enable self-building from scratch (`matrix_synapse_container_image_self_build: true`). This is harder and slower.

With the new Synapse-customization feature in the playbook, we use the original upstream (pre-built, if available) Synapse image and only build on top of it, right on the Matrix server. This is much faster than building all of Synapse from scratch.


# 2022-10-02

## matrix-ldap-registration-proxy support

Thanks to [@TheOneWithTheBraid](https://github.com/TheOneWithTheBraid), we now support installing [matrix-ldap-registration-proxy](https://gitlab.com/activism.international/matrix_ldap_registration_proxy) - a proxy which handles Matrix registration requests and forwards them to LDAP.

See our [Setting up the ldap-registration-proxy](docs/configuring-playbook-matrix-ldap-registration-proxy.md) documentation to get started.


# 2022-09-15

## (Potential Backward Compatibility Break) Major improvements to Synapse workers

@ -23,7 +23,7 @@ Using this playbook, you can get the following services configured on your serve

- (optional) a [Dendrite](https://github.com/matrix-org/dendrite) homeserver - storing your data and managing your presence in the [Matrix](http://matrix.org/) network. Dendrite is a second-generation Matrix homeserver written in Go, an alternative to Synapse.

- (optional) [Amazon S3](https://aws.amazon.com/s3/) storage for Synapse's content repository (`media_store`) files using [Goofys](https://github.com/kahing/goofys)
- (optional) [Amazon S3](https://aws.amazon.com/s3/) (or other S3-compatible object store) storage for Synapse's content repository (`media_store`) files using [Goofys](https://github.com/kahing/goofys) or [`synapse-s3-storage-provider`](https://github.com/matrix-org/synapse-s3-storage-provider)

- (optional, default) [PostgreSQL](https://www.postgresql.org/) database for Synapse. [Using an external PostgreSQL server](docs/configuring-playbook-external-postgres.md) is also possible.

@ -45,6 +45,8 @@ Using this playbook, you can get the following services configured on your serve

- (optional, advanced) the [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3) LDAP Auth password provider module

- (optional, advanced) the [matrix-ldap-registration-proxy](https://gitlab.com/activism.international/matrix_ldap_registration_proxy) - a proxy that handles Matrix registration requests and forwards them to LDAP.

- (optional, advanced) the [synapse-simple-antispam](https://github.com/t2bot/synapse-simple-antispam) spam checker module

- (optional, advanced) the [Matrix Corporal](https://github.com/devture/matrix-corporal) reconciliator and gateway for a managed Matrix server

@ -8,6 +8,7 @@ See the project's [documentation](https://matrix-org.github.io/matrix-hookshot/l

Note: the playbook also supports [matrix-appservice-webhooks](configuring-playbook-bridge-appservice-webhooks.md), which, however, is soon to be archived by its author and replaced by Hookshot.


## Setup Instructions

Refer to the [official instructions](https://matrix-org.github.io/matrix-hookshot/latest/setup.html) to learn what the individual options do.

@ -16,10 +17,25 @@ Refer to the [official instructions](https://matrix-org.github.io/matrix-hooksho

2. Take special note of the `matrix_hookshot_*_enabled` variables. Services that need no further configuration are enabled by default (GitLab, Generic), while you must first add the required configuration and enable the others (GitHub, Jira, Figma).
3. If you're setting up the GitHub bridge, you'll need to generate and download a private key file after you've created your GitHub app. Copy the contents of that file to the variable `matrix_hookshot_github_private_key` (see the sketch just below this list) so the playbook can install it for you, or use one of the [other methods](#manage-github-private-key-with-matrix-aux-role) explained below.
4. If you've already installed Matrix services using the playbook before, you'll need to re-run it (`--tags=setup-all,start`). If not, proceed with [configuring other playbook services](configuring-playbook.md) and then with [Installing](installing.md). Get back to this guide once ready. Hookshot can be set up individually using the tag `setup-hookshot`.
5. Refer to [Hookshot's official instructions](https://matrix-org.github.io/matrix-hookshot/latest/usage.html) to start using the bridge. **Important:** Note that the different listeners are bound to certain paths which might differ from those assumed by the Hookshot documentation, see [URLs for bridges setup](#urls-for-bridges-setup) below.
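For example (a sketch only; the key body below is a placeholder for the real `.pem` file you downloaded), the private key can be provided as a multi-line value:

```yaml
matrix_hookshot_github_private_key: |
  -----BEGIN RSA PRIVATE KEY-----
  <paste the contents of the downloaded .pem file here>
  -----END RSA PRIVATE KEY-----
```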

Other configuration options are available via the `matrix_hookshot_configuration_extension_yaml` and `matrix_hookshot_registration_extension_yaml` variables; see the comments in [main.yml](/roles/matrix-bridge-hookshot/defaults/main.yml) for how to use them.
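As a rough, hypothetical sketch (the extension variable itself is the playbook's, but the `generic.allowJsTransformationFunctions` option shown here comes from Hookshot's upstream configuration and should be verified against its documentation and the comments in `main.yml`), extending the generated configuration could look like this:

```yaml
matrix_hookshot_configuration_extension_yaml: |
  # Anything placed here gets merged into the Hookshot configuration the playbook generates.
  generic:
    allowJsTransformationFunctions: true
```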

## Usage

Create a room and invite the Hookshot bot (`@hookshot:DOMAIN`) to it.

Make sure the bot is able to send state events (usually the Moderator power level in clients).

Send a `!hookshot help` message to see a list of help commands.

Refer to [Hookshot's documentation](https://matrix-org.github.io/matrix-hookshot/latest/usage.html) for more details about using the bridge's various features.

**Important:** Note that the different listeners are bound to certain paths which might differ from those assumed by the Hookshot documentation, see [URLs for bridges setup](#urls-for-bridges-setup) below.


## More setup documentation

### URLs for bridges setup

Unless indicated otherwise, the following endpoints are reachable on your `matrix.` subdomain (if the feature is enabled).

@ -59,3 +59,8 @@ matrix_mautrix_telegram_configuration_extension_yaml: |

More details about permissions in this example:
https://github.com/mautrix/telegram/blob/master/mautrix_telegram/example-config.yaml#L410

If you'd like to exclude all groups from syncing and use the Telegram bridge only for direct chats, you can add the following additional playbook configuration:
```yaml
matrix_mautrix_telegram_filter_mode: whitelist
```
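If you later want to allow a few specific group chats through while keeping everything else excluded, the bridge's `filter.list` setting (from the same example config linked above) can be supplied via `matrix_mautrix_telegram_configuration_extension_yaml`. A hypothetical sketch (the chat ID is a placeholder; verify the key names against the bridge's example config):

```yaml
matrix_mautrix_telegram_configuration_extension_yaml: |
  bridge:
    filter:
      list:
        - -1001234567890
```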

@ -87,7 +87,7 @@ For more information refer to the [docker-jitsi-meet](https://github.com/jitsi/d

By default, the Jitsi Meet instance does not work for a client on the LAN (Local Area Network), even if other participants are connected from the WAN. There is no video or audio. In the WAN-to-WAN case everything is OK.

The reason is that the Jitsi VideoBridge gives the LAN client the IP address of the Docker container instead of that of the host. The [documentation](https://github.com/jitsi/docker-jitsi-meet#running-behind-nat-or-on-a-lan-environment) of Jitsi in Docker suggests adding the `DOCKER_HOST_ADDRESS` environment variable to make it work.
The reason is that the Jitsi VideoBridge gives the LAN client the IP address of the Docker container instead of that of the host. The [documentation](https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker/#running-behind-nat-or-on-a-lan-environment) of Jitsi in Docker suggests adding the `JVB_ADVERTISE_IPS` environment variable to make it work.

Here is how to do it in the playbook.

@ -95,7 +95,7 @@ Add these two lines to your `inventory/host_vars/matrix.DOMAIN/vars.yml` configu

```yaml
matrix_jitsi_jvb_container_extra_arguments:
  - '--env "DOCKER_HOST_ADDRESS=<Local IP address of the host>"'
  - '--env "JVB_ADVERTISE_IPS=<Local IP address of the host>"'
```

## (Optional) Fine tune Jitsi

@ -28,5 +28,12 @@ If you wish for users to **authenticate only against configured password provide
matrix_synapse_password_config_localdb_enabled: false
```


## Using ma1sd Identity Server for authentication

If you wish to use the ma1sd Identity Server for LDAP authentication instead of [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3), consult [Adjusting ma1sd Identity Server configuration](configuring-playbook-ma1sd.md#authentication).


## Handling user registration

If you wish for users to also be able to make new registrations against LDAP, you may **also** wish to [set up the ldap-registration-proxy](configuring-playbook-matrix-ldap-registration-proxy.md).

docs/configuring-playbook-matrix-ldap-registration-proxy.md (new file, 33 lines)

@ -0,0 +1,33 @@

# Setting up matrix-ldap-registration-proxy (optional)

The playbook can install and configure [matrix-ldap-registration-proxy](https://gitlab.com/activism.international/matrix_ldap_registration_proxy) for you.

This proxy handles Matrix registration requests and forwards them to LDAP.

**Please note:** This does not support the full Matrix specification for registrations. It only provides a very coarse
implementation of a basic password registration.

## Quickstart

Add the following configuration to your `inventory/host_vars/matrix.DOMAIN/vars.yml` file:

```yaml
matrix_ldap_registration_proxy_enabled: true
# LDAP credentials
matrix_ldap_registration_proxy_ldap_uri: <URI>
matrix_ldap_registration_proxy_ldap_base_dn: <DN>
matrix_ldap_registration_proxy_ldap_user: <USER>
matrix_ldap_registration_proxy_ldap_password: <password>
```

If you already use the [synapse external password provider via LDAP](configuring-playbook-ldap-auth.md) (that is, you have `matrix_synapse_ext_password_provider_ldap_enabled: true` and other options in your configuration),
you can use the following values as configuration:

```yaml
# Use the LDAP values specified for the synapse role to set up the LDAP proxy
matrix_ldap_registration_proxy_ldap_uri: "{{ matrix_synapse_ext_password_provider_ldap_uri }}"
matrix_ldap_registration_proxy_ldap_base_dn: "{{ matrix_synapse_ext_password_provider_ldap_base }}"
matrix_ldap_registration_proxy_ldap_user: "{{ matrix_synapse_ext_password_provider_ldap_bind_dn }}"
matrix_ldap_registration_proxy_ldap_password: "{{ matrix_synapse_ext_password_provider_ldap_bind_password }}"
```

@ -27,11 +27,23 @@ No matter which external webserver you decide to go with, you'll need to:

1) Make sure your web server user (something like `http`, `apache`, `www-data`, `nginx`) is part of the `matrix` group. You should run something like this: `usermod -a -G matrix nginx`. This allows your webserver user to access files owned by the `matrix` group. When using an external nginx webserver, this allows it to read configuration files from `/matrix/nginx-proxy/conf.d`. When using another server, it would make other files, such as `/matrix/static-files/.well-known`, accessible to it.

2) Edit your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`) to disable the integrated nginx server:
2) Edit your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`)
- to disable the integrated nginx server:

```yaml
matrix_nginx_proxy_enabled: false
```
```yaml
matrix_nginx_proxy_enabled: false
```
- if using an external server on another host, add the `<service>_http_host_bind_port` or `<service>_http_bind_port` variables for the services that will be exposed by the external server on the other host. The actual name of the variable is listed in the `roles/<service>/defaults/vars.yml` file for each service. Most variables follow the `<service>_http_host_bind_port` format.

These variables will make Docker expose the ports on all network interfaces instead of localhost only.
[Keep in mind that there are some security concerns if you simply proxy everything.](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md#synapse-administration-endpoints)

Here are the variables required for the default configuration (Synapse and Element):
```
matrix_synapse_container_client_api_host_bind_port: '0.0.0.0:8008'
matrix_synapse_container_federation_api_plain_host_bind_port: '0.0.0.0:8048'
matrix_client_element_container_http_host_bind_port: "0.0.0.0:8765"
```

3) **If you'll manage SSL certificates by yourself**, edit your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`) to disable SSL certificate retrieval:

@ -41,7 +53,6 @@ matrix_ssl_retrieval_method: none

**Note**: During [installation](installing.md), unless you've disabled SSL certificate management (`matrix_ssl_retrieval_method: none`), the playbook would need port 80 to be available, in order to retrieve SSL certificates. **Please manually stop your other webserver while installing**. You can start it back up afterwards.


### Using your own external nginx webserver

Once you've followed the [Preparation](#preparation) guide above, it's time to set up your external nginx server.

@ -60,15 +71,6 @@ matrix_nginx_proxy_ssl_protocols: "TLSv1.2"

If you are experiencing issues, try updating to a newer version of Nginx. As a data point, in May 2021 a user reported that Nginx 1.14.2 was not working for them. They were getting errors about socket leaks. Updating to Nginx 1.19 fixed their issue.

If you are not going to be running your webserver on the same Docker network, or the same machine as Matrix, these variables can be set to bind Synapse to an exposed port. [Keep in mind that there are some security concerns if you simply proxy everything to it](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md#synapse-administration-endpoints).
```yaml
# Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:8048" or "192.168.1.3:80"), or empty string to not expose.
matrix_synapse_container_client_api_host_bind_port: ''
matrix_synapse_container_federation_api_plain_host_bind_port: ''
```


### Using your own external Apache webserver

Once you've followed the [Preparation](#preparation) guide above, you can take a look at the [examples/apache](../examples/apache) directory for a sample configuration.

docs/configuring-playbook-s3-goofys.md (new file, 137 lines)

@ -0,0 +1,137 @@

# Storing Matrix media files on Amazon S3 with Goofys (optional)
|
||||
|
||||
If you'd like to store Synapse's content repository (`media_store`) files on Amazon S3 (or other S3-compatible service),
|
||||
you can let this playbook configure [Goofys](https://github.com/kahing/goofys) for you.
|
||||
|
||||
Another (and better performing) way to use S3 storage with Synapse is [synapse-s3-storage-provider](configuring-playbook-synapse-s3-storage-provider.md).
|
||||
|
||||
Using a Goofys-backed media store works, but performance may not be ideal. If possible, try to use a region which is close to your Matrix server.
|
||||
|
||||
If you'd like to move your locally-stored media store data to Amazon S3 (or another S3-compatible object store), we also provide some migration instructions below.
|
||||
|
||||
|
||||
## Usage
|
||||
|
||||
After [creating the S3 bucket and configuring it](configuring-playbook-s3.md#bucket-creation-and-security-configuration), you can proceed to configure Goofys in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_enabled: true
|
||||
matrix_s3_media_store_bucket_name: "your-bucket-name"
|
||||
matrix_s3_media_store_aws_access_key: "access-key-goes-here"
|
||||
matrix_s3_media_store_aws_secret_key: "secret-key-goes-here"
|
||||
matrix_s3_media_store_region: "eu-central-1"
|
||||
```
|
||||
|
||||
You can use any S3-compatible object store by **additionally** configuring these variables:
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_custom_endpoint_enabled: true
|
||||
matrix_s3_media_store_custom_endpoint: "https://your-custom-endpoint"
|
||||
```
|
||||
|
||||
If you have local media store files and wish to migrate to Backblaze B2 subsequently, follow our [migration guide to Backblaze B2](#migrating-to-backblaze-b2) below instead of applying this configuration as-is.
|
||||
|
||||
|
||||
## Migrating from local filesystem storage to S3
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before migrating your local media store to an S3-backed one.
|
||||
|
||||
Follow one of the guides below for a migration path from a locally-stored media store to one stored on S3-compatible storage:
|
||||
|
||||
- [Storing Matrix media files on Amazon S3 with Goofys (optional)](#storing-matrix-media-files-on-amazon-s3-with-goofys-optional)
|
||||
- [Usage](#usage)
|
||||
- [Migrating from local filesystem storage to S3](#migrating-from-local-filesystem-storage-to-s3)
|
||||
- [Migrating to any S3-compatible storage (universal, but likely slow)](#migrating-to-any-s3-compatible-storage-universal-but-likely-slow)
|
||||
- [Migrating to Backblaze B2](#migrating-to-backblaze-b2)
|
||||
|
||||
### Migrating to any S3-compatible storage (universal, but likely slow)
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.
|
||||
|
||||
1. Proceed with the steps below without stopping Matrix services
|
||||
|
||||
2. Start by adding the base S3 configuration in your `vars.yml` file (seen above, may be different depending on the S3 provider of your choice)
|
||||
|
||||
3. In addition to the base configuration you see above, add this to your `vars.yml` file:
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_path: /matrix/s3-media-store
|
||||
```
|
||||
|
||||
This enables S3 support, but mounts the S3 storage bucket to `/matrix/s3-media-store` without hooking it to your homeserver yet. Your homeserver will still continue using your local filesystem for its media store.
|
||||
|
||||
5. Run the playbook to apply the changes: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
6. Do an **initial sync of your files** by running this **on the server** (it may take a very long time):
|
||||
|
||||
```sh
|
||||
sudo -u matrix -- rsync --size-only --ignore-existing -avr /matrix/synapse/storage/media-store/. /matrix/s3-media-store/.
|
||||
```
|
||||
|
||||
You may need to install `rsync` manually.
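For example (a sketch; these are the standard package names, but adjust for your distribution):

```sh
# Debian/Ubuntu
apt install rsync

# Fedora / RHEL derivatives
dnf install rsync
```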
|
||||
|
||||
7. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)
|
||||
|
||||
8. Start the S3 service by running this **on the server**: `systemctl start matrix-goofys`
|
||||
|
||||
9. Sync the files again by re-running the `rsync` command you see in step #6
|
||||
|
||||
10. Stop the S3 service by running this **on the server**: `systemctl stop matrix-goofys`
|
||||
|
||||
11. Get the old media store out of the way by running this command on the server:
|
||||
|
||||
```sh
|
||||
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
|
||||
```
|
||||
|
||||
12. Remove the `matrix_s3_media_store_path` configuration from your `vars.yml` file (undoing step #3 above)
|
||||
|
||||
13. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
14. You're done! Verify that loading existing (old) media files works and that you can upload new ones.
|
||||
|
||||
15. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`
|
||||
|
||||
|
||||
### Migrating to Backblaze B2
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.
|
||||
|
||||
1. While all Matrix services are running, run the following command on the server:
|
||||
|
||||
(you need to adjust the 3 `--env` lines below with your own data)
|
||||
|
||||
```sh
|
||||
docker run -it --rm -w /work \
|
||||
--env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
|
||||
--env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
|
||||
--env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
|
||||
--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
|
||||
--entrypoint=/bin/sh \
|
||||
docker.io/tianon/backblaze-b2:3.6.0 \
|
||||
-c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET && b2 sync /work b2://$B2_BUCKET_NAME --skipNewer'
|
||||
```
|
||||
|
||||
This is some initial file sync, which may take a very long time.
|
||||
|
||||
2. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)
|
||||
|
||||
3. Run the command from step #1 again.
|
||||
|
||||
Doing this will sync any new files that may have been created locally in the meantime.
|
||||
|
||||
Now that Matrix services aren't running, we're sure to get Backblaze B2 and your local media store fully in sync.
|
||||
|
||||
4. Get the old media store out of the way by running this command on the server:
|
||||
|
||||
```sh
|
||||
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
|
||||
```
|
||||
|
||||
5. Put the [Backblaze B2 settings seen above](#backblaze-b2) in your `vars.yml` file
|
||||
|
||||
6. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
7. You're done! Verify that loading existing (old) media files works and that you can upload new ones.
|
||||
|
||||
8. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`
|
|
@ -1,19 +1,48 @@
|
|||
# Storing Matrix media files on Amazon S3 (optional)
|
||||
# Storing Synapse media files on Amazon S3 or another compatible Object Storage (optional)
|
||||
|
||||
By default, this playbook configures your server to store Synapse's content repository (`media_store`) files on the local filesystem.
|
||||
If that's alright, you can skip this.
|
||||
|
||||
If you'd like to store Synapse's content repository (`media_store`) files on Amazon S3 (or other S3-compatible service),
|
||||
you can let this playbook configure [Goofys](https://github.com/kahing/goofys) for you.
|
||||
As an alternative to storing media files on the local filesystem, you can store them on [Amazon S3](https://aws.amazon.com/s3/) or another S3-compatible object store.
|
||||
|
||||
Using a Goofys-backed media store works, but performance may not be ideal. If possible, try to use a region which is close to your Matrix server.
|
||||
First, [choose an Object Storage provider](#choosing-an-object-storage-provider).
|
||||
|
||||
If you'd like to move your locally-stored media store data to Amazon S3 (or another S3-compatible object store), we also provide some migration instructions below.
|
||||
Then, [create the S3 bucket](#bucket-creation-and-security-configuration).
|
||||
|
||||
Finally, [set up S3 storage for Synapse](#setting-up) (with [Goofys](configuring-playbook-s3-goofys.md) or [synapse-s3-storage-provider](configuring-playbook-synapse-s3-storage-provider.md)).
|
||||
|
||||
|
||||
## Choosing an Object Storage provider
|
||||
|
||||
You can create [Amazon S3](https://aws.amazon.com/s3/) or another S3-compatible object store like [Backblaze B2](https://www.backblaze.com/b2/cloud-storage.html), [Wasabi](https://wasabi.com), [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces), etc.
|
||||
|
||||
Amazon S3 and Backblaze B2 are pay-as-you-go, with no minimum charges if you only store a small amount of data.
|
||||
|
||||
All these providers have different prices, with Backblaze B2 appearing to be the cheapest.
|
||||
|
||||
Wasabi has a minimum charge of 1TB if you're storing less than 1TB, which becomes expensive if you need to store less data than that.
|
||||
|
||||
Digital Ocean Spaces has a minimum charge of 250GB ($5/month as of 2022-10), which is also expensive if you're storing less data than that.
|
||||
|
||||
Important aspects of choosing the right provider are:
|
||||
|
||||
- a provider by a company you like and trust (or dislike less than the others)
|
||||
- a provider which has a data region close to your Matrix server (if it's farther away, high latency may cause slowdowns)
|
||||
- a provider which is OK pricewise
|
||||
- a provider with free or cheap egress (if you need to get the data out often, for some reason) - likely not too important for the common use-case
|
||||
|
||||
|
||||
## Bucket creation and Security Configuration
|
||||
|
||||
Now that you've [chosen an Object Storage provider](#choosing-an-object-storage-provider), you need to create a storage bucket.
|
||||
|
||||
How you do this varies from provider to provider, with Amazon S3 being the most complicated due to its vast number of services and complex security policies.
|
||||
|
||||
Below, we provide some guides for common providers. If you don't see yours, look at the others for inspiration or read some guides online about how to create a bucket. Feel free to contribute to this documentation with an update!
|
||||
|
||||
## Amazon S3
|
||||
|
||||
You'll need an Amazon S3 bucket and some IAM user credentials (access key + secret key) with full write access to the bucket. Example security policy:
|
||||
You'll need an Amazon S3 bucket and some IAM user credentials (access key + secret key) with full write access to the bucket. Example IAM security policy:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -34,154 +63,45 @@ You'll need an Amazon S3 bucket and some IAM user credentials (access key + secr
|
|||
}
|
||||
```
|
||||
|
||||
You then need to enable S3 support in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`).
|
||||
It would be something like this:
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_enabled: true
|
||||
matrix_s3_media_store_bucket_name: "your-bucket-name"
|
||||
matrix_s3_media_store_aws_access_key: "access-key-goes-here"
|
||||
matrix_s3_media_store_aws_secret_key: "secret-key-goes-here"
|
||||
matrix_s3_media_store_region: "eu-central-1"
|
||||
```
|
||||
**NOTE**: This policy needs to be attached to an IAM user created from the **Security Credentials** menu. This is not a **Bucket Policy**.
|
||||
|
||||
|
||||
## Using other S3-compatible object stores
|
||||
## Backblaze B2
|
||||
|
||||
You can use any S3-compatible object store by **additionally** configuring these variables:
|
||||
To use [Backblaze B2](https://www.backblaze.com/b2/cloud-storage.html) you first need to sign up.
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_custom_endpoint_enabled: true
|
||||
# Example: "https://storage.googleapis.com"
|
||||
matrix_s3_media_store_custom_endpoint: "your-custom-endpoint"
|
||||
```
|
||||
You [can't easily change which region (US, Europe) your Backblaze account stores files in](https://old.reddit.com/r/backblaze/comments/hi1v90/make_the_choice_for_the_b2_data_center_region/), so make sure to carefully choose the region when signing up (hint: it's a hard to see dropdown below the username/password fields in the signup form).
|
||||
|
||||
### Backblaze B2
|
||||
|
||||
To use [Backblaze B2](https://www.backblaze.com/b2/cloud-storage.html):
|
||||
After logging in to Backblaze:
|
||||
|
||||
- create a new **private** bucket through its user interface (you can call it something like `matrix-DOMAIN-media-store`)
|
||||
- note the **Endpoint** for your bucket (something like `s3.us-west-002.backblazeb2.com`)
|
||||
- adjust its lifecycle rules to use the following **custom** rules:
|
||||
- File Path: *empty value*
|
||||
- Days Till Hide: *empty value*
|
||||
- Days Till Delete: `1`
|
||||
- note the **Endpoint** for your bucket (something like `s3.us-west-002.backblazeb2.com`).
|
||||
- adjust its Lifecycle Rules to: Keep only the last version of the file
|
||||
- go to [App Keys](https://secure.backblaze.com/app_keys.htm) and use the **Add a New Application Key** to create a new one
|
||||
- restrict it to the previously created bucket (e.g. `matrix-DOMAIN-media-store`)
|
||||
- give it *Read & Write* access
|
||||
|
||||
Copy the `keyID` and `applicationKey`.
|
||||
The `keyID` value is your **Access Key** and `applicationKey` is your **Secret Key**.
|
||||
|
||||
You need the following *additional* playbook configuration (on top of what you see above):
|
||||
For configuring [Goofys](configuring-playbook-s3-goofys.md) or [synapse-s3-storage-provider](configuring-playbook-synapse-s3-storage-provider.md), you will need the following values (see the example after this list):
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_bucket_name: "YOUR_BUCKET_NAME_GOES_HERE"
|
||||
matrix_s3_media_store_aws_access_key: "YOUR_keyID_GOES_HERE"
|
||||
matrix_s3_media_store_aws_secret_key: "YOUR_applicationKey_GOES_HERE"
|
||||
matrix_s3_media_store_custom_endpoint_enabled: true
|
||||
matrix_s3_media_store_custom_endpoint: "https://s3.us-west-002.backblazeb2.com" # this may be different for your bucket
|
||||
```
|
||||
- **Endpoint URL** - this is the **Endpoint** value you saw above, but prefixed with `https://`
|
||||
|
||||
If you have local media store files and wish to migrate to Backblaze B2 subsequently, follow our [migration guide to Backblaze B2](#migrating-to-backblaze-b2) below instead of applying this configuration as-is.
|
||||
- **Region** - use the value you see in the Endpoint (e.g. `us-west-002`)
|
||||
|
||||
- **Storage Class** - use `STANDARD`. Backblaze B2 does not have different storage classes, so it doesn't make sense to use any other value.
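As an illustration (a sketch only - the variable names are the `synapse-s3-storage-provider` ones documented in [its guide](configuring-playbook-synapse-s3-storage-provider.md), and the bucket name, endpoint, region and credentials are placeholders for your own bucket's values), a Backblaze B2 setup could look like this:

```yaml
matrix_synapse_ext_synapse_s3_storage_provider_enabled: true
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: matrix-DOMAIN-media-store
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: us-west-002
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: https://s3.us-west-002.backblazeb2.com
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: YOUR_keyID_GOES_HERE
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: YOUR_applicationKey_GOES_HERE
matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class: STANDARD
```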
|
||||
|
||||
|
||||
## Migrating from local filesystem storage to S3
|
||||
## Other providers
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before migrating your local media store to an S3-backed one.
|
||||
For other S3-compatible providers, you may not need to configure security policies, etc. (just like for [Backblaze B2](#backblaze-b2)).
|
||||
|
||||
Follow one of the guides below for a migration path from a locally-stored media store to one stored on S3-compatible storage:
|
||||
|
||||
- [Migrating to any S3-compatible storage (universal, but likely slow)](#migrating-to-any-s3-compatible-storage-universal-but-likely-slow)
|
||||
- [Migrating to Backblaze B2](#migrating-to-backblaze-b2)
|
||||
|
||||
### Migrating to any S3-compatible storage (universal, but likely slow)
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.
|
||||
|
||||
1. Proceed with the steps below without stopping Matrix services
|
||||
|
||||
2. Start by adding the base S3 configuration in your `vars.yml` file (seen above, may be different depending on the S3 provider of your choice)
|
||||
|
||||
3. In addition to the base configuration you see above, add this to your `vars.yml` file:
|
||||
|
||||
```yaml
|
||||
matrix_s3_media_store_path: /matrix/s3-media-store
|
||||
```
|
||||
|
||||
This enables S3 support, but mounts the S3 storage bucket to `/matrix/s3-media-store` without hooking it to your homeserver yet. Your homeserver will still continue using your local filesystem for its media store.
|
||||
|
||||
5. Run the playbook to apply the changes: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
6. Do an **initial sync of your files** by running this **on the server** (it may take a very long time):
|
||||
|
||||
```sh
|
||||
sudo -u matrix -- rsync --size-only --ignore-existing -avr /matrix/synapse/storage/media-store/. /matrix/s3-media-store/.
|
||||
```
|
||||
|
||||
You may need to install `rsync` manually.
|
||||
|
||||
7. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)
|
||||
|
||||
8. Start the S3 service by running this **on the server**: `systemctl start matrix-goofys`
|
||||
|
||||
9. Sync the files again by re-running the `rsync` command you see in step #6
|
||||
|
||||
10. Stop the S3 service by running this **on the server**: `systemctl stop matrix-goofys`
|
||||
|
||||
11. Get the old media store out of the way by running this command on the server:
|
||||
|
||||
```sh
|
||||
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
|
||||
```
|
||||
|
||||
12. Remove the `matrix_s3_media_store_path` configuration from your `vars.yml` file (undoing step #3 above)
|
||||
|
||||
13. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
14. You're done! Verify that loading existing (old) media files works and that you can upload new ones.
|
||||
|
||||
15. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`
|
||||
You most likely just need to create an S3 bucket and get some credentials (access key and secret key) for accessing the bucket in a read/write manner.
|
||||
|
||||
|
||||
### Migrating to Backblaze B2
|
||||
## Setting up
|
||||
|
||||
It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.
|
||||
To set up Synapse to store files in S3, follow the instructions for the method of your choice:
|
||||
|
||||
1. While all Matrix services are running, run the following command on the server:
|
||||
|
||||
(you need to adjust the 3 `--env` line below with your own data)
|
||||
|
||||
```sh
|
||||
docker run -it --rm -w /work \
|
||||
--env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
|
||||
--env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
|
||||
--env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
|
||||
-v /matrix/synapse/storage/media-store/:/work \
|
||||
--entrypoint=/bin/sh \
|
||||
docker.io/tianon/backblaze-b2:2.1.0 \
|
||||
-c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET > /dev/null && b2 sync /work/ b2://$B2_BUCKET_NAME'
|
||||
```
|
||||
|
||||
This is some initial file sync, which may take a very long time.
|
||||
|
||||
2. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)
|
||||
|
||||
3. Run the command from step #1 again.
|
||||
|
||||
Doing this will sync any new files that may have been created locally in the meantime.
|
||||
|
||||
Now that Matrix services aren't running, we're sure to get Backblaze B2 and your local media store fully in sync.
|
||||
|
||||
4. Get the old media store out of the way by running this command on the server:
|
||||
|
||||
```sh
|
||||
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
|
||||
```
|
||||
|
||||
5. Put the [Backblaze B2 settings seen above](#backblaze-b2) in your `vars.yml` file
|
||||
|
||||
6. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`
|
||||
|
||||
7. You're done! Verify that loading existing (old) media files works and that you can upload new ones.
|
||||
|
||||
8. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`
|
||||
- using [synapse-s3-storage-provider](configuring-playbook-synapse-s3-storage-provider.md) (recommended)
|
||||
- using [Goofys to mount the S3 store to the local filesystem](configuring-playbook-s3-goofys.md)
|
||||
|
|
docs/configuring-playbook-synapse-s3-storage-provider.md (new file, 126 lines)

@ -0,0 +1,126 @@

# Storing Synapse media files on Amazon S3 with synapse-s3-storage-provider (optional)

If you'd like to store Synapse's content repository (`media_store`) files on Amazon S3 (or other S3-compatible service),
you can use the [synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider) media provider module for Synapse.

**`synapse-s3-storage-provider` support is very new and still relatively untested. Using it may cause data loss.**

An alternative (which has worse performance) is to use [Goofys to mount the S3 store to the local filesystem](configuring-playbook-s3-goofys.md).


## How it works

The summary below is inspired by [this article](https://quentin.dufour.io/blog/2021-09-14/matrix-synapse-s3-storage/).

The way media storage providers in Synapse work has some caveats:

- Synapse still continues to use locally-stored files (for creating thumbnails, serving files, etc)
- the media storage provider is just an extra storage mechanism (in addition to the local filesystem)
- all files are stored locally at first, and then copied to the media storage provider (either synchronously or asynchronously)
- if a file is not available on the local filesystem, it's pulled from a media storage provider

You may be thinking **if all files are stored locally as well, what's the point**?

You can run some scripts to delete the local files once in a while (which we do automatically by default - see [Periodically cleaning up the local filesystem](#periodically-cleaning-up-the-local-filesystem)), thus freeing up local disk space. If these files are needed in the future (for serving them to users, etc.), Synapse will pull them from the media storage provider on demand.

While you will need some local disk space around, it's only to accommodate usage, etc., and won't grow as large as your S3 store.


## Installing

After [creating the S3 bucket and configuring it](configuring-playbook-s3.md#bucket-creation-and-security-configuration), you can proceed to configure `s3-storage-provider` in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):

```yaml
matrix_synapse_ext_synapse_s3_storage_provider_enabled: true
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: your-bucket-name
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: some-region-name # e.g. eu-central-1
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: https://.. # delete this whole line for Amazon S3
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: access-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: secret-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class: STANDARD # or STANDARD_IA, etc.

# For additional advanced settings, take a look at `roles/matrix-synapse/defaults/main.yml`
```

If you have existing files in Synapse's media repository (`/matrix/synapse/media-store/..`):

- new files will start being stored both locally and on the S3 store
- the existing files will remain on the local filesystem only until [migrating them to the S3 store](#migrating-your-existing-media-files-to-the-s3-store)
- at some point (and periodically in the future), you can delete local files which have been uploaded to the S3 store already

Regardless of whether you need to [Migrate your existing files to the S3 store](#migrating-your-existing-media-files-to-the-s3-store) or not, make sure you've familiarized yourself with [How it works](#how-it-works) above and [Periodically cleaning up the local filesystem](#periodically-cleaning-up-the-local-filesystem) below.


## Migrating your existing media files to the S3 store

Migrating your existing data can happen in multiple ways:

- [using the `s3_media_upload` script from `synapse-s3-storage-provider`](#using-the-s3_media_upload-script-from-synapse-s3-storage-provider) (very slow when dealing with lots of data)
- [using another tool in combination with `s3_media_upload`](#using-another-tool-in-combination-with-s3_media_upload) (quicker when dealing with lots of data)

### Using the `s3_media_upload` script from `synapse-s3-storage-provider`

Instead of using `s3_media_upload` directly, which is very slow and painful for an initial data migration, we recommend [using another tool in combination with `s3_media_upload`](#using-another-tool-in-combination-with-s3_media_upload).

To copy your existing files, SSH into the server and run `/usr/local/bin/matrix-synapse-s3-storage-provider-shell`.

This launches a Synapse container, which has access to the local media store, Postgres database, S3 store and has some convenient environment variables configured for you to use (`MEDIA_PATH`, `BUCKET`, `ENDPOINT`, `UPDATE_DB_DAYS`, etc).

Then use the following commands (`$` values come from environment variables - they're **not placeholders** that you need to substitute):

- `s3_media_upload update-db $UPDATE_DB_DURATION` - create a local SQLite database (`cache.db`) with a list of media repository files (from the `synapse` Postgres database) eligible for operating on
  - `$UPDATE_DB_DURATION` is influenced by the `matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count` variable (defaults to `0`)
  - `$UPDATE_DB_DURATION` defaults to `0d` (0 days), which means **include files which haven't been accessed for more than 0 days** (that is, **all files will be included**).
- `s3_media_upload check-deleted $MEDIA_PATH` - check whether files in the local cache still exist in the local media repository directory
- `s3_media_upload upload $MEDIA_PATH $BUCKET --delete --storage-class $STORAGE_CLASS --endpoint-url $ENDPOINT` - uploads locally-stored files to S3 and deletes them from the local media repository directory

The `s3_media_upload upload` command may take a lot of time to complete.
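Put together, a manual migration session inside the provider shell (just the commands from the list above, run in order) looks roughly like this:

```sh
# Run these inside /usr/local/bin/matrix-synapse-s3-storage-provider-shell,
# where MEDIA_PATH, BUCKET, ENDPOINT, STORAGE_CLASS and UPDATE_DB_DURATION are already set for you.
s3_media_upload update-db $UPDATE_DB_DURATION
s3_media_upload check-deleted $MEDIA_PATH
s3_media_upload upload $MEDIA_PATH $BUCKET --delete --storage-class $STORAGE_CLASS --endpoint-url $ENDPOINT
```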

Instead of running the above commands manually in the shell, you can also run the `/usr/local/bin/matrix-synapse-s3-storage-provider-migrate` script which will run the same commands automatically. We demonstrate how to do it manually, because:

- it's what the upstream project demonstrates and it teaches you how to use the `s3_media_upload` tool
- it allows you to check and verify the output of each command, to catch mistakes
- it includes progress bars and detailed output for each command
- it allows you to easily interrupt slow-running commands, etc. (the `/usr/local/bin/matrix-synapse-s3-storage-provider-migrate` script starts a container without interactive TTY support, so `Ctrl+C` may not work and you may need to kill it via `docker kill ..`)

### Using another tool in combination with `s3_media_upload`

To migrate your existing local data to S3, we recommend that you:

- **first** use another tool ([`aws s3`](#copying-data-to-amazon-s3) or [`b2 sync`](#copying-data-to-backblaze-b2), etc.) to copy the local files to the S3 bucket

- **only then** [use the `s3_media_upload` tool to finish the migration](#using-the-s3_media_upload-script-from-synapse-s3-storage-provider) (this checks to ensure all files are uploaded and then deletes the local files)

#### Copying data to Amazon S3

Generally, you need to use the `aws s3` tool.

This documentation section could use an improvement. Ideally, we'd come up with a guide like the one used in [Copying data to Backblaze B2](#copying-data-to-backblaze-b2) - running `aws s3` in a container, etc.
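As a rough, untested sketch in that direction (the `docker.io/amazon/aws-cli` image and the standard `AWS_*` environment variables are assumptions on our part - this is not something the playbook sets up for you):

```sh
docker run -it --rm \
-w /work \
--env='AWS_ACCESS_KEY_ID=YOUR_KEY_GOES_HERE' \
--env='AWS_SECRET_ACCESS_KEY=YOUR_SECRET_GOES_HERE' \
--env='AWS_DEFAULT_REGION=YOUR_REGION_GOES_HERE' \
--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
docker.io/amazon/aws-cli:latest \
s3 sync /work s3://YOUR_BUCKET_NAME_GOES_HERE
```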

#### Copying data to Backblaze B2

To copy to Backblaze B2, start a container like this:

```sh
docker run -it --rm \
-w /work \
--env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
--env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
--env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
--entrypoint=/bin/sh \
tianon/backblaze-b2:3.6.0 \
-c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET && b2 sync /work b2://$B2_BUCKET_NAME --skipNewer'
```

## Periodically cleaning up the local filesystem

As described in [How it works](#how-it-works) above, when new media is uploaded to the Synapse homeserver, it's first stored locally and then also stored on the remote S3 storage.

By default, we periodically ensure that all local files are uploaded to S3 and are then removed from the local filesystem. This is done automatically using:

- the `/usr/local/bin/matrix-synapse-s3-storage-provider-migrate` script
- .. invoked via the `matrix-synapse-s3-storage-provider-migrate.service` service
- .. triggered by the `matrix-synapse-s3-storage-provider-migrate.timer` timer, every day at 05:00

So, you don't need to perform any maintenance yourself.
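If you'd like to confirm that this periodic cleanup is running, you can inspect the timer and the service's logs on the server (plain systemd commands, using the unit names listed above):

```sh
systemctl list-timers matrix-synapse-s3-storage-provider-migrate.timer
journalctl -u matrix-synapse-s3-storage-provider-migrate.service
```
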
@ -34,6 +34,8 @@ When you're done with all the configuration you'd like to do, continue with [Ins
|
|||
|
||||
- [Setting up the Jitsi video-conferencing platform](configuring-playbook-jitsi.md) (optional)
|
||||
|
||||
- [Setting up Etherpad](configuring-playbook-etherpad.md) (optional)
|
||||
|
||||
- [Setting up Dynamic DNS](configuring-playbook-dynamic-dns.md) (optional)
|
||||
|
||||
- [Enabling metrics and graphs (Prometheus, Grafana) for your Matrix server](configuring-playbook-prometheus-grafana.md) (optional)
|
||||
|
@ -86,6 +88,8 @@ When you're done with all the configuration you'd like to do, continue with [Ins
|
|||
|
||||
- [Setting up the LDAP password provider module](configuring-playbook-ldap-auth.md) (optional, advanced)
|
||||
|
||||
- [Setting up the ldap-registration-proxy](configuring-playbook-matrix-ldap-registration-proxy.md) (optional, advanced)
|
||||
|
||||
- [Setting up Synapse Simple Antispam](configuring-playbook-synapse-simple-antispam.md) (optional, advanced)
|
||||
|
||||
- [Setting up Matrix Corporal](configuring-playbook-matrix-corporal.md) (optional, advanced)
|
||||
|
|
|
@ -12,8 +12,8 @@ If your database name differs, be sure to change `matrix_synapse_database_databa

The playbook supports importing Postgres dump files in **text** (e.g. `pg_dump > dump.sql`) or **gzipped** formats (e.g. `pg_dump | gzip -c > dump.sql.gz`).

Importing multiple databases (as dumped by `pg_dumpall`) is also supported.
But the migration might be a good moment, to "reset" a not properly working bridge. Be aware, that it might affect all users (new link to bridge, new roomes, ...)
Importing multiple databases (as dumped by `pg_dumpall`) is also supported.
But the migration might be a good moment to "reset" a bridge that isn't working properly. Be aware that it might affect all users (new link to the bridge, new rooms, ...)

Before doing the actual import, **you need to upload your Postgres dump file to the server** (any path is okay).
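For example (a sketch, assuming SSH access as `root` and an arbitrary destination directory), you could copy the dump from your local machine like this:

```sh
scp postgres-dump.sql.gz root@matrix.DOMAIN:/root/
```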

@ -24,11 +24,14 @@ To import, run this command (make sure to replace `<server-path-to-postgres-dump

```sh
ansible-playbook -i inventory/hosts setup.yml \
--extra-vars='server_path_postgres_dump=<server-path-to-postgres-dump.sql>' \
--extra-vars='server_path_postgres_dump=<server-path-to-postgres-dump.sql> postgres_default_import_database=matrix' \
--tags=import-postgres
```

**Note**: `<server-path-to-postgres-dump.sql>` must be a file path to a Postgres dump file on the server (not on your local machine!).
**Notes**:

- `<server-path-to-postgres-dump.sql>` must be a file path to a Postgres dump file on the server (not on your local machine!)
- `postgres_default_import_database` defaults to `matrix`, which is useful for importing multiple databases (for dumps made with `pg_dumpall`). If you're importing a single database (e.g. `synapse`), consider changing `postgres_default_import_database` accordingly


## Troubleshooting

@ -90,7 +93,7 @@ If not, you probably get this error. `synapse` is the correct table owner, but t
"ERROR: role synapse does not exist"
```

Once the database is clear and the ownership of the tables has been fixed in the SQL file, the import task should succeed.
Once the database is clear and the ownership of the tables has been fixed in the SQL file, the import task should succeed.
Check if `--dbname` is set to `synapse` (not `matrix`) and replace the paths (or even better, copy this line from your terminal).

```

@ -1,3 +1,15 @@
|
|||
(cors) {
|
||||
@cors_preflight method OPTIONS
|
||||
|
||||
handle @cors_preflight {
|
||||
header Access-Control-Allow-Origin "{args.0}"
|
||||
header Access-Control-Allow-Methods "HEAD, GET, POST, PUT, PATCH, DELETE"
|
||||
header Access-Control-Allow-Headers "Content-Type, Authorization"
|
||||
header Access-Control-Max-Age "3600"
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
matrix.DOMAIN.tld {
|
||||
|
||||
# creates letsencrypt certificate
|
||||
|
@ -81,6 +93,13 @@ matrix.DOMAIN.tld {
|
|||
header Access-Control-Allow-Origin *
|
||||
file_server
|
||||
}
|
||||
|
||||
# If you have other well-knowns already handled by your base domain, you can replace the above block by this one, along with the replacement suggested in the base domain
|
||||
#handle @wellknown {
|
||||
# # .well-known is handled by base domain
|
||||
# reverse_proxy https://DOMAIN.tld {
|
||||
# header_up Host {http.reverse_proxy.upstream.hostport}
|
||||
#}
|
||||
|
||||
handle {
|
||||
encode zstd gzip
|
||||
|
@ -114,6 +133,8 @@ element.DOMAIN.tld {
|
|||
# creates letsencrypt certificate
|
||||
# tls your@email.com
|
||||
|
||||
import cors https://*.DOMAIN.tld
|
||||
|
||||
header {
|
||||
# Enable HTTP Strict Transport Security (HSTS) to force clients to always connect via HTTPS
|
||||
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
|
||||
|
@ -123,6 +144,8 @@ element.DOMAIN.tld {
|
|||
X-Content-Type-Options "nosniff"
|
||||
# Disallow the site to be rendered within a frame (clickjacking protection)
|
||||
X-Frame-Options "DENY"
|
||||
# If using integrations that add frames to Element, such as Dimension and its integrations running on the same domain, it can be a good idea to limit sources allowed to be rendered
|
||||
# Content-Security-Policy frame-src https://*.DOMAIN.tld
|
||||
# X-Robots-Tag
|
||||
X-Robots-Tag "noindex, noarchive, nofollow"
|
||||
}
|
||||
|
@ -144,6 +167,8 @@ element.DOMAIN.tld {
|
|||
# # creates letsencrypt certificate
|
||||
# # tls your@email.com
|
||||
#
|
||||
# import cors https://*.DOMAIN.tld
|
||||
#
|
||||
# header {
|
||||
# # Enable HTTP Strict Transport Security (HSTS) to force clients to always connect via HTTPS
|
||||
# Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
|
||||
|
@ -151,8 +176,8 @@ element.DOMAIN.tld {
|
|||
# X-XSS-Protection "1; mode=block"
|
||||
# # Prevent some browsers from MIME-sniffing a response away from the declared Content-Type
|
||||
# X-Content-Type-Options "nosniff"
|
||||
# # Disallow the site to be rendered within a frame (clickjacking protection)
|
||||
# X-Frame-Options "DENY"
|
||||
# # Only allow same base domain to render this website in a frame; Can be removed if the client (Element for example) is hosted on another domain (clickjacking protection)
|
||||
# Content-Security-Policy frame-ancestors https://*.DOMAIN.tld
|
||||
# # X-Robots-Tag
|
||||
# X-Robots-Tag "noindex, noarchive, nofollow"
|
||||
# }
|
||||
|
@ -176,6 +201,8 @@ element.DOMAIN.tld {
|
|||
# creates letsencrypt certificate
|
||||
# tls your@email.com
|
||||
#
|
||||
# import cors https://*.DOMAIN.tld
|
||||
#
|
||||
# header {
|
||||
# # Enable HTTP Strict Transport Security (HSTS) to force clients to always connect via HTTPS
|
||||
# Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
|
||||
|
@ -185,9 +212,9 @@ element.DOMAIN.tld {
|
|||
#
|
||||
# # Prevent some browsers from MIME-sniffing a response away from the declared Content-Type
|
||||
# X-Content-Type-Options "nosniff"
|
||||
#
|
||||
# # Disallow the site to be rendered within a frame (clickjacking protection)
|
||||
# X-Frame-Options "SAMEORIGIN"
|
||||
|
||||
# # Only allow same base domain to render this website in a frame; Can be removed if the client (Element for example) is hosted on another domain
|
||||
# Content-Security-Policy frame-ancestors https://*.DOMAIN.tld
|
||||
#
|
||||
# # Disable some features
|
||||
# Feature-Policy "accelerometer 'none';ambient-light-sensor 'none'; autoplay 'none';camera 'none';encrypted-media 'none';focus-without-user-activation 'none'; geolocation 'none';gyroscope #'none';magnetometer 'none';microphone 'none';midi 'none';payment 'none';picture-in-picture 'none'; speaker 'none';sync-xhr 'none';usb 'none';vr 'none'"
|
||||
|
@ -225,6 +252,14 @@ element.DOMAIN.tld {
|
|||
# header_up Host {http.reverse_proxy.upstream.hostport}
|
||||
# }
|
||||
# }
|
||||
# # If you have other well-knowns already handled by your base domain, you can replace the above block by this one, along with the replacement suggested in the matrix subdomain
|
||||
# # handle /.well-known/* {
|
||||
# # encode zstd gzip
|
||||
# # header Cache-Control max-age=14400
|
||||
# # header Content-Type application/json
|
||||
# # header Access-Control-Allow-Origin *
|
||||
# #}
|
||||
#
|
||||
# # Configuration for the base domain goes here
|
||||
# # handle {
|
||||
# # header -Server
|
||||
|
|
|
@@ -139,7 +139,7 @@ matrix_appservice_webhooks_systemd_required_services_list: |
# We don't enable bridges by default.
matrix_appservice_slack_enabled: false

-matrix_appservice_slack_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
+matrix_appservice_slack_container_image_self_build: "{{ matrix_architecture not in ['amd64', 'arm64'] }}"

# Normally, matrix-nginx-proxy is enabled and nginx can reach matrix-appservice-slack over the container network.
# If matrix-nginx-proxy is not enabled, or you otherwise have a need for it, you can expose
@@ -1570,6 +1570,20 @@ matrix_jitsi_etherpad_base: "{{ matrix_etherpad_base_url if matrix_etherpad_enab
# /matrix-jitsi
#
######################################################################
######################################################################
#
# matrix-ldap-registration-proxy
#
######################################################################

# This is only for users with a specific LDAP setup
matrix_ldap_registration_proxy_enabled: false

######################################################################
#
# /matrix-ldap-registration-proxy
#
######################################################################

######################################################################
#
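For context, enabling the proxy from here would look roughly like the following in your `vars.yml`. This is a minimal sketch only: the LDAP URI, bind DN and credentials below are made-up placeholder values (the playbook ships them empty), and the remaining variables come from the new role's defaults further down in this diff.

```yaml
matrix_ldap_registration_proxy_enabled: true

# Hypothetical LDAP connection details - replace with your own directory setup.
matrix_ldap_registration_proxy_ldap_uri: "ldap://ldap.example.com:389"
matrix_ldap_registration_proxy_ldap_base_dn: "ou=users,dc=example,dc=com"
matrix_ldap_registration_proxy_ldap_user: "uid=admin,cn=users,dc=example,dc=com"
matrix_ldap_registration_proxy_ldap_password: "some-strong-password"
```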
|
@ -2350,7 +2364,7 @@ matrix_synapse_systemd_required_services_list: |
|
|||
+
|
||||
(['matrix-postgres.service'] if matrix_postgres_enabled else [])
|
||||
+
|
||||
(['matrix-goofys'] if matrix_s3_media_store_enabled else [])
|
||||
(['matrix-goofys.service'] if matrix_s3_media_store_enabled else [])
|
||||
}}
|
||||
|
||||
matrix_synapse_systemd_wanted_services_list: |
|
||||
|
@@ -2636,7 +2650,7 @@ matrix_dendrite_systemd_required_services_list: |
    +
    (['matrix-postgres.service'] if matrix_postgres_enabled else [])
    +
-   (['matrix-goofys'] if matrix_s3_media_store_enabled else [])
+   (['matrix-goofys.service'] if matrix_s3_media_store_enabled else [])
    }}

matrix_dendrite_systemd_wanted_services_list: |
@@ -14,13 +14,13 @@
{% if matrix_nginx_proxy_enabled | default(False) %}
	{# Use the embedded DNS resolver in Docker containers to discover the service #}
	resolver 127.0.0.11 valid=5s;
-	set $backend "matrix-bot-maubot:{{ matrix_bot_maubot_management_interface_port }}/$1";
-	proxy_pass http://$backend;
+	set $backend "matrix-bot-maubot:{{ matrix_bot_maubot_management_interface_port }}";
+	proxy_pass http://$backend$request_uri;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection "upgrade";
{% else %}
	{# Generic configuration for use outside of our container setup #}
-	proxy_pass http://127.0.0.1:{{ matrix_bot_maubot_management_interface_port }}/$1;
+	proxy_pass http://127.0.0.1:{{ matrix_bot_maubot_management_interface_port }}$request_uri;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection "upgrade";
{% endif %}
@@ -1,22 +1,22 @@
---

-- import_tasks: "{{ role_path }}/tasks/init.yml"
+- ansible.builtin.import_tasks: "{{ role_path }}/tasks/init.yml"
  tags:
    - always

-- import_tasks: "{{ role_path }}/tasks/validate_config.yml"
+- ansible.builtin.import_tasks: "{{ role_path }}/tasks/validate_config.yml"
  when: "run_setup|bool and matrix_bot_maubot_enabled|bool"
  tags:
    - setup-all
    - setup-bot-maubot

-- import_tasks: "{{ role_path }}/tasks/setup_install.yml"
+- ansible.builtin.import_tasks: "{{ role_path }}/tasks/setup_install.yml"
  when: "run_setup|bool and matrix_bot_maubot_enabled|bool"
  tags:
    - setup-all
    - setup-bot-maubot

-- import_tasks: "{{ role_path }}/tasks/setup_uninstall.yml"
+- ansible.builtin.import_tasks: "{{ role_path }}/tasks/setup_uninstall.yml"
  when: "run_setup|bool and not matrix_bot_maubot_enabled|bool"
  tags:
    - setup-all
@ -1,136 +1,184 @@
|
|||
# Where the homeserver is located (client-server URL). This should point at
|
||||
# pantalaimon if you're using that.
|
||||
# Endpoint URL that Mjolnir uses to interact with the matrix homeserver (client-server API),
|
||||
# set this to the pantalaimon URL if you're using that.
|
||||
homeserverUrl: "{{ matrix_homeserver_url }}"
|
||||
|
||||
# The access token for the bot to use. Do not populate if using Pantalaimon.
|
||||
# Endpoint URL that Mjolnir could use to fetch events related to reports (client-server API and /_synapse/),
|
||||
# only set this to the public-internet homeserver client API URL, do NOT set this to the pantalaimon URL.
|
||||
rawHomeserverUrl: "{{ matrix_homeserver_url }}"
|
||||
|
||||
# Matrix Access Token to use, Mjolnir will only use this if pantalaimon.use is false.
|
||||
accessToken: "{{ matrix_bot_mjolnir_access_token }}"
|
||||
|
||||
# Pantalaimon options (https://github.com/matrix-org/pantalaimon)
|
||||
# Options related to Pantalaimon (https://github.com/matrix-org/pantalaimon)
|
||||
#pantalaimon:
|
||||
# # If true, accessToken above is ignored and the username/password below will be
|
||||
# # used instead. The access token of the bot will be stored in the dataPath.
|
||||
# # Whether or not Mjolnir will use pantalaimon to access the matrix homeserver,
|
||||
# # set to `true` if you're using pantalaimon.
|
||||
# #
|
||||
# # Be sure to point homeserverUrl to the pantalaimon instance.
|
||||
# #
|
||||
# # Mjolnir will log in using the given username and password once,
|
||||
# # then store the resulting access token in a file under dataPath.
|
||||
# use: false
|
||||
#
|
||||
# # The username to login with.
|
||||
# username: mjolnir
|
||||
#
|
||||
# # The password to login with. Can be removed after the bot has logged in once and
|
||||
# # stored the access token.
|
||||
# # The password Mjolnir will login with.
|
||||
# #
|
||||
# # After successfully logging in once, this will be ignored, so this value can be blanked after first startup.
|
||||
# password: your_password
|
||||
|
||||
# The directory the bot should store various bits of information in
|
||||
# The path Mjolnir will store its state/data in, leave default ("/data/storage") when using containers.
|
||||
dataPath: "/data"
|
||||
|
||||
# If true (the default), only users in the `managementRoom` can invite the bot
|
||||
# to new rooms.
|
||||
# If true (the default), Mjolnir will only accept invites from users present in managementRoom.
|
||||
autojoinOnlyIfManager: true
|
||||
|
||||
# If `autojoinOnlyIfManager` is false, only the members in this group can invite
|
||||
# If `autojoinOnlyIfManager` is false, only the members in this space can invite
|
||||
# the bot to new rooms.
|
||||
#acceptInvitesFromGroup: '+example:example.org'
|
||||
#acceptInvitesFromSpace: "!example:example.org"
|
||||
|
||||
# If the bot is invited to a room and it won't accept the invite (due to the
|
||||
# conditions above), report it to the management room. Defaults to disabled (no
|
||||
# reporting).
|
||||
# Whether Mjolnir should report ignored invites to the management room (if autojoinOnlyIfManager is true).
|
||||
recordIgnoredInvites: false
|
||||
|
||||
# The room ID where people can use the bot. The bot has no access controls, so
|
||||
# anyone in this room can use the bot - secure your room!
|
||||
# The room ID (or room alias) of the management room, anyone in this room can issue commands to Mjolnir.
|
||||
#
|
||||
# Mjolnir has no more granular access controls other than this, be sure you trust everyone in this room - secure it!
|
||||
#
|
||||
# This should be a room alias or room ID - not a matrix.to URL.
|
||||
# Note: Mjolnir is fairly verbose - expect a lot of messages from it.
|
||||
#
|
||||
# Note: By default, Mjolnir is fairly verbose - expect a lot of messages in this room.
|
||||
# (see verboseLogging to adjust this a bit.)
|
||||
managementRoom: "{{ matrix_bot_mjolnir_management_room }}"
|
||||
|
||||
# Set to false to make the management room a bit quieter.
|
||||
# Whether Mjolnir should log a lot more messages in the room,
|
||||
# mainly involves "all-OK" messages, and debugging messages for when mjolnir checks bans in a room.
|
||||
verboseLogging: false
|
||||
|
||||
# The log level for the logs themselves. One of DEBUG, INFO, WARN, and ERROR.
|
||||
# The log level of terminal (or container) output,
|
||||
# can be one of DEBUG, INFO, WARN and ERROR, in increasing order of importance and severity.
|
||||
#
|
||||
# This should be at INFO or DEBUG in order to get support for Mjolnir problems.
|
||||
logLevel: "INFO"
|
||||
|
||||
# Set to false to disable synchronizing the ban lists on startup. If true, this
|
||||
# is the same as running !mjolnir sync immediately after startup.
|
||||
# Whether or not Mjolnir should synchronize policy lists immediately after startup.
|
||||
# Equivalent to running '!mjolnir sync'.
|
||||
syncOnStartup: true
|
||||
|
||||
# Set to false to prevent Mjolnir from checking its permissions on startup. This
|
||||
# is recommended to be left as "true" to catch room permission problems (state
|
||||
# resets, etc) before Mjolnir is needed.
|
||||
# Whether or not Mjolnir should check moderation permissions in all protected rooms on startup.
|
||||
# Equivalent to running `!mjolnir verify`.
|
||||
verifyPermissionsOnStartup: true
|
||||
|
||||
# If true, Mjolnir won't actually ban users or apply server ACLs, but will
|
||||
# think it has. This is useful to see what it does in a scenario where the
|
||||
# bot might not be trusted fully, yet. Default false (do bans/ACLs).
|
||||
# Whether or not Mjolnir should actually apply bans and policy lists,
|
||||
# turn on to trial some untrusted configuration or lists.
|
||||
noop: false
|
||||
|
||||
# Set to true to use /joined_members instead of /state to figure out who is
|
||||
# in the room. Using /state is preferred because it means that users are
|
||||
# banned when they are invited instead of just when they join, though if your
|
||||
# server struggles with /state requests then set this to true.
|
||||
# Whether Mjolnir should check member lists quicker (by using a different endpoint),
|
||||
# keep in mind that enabling this will miss invited (but not joined) users.
|
||||
#
|
||||
# Turn on if your bot is in (very) large rooms, or in large amounts of rooms.
|
||||
fasterMembershipChecks: false
|
||||
|
||||
# A case-insensitive list of ban reasons to automatically redact a user's
|
||||
# messages for. Typically this is useful to avoid having to type two commands
|
||||
# to the bot. Use asterisks to represent globs (ie: "spam*testing" would match
|
||||
# "spam for testing" as well as "spamtesting").
|
||||
# A case-insensitive list of ban reasons to have the bot also automatically redact the user's messages for.
|
||||
#
|
||||
# If the bot sees you ban a user with a reason that is an (exact case-insensitive) match to this list,
|
||||
# it will also remove the user's messages automatically.
|
||||
#
|
||||
# Typically this is useful to avoid having to give two commands to the bot.
|
||||
# Advanced: Use asterisks to have the reason match using "globs"
|
||||
# (f.e. "spam*testing" would match "spam for testing" as well as "spamtesting").
|
||||
#
|
||||
# See here for more info: https://www.digitalocean.com/community/tools/glob
|
||||
# Note: Keep in mind that glob is NOT regex!
|
||||
automaticallyRedactForReasons:
|
||||
- "spam"
|
||||
- "advertising"
|
||||
- "spam"
|
||||
- "advertising"
|
||||
|
||||
# A list of rooms to protect (matrix.to URLs)
|
||||
# A list of rooms to protect. Mjolnir will add this to the list it knows from its account data.
|
||||
#
|
||||
# It won't, however, add it to the account data.
|
||||
# Manually add the room via '!mjolnir rooms add' to have it stay protected regardless if this config value changes.
|
||||
#
|
||||
# Note: These must be matrix.to URLs
|
||||
#protectedRooms:
|
||||
# - "https://matrix.to/#/#yourroom:example.org"
|
||||
|
||||
# Set this option to true to protect every room the bot is joined to. Note that
|
||||
# this effectively makes the protectedRooms and associated commands useless because
|
||||
# the bot by nature must be joined to the room to protect it.
|
||||
# Whether or not to add all joined rooms to the "protected rooms" list
|
||||
# (excluding the management room and watched policy list rooms, see below).
|
||||
#
|
||||
# Note: the management room is *excluded* from this condition. Add it to the
|
||||
# protected rooms to protect it.
|
||||
# Note that this effectively makes the protectedRooms and associated commands useless
|
||||
# for regular rooms.
|
||||
#
|
||||
# Note: ban list rooms the bot is watching but didn't create will not be protected.
|
||||
# Manually add these rooms to the protected rooms list if you want them protected.
|
||||
# Note: the management room is *excluded* from this condition.
|
||||
# Explicitly add it as a protected room to protect it.
|
||||
#
|
||||
# Note: Ban list rooms the bot is watching but didn't create will not be protected.
|
||||
# Explicitly add these rooms as a protected room list if you want them protected.
|
||||
protectAllJoinedRooms: false
|
||||
|
||||
# Increase this delay to have Mjölnir wait longer between two consecutive backgrounded
|
||||
# operations. The total duration of operations will be longer, but the homeserver won't
|
||||
# be affected as much. Conversely, decrease this delay to have Mjölnir chain operations
|
||||
# faster. The total duration of operations will generally be shorter, but the performance
|
||||
# of the homeserver may be more impacted.
|
||||
backgroundDelayMS: 500
|
||||
|
||||
# Server administration commands, these commands will only work if Mjolnir is
|
||||
# a global server administrator, and the bot's server is a Synapse instance.
|
||||
#admin:
|
||||
# # Whether or not Mjolnir can temporarily take control of any eligible account from the local homeserver who's in the room
|
||||
# # (with enough permissions) to "make" a user an admin.
|
||||
# #
|
||||
# # This only works if a local user with enough admin permissions is present in the room.
|
||||
# enableMakeRoomAdminCommand: false
|
||||
|
||||
# Misc options for command handling and commands
|
||||
commands:
|
||||
# If true, Mjolnir will respond to commands like !help and !ban instead of
|
||||
# requiring a prefix. This is useful if Mjolnir is the only bot running in
|
||||
# your management room.
|
||||
# Whether or not the `!mjolnir` prefix is necessary to submit commands.
|
||||
#
|
||||
# Note that Mjolnir can be pinged by display name instead of having to use
|
||||
# If `true`, will allow commands like `!ban`, `!help`, etc.
|
||||
#
|
||||
# Note: Mjolnir can also be pinged by display name instead of having to use
|
||||
# the !mjolnir prefix. For example, "my_moderator_bot: ban @spammer:example.org"
|
||||
# will ban a user.
|
||||
# will address only my_moderator_bot.
|
||||
allowNoPrefix: false
|
||||
|
||||
# In addition to the bot's display name, !mjolnir, and optionally no prefix
|
||||
# above, the bot will respond to these names. The items here can be used either
|
||||
# as display names or prefixed with exclamation points.
|
||||
# Any additional bot prefixes that Mjolnir will listen to. i.e. adding `mod` will allow `!mod help`.
|
||||
additionalPrefixes:
|
||||
- "mjolnir_bot"
|
||||
|
||||
# If true, ban commands that use wildcard characters require confirmation with
|
||||
# an extra `--force` argument
|
||||
# Whether or not commands with a wildcard (*) will require an additional `--force` argument
|
||||
# in the command to be able to be submitted.
|
||||
confirmWildcardBan: true
|
||||
|
||||
# Configuration specific to certain toggleable protections
|
||||
# Configuration specific to certain toggle-able protections
|
||||
#protections:
|
||||
# # Configuration for the wordlist plugin, which can ban users based if they say certain
|
||||
# # blocked words shortly after joining.
|
||||
# wordlist:
|
||||
# # A list of words which should be monitored by the bot. These will match if any part
|
||||
# # of the word is present in the message in any case. e.g. "hello" also matches
|
||||
# # "HEllO". Additionally, regular expressions can be used.
|
||||
# # A list of case-insensitive keywords that the WordList protection will watch for from new users.
|
||||
# #
|
||||
# # WordList will ban users who use these words when first joining a room, so take caution when selecting them.
|
||||
# #
|
||||
# # For advanced usage, regex can also be used, see the following links for more information;
|
||||
# # - https://www.digitalocean.com/community/tutorials/an-introduction-to-regular-expressions
|
||||
# # - https://regexr.com/
|
||||
# # - https://regexone.com/
|
||||
# words:
|
||||
# - "CaSe"
|
||||
# - "InSeNsAtIve"
|
||||
# - "WoRd"
|
||||
# - "LiSt"
|
||||
# - "LoReM"
|
||||
# - "IpSuM"
|
||||
# - "DoLoR"
|
||||
# - "aMeT"
|
||||
#
|
||||
# # How long after a user joins the server should the bot monitor their messages. After
|
||||
# # this time, users can say words from the wordlist without being banned automatically.
|
||||
# # Set to zero to disable (users will always be banned if they say a bad word)
|
||||
# # For how long (in minutes) the user is "new" to the WordList plugin.
|
||||
# #
|
||||
# # After this time, the user will no longer be banned for using a word in the above wordlist.
|
||||
# #
|
||||
# # Set to zero to disable the timeout and make users *always* appear "new".
|
||||
# # (users will always be banned if they say a bad word)
|
||||
# minutesBeforeTrusting: 20
|
||||
|
||||
# Options for monitoring the health of the bot
|
||||
# Options for advanced monitoring of the health of the bot.
|
||||
health:
|
||||
# healthz options. These options are best for use in container environments
|
||||
# like Kubernetes to detect how healthy the service is. The bot will report
|
||||
|
@ -160,3 +208,39 @@ health:
|
|||
# The HTTP status code which reports that the bot is not healthy/ready.
|
||||
# Defaults to 418.
|
||||
unhealthyStatus: 418
|
||||
|
||||
# Options for exposing web APIs.
|
||||
#web:
|
||||
# # Whether to enable web APIs.
|
||||
# enabled: false
|
||||
#
|
||||
# # The port to expose the webserver on. Defaults to 8080.
|
||||
# port: 8080
|
||||
#
|
||||
# # The address to listen for requests on. Defaults to only the current
|
||||
# # computer.
|
||||
# address: localhost
|
||||
#
|
||||
# # Alternative setting to open to the entire web. Be careful,
|
||||
# # as this will increase your security perimeter:
|
||||
# #
|
||||
# # address: "0.0.0.0"
|
||||
#
|
||||
# # A web API designed to intercept Matrix API
|
||||
# # POST /_matrix/client/r0/rooms/{roomId}/report/{eventId}
|
||||
# # and display readable abuse reports in the moderation room.
|
||||
# #
|
||||
# # If you wish to take advantage of this feature, you will need
|
||||
# # to configure a reverse proxy, see e.g. test/nginx.conf
|
||||
# abuseReporting:
|
||||
# # Whether to enable this feature.
|
||||
# enabled: false
|
||||
|
||||
# Whether or not to actively poll synapse for abuse reports, to be used
|
||||
# instead of intercepting client calls to synapse's abuse endpoint, when that
|
||||
# isn't possible/practical.
|
||||
pollReports: false
|
||||
|
||||
# Whether or not new reports, received either by webapi or polling,
|
||||
# should be printed to our managementRoom.
|
||||
displayReports: false
|
||||
|
|
|
@@ -9,7 +9,7 @@ matrix_bot_postmoogle_docker_repo: "https://gitlab.com/etke.cc/postmoogle.git"
matrix_bot_postmoogle_docker_repo_version: "{{ 'main' if matrix_bot_postmoogle_version == 'latest' else matrix_bot_postmoogle_version }}"
matrix_bot_postmoogle_docker_src_files_path: "{{ matrix_base_data_path }}/postmoogle/docker-src"

-matrix_bot_postmoogle_version: v0.9.4
+matrix_bot_postmoogle_version: v0.9.7
matrix_bot_postmoogle_docker_image: "{{ matrix_bot_postmoogle_docker_image_name_prefix }}postmoogle:{{ matrix_bot_postmoogle_version }}"
matrix_bot_postmoogle_docker_image_name_prefix: "{{ 'localhost/' if matrix_bot_postmoogle_container_image_self_build else 'registry.gitlab.com/etke.cc/' }}"
matrix_bot_postmoogle_docker_image_force_pull: "{{ matrix_bot_postmoogle_docker_image.endswith(':latest') }}"

@@ -110,6 +110,9 @@ matrix_bot_postmoogle_noencryption: false

matrix_bot_postmoogle_domain: "{{ matrix_server_fqn_matrix }}"

+# Password (passphrase) to encrypt account data
+matrix_bot_postmoogle_data_secret: ""

# in-container ports
matrix_bot_postmoogle_port: '2525'
matrix_bot_postmoogle_tls_port: '25587'

@@ -15,5 +15,6 @@ POSTMOOGLE_TLS_PORT={{ matrix_bot_postmoogle_tls_port }}
POSTMOOGLE_TLS_CERT={{ matrix_bot_postmoogle_tls_cert }}
POSTMOOGLE_TLS_KEY={{ matrix_bot_postmoogle_tls_key }}
POSTMOOGLE_TLS_REQUIRED={{ matrix_bot_postmoogle_tls_required }}
+POSTMOOGLE_DATA_SECRET={{ matrix_bot_postmoogle_data_secret }}

{{ matrix_bot_postmoogle_environment_variables_extension }}
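The new `matrix_bot_postmoogle_data_secret` variable feeds the `POSTMOOGLE_DATA_SECRET` environment variable shown above. A sketch of setting it in your own vars, with a placeholder passphrase rather than any shipped default:

```yaml
# Hypothetical passphrase - use your own long, random value.
matrix_bot_postmoogle_data_secret: "some-long-random-passphrase"
```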
@@ -11,7 +11,7 @@ matrix_appservice_slack_docker_src_files_path: "{{ matrix_base_data_path }}/apps

# matrix_appservice_slack_version used to contain the full Docker image tag (e.g. `release-X.X.X`).
# It's a bare version number now. We try to somewhat retain compatibility below.
-matrix_appservice_slack_version: 1.11.0
+matrix_appservice_slack_version: 2.0.1
matrix_appservice_slack_docker_image: "{{ matrix_container_global_registry_prefix }}matrixdotorg/matrix-appservice-slack:{{ matrix_appservice_slack_docker_image_tag }}"
matrix_appservice_slack_docker_image_tag: "{{ 'latest' if matrix_appservice_slack_version == 'latest' else ('release-' + matrix_appservice_slack_version) }}"
matrix_appservice_slack_docker_image_force_pull: "{{ matrix_appservice_slack_docker_image.endswith(':latest') }}"

@@ -85,7 +85,7 @@
    msg: >-
      NOTE: You've enabled the Matrix Slack bridge but are not using the matrix-nginx-proxy
      reverse proxy.
-     Please make sure that you're proxying the `{{ something }}`
+     Please make sure that you're proxying the `{{ matrix_appservice_slack_public_endpoint }}`
      URL endpoint to the matrix-appservice-slack container.
      You can expose the container's port using the `matrix_appservice_slack_container_http_host_bind_port` variable.
  when: "matrix_appservice_slack_enabled | bool and not matrix_nginx_proxy_enabled | default(False) | bool"
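When running without matrix-nginx-proxy, the warning above points you at `matrix_appservice_slack_container_http_host_bind_port`. A minimal sketch of exposing the bridge on the host so another reverse proxy can reach it; the address and port are illustrative, not playbook defaults:

```yaml
# Bind the bridge's HTTP port on the host (hypothetical address/port).
matrix_appservice_slack_container_http_host_bind_port: "127.0.0.1:9898"
```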
@@ -10,7 +10,7 @@ matrix_hookshot_container_image_self_build: false
matrix_hookshot_container_image_self_build_repo: "https://github.com/matrix-org/matrix-hookshot.git"
matrix_hookshot_container_image_self_build_branch: "{{ 'main' if matrix_hookshot_version == 'latest' else matrix_hookshot_version }}"

-matrix_hookshot_version: 2.2.0
+matrix_hookshot_version: 2.3.0

matrix_hookshot_docker_image: "{{ matrix_hookshot_docker_image_name_prefix }}halfshot/matrix-hookshot:{{ matrix_hookshot_version }}"
matrix_hookshot_docker_image_name_prefix: "{{ 'localhost/' if matrix_hookshot_container_image_self_build else matrix_container_global_registry_prefix }}"

@@ -128,7 +128,7 @@ matrix_hookshot_generic_allow_js_transformation_functions: false
matrix_hookshot_generic_user_id_prefix: '_webhooks_'


-matrix_hookshot_feeds_enabled: false
+matrix_hookshot_feeds_enabled: true
# polling interval in seconds
matrix_hookshot_feeds_interval: 600

@@ -108,7 +108,7 @@ metrics:
logging:
  # (Optional) Logging settings. You can have a severity debug,info,warn,error
  #
-  level: info
+  level: warn
{% if matrix_hookshot_widgets_enabled %}
widgets:
  # (Optional) EXPERIMENTAL support for complimentary widgets
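If the new feeds default doesn't suit you, both of the variables above can be overridden in your `vars.yml`. A sketch with an assumed 5-minute polling interval (the interval value is illustrative; the playbook default is 600 seconds):

```yaml
matrix_hookshot_feeds_enabled: true
# Poll subscribed feeds every 300 seconds instead of the 600-second default.
matrix_hookshot_feeds_interval: 300
```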
@@ -8,7 +8,7 @@ matrix_mautrix_instagram_container_image_self_build: false
matrix_mautrix_instagram_container_image_self_build_repo: "https://github.com/mautrix/instagram.git"
matrix_mautrix_instagram_container_image_self_build_repo_version: "{{ 'master' if matrix_mautrix_instagram_version == 'latest' else matrix_mautrix_instagram_version }}"

-matrix_mautrix_instagram_version: v0.2.1
+matrix_mautrix_instagram_version: latest
# See: https://mau.dev/tulir/mautrix-instagram/container_registry
matrix_mautrix_instagram_docker_image: "{{ matrix_mautrix_instagram_docker_image_name_prefix }}mautrix/instagram:{{ matrix_mautrix_instagram_version }}"
matrix_mautrix_instagram_docker_image_name_prefix: "{{ 'localhost/' if matrix_mautrix_instagram_container_image_self_build else 'dock.mau.dev/' }}"

@@ -10,7 +10,7 @@ matrix_mautrix_signal_docker_repo_version: "{{ 'master' if matrix_mautrix_signal
matrix_mautrix_signal_docker_src_files_path: "{{ matrix_base_data_path }}/mautrix-signal/docker-src"

matrix_mautrix_signal_version: v0.4.0
-matrix_mautrix_signal_daemon_version: 0.21.1
+matrix_mautrix_signal_daemon_version: 0.22.2
# See: https://mau.dev/mautrix/signal/container_registry
matrix_mautrix_signal_docker_image: "dock.mau.dev/mautrix/signal:{{ matrix_mautrix_signal_version }}"
matrix_mautrix_signal_docker_image_force_pull: "{{ matrix_mautrix_signal_docker_image.endswith(':latest') }}"

@@ -38,6 +38,9 @@ matrix_mautrix_telegram_api_id: ''
matrix_mautrix_telegram_api_hash: ''
matrix_mautrix_telegram_bot_token: disabled

+# Define the filter-mode
+matrix_mautrix_telegram_filter_mode: "blacklist"

# Whether or not the public-facing endpoints should be enabled (web-based login)
matrix_mautrix_telegram_appservice_public_enabled: true
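The new `matrix_mautrix_telegram_filter_mode` variable is rendered into the bridge's `filter.mode` setting (see the template hunk that follows). A sketch of switching to whitelist mode; note that the filter list itself is currently rendered as empty by the template, so this only changes the mode:

```yaml
# Only chats listed in the bridge's filter list will be bridged.
matrix_mautrix_telegram_filter_mode: "whitelist"
```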
@@ -273,12 +273,12 @@ bridge:
    # Filter mode to use. Either "blacklist" or "whitelist".
    # If the mode is "blacklist", the listed chats will never be bridged.
    # If the mode is "whitelist", only the listed chats can be bridged.
-   mode: blacklist
+   mode: {{ matrix_mautrix_telegram_filter_mode | to_json }}
    # The list of group/channel IDs to filter.
    list: []

    # The prefix for commands. Only required in non-management rooms.
-   command_prefix: "{{ matrix_mautrix_telegram_command_prefix }}"
+   command_prefix: {{ matrix_mautrix_telegram_command_prefix | to_json }}

    # Permissions for using the bridge.
    # Permitted values:

@@ -291,7 +291,7 @@ bridge:
    # * - All Matrix users
    # domain - All users on that homeserver
    # mxid - Specific user
-   permissions: {{ matrix_mautrix_telegram_bridge_permissions|to_json }}
+   permissions: {{ matrix_mautrix_telegram_bridge_permissions | to_json }}

    # Options related to the message relay Telegram bot.
    relaybot:
@@ -8,7 +8,7 @@ matrix_mautrix_whatsapp_container_image_self_build: false
matrix_mautrix_whatsapp_container_image_self_build_repo: "https://mau.dev/mautrix/whatsapp.git"
matrix_mautrix_whatsapp_container_image_self_build_branch: "{{ 'master' if matrix_mautrix_whatsapp_version == 'latest' else matrix_mautrix_whatsapp_version }}"

-matrix_mautrix_whatsapp_version: v0.7.0
+matrix_mautrix_whatsapp_version: v0.7.1
# See: https://mau.dev/mautrix/whatsapp/container_registry
matrix_mautrix_whatsapp_docker_image: "{{ matrix_mautrix_whatsapp_docker_image_name_prefix }}mautrix/whatsapp:{{ matrix_mautrix_whatsapp_version }}"
matrix_mautrix_whatsapp_docker_image_name_prefix: "{{ 'localhost/' if matrix_mautrix_whatsapp_container_image_self_build else 'dock.mau.dev/' }}"
@ -147,6 +147,12 @@ bridge:
|
|||
# provisioning endpoint is used or when a message comes in from that
|
||||
# chat.
|
||||
max_initial_conversations: -1
|
||||
# If this value is greater than 0, then if the conversation's last
|
||||
# message was more than this number of hours ago, then the conversation
|
||||
# will automatically be marked it as read.
|
||||
# Conversations that have a last message that is less than this number
|
||||
# of hours ago will have their unread status synced from WhatsApp.
|
||||
unread_hours_threshold: 0
|
||||
# Settings for immediate backfills. These backfills should generally be
|
||||
# small and their main purpose is to populate each of the initial chats
|
||||
# (as configured by max_initial_conversations) with a few messages so
|
||||
|
@ -228,7 +234,10 @@ bridge:
|
|||
# manually.
|
||||
login_shared_secret_map: {{ matrix_mautrix_whatsapp_bridge_login_shared_secret_map|to_json }}
|
||||
# Should the bridge explicitly set the avatar and room name for private chat portal rooms?
|
||||
# This is implicitly enabled in encrypted rooms.
|
||||
private_chat_portal_meta: false
|
||||
# Should group members be synced in parallel? This makes member sync faster
|
||||
parallel_member_sync: false
|
||||
# Should Matrix m.notice-type messages be bridged?
|
||||
bridge_notices: true
|
||||
# Set this to true to tell the bridge to re-send m.bridge events to all rooms on the next run.
|
||||
|
@ -281,6 +290,9 @@ bridge:
|
|||
# Send captions in the same message as images. This will send data compatible with both MSC2530 and MSC3552.
|
||||
# This is currently not supported in most clients.
|
||||
caption_in_message: false
|
||||
# Should Matrix edits be bridged to WhatsApp edits?
|
||||
# Official WhatsApp clients don't render edits yet, but once they do, the bridge should work with them right away.
|
||||
send_whatsapp_edits: false
|
||||
# Maximum time for handling Matrix events. Duration strings formatted for https://pkg.go.dev/time#ParseDuration
|
||||
# Null means there's no enforced timeout.
|
||||
message_handling_timeout:
|
||||
|
|
|
@@ -10,7 +10,7 @@ matrix_client_element_container_image_self_build_repo: "https://github.com/vecto
# - https://github.com/vector-im/element-web/issues/19544
matrix_client_element_container_image_self_build_low_memory_system_patch_enabled: "{{ ansible_memtotal_mb < 4096 }}"

-matrix_client_element_version: v1.11.8
+matrix_client_element_version: v1.11.10
matrix_client_element_docker_image: "{{ matrix_client_element_docker_image_name_prefix }}vectorim/element-web:{{ matrix_client_element_version }}"
matrix_client_element_docker_image_name_prefix: "{{ 'localhost/' if matrix_client_element_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_client_element_docker_image_force_pull: "{{ matrix_client_element_docker_image.endswith(':latest') }}"
|
|||
---
|
||||
|
||||
- import_tasks: "{{ role_path }}/tasks/conduit/setup_install.yml"
|
||||
- ansible.builtin.import_tasks: "{{ role_path }}/tasks/conduit/setup_install.yml"
|
||||
when: "matrix_conduit_enabled | bool"
|
||||
|
||||
- import_tasks: "{{ role_path }}/tasks/conduit/setup_uninstall.yml"
|
||||
- ansible.builtin.import_tasks: "{{ role_path }}/tasks/conduit/setup_uninstall.yml"
|
||||
when: "not matrix_conduit_enabled | bool"
|
||||
|
|
|
@ -1,10 +1,10 @@
|
|||
---
|
||||
|
||||
- import_tasks: "{{ role_path }}/tasks/init.yml"
|
||||
- ansible.builtin.import_tasks: "{{ role_path }}/tasks/init.yml"
|
||||
tags:
|
||||
- always
|
||||
|
||||
- import_tasks: "{{ role_path }}/tasks/conduit/setup.yml"
|
||||
- ansible.builtin.import_tasks: "{{ role_path }}/tasks/conduit/setup.yml"
|
||||
when: run_setup | bool
|
||||
tags:
|
||||
- setup-all
|
||||
|
|
|
@ -23,7 +23,7 @@ matrix_corporal_container_extra_arguments: []
|
|||
# List of systemd services that matrix-corporal.service depends on
|
||||
matrix_corporal_systemd_required_services_list: ['docker.service']
|
||||
|
||||
matrix_corporal_version: 2.3.0
|
||||
matrix_corporal_version: 2.4.0
|
||||
matrix_corporal_docker_image: "{{ matrix_corporal_docker_image_name_prefix }}devture/matrix-corporal:{{ matrix_corporal_docker_image_tag }}"
|
||||
matrix_corporal_docker_image_name_prefix: "{{ 'localhost/' if matrix_corporal_container_image_self_build else matrix_container_global_registry_prefix }}"
|
||||
matrix_corporal_docker_image_tag: "{{ matrix_corporal_version }}" # for backward-compatibility
|
||||
|
|
|
@ -6,7 +6,7 @@ matrix_dendrite_enabled: true
|
|||
|
||||
matrix_dendrite_docker_image: "{{ matrix_dendrite_docker_image_name_prefix }}matrixdotorg/dendrite-monolith:{{ matrix_dendrite_docker_image_tag }}"
|
||||
matrix_dendrite_docker_image_name_prefix: "docker.io/"
|
||||
matrix_dendrite_docker_image_tag: "v0.9.9"
|
||||
matrix_dendrite_docker_image_tag: "v0.10.3"
|
||||
matrix_dendrite_docker_image_force_pull: "{{ matrix_dendrite_docker_image.endswith(':latest') }}"
|
||||
|
||||
matrix_dendrite_base_path: "{{ matrix_base_data_path }}/dendrite"
|
||||
|
|
|
@@ -349,10 +349,16 @@ sync_api:
  # a reverse proxy server.
  # real_ip_header: X-Real-IP
  real_ip_header: {{ matrix_dendrite_sync_api_real_ip_header|to_json }}
-  fulltext:
+  # Configuration for the full-text search engine.
+  search:
+    # Whether or not search is enabled.
    enabled: false
-    index_path: "./fulltextindex"
-    language: "en" # more possible languages can be found at https://github.com/blevesearch/bleve/tree/master/analysis/lang
+    # The path where the search index will be created in.
+    index_path: "/matrix-media-store-parent/searchindex"
+    # The language most likely to be used on the server - used when indexing, to
+    # ensure the returned results match expectations. A full list of possible languages
+    # can be found at https://github.com/blevesearch/bleve/tree/master/analysis/lang
+    language: "en"

# Configuration for the User API.
user_api:
|
|||
|
||||
matrix_grafana_enabled: true
|
||||
|
||||
matrix_grafana_version: 9.1.6
|
||||
matrix_grafana_version: 9.2.1
|
||||
matrix_grafana_docker_image: "{{ matrix_container_global_registry_prefix }}grafana/grafana:{{ matrix_grafana_version }}"
|
||||
matrix_grafana_docker_image_force_pull: "{{ matrix_grafana_docker_image.endswith(':latest') }}"
|
||||
|
||||
|
|
|
@ -72,7 +72,7 @@ matrix_jitsi_jibri_recorder_password: ''
|
|||
|
||||
matrix_jitsi_enable_lobby: false
|
||||
|
||||
matrix_jitsi_version: stable-7648-4
|
||||
matrix_jitsi_version: stable-7882
|
||||
matrix_jitsi_container_image_tag: "{{ matrix_jitsi_version }}" # for backward-compatibility
|
||||
|
||||
matrix_jitsi_web_docker_image: "{{ matrix_container_global_registry_prefix }}jitsi/web:{{ matrix_jitsi_container_image_tag }}"
|
||||
|
|
roles/matrix-ldap-registration-proxy/defaults/main.yml (new file, 58 lines)
@@ -0,0 +1,58 @@
---
# matrix_ldap_registration_proxy - Want to build a large-scale Matrix server using external registration on LDAP?
# Project source code URL: https://gitlab.com/activism.international/matrix_ldap_registration_proxy

matrix_ldap_registration_proxy_enabled: true

matrix_ldap_registration_proxy_docker_image: matrix_ldap_registration_proxy
matrix_ldap_registration_proxy_container_image_self_build_repo: "https://gitlab.com/activism.international/matrix_ldap_registration_proxy.git"
matrix_ldap_registration_proxy_container_image_self_build_branch: "{{ matrix_ldap_registration_proxy_version }}"

matrix_ldap_registration_proxy_version: "296246afc6a9b3105e67fcf6621cf05ebc74b873"

matrix_ldap_registration_proxy_base_path: "{{ matrix_base_data_path }}/matrix_ldap_registration_proxy"
# We need the docker src directory to be named matrix_ldap_registration_proxy.
matrix_ldap_registration_proxy_docker_src_files_path: "{{ matrix_ldap_registration_proxy_base_path }}/docker-src/matrix_ldap_registration_proxy"
matrix_ldap_registration_proxy_config_path: "{{ matrix_ldap_registration_proxy_base_path }}/config"

matrix_ldap_registration_proxy_ldap_uri: ""
matrix_ldap_registration_proxy_ldap_base_dn: ""
matrix_ldap_registration_proxy_ldap_user: ""
matrix_ldap_registration_proxy_ldap_password: ""
matrix_ldap_registration_proxy_matrix_server_name: "{{ matrix_domain }}"
matrix_ldap_registration_proxy_matrix_server_url: "https://{{ matrix_server_fqn_matrix }}"

matrix_ldap_registration_proxy_registration_endpoint: "/_matrix/client/r0/register"

# Controls whether the self-check feature should validate SSL certificates.
matrix_matrix_ldap_registration_proxy_self_check_validate_certificates: true

matrix_ldap_registration_proxy_container_port: 8080
# Controls whether the matrix_ldap_registration_proxy container exposes its HTTP port (tcp/{{ matrix_ldap_registration_proxy_container_port }} in the container).
#
# Takes an "<ip>:<port>" or "<port>" value (e.g. "127.0.0.1:8080"), or empty string to not expose.
matrix_ldap_registration_proxy_container_http_host_bind_port: ''

# `matrix_ldap_registration_proxy_container_http_host_bind_port_number_raw` contains the raw port number extracted from `matrix_ldap_registration_proxy_container_http_host_bind_port`,
# which can contain values like this: ('1234', '127.0.0.1:1234', '0.0.0.0:1234')
matrix_ldap_registration_proxy_container_http_host_bind_port_number_raw: "{{ '' if matrix_ldap_registration_proxy_container_http_host_bind_port == '' else (matrix_ldap_registration_proxy_container_http_host_bind_port.split(':')[1] if ':' in matrix_ldap_registration_proxy_container_http_host_bind_port else matrix_ldap_registration_proxy_container_http_host_bind_port) }}"

matrix_ldap_registration_proxy_registration_addr_with_container: "matrix-ldap_registration-proxy:{{ matrix_ldap_registration_proxy_container_http_host_bind_port_number_raw }}"
matrix_ldap_registration_proxy_registration_addr_sans_container: "127.0.0.1:{{ matrix_ldap_registration_proxy_container_http_host_bind_port_number_raw }}"

# A list of extra arguments to pass to the container
matrix_ldap_registration_proxy_container_extra_arguments: []

# List of systemd services that matrix_ldap_registration_proxy.service depends on
matrix_ldap_registration_proxy_systemd_required_services_list: ['docker.service']

# List of systemd services that matrix_ldap_registration_proxy.service wants
matrix_ldap_registration_proxy_systemd_wanted_services_list: []

# Additional environment variables to pass to the LDAP proxy.
#
# Example:
# matrix_ldap_registration_proxy_env_variables_extension: |
#   KEY=value
matrix_ldap_registration_proxy_env_variables_extension: ''
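As a worked illustration of the `_port_number_raw` extraction above (the sample values are hypothetical, not defaults), the Jinja expression yields:

```yaml
# matrix_ldap_registration_proxy_container_http_host_bind_port  ->  ..._port_number_raw
#   ""                  ->  ""      (port not exposed at all)
#   "8080"              ->  "8080"
#   "127.0.0.1:8080"    ->  "8080"
matrix_ldap_registration_proxy_container_http_host_bind_port: "127.0.0.1:8080"
```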
roles/matrix-ldap-registration-proxy/tasks/init.yml (new file, 58 lines)
@@ -0,0 +1,58 @@
---
# See https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/1070
# and https://github.com/spantaleev/matrix-docker-ansible-deploy/commit/1ab507349c752042d26def3e95884f6df8886b74#commitcomment-51108407
- name: Fail if trying to self-build on Ansible < 2.8
  ansible.builtin.fail:
    msg: "To self-build the matrix_ldap_registration_proxy image, you should use Ansible 2.8 or higher. See docs/ansible.md"
  when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_ldap_registration_proxy_container_image_self_build and matrix_ldap_registration_proxy_enabled | bool"

- ansible.builtin.set_fact:
    matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-ldap-registration-proxy.service'] }}"
  when: matrix_ldap_registration_proxy_enabled | bool

- block:
    - name: Fail if matrix-nginx-proxy role already executed
      ansible.builtin.fail:
        msg: >-
          Trying to append Matrix LDAP registration proxy's reverse-proxying configuration to matrix-nginx-proxy,
          but it's pointless since the matrix-nginx-proxy role had already executed.
          To fix this, please change the order of roles in your playbook,
          so that the matrix-nginx-proxy role would run after the matrix-ldap-registration-proxy role.
      when: matrix_nginx_proxy_role_executed | default(False) | bool

    - name: Generate Matrix LDAP registration proxy proxying configuration for matrix-nginx-proxy
      ansible.builtin.set_fact:
        matrix_ldap_registration_proxy_matrix_nginx_proxy_configuration: |
          location {{ matrix_ldap_registration_proxy_registration_endpoint }} {
          {% if matrix_nginx_proxy_enabled | default(False) %}
            {# Use the embedded DNS resolver in Docker containers to discover the service #}
            resolver 127.0.0.11 valid=5s;
            set $backend "{{ matrix_ldap_registration_proxy_registration_addr_with_container }}";
            proxy_pass http://$backend/register;
          {% else %}
            {# Generic configuration for use outside of our container setup #}
            proxy_pass http://{{ matrix_ldap_registration_proxy_registration_addr_sans_container }}/register;
          {% endif %}
          }

    - name: Register Matrix LDAP registration proxy proxying configuration with matrix-nginx-proxy
      ansible.builtin.set_fact:
        matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks: |
          {{
            matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks | default([])
            +
            [matrix_ldap_registration_proxy_matrix_nginx_proxy_configuration]
          }}

    - name: Warn about reverse-proxying if matrix-nginx-proxy not used
      ansible.builtin.debug:
        msg: >-
          NOTE: You've enabled the Matrix LDAP registration proxy but are not using the matrix-nginx-proxy
          reverse proxy.
          Please make sure that you're proxying the `{{ matrix_ldap_registration_proxy_public_endpoint }}`
          URL endpoint to the matrix-ldap-proxy container.
          You can expose the container's port using the `matrix_ldap_registration_proxy_container_http_host_bind_port` variable.
      when: "not matrix_nginx_proxy_enabled | default(False) | bool"

  tags:
    - always
  when: matrix_ldap_registration_proxy_enabled | bool
roles/matrix-ldap-registration-proxy/tasks/main.yml (new file, 23 lines)
@@ -0,0 +1,23 @@
---

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/init.yml"
  tags:
    - always

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/validate_config.yml"
  when: "run_setup | bool and matrix_ldap_registration_proxy_enabled | bool"
  tags:
    - setup-all
    - setup-matrix-ldap-registration-proxy

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/setup_install.yml"
  when: "run_setup | bool and matrix_ldap_registration_proxy_enabled | bool"
  tags:
    - setup-all
    - setup-matrix-ldap-registration-proxy

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/setup_uninstall.yml"
  when: "run_setup | bool and not matrix_ldap_registration_proxy_enabled | bool"
  tags:
    - setup-all
    - setup-matrix-ldap-registration-proxy
roles/matrix-ldap-registration-proxy/tasks/setup_install.yml
Normal file
63
roles/matrix-ldap-registration-proxy/tasks/setup_install.yml
Normal file
|
@ -0,0 +1,63 @@
|
|||
---
|
||||
|
||||
- name: Ensure matrix_ldap_registration_proxy paths exist
|
||||
ansible.builtin.file:
|
||||
path: "{{ item.path }}"
|
||||
state: directory
|
||||
mode: 0750
|
||||
owner: "{{ matrix_user_username }}"
|
||||
group: "{{ matrix_user_groupname }}"
|
||||
with_items:
|
||||
- {path: "{{ matrix_ldap_registration_proxy_config_path }}", when: true}
|
||||
- {path: "{{ matrix_ldap_registration_proxy_docker_src_files_path }}", when: true}
|
||||
when: "item.when | bool"
|
||||
|
||||
- ansible.builtin.set_fact:
|
||||
matrix_ldap_registration_proxy_requires_restart: false
|
||||
|
||||
- name: Ensure matrix_ldap_registration_proxy repository is present on self-build
|
||||
ansible.builtin.git:
|
||||
repo: "{{ matrix_ldap_registration_proxy_container_image_self_build_repo }}"
|
||||
dest: "{{ matrix_ldap_registration_proxy_docker_src_files_path }}"
|
||||
version: "{{ matrix_ldap_registration_proxy_container_image_self_build_branch }}"
|
||||
force: "yes"
|
||||
become: true
|
||||
become_user: "{{ matrix_user_username }}"
|
||||
register: matrix_ldap_registration_proxy_git_pull_results
|
||||
|
||||
- name: Ensure matrix_ldap_registration_proxy Docker image is built
|
||||
docker_image:
|
||||
name: "{{ matrix_ldap_registration_proxy_docker_image }}"
|
||||
source: build
|
||||
force_source: "{{ matrix_ldap_registration_proxy_git_pull_results.changed }}"
|
||||
build:
|
||||
dockerfile: Dockerfile
|
||||
path: "{{ matrix_ldap_registration_proxy_docker_src_files_path }}"
|
||||
pull: true
|
||||
when: true
|
||||
|
||||
- name: Ensure matrix_ldap_registration_proxy config installed
|
||||
ansible.builtin.template:
|
||||
src: "{{ role_path }}/templates/ldap-registration-proxy.env.j2"
|
||||
dest: "{{ matrix_ldap_registration_proxy_config_path }}/ldap-registration-proxy.env"
|
||||
mode: 0644
|
||||
owner: "{{ matrix_user_username }}"
|
||||
group: "{{ matrix_user_groupname }}"
|
||||
|
||||
- name: Ensure matrix-ldap-registration-proxy.service installed
|
||||
ansible.builtin.template:
|
||||
src: "{{ role_path }}/templates/systemd/matrix-ldap-registration-proxy.service.j2"
|
||||
dest: "{{ matrix_systemd_path }}/matrix-ldap-registration-proxy.service"
|
||||
mode: 0644
|
||||
register: matrix_ldap_registration_proxy_systemd_service_result
|
||||
|
||||
- name: Ensure systemd reloaded after matrix-ldap-registration-proxy.service installation
|
||||
ansible.builtin.service:
|
||||
daemon_reload: true
|
||||
when: "matrix_ldap_registration_proxy_systemd_service_result.changed | bool"
|
||||
|
||||
- name: Ensure matrix-ldap-registration-proxy.service restarted, if necessary
|
||||
ansible.builtin.service:
|
||||
name: "matrix-ldap-registration-proxy.service"
|
||||
state: restarted
|
||||
when: "matrix_ldap_registration_proxy_requires_restart | bool"
|
|
@ -0,0 +1,36 @@
|
|||
---
|
||||
|
||||
- name: Check existence of matrix-matrix_ldap_registration_proxy service
|
||||
ansible.builtin.stat:
|
||||
path: "{{ matrix_systemd_path }}/matrix-ldap-registration-proxy.service"
|
||||
register: matrix_ldap_registration_proxy_service_stat
|
||||
|
||||
- name: Ensure matrix-matrix_ldap_registration_proxy is stopped
|
||||
ansible.builtin.service:
|
||||
name: matrix-matrix_ldap_registration_proxy
|
||||
state: stopped
|
||||
enabled: false
|
||||
daemon_reload: true
|
||||
register: stopping_result
|
||||
when: "matrix_ldap_registration_proxy_service_stat.stat.exists | bool"
|
||||
|
||||
- name: Ensure matrix-ldap-registration-proxy.service doesn't exist
|
||||
ansible.builtin.file:
|
||||
path: "{{ matrix_systemd_path }}/matrix-ldap-registration-proxy.service"
|
||||
state: absent
|
||||
when: "matrix_ldap_registration_proxy_service_stat.stat.exists | bool"
|
||||
|
||||
- name: Ensure systemd reloaded after matrix-ldap-registration-proxy.service removal
|
||||
ansible.builtin.service:
|
||||
daemon_reload: true
|
||||
when: "matrix_ldap_registration_proxy_service_stat.stat.exists | bool"
|
||||
|
||||
- name: Ensure Matrix matrix_ldap_registration_proxy paths don't exist
|
||||
ansible.builtin.file:
|
||||
path: "{{ matrix_ldap_registration_proxy_base_path }}"
|
||||
state: absent
|
||||
|
||||
- name: Ensure matrix_ldap_registration_proxy Docker image doesn't exist
|
||||
docker_image:
|
||||
name: "{{ matrix_ldap_registration_proxy_docker_image }}"
|
||||
state: absent
|
|
@@ -0,0 +1,12 @@
---

- name: Fail if required settings not defined
  ansible.builtin.fail:
    msg: >-
      You need to define a required configuration setting (`{{ item }}`).
  when: "vars[item] == ''"
  with_items:
    - "matrix_ldap_registration_proxy_ldap_uri"
    - "matrix_ldap_registration_proxy_ldap_base_dn"
    - "matrix_ldap_registration_proxy_ldap_user"
    - "matrix_ldap_registration_proxy_ldap_password"
@ -0,0 +1,35 @@
|
|||
# please specify the configuration here
|
||||
#
|
||||
# these settings are mandatory
|
||||
|
||||
# The server to connect to. Please note it must be accessible from the Docker network
|
||||
# example: `ldap://127.0.0.1:389`
|
||||
LDAP_SERVER={{ matrix_ldap_registration_proxy_ldap_uri }}
|
||||
|
||||
# the base DN used for user creation
|
||||
|
||||
LDAP_BASE_DN={{ matrix_ldap_registration_proxy_ldap_base_dn }}
|
||||
|
||||
# the privileged user used for user creation including it's DN
|
||||
# example: `uid=admin,cn=users,cn=accounts,dc=example,dc=org`
|
||||
|
||||
LDAP_USER={{ matrix_ldap_registration_proxy_ldap_user }}
|
||||
|
||||
# the password of the `LDAP_USER` used for authentication
|
||||
LDAP_PASSWORD={{ matrix_ldap_registration_proxy_ldap_password }}
|
||||
|
||||
# the human-readable server name of your Matrix server as used in the Matrix ID
|
||||
# example: `example.org`
|
||||
MATRIX_SERVER_NAME={{ matrix_ldap_registration_proxy_matrix_server_name }}
|
||||
|
||||
# the url to access the Matrix server API without trailing `/`
|
||||
# example: `https://matrix.example.org`
|
||||
MATRIX_SERVER_URL={{ matrix_ldap_registration_proxy_matrix_server_url }}
|
||||
|
||||
# these settings are optional:
|
||||
|
||||
# Specify the port to listen on. Default to 8080
|
||||
LISTEN_PORT={{ matrix_ldap_registration_proxy_container_port }}
|
||||
|
||||
# Use this to extend the configuration with custom variables
|
||||
{{ matrix_ldap_registration_proxy_env_variables_extension }}
|
|
@ -0,0 +1,43 @@
|
|||
#jinja2: lstrip_blocks: "True"
|
||||
[Unit]
|
||||
Description=matrix_ldap_registration_proxy
|
||||
{% for service in matrix_ldap_registration_proxy_systemd_required_services_list %}
|
||||
Requires={{ service }}
|
||||
After={{ service }}
|
||||
{% endfor %}
|
||||
{% for service in matrix_ldap_registration_proxy_systemd_wanted_services_list %}
|
||||
Wants={{ service }}
|
||||
{% endfor %}
|
||||
DefaultDependencies=no
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
Environment="HOME={{ matrix_systemd_unit_home_path }}"
|
||||
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-ldap-registration-proxy 2>/dev/null || true'
|
||||
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-ldap-registration-proxy 2>/dev/null || true'
|
||||
|
||||
# matrix_ldap_registration_proxy writes an SQLite shared library (libsqlitejdbc.so) to /tmp and executes it from there,
|
||||
# so /tmp needs to be mounted with an exec option.
|
||||
ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-ldap-registration-proxy \
|
||||
--log-driver=none \
|
||||
--user={{ matrix_user_uid }}:{{ matrix_user_gid }} \
|
||||
--cap-drop=ALL \
|
||||
--read-only \
|
||||
--network={{ matrix_docker_network }} \
|
||||
{% if matrix_ldap_registration_proxy_container_http_host_bind_port %}
|
||||
-p {{ matrix_ldap_registration_proxy_container_http_host_bind_port }}:{{ matrix_ldap_registration_proxy_container_port }} \
|
||||
{% endif %}
|
||||
--env-file {{ matrix_ldap_registration_proxy_config_path }}/ldap-registration-proxy.env \
|
||||
{% for arg in matrix_ldap_registration_proxy_container_extra_arguments %}
|
||||
{{ arg }} \
|
||||
{% endfor %}
|
||||
{{ matrix_ldap_registration_proxy_docker_image }}
|
||||
|
||||
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-ldap-registration-proxy 2>/dev/null || true'
|
||||
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-ldap-registration-proxy 2>/dev/null || true'
|
||||
Restart=always
|
||||
RestartSec=30
|
||||
SyslogIdentifier=matrix-ldap-registration-proxy
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
|
@@ -24,11 +24,11 @@ matrix_postgres_architecture: amd64
matrix_postgres_docker_image_suffix: "{{ '-alpine' if matrix_postgres_architecture in ['amd64', 'arm64'] else '' }}"

matrix_postgres_docker_image_v9: "{{ matrix_container_global_registry_prefix }}postgres:9.6.24{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_v10: "{{ matrix_container_global_registry_prefix }}postgres:10.21{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_v11: "{{ matrix_container_global_registry_prefix }}postgres:11.16{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_v12: "{{ matrix_container_global_registry_prefix }}postgres:12.11{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_v13: "{{ matrix_container_global_registry_prefix }}postgres:13.7{{ matrix_postgres_docker_image_suffix }}"
-matrix_postgres_docker_image_v14: "{{ matrix_container_global_registry_prefix }}postgres:14.4{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_v10: "{{ matrix_container_global_registry_prefix }}postgres:10.22{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_v11: "{{ matrix_container_global_registry_prefix }}postgres:11.17{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_v12: "{{ matrix_container_global_registry_prefix }}postgres:12.12{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_v13: "{{ matrix_container_global_registry_prefix }}postgres:13.8{{ matrix_postgres_docker_image_suffix }}"
+matrix_postgres_docker_image_v14: "{{ matrix_container_global_registry_prefix }}postgres:14.5{{ matrix_postgres_docker_image_suffix }}"
matrix_postgres_docker_image_latest: "{{ matrix_postgres_docker_image_v14 }}"

# This variable is assigned at runtime. Overriding its value has no effect.
@@ -67,7 +67,7 @@
  become: false
  when: "matrix_postgres_service_start_result.changed | bool"

-- name: Import SQLite database from {{ sqlite_database_path }} into Postgres
+- name: Import SQLite database from {{ sqlite_database_path }} into Postgres # noqa name[template]
  ansible.builtin.command:
    cmd: >-
      {{ matrix_host_command_docker }} run

@@ -83,7 +83,7 @@
  register: matrix_postgres_import_generic_sqlite_db_import_result
  changed_when: matrix_postgres_import_generic_sqlite_db_import_result.rc == 0

-- name: Archive SQLite database ({{ sqlite_database_path }} -> {{ sqlite_database_path }}.backup)
+- name: Archive SQLite database ({{ sqlite_database_path }} -> {{ sqlite_database_path }}.backup) # noqa name[template]
  ansible.builtin.command:
    cmd: "mv {{ sqlite_database_path }} {{ sqlite_database_path }}.backup"
  register: matrix_postgres_import_generic_sqlite_db_move_result

@@ -108,4 +108,5 @@
  async: "{{ postgres_import_wait_time }}"
  poll: 10
  register: matrix_postgres_import_postgres_command_result
- changed_when: matrix_postgres_import_postgres_command_result.rc == 0
+ failed_when: not matrix_postgres_import_postgres_command_result.finished
+ changed_when: matrix_postgres_import_postgres_command_result.finished and matrix_postgres_import_postgres_command_result.rc == 0
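The `failed_when`/`changed_when` adjustments in these hunks all converge on the same async pattern. A generic, self-contained sketch of that pattern follows; the command, timings and variable names are placeholders for illustration, not the playbook's actual task:

```yaml
- name: Run a long-running command asynchronously (illustrative placeholder)
  ansible.builtin.command:
    cmd: "/usr/local/bin/some-long-import"  # hypothetical command
  async: 3600   # allow up to an hour before the job is considered timed out
  poll: 10      # check on the background job every 10 seconds
  register: import_result
  # The registered result only carries a usable `rc` once the background job
  # has actually finished, so guard on `finished` before inspecting `rc`.
  failed_when: not import_result.finished
  changed_when: import_result.finished and import_result.rc == 0
```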
@@ -83,7 +83,7 @@
      --mount type=bind,src={{ matrix_synapse_config_dir_path }},dst=/data
      --mount type=bind,src={{ matrix_synapse_config_dir_path }},dst=/matrix-media-store-parent/media-store
      --mount type=bind,src={{ server_path_homeserver_db }},dst=/{{ server_path_homeserver_db | basename }}
-     {{ matrix_synapse_docker_image }}
+     {{ matrix_synapse_docker_image_final }}
      /usr/local/bin/synapse_port_db --sqlite-database /{{ server_path_homeserver_db | basename }} --postgres-config /data/homeserver.yaml
  register: matrix_postgres_import_synapse_sqlite_db_result
  changed_when: matrix_postgres_import_synapse_sqlite_db_result.rc == 0
@@ -118,7 +118,7 @@
  failed_when: false
  with_items: "{{ matrix_postgres_db_migration_request.systemd_services_to_stop }}"

- name: Import {{ matrix_postgres_db_migration_request.engine_old }} database from {{ matrix_postgres_db_migration_request.src }} into Postgres
- name: Import {{ matrix_postgres_db_migration_request.engine_old }} database from {{ matrix_postgres_db_migration_request.src }} into Postgres # noqa name[template]
  ansible.builtin.command:
    cmd: >-
      {{ matrix_host_command_docker }} run

@@ -158,7 +158,7 @@
  register: matrix_postgres_migrate_db_to_postgres_additional_queries_result
  changed_when: matrix_postgres_migrate_db_to_postgres_additional_queries_result.rc == 0

- name: Archive {{ matrix_postgres_db_migration_request.engine_old }} database ({{ matrix_postgres_db_migration_request.src }} -> {{ matrix_postgres_db_migration_request.src }}.backup)
- name: Archive {{ matrix_postgres_db_migration_request.engine_old }} database ({{ matrix_postgres_db_migration_request.src }} -> {{ matrix_postgres_db_migration_request.src }}.backup) # noqa name[template]
  ansible.builtin.command:
    cmd: "mv {{ matrix_postgres_db_migration_request.src }} {{ matrix_postgres_db_migration_request.src }}.backup"
  register: matrix_postgres_migrate_db_to_postgres_move_result

@@ -78,7 +78,8 @@
  async: "{{ postgres_vacuum_wait_time }}"
  poll: 10
  register: matrix_postgres_synapse_vacuum_result
  changed_when: matrix_postgres_synapse_vacuum_result.rc == 0
  failed_when: not matrix_postgres_synapse_vacuum_result.finished
  changed_when: matrix_postgres_synapse_vacuum_result.finished and matrix_postgres_synapse_vacuum_result.rc == 0

# Intentionally show the results
- ansible.builtin.debug: var="matrix_postgres_synapse_vacuum_result"

@@ -5,7 +5,7 @@

matrix_prometheus_enabled: false

matrix_prometheus_version: v2.38.0
matrix_prometheus_version: v2.39.1
matrix_prometheus_docker_image: "{{ matrix_container_global_registry_prefix }}prom/prometheus:{{ matrix_prometheus_version }}"
matrix_prometheus_docker_image_force_pull: "{{ matrix_prometheus_docker_image.endswith(':latest') }}"

@@ -7,18 +7,55 @@ matrix_synapse_enabled: true
matrix_synapse_container_image_self_build: false
matrix_synapse_container_image_self_build_repo: "https://github.com/matrix-org/synapse.git"

# matrix_synapse_container_image_customizations_enabled controls whether a customized Synapse image will be built.
#
# We toggle this variable to `true` when certain features which require a custom build are enabled.
# Feel free to toggle this to `true` yourself and specify build steps in `matrix_synapse_container_image_customizations_dockerfile_body_custom`.
#
# See:
# - `roles/matrix-synapse/templates/synapse/customizations/Dockerfile.j2`
# - `matrix_synapse_container_image_customizations_dockerfile_body_custom`
# - `matrix_synapse_docker_image_customized`
# - `matrix_synapse_docker_image_final`
matrix_synapse_container_image_customizations_enabled: "{{ matrix_synapse_ext_synapse_s3_storage_provider_enabled }}"

# Controls whether custom build steps will be added to the Dockerfile for installing s3-storage-provider.
# The version that will be installed is specified in `matrix_synapse_ext_synapse_s3_storage_provider_version`.
matrix_synapse_container_image_customizations_s3_storage_provider_installation_enabled: "{{ matrix_synapse_ext_synapse_s3_storage_provider_enabled }}"

# matrix_synapse_container_image_customizations_dockerfile_body_custom contains your custom Dockerfile steps
# for building your customized Synapse image based on the original (upstream) image (`matrix_synapse_docker_image`).
# A `FROM ...` clause is included automatically so you don't have to.
#
# Example:
# matrix_synapse_container_image_customizations_dockerfile_body_custom: |
#   RUN echo 'This is a custom step for building the customized Docker image for Synapse.'
#   RUN echo 'You can override matrix_synapse_container_image_customizations_dockerfile_body_custom to add your own steps.'
#   RUN echo 'You do NOT need to include a FROM clause yourself.'
matrix_synapse_container_image_customizations_dockerfile_body_custom: ''

matrix_synapse_docker_image: "{{ matrix_synapse_docker_image_name_prefix }}matrixdotorg/synapse:{{ matrix_synapse_docker_image_tag }}"
matrix_synapse_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_synapse_version: v1.68.0
matrix_synapse_version: v1.69.0
matrix_synapse_docker_image_tag: "{{ matrix_synapse_version }}"
matrix_synapse_docker_image_force_pull: "{{ matrix_synapse_docker_image.endswith(':latest') }}"

# matrix_synapse_docker_image_customized is the name of the locally built Synapse image
# which adds various customizations on top of the original (upstream) Synapse image.
# This image is based on the upstream `matrix_synapse_docker_image` image and is only built if `matrix_synapse_container_image_customizations_enabled: true`.
matrix_synapse_docker_image_customized: "localhost/matrixdotorg/synapse:{{ matrix_synapse_docker_image_tag }}-customized"

# matrix_synapse_docker_image_final holds the name of the Synapse image to run, depending on whether or not customizations are enabled.
matrix_synapse_docker_image_final: "{{ matrix_synapse_docker_image_customized if matrix_synapse_container_image_customizations_enabled else matrix_synapse_docker_image }}"

matrix_synapse_base_path: "{{ matrix_base_data_path }}/synapse"
matrix_synapse_docker_src_files_path: "{{ matrix_synapse_base_path }}/docker-src"
matrix_synapse_customized_docker_src_files_path: "{{ matrix_synapse_base_path }}/customized-docker-src"
matrix_synapse_config_dir_path: "{{ matrix_synapse_base_path }}/config"
matrix_synapse_storage_path: "{{ matrix_synapse_base_path }}/storage"
matrix_synapse_media_store_path: "{{ matrix_synapse_storage_path }}/media-store"
matrix_synapse_ext_path: "{{ matrix_synapse_base_path }}/ext"
matrix_synapse_ext_s3_storage_provider_path: "{{ matrix_synapse_base_path }}/ext/s3-storage-provider"

matrix_synapse_container_client_api_port: 8008

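For orientation, here is how these image variables might resolve with customizations enabled, assuming the default `docker.io/` registry prefix and no self-building; the exact values depend on your configuration:

```yaml
# Hypothetical resolved values - illustrative only.
matrix_synapse_docker_image: "docker.io/matrixdotorg/synapse:v1.69.0"
matrix_synapse_docker_image_customized: "localhost/matrixdotorg/synapse:v1.69.0-customized"
# With matrix_synapse_container_image_customizations_enabled: true,
# matrix_synapse_docker_image_final points at the customized image:
matrix_synapse_docker_image_final: "localhost/matrixdotorg/synapse:v1.69.0-customized"
```
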
@@ -754,6 +791,32 @@ matrix_synapse_ext_encryption_config_yaml: |
  patch_power_levels: {{ matrix_synapse_ext_encryption_disabler_patch_power_levels | to_json }}


# matrix_synapse_ext_synapse_s3_storage_provider_enabled controls whether to enable https://github.com/matrix-org/synapse-s3-storage-provider
# Installing it requires building a customized Docker image for Synapse (see `matrix_synapse_container_image_customizations_enabled`).
# Enabling this will enable customizations and inject the appropriate Dockerfile clauses for installing synapse-s3-storage-provider.
matrix_synapse_ext_synapse_s3_storage_provider_enabled: false
matrix_synapse_ext_synapse_s3_storage_provider_version: 1.1.2
# Controls whether media from this (local) server is stored in s3-storage-provider
matrix_synapse_ext_synapse_s3_storage_provider_store_local: true
# Controls whether media from remote servers is stored in s3-storage-provider
matrix_synapse_ext_synapse_s3_storage_provider_store_remote: true
# Controls whether files are stored to S3 at the same time they are stored on the local filesystem.
# For slightly improved reliability, consider setting this to `true`.
# Even with asynchronous uploading to S3 (`false` value), data loss shouldn't be possible,
# because the local filesystem is a reliable data store anyway.
matrix_synapse_ext_synapse_s3_storage_provider_store_synchronous: false
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: ''
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: ''
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: ''
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: ''
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: ''
matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class: STANDARD
matrix_synapse_ext_synapse_s3_storage_provider_config_threadpool_size: 40
# matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count is a day value (number) for the `s3_media_upload update-db` command.
# It specifies how long (in days) files need to have been inactive before they become eligible for migration from the local filesystem to the S3 data store.
# By default, we use `0`, which says "all files are eligible for migration".
matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count: 0

matrix_s3_media_store_enabled: false
matrix_s3_media_store_custom_endpoint_enabled: false
matrix_s3_goofys_docker_image: "ewoutp/goofys:latest"

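Taken together, a minimal `vars.yml` excerpt for enabling s3-storage-provider might look as follows; the bucket, region, endpoint and credentials are placeholders, not defaults:

```yaml
# Placeholders throughout - substitute your own bucket, region, endpoint and credentials.
matrix_synapse_ext_synapse_s3_storage_provider_enabled: true
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: your-bucket-name
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: some-region-1
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: https://s3.example.com
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: ACCESS_KEY_ID_HERE
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: SECRET_ACCESS_KEY_HERE
```

The validation tasks further below require the bucket, region name, access key and secret key to be non-empty; the endpoint URL is only checked for an `http://`/`https://` prefix when set.
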
@@ -798,6 +861,30 @@ matrix_synapse_spam_checker: []
# Certain Synapse extensions that you can enable below auto-inject themselves into `matrix_synapse_modules` at runtime.
matrix_synapse_modules: []

# matrix_synapse_media_storage_providers contains the Synapse `media_storage_providers` configuration setting.
# To add your own custom `media_storage_providers`, use `matrix_synapse_media_storage_providers_custom`.
matrix_synapse_media_storage_providers: "{{ matrix_synapse_media_storage_providers_auto + matrix_synapse_media_storage_providers_custom }}"

# matrix_synapse_media_storage_providers_auto contains a list of storage providers that are added by the playbook based on other configuration.
matrix_synapse_media_storage_providers_auto: |
  {{
    []
    +
    [
      lookup('ansible.builtin.template', role_path + '/templates/synapse/ext/s3-storage-provider/media_storage_provider.yaml.j2') | from_yaml
    ] if matrix_synapse_ext_synapse_s3_storage_provider_enabled else []
  }}

# matrix_synapse_media_storage_providers_custom contains your own custom list of storage providers.
# You're meant to define each custom module as valid keys and values, not as a YAML string that needs to be parsed.
#
# Example:
# matrix_synapse_media_storage_providers_custom:
#   - module: module.SomeModule
#     store_local: True
#     # ...
matrix_synapse_media_storage_providers_custom: []

matrix_synapse_encryption_enabled_by_default_for_room_type: "off"

matrix_synapse_trusted_key_servers:

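As a sketch of how the merge behaves: with s3-storage-provider enabled and one hypothetical custom provider defined, `matrix_synapse_media_storage_providers` would evaluate to something like the list below (values are placeholders).

```yaml
# Illustrative evaluation only - the first entry comes from media_storage_provider.yaml.j2
# (the auto-added provider), the second from matrix_synapse_media_storage_providers_custom.
- module: s3_storage_provider.S3StorageProviderBackend
  store_local: true
  store_remote: true
  store_synchronous: false
  config:
    bucket: your-bucket-name  # placeholder
    # ... remaining s3-storage-provider settings ...
- module: module.SomeModule   # hypothetical custom provider
  store_local: true
```
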
@@ -0,0 +1,5 @@
---

- ansible.builtin.set_fact:
    matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-synapse-s3-storage-provider-migrate.timer'] }}"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

roles/matrix-synapse/tasks/ext/s3-storage-provider/setup.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
---

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/validate_config.yml"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/setup_install.yml"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/setup_uninstall.yml"
  when: not matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

@@ -0,0 +1,52 @@
---

# We install this into Synapse by making `matrix_synapse_ext_synapse_s3_storage_provider_enabled` influence other variables:
# - `matrix_synapse_media_storage_providers` (via `matrix_synapse_media_storage_providers_auto`)
# - `matrix_synapse_container_image_customizations_enabled`
# - `matrix_synapse_container_image_customizations_s3_storage_provider_installation_enabled`
#
# Below are additional tasks for setting up various helper scripts, etc.

- name: Ensure s3-storage-provider env file installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/env.j2"
    dest: "{{ matrix_synapse_ext_s3_storage_provider_path }}/env"
    mode: 0640

- name: Ensure s3-storage-provider data path exists
  ansible.builtin.file:
    path: "{{ matrix_synapse_ext_s3_storage_provider_path }}/data"
    state: directory
    mode: 0750
    owner: "{{ matrix_user_username }}"
    group: "{{ matrix_user_groupname }}"

- name: Ensure s3-storage-provider database.yaml file installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/database.yaml.j2"
    dest: "{{ matrix_synapse_ext_s3_storage_provider_path }}/data/database.yaml"
    mode: 0640

- name: Ensure s3-storage-provider scripts installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/usr-local-bin/{{ item }}.j2"
    dest: "{{ matrix_local_bin_path }}/{{ item }}"
    mode: 0750
  with_items:
    - matrix-synapse-s3-storage-provider-shell
    - matrix-synapse-s3-storage-provider-migrate

- name: Ensure matrix-synapse-s3-storage-provider-migrate.service and timer are installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/systemd/{{ item }}.j2"
    dest: "{{ matrix_systemd_path }}/{{ item }}"
    mode: 0640
  with_items:
    - matrix-synapse-s3-storage-provider-migrate.service
    - matrix-synapse-s3-storage-provider-migrate.timer
  register: matrix_synapse_s3_storage_provider_systemd_service_result

- name: Ensure systemd reloaded after matrix-synapse-s3-storage-provider-migrate.service installation
  ansible.builtin.service:
    daemon_reload: true
  when: matrix_synapse_s3_storage_provider_systemd_service_result.changed | bool

@@ -0,0 +1,24 @@
---

- name: Ensure matrix-synapse-s3-storage-provider-migrate.service and timer don't exist
  ansible.builtin.file:
    path: "{{ matrix_systemd_path }}/{{ item }}"
    state: absent
  with_items:
    - matrix-synapse-s3-storage-provider-migrate.timer
    - matrix-synapse-s3-storage-provider-migrate.service
  register: matrix_synapse_s3_storage_provider_migrate_service_removal

- name: Ensure systemd reloaded after matrix-synapse-s3-storage-provider-migrate.service removal
  ansible.builtin.service:
    daemon_reload: true
  when: matrix_synapse_s3_storage_provider_migrate_service_removal.changed | bool

- name: Ensure s3-storage-provider files don't exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  with_items:
    - "{{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-shell"
    - "{{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-migrate"
    - "{{ matrix_synapse_ext_s3_storage_provider_path }}"

@@ -0,0 +1,18 @@
---

- name: Fail if required s3-storage-provider settings not defined
  ansible.builtin.fail:
    msg: >-
      You need to define a required configuration setting (`{{ item }}`) for using s3-storage-provider.
  when: "vars[item] == ''"
  with_items:
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_bucket"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_region_name"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key"

- name: Fail if matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url looks invalid
  ansible.builtin.fail:
    msg: >-
      `matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url` needs to look like a URL (`http://` or `https://` prefix).
  when: "matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url != '' and not matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url.startswith('http')"

@@ -11,3 +11,5 @@
- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/synapse-simple-antispam/setup.yml"

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/mjolnir-antispam/setup.yml"

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/setup.yml"

@@ -26,6 +26,9 @@
    matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-goofys.service'] }}"
  when: matrix_s3_media_store_enabled | bool

- ansible.builtin.include_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/init.yml"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

- when: matrix_synapse_enabled | bool and matrix_synapse_metrics_proxying_enabled | bool
  block:
    - name: Fail if matrix-nginx-proxy role already executed

@@ -21,7 +21,8 @@
  async: "{{ matrix_synapse_rust_synapse_compress_state_compress_room_time }}"
  poll: 10
  register: matrix_synapse_rust_synapse_compress_state_compress_room_command_result
  changed_when: matrix_synapse_rust_synapse_compress_state_compress_room_command_result.rc == 0
  failed_when: not matrix_synapse_rust_synapse_compress_state_compress_room_command_result.finished
  changed_when: matrix_synapse_rust_synapse_compress_state_compress_room_command_result.finished and matrix_synapse_rust_synapse_compress_state_compress_room_command_result.rc == 0

- ansible.builtin.debug: var="matrix_synapse_rust_synapse_compress_state_compress_room_command_result"

@@ -44,7 +45,8 @@
  async: "{{ matrix_synapse_rust_synapse_compress_state_psql_import_time }}"
  poll: 10
  register: matrix_synapse_rust_synapse_compress_state_psql_import_command_result
  changed_when: matrix_synapse_rust_synapse_compress_state_psql_import_command_result.rc == 0
  failed_when: not matrix_synapse_rust_synapse_compress_state_psql_import_command_result.finished
  changed_when: matrix_synapse_rust_synapse_compress_state_psql_import_command_result.finished and matrix_synapse_rust_synapse_compress_state_psql_import_command_result.rc == 0

- name: Clean up
  ansible.builtin.file:

@@ -70,6 +70,7 @@
  async: "{{ matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time }}"
  poll: 10
  register: matrix_synapse_rust_synapse_compress_state_find_rooms_command_result
  failed_when: not matrix_synapse_rust_synapse_compress_state_find_rooms_command_result.finished
  changed_when: false

# We expect the output to be like this:

@@ -11,6 +11,8 @@
  - {path: "{{ matrix_synapse_config_dir_path }}", when: true}
  - {path: "{{ matrix_synapse_ext_path }}", when: true}
  - {path: "{{ matrix_synapse_docker_src_files_path }}", when: "{{ matrix_synapse_container_image_self_build }}"}
  - {path: "{{ matrix_synapse_customized_docker_src_files_path }}", when: "{{ matrix_synapse_container_image_customizations_enabled }}"}
  - {path: "{{ matrix_synapse_ext_s3_storage_provider_path }}", when: "{{ matrix_synapse_ext_synapse_s3_storage_provider_enabled }}"}
  # We handle matrix_synapse_media_store_path elsewhere (in ./synapse/setup_install.yml),
  # because if it's using Goofys and it's already mounted (from before),
  # trying to chown/chmod it here will cause trouble.

@@ -62,6 +62,25 @@
  delay: "{{ matrix_container_retries_delay }}"
  until: result is not failed

- when: "matrix_synapse_container_image_customizations_enabled | bool"
  block:
    - name: Ensure customizations Dockerfile is created
      ansible.builtin.template:
        src: "{{ role_path }}/templates/synapse/customizations/Dockerfile.j2"
        dest: "{{ matrix_synapse_customized_docker_src_files_path }}/Dockerfile"
        owner: "{{ matrix_user_username }}"
        group: "{{ matrix_user_groupname }}"
        mode: 0640

    - name: Ensure customized Docker image for Synapse is built
      docker_image:
        name: "{{ matrix_synapse_docker_image_customized }}"
        source: build
        build:
          dockerfile: Dockerfile
          path: "{{ matrix_synapse_customized_docker_src_files_path }}"
          pull: true

- name: Check if a Synapse signing key exists
  ansible.builtin.stat:
    path: "{{ matrix_synapse_config_dir_path }}/{{ matrix_server_fqn_matrix }}.signing.key"

@@ -27,8 +27,11 @@

- name: Ensure Synapse Docker image doesn't exist
  docker_image:
    name: "{{ matrix_synapse_docker_image }}"
    name: "{{ item }}"
    state: absent
  with_items:
    - "{{ matrix_synapse_docker_image_final }}"
    - "{{ matrix_synapse_docker_image }}"

- name: Ensure sample prometheus.yml for external scraping is deleted
  ansible.builtin.file:

@@ -0,0 +1,7 @@
FROM {{ matrix_synapse_docker_image }}

{% if matrix_synapse_container_image_customizations_s3_storage_provider_installation_enabled %}
RUN pip install synapse-s3-storage-provider=={{ matrix_synapse_ext_synapse_s3_storage_provider_version }}
{% endif %}

{{ matrix_synapse_container_image_customizations_dockerfile_body_custom }}

@@ -0,0 +1,5 @@
user: {{ matrix_synapse_database_user | to_json }}
password: {{ matrix_synapse_database_password | to_json }}
database: {{ matrix_synapse_database_database | to_json }}
host: {{ matrix_synapse_database_host | to_json }}
port: {{ matrix_synapse_database_port | to_json }}

@@ -0,0 +1,11 @@
AWS_ACCESS_KEY_ID={{ matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id }}
AWS_SECRET_ACCESS_KEY={{ matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key }}
AWS_DEFAULT_REGION={{ matrix_synapse_ext_synapse_s3_storage_provider_config_region_name }}

ENDPOINT={{ matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url }}
BUCKET={{ matrix_synapse_ext_synapse_s3_storage_provider_config_bucket }}
STORAGE_CLASS={{ matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class }}

MEDIA_PATH=/matrix-media-store-parent/{{ matrix_synapse_media_store_directory_name }}

UPDATE_DB_DURATION={{ matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count }}d

@@ -0,0 +1,14 @@
module: s3_storage_provider.S3StorageProviderBackend
store_local: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_local | to_json }}
store_remote: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_remote | to_json }}
store_synchronous: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_synchronous | to_json }}
config:
  bucket: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_bucket | to_json }}
  region_name: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_region_name | to_json }}
  endpoint_url: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url | to_json }}
  access_key_id: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id | to_json }}
  secret_access_key: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key | to_json }}

  storage_class: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class | to_json }}

  threadpool_size: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_threadpool_size | to_json }}

@@ -0,0 +1,7 @@
[Unit]
Description=Migrates locally-stored Synapse media store files to S3

[Service]
Type=oneshot
Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStart={{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-migrate

@@ -0,0 +1,9 @@
[Unit]
Description=Migrates locally-stored Synapse media store files to S3

[Timer]
Unit=matrix-synapse-s3-storage-provider-migrate.service
OnCalendar=*-*-* 05:00:00

[Install]
WantedBy=timers.target

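The timer fires daily at 05:00. If you want to run a migration immediately instead of waiting for the timer, an ad-hoc task along the following lines could start the oneshot service on demand; this is a sketch outside the playbook's normal service management, not something the playbook itself does.

```yaml
# Illustrative only: manually trigger one migration run via the installed oneshot unit.
- name: Trigger an immediate Synapse media migration to S3
  ansible.builtin.systemd:
    name: matrix-synapse-s3-storage-provider-migrate.service
    state: started
```
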
@@ -0,0 +1,13 @@
#jinja2: lstrip_blocks: "True"
#!/bin/bash

{{ matrix_host_command_docker }} run \
    --rm \
    --env-file={{ matrix_synapse_ext_s3_storage_provider_path }}/env \
    --mount type=bind,src={{ matrix_synapse_storage_path }},dst=/matrix-media-store-parent,bind-propagation=slave \
    --mount type=bind,src={{ matrix_synapse_ext_s3_storage_provider_path }}/data,dst=/data \
    --workdir=/data \
    --network={{ matrix_docker_network }} \
    --entrypoint=/bin/bash \
    {{ matrix_synapse_docker_image_final }} \
    -c 's3_media_upload update-db $UPDATE_DB_DURATION && s3_media_upload --no-progress check-deleted $MEDIA_PATH && s3_media_upload --no-progress upload $MEDIA_PATH $BUCKET --delete --storage-class $STORAGE_CLASS --endpoint-url $ENDPOINT'

@@ -0,0 +1,13 @@
#jinja2: lstrip_blocks: "True"
#!/bin/bash

{{ matrix_host_command_docker }} run \
    -it \
    --rm \
    --env-file={{ matrix_synapse_ext_s3_storage_provider_path }}/env \
    --mount type=bind,src={{ matrix_synapse_storage_path }},dst=/matrix-media-store-parent,bind-propagation=slave \
    --mount type=bind,src={{ matrix_synapse_ext_s3_storage_provider_path }}/data,dst=/data \
    --workdir=/data \
    --network={{ matrix_docker_network }} \
    --entrypoint=/bin/bash \
    {{ matrix_synapse_docker_image_final }}

@@ -1030,6 +1030,7 @@ media_store_path: "/matrix-media-store-parent/{{ matrix_synapse_media_store_dire
# store_synchronous: false
# config:
#   directory: /mnt/some/other/directory
media_storage_providers: {{ matrix_synapse_media_storage_providers | to_json }}

# The largest allowed upload size in bytes
#

@@ -42,7 +42,7 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name {{ matrix_synapse_wor
{% for arg in matrix_synapse_container_arguments %}
{{ arg }} \
{% endfor %}
{{ matrix_synapse_docker_image }} \
{{ matrix_synapse_docker_image_final }} \
run -m synapse.app.{{ matrix_synapse_worker_details.app }} -c /data/homeserver.yaml -c /data/{{ matrix_synapse_worker_config_file_name }}

@@ -60,7 +60,7 @@ ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-synapse \
{% for arg in matrix_synapse_container_arguments %}
{{ arg }} \
{% endfor %}
{{ matrix_synapse_docker_image }} \
{{ matrix_synapse_docker_image_final }} \
run -m synapse.app.homeserver -c /data/homeserver.yaml

ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-synapse 2>/dev/null || true'