* add timeout param for nginx proxy
The default value of `matrix_nginx_proxy_request_timeout` is 60s
* default `matrix_nginx_proxy_request_timeout` - 60s
* a few more variables for request timeout
* Update nginx.conf.j2
* Update nginx.conf.j2
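For illustration, overriding this in `vars.yml` might look like the sketch below. Only `matrix_nginx_proxy_request_timeout` and its 60s default are named above; whether the role expects a bare number of seconds or an nginx-style `"60s"` string is an assumption:

```yaml
# Illustrative vars.yml sketch; the value format ("300s" vs. 300) is an assumption.
matrix_nginx_proxy_request_timeout: 300  # raise the proxy request timeout above the 60s default
```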
It's mildly annoying when trying to execute these scripts while logged
in as a regular user, as the missing execute permissions hinder
autocompletion, even when trying to run them with sudo.
These shell scripts don't contain secrets, but may fail when run by a
regular user. The failure is due to the lack of access to the /matrix
directory and does not result in any damage.
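A hypothetical sketch of the corresponding fix, assuming a script path and mode that are not taken from the playbook:

```yaml
# Illustrative task only; the real script paths and chosen mode may differ.
- name: Make the maintenance script executable by regular users too
  ansible.builtin.file:
    path: /matrix/synapse/bin/register-user  # hypothetical script under /matrix
    mode: "0755"  # execute bit for everyone, so shell autocompletion works before reaching for sudo
```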
Jitsi-meet enabled websockets by default, claiming better reliability.
The matrix-nginx-proxy configuration has been set up according to the
Prosody documentation: https://prosody.im/doc/websocket
**HSTS Preloading:**
In its strongest and recommended form, the [HSTS policy](https://www.chromium.org/hsts) includes all subdomains, and indicates a willingness to be “preloaded” into browsers:
`Strict-Transport-Security: max-age=31536000; includeSubDomains; preload`
**X-XSS-Protection:**
`1; mode=block`, which tells the browser to block the response if it detects an attack, rather than sanitising the script.
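As a sketch, such headers could be driven by playbook variables along these lines. The variable names below are assumptions for illustration; only the header values come from the text above:

```yaml
# Hypothetical variable names; only the header values are taken from the description above.
matrix_nginx_proxy_hsts_preload_enabled: true       # emit "max-age=31536000; includeSubDomains; preload"
matrix_nginx_proxy_xss_protection: "1; mode=block"  # emit the X-XSS-Protection value described above
```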
This variable was previously undefined in the role and was only getting
defined via `group_vars/matrix_servers`.
We now properly initialize it (with a good default value) in the role
itself.
Self-checks against the .well-known URIs look for the HTTP header
"Access-Control-Allow-Origin" indicating that the remode endpoint
supports CORS. But the remote server is not required to include
said header in the response if the HTTP request does not include
the "Origin" header. This is in accordance with the specification
[1] stating: 'A CORS request is an HTTP request that includes an
"Origin" header.'
This is in fact the case for GitLab Pages hosting, which is how the
issue was identified.
Let's specify the "Origin" header in the respective uri tasks performing
the HTTP request, ensuring they make a proper CORS request.
[1] https://fetch.spec.whatwg.org/#http-requests
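For illustration, a self-check uri task sending a proper CORS request might look like the sketch below. The task name, URL, and registered variable are assumptions; the `Origin` header is the actual change being described:

```yaml
# Illustrative sketch of a .well-known self-check performing a CORS request.
- name: Check the .well-known client file
  ansible.builtin.uri:
    url: "https://{{ matrix_domain }}/.well-known/matrix/client"
    return_content: true
    headers:
      Origin: "https://example.com"  # any Origin makes this a CORS request, so the server should reply with Access-Control-Allow-Origin
  register: result_wellknown_client
```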
We have a flow like this:
1. matrix.DOMAIN vhost (matrix-domain.conf)
2. matrix-synapse vhost (matrix-synapse.conf); or matrix-corporal container, if enabled
3. (optional) matrix-synapse vhost (matrix-synapse.conf), if matrix-corporal is enabled
4. matrix-synapse container
We are setting `X-Forwarded-For` correctly in step #1, but were
overwriting it in step #2 with something inaccurate.
Not doing anything in step #2 is better than doing the wrong thing.
It's probably best if we append another reverse-proxy address there,
although what we're doing now (with this patch) seems to yield
the correct result (when matrix-corporal is not enabled).
When matrix-corporal is enabled, we still seem to do the wrong thing for
some reason. It's something to be fixed later on.
People who were disabling matrix-nginx-proxy (in favor of their own
nginx webserver) and also overriding `matrix_federation_public_port`
found that the generated nginx configuration still hardcoded `8448`,
which forced their nginx server to use that port, regardless of
`matrix_federation_public_port` pointing elsewhere.
We now allow for the in-container federation port to be configurable,
and also automatically wire things properly.
We're talking about a webserver running on the same machine, which
imports the configuration files generated by the `matrix-nginx-proxy` role
into the `/matrix/nginx-proxy/conf.d` directory.
Users who run an nginx webserver on some other machine will need to do
something different.
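A `vars.yml` sketch of the scenario being fixed might look like this. The exact name of the new variable controlling the in-container federation port is not given above, so it's omitted; `matrix_nginx_proxy_enabled` is assumed to be the relevant toggle:

```yaml
# Illustrative sketch of the "own nginx webserver on the same machine" setup.
matrix_nginx_proxy_enabled: false     # assumed toggle for disabling the bundled nginx proxy
matrix_federation_public_port: 8449   # hypothetical non-default public federation port
```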
This gives us the possibility to run multiple instances of
workers that don't expose a port.
Right now, we don't support that, but in the future we could
run multiple `federation_sender` or `pusher` workers, without
them fighting over naming (previously, they'd all be named
something like `matrix-synapse-worker-pusher-0`, because
they'd all define `port` as `0`).
I felt that adding another variable was probably going to be the easiest way to do this. I may end up adding another variable to enable this feature, for consistency with some of the other things.
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
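A sketch of such a defensive cleanup step, with its expected failure output suppressed. The container name and task wording are illustrative; the kill/rm commands and the `2>/dev/null` suppression follow the description above:

```yaml
# Illustrative sketch; the container name is hypothetical.
- name: Defensively kill and remove a possibly-stuck container
  ansible.builtin.shell: |
    docker kill matrix-synapse 2>/dev/null
    docker rm matrix-synapse 2>/dev/null
  failed_when: false  # in the good case there's nothing to kill/remove, so a non-zero exit is expected
```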
This replaces the `docker exec` method of spawning
Synapse workers inside the `matrix-synapse` container with
dedicated containers for each worker.
We also have dedicated systemd services for each worker,
so things are now:
- more consistent with everything else (we don't use systemd
instantiated services anywhere)
- we don't need the "parse systemd instance name into worker name +
port" part
- we don't need to keep track of PIDs manually
- we don't need jq (fewer dependencies)
- workers that die get restarted by systemd correctly, like any other
  service
- `docker ps` shows each worker separately and we can observe resource
usage