As of docker-jitsi-meet stable-6433 [1], `/config/interface_config.js`
is regenerated on every boot. The correct way to modify the interface
config is now via `/config/custom-interface_config.js`, which is
appended to a default copy of `interface_config.js` by
`/etc/cont-init.d/10-config` each time the container starts.
Given that `interface_config.js` is considered deprecated by upstream
(all options will eventually be moved to `config.js`), we also deprecate
the `matrix_jitsi_web_interface_config_*` variables in favour of
`matrix_jitsi_web_custom_interface_config_extension`.
[1] https://github.com/jitsi/docker-jitsi-meet/blob/stable-6433/CHANGELOG.md#stable-6433
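For illustration, the new-style configuration could look something like this in your Ansible vars. Presumably the role writes this content into `/config/custom-interface_config.js`; the specific `interfaceConfig` options shown are only examples, not required settings:

```yaml
# Sketch: extend Jitsi's interface configuration via the new variable.
# The options below are illustrative; use whichever interface_config.js
# options you actually need.
matrix_jitsi_web_custom_interface_config_extension: |
  interfaceConfig.SHOW_JITSI_WATERMARK = false;
  interfaceConfig.SHOW_WATERMARK_FOR_GUESTS = false;
```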
The check was looking for an empty string in `matrix_jitsi_prosody_auth_internal_accounts`,
which is unlikely to ever be the case. We should be checking for an empty list instead.
The check was also not validating the username/password values themselves, so telling the user
that they need a non-empty username/password was misleading. It was merely checking
that there's at least one entry in the list.
This patch adjusts the check and its message accordingly.
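As a rough sketch (the task name and message wording are illustrative, not the playbook's exact task), the adjusted check amounts to something like:

```yaml
# matrix_jitsi_prosody_auth_internal_accounts is a list of entries like
# {username: ..., password: ...}, so the sensible test is for an empty list.
- name: Fail if no internal authentication accounts are defined
  ansible.builtin.fail:
    msg: >-
      Authentication is enabled, but `matrix_jitsi_prosody_auth_internal_accounts`
      contains no accounts. Define at least one entry (username and password).
  when: matrix_jitsi_prosody_auth_internal_accounts | length == 0
```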
Until now, we were leaving services "enabled"
(symlinks in /etc/systemd/system/multi-user.target.wants/).
We clean these up now. Broken symlinks may still exist in older
installations that have enabled/disabled services. We don't attempt
to fix those up, as it's just a cosmetic defect anyway.
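As a sketch, the cleanup amounts to an Ansible task along these lines (the service name is a placeholder; the playbook's actual tasks and service names may differ):

```yaml
# Remove the stale "enabled" symlink left behind for a service we start ourselves.
- name: Ensure matrix-jitsi-web.service is not left enabled via a stale symlink
  ansible.builtin.file:
    path: /etc/systemd/system/multi-user.target.wants/matrix-jitsi-web.service
    state: absent
```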
I changed the conditional statement in the Prosody systemd service template to bind the
localhost port by default when `matrix_nginx_proxy_enabled` is set to `false`.
This should make it the default behaviour now.
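A hypothetical sketch of what the adjusted conditional could look like inside the container invocation in the systemd unit template (the port number 5280 and the flag layout are assumptions, not the project's exact template):

```jinja
{# Publish Prosody's HTTP/websocket port on localhost only when the playbook's
   own nginx proxy is disabled, so a self-managed reverse proxy can reach it. #}
{% if not matrix_nginx_proxy_enabled %}
--publish 127.0.0.1:5280:5280 \
{% endif %}
```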
Jitsi Meet now enables websockets by default, claiming better reliability.
The matrix-nginx-proxy configuration has been set up according to the
Prosody documentation: https://prosody.im/doc/websocket
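For reference, websocket reverse-proxying as described in that document follows the usual nginx upgrade-header pattern, roughly like the sketch below (the upstream address and location path are assumptions about this particular setup):

```nginx
# Proxy XMPP websocket connections to Prosody, keeping the connection upgraded.
location /xmpp-websocket {
    proxy_pass http://127.0.0.1:5280/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 900s;
}
```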
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and that cleanup fails, with the failure going unreported because of the
error suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
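As a sketch, the suppressed cleanup in a systemd unit template could look like this (the container name is a placeholder; the `-` prefix makes systemd ignore the non-zero exit code, and `2>/dev/null` keeps the "No such container" noise out of the journal):

```
ExecStartPre=-/usr/bin/env sh -c 'docker kill something 2>/dev/null'
ExecStartPre=-/usr/bin/env sh -c 'docker rm something 2>/dev/null'
```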