Related to https://gitlab.com/mx-puppet/discord/mx-puppet-discord/-/issues/117
It looks like the bridge starts too quickly and fails to initialize
itself by connecting to Synapse. This is mostly observed after a system
reboot, when Synapse (and everything else) is slower to start.
Once mx-puppet-discord fails to initialize itself, a "No relay found"
error will be observed any time you try to relay a Matrix message to
Discord. Relaying messages in the other direction (Discord to Matrix)
also fails.
With this workaround (longer delay on mx-puppet-discord startup), I
observe mx-puppet-discord working well, even after a full reboot.
Of course, a proper fix is preferable, instead of delaying by a magic
number of seconds.
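For reference, here's a minimal sketch of what such a startup delay could look like in the mx-puppet-discord systemd unit. The `ExecStartPre` approach and the 15-second value are assumptions for illustration, not a tested recommendation:

```
[Service]
# Assumed workaround: give Synapse (and the rest of the stack) some time
# to come up before the bridge starts. 15 is an arbitrary "magic" number.
ExecStartPre=/bin/sleep 15
```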
People often report and ask about these "failures".
It happened more often previously, when the `docker kill/rm` output was
collected, but it still happens now when people run `systemctl status
matrix-something` and notice that it says "FAILURE".
Suppressing these "failures", to avoid further time being wasted on
saying "this is expected".
Reverts b1b4ba501f, 90c9801c56, a3c84f78ca, ..
I haven't really traced it (yet), but on some servers, I'm observing
`ansible-playbook ... --tags=start` completing very slowly, waiting
to stop services. I can't reproduce this on all Matrix servers I manage.
I suspect that either the systemd version is to blame or that some
specific service is not responding well to some `docker kill/rm` command.
`ExecStop` seems to work great in all cases and it's what we've been
using for a very long time, so I'm reverting to that.
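For context, this is roughly the `ExecStop` shape we're going back to. The container name and binary invocation are illustrative, not copied from a specific unit:

```
[Service]
# Defensive cleanup on stop; the `-` prefix tells systemd not to treat
# a non-zero exit code from these commands as a unit failure.
ExecStop=-/usr/bin/env docker kill matrix-something
ExecStop=-/usr/bin/env docker rm matrix-something
```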
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and that cleanup fails, but we don't report it because of the error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
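A sketch of what the suppression mentioned above could look like in a unit file. The container name is a placeholder and the exact command wrapping is an assumption:

```
[Service]
# Discard the "No such container" noise from the defensive cleanup;
# the `-` prefix keeps a non-zero exit code from marking the unit as failed.
ExecStartPre=-/usr/bin/env sh -c 'docker kill matrix-something 2>/dev/null'
ExecStartPre=-/usr/bin/env sh -c 'docker rm matrix-something 2>/dev/null'
```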
We already log everything in systemd/journald for every service,
so there's no need for double-logging, bridges rotating their own
log files, and other such nonsense.
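With journald as the single log destination, following a service's logs stays simple (the service name below is just an example):

```
journalctl -fu matrix-mautrix-telegram.service
```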
If a service is enabled, a database for it is created in Postgres with a unique password. The service can then use this database for data storage instead of relying on SQLite.
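Roughly what that provisioning amounts to, expressed as psql statements. The role/database name and the password are made up for illustration; the real values come from the playbook's own variables:

```
psql -U postgres -c "CREATE ROLE matrix_somebridge LOGIN PASSWORD 'some-generated-unique-password';"
psql -U postgres -c "CREATE DATABASE matrix_somebridge OWNER matrix_somebridge;"
```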
The Docker 19.03 -> 20.10 upgrade contains the following change
in `/usr/lib/systemd/system/docker.service`:
```
-BindsTo=containerd.service
-After=network-online.target firewalld.service containerd.service
+After=network-online.target firewalld.service containerd.service multi-user.target
-Requires=docker.socket
+Requires=docker.socket containerd.service
Wants=network-online.target
```
The `multi-user.target` ordering dependency added to `After` seems to
conflict with our `WantedBy=multi-user.target` and `After=docker.service` /
`Requires=docker.service` definitions, causing the following error on
startup for all of our systemd services:
> Job matrix-synapse.service/start deleted to break ordering cycle starting with multi-user.target/start
A workaround which appears to work is to add `DefaultDependencies=no`
to all of our services.
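Here's how the workaround would look in one of our unit files (matrix-synapse.service taken as an example):

```
[Unit]
# Opt out of the default dependencies to avoid the ordering cycle
# introduced by docker.service's new `After=... multi-user.target`.
DefaultDependencies=no
Requires=docker.service
After=docker.service

[Install]
WantedBy=multi-user.target
```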