People often report and ask about these "failures".
More so previously, when the `docker kill/rm` output was collected,
but it still happens now when people do `systemctl status
matrix-something` and notice that it says "FAILURE".
We're suppressing this output to avoid wasting more time explaining
that it's expected.
The `to_nice_yaml` helper will, by default, wrap any string YAML value at
the first space after column 80. In the worst case, this can yield invalid
YAML syntax. More details in Ansible's documentation here:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#formatting-data-yaml-and-json
In short, you need to explicitly pass a `width` argument with a
sufficiently large value to avoid the line wrapping.
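For example, a template expression like this (variable name hypothetical)
keeps long values on a single line by passing a large `width`:

    {{ matrix_some_configuration | to_nice_yaml(width=999999) }}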
Reverts b1b4ba501f, 90c9801c56, a3c84f78ca, ..
I haven't really traced it (yet), but on some servers, I'm observing
`ansible-playbook ... --tags=start` completing very slowly, waiting
to stop services. I can't reproduce this on every Matrix server I manage.
I suspect that either the systemd version is to blame or that some
specific service is not responding well to some `docker kill/rm` command.
`ExecStop` seems to work great in all cases and it's what we've been
using for a very long time, so I'm reverting to that.
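For reference, the `ExecStop` pattern being reverted to looks roughly like
this in a generated unit (unit and container names illustrative); the `-`
prefix makes systemd ignore a non-zero exit code:

    [Service]
    ExecStart=/usr/bin/docker run --rm --name matrix-something ...
    ExecStop=-/usr/bin/docker kill matrix-something
    ExecStop=-/usr/bin/docker rm matrix-something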
Until now, we were leaving services "enabled"
(symlinks in /etc/systemd/system/multi-user.target.wants/).
We clean these up now. Broken symlinks may still exist in older
installations that enabled/disabled services. We're not taking care
to fix these up. It's just a cosmetic defect anyway.
If they do, our next playbook runs would simply revert it
and report "changed" for that task.
There's no benefit to letting the bridge spew a new config file.
This does not apply to the mautrix-whatsapp bridge, because that one
is written in Go (not Python) and takes different flags. There's no
equivalent flag there.
Fixes a regression introduced in f6097fbba1, which was causing Synapse
to die with this error message:
> ValueError: sender_localpart needs characters which are not URL encoded.
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
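For illustration, the suppressed cleanup looks something like this in a
generated systemd unit (container name illustrative); systemd's `-` prefix
ignores the non-zero exit code, and `2>/dev/null` hides the "No such
container" noise in the good case:

    ExecStartPre=-/usr/bin/sh -c '/usr/bin/docker kill matrix-something 2>/dev/null'
    ExecStartPre=-/usr/bin/sh -c '/usr/bin/docker rm matrix-something 2>/dev/null'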