Merge branch 'master' into nginx-update

Jörg Thalheim 2020-08-24 13:42:11 +01:00, committed by GitHub
commit 4c9ad3ca79
GPG key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
877 changed files with 19456 additions and 16288 deletions

.github/CODEOWNERS

@ -195,10 +195,11 @@
/pkgs/top-level/php-packages.nix @NixOS/php
# Podman, CRI-O modules and related
/nixos/modules/virtualisation/containers.nix @NixOS/podman
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman
/nixos/modules/virtualisation/podman.nix @NixOS/podman
/nixos/tests/podman.nix @NixOS/podman
/nixos/modules/virtualisation/containers.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/podman.nix @NixOS/podman @zowoq
/nixos/tests/cri-o.nix @NixOS/podman @zowoq
/nixos/tests/podman.nix @NixOS/podman @zowoq
# Blockchains
/pkgs/applications/blockchains @mmahut

.gitignore

@ -12,6 +12,7 @@ result-*
.DS_Store
.mypy_cache
__pycache__
/pkgs/development/libraries/qt-5/*/tmp/
/pkgs/desktops/kde-5/*/tmp/


@ -191,6 +191,8 @@ androidenv.emulateApp {
}
```
Additional flags may be applied to the Android SDK's emulator through the runtime environment variable `$NIX_ANDROID_EMULATOR_FLAGS`.
It is also possible to specify an APK to deploy inside the emulator
and the package and activity names to launch it:


@ -538,8 +538,123 @@ buildPythonPackage rec {
```
Note also the line `doCheck = false;`; we explicitly disabled running the test suite.
#### Testing Python Packages
#### Develop local package
It is highly encouraged to have testing as part of the package build. This
helps to avoid situations where the package was able to build and install,
but is not usable at runtime. Currently, all packages will use the `test`
command provided by the setup.py (i.e. `python setup.py test`). However,
this is deprecated (https://github.com/pypa/setuptools/pull/1878),
and your package should provide its own `checkPhase`.
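For example, a package whose tests use the standard library's `unittest` runner might provide a `checkPhase` along these lines (a minimal sketch, assuming `python` is taken as an argument of the expression; the actual test command depends on the project):
```
checkPhase = ''
  runHook preCheck
  # run the project's test suite with the interpreter the package is built for
  ${python.interpreter} -m unittest discover -v
  runHook postCheck
'';
```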
*NOTE:* The `checkPhase` for python maps to the `installCheckPhase` on a
normal derivation. This is due to many python packages not behaving well
to the pre-installed version of the package. Version info, and natively
compiled extensions generally only exist in the install directory, and
thus can cause issues when a test suite asserts on that behavior.
*NOTE:* Tests should only be disabled if they don't agree with nix
(e.g. external dependencies, network access, flaky tests); however,
as many tests as possible should be enabled. Failing tests can still be
a good indication that the package is not in a valid state.
#### Using pytest
Pytest is the most common test runner for python repositories. A trivial
test run would be:
```
checkInputs = [ pytest ];
checkPhase = "pytest";
```
However, many repositories' test suites do not translate well to nix's build
sandbox, and will generally need many tests to be disabled.
To filter tests using pytest, one can do the following:
```
checkInputs = [ pytest ];
# avoid tests which need additional data or touch network
checkPhase = ''
pytest tests/ --ignore=tests/integration -k 'not download and not update'
'';
```
`--ignore` will tell pytest to exclude that file or directory from being
collected as part of a test run. This is useful if a file uses a package
which is not available in nixpkgs; skipping that test file is much
easier than having to create a new package.
`-k` is used to define a predicate for test names. In this example, we are
filtering out tests which contain `download` or `update` in their test case name.
Only one `-k` argument is allowed, and thus a long predicate should be concatenated
with "\" and wrapped to the next line.
*NOTE:* In pytest==6.0.1, the use of "\" to continue a line (e.g. `-k 'not download \'`) has
been removed; in this case, it's recommended to use `pytestCheckHook`.
#### Using pytestCheckHook
`pytestCheckHook` is a convenient hook which will substitute the setuptools
`test` command for a checkPhase which runs `pytest`. This is also beneficial
when a package may need many items disabled to run the test suite.
Using the example above, the analogous pytestCheckHook usage would be:
```
checkInputs = [ pytestCheckHook ];
# requires additional data
pytestFlagsArray = [ "tests/" "--ignore=tests/integration" ];
disabledTests = [
# touches network
"download"
"update"
];
```
This is especially useful when tests need to be conditionally disabled,
for example:
```
disabledTests = [
# touches network
"download"
"update"
] ++ lib.optionals (pythonAtLeast "3.8") [
# broken due to python3.8 async changes
"async"
] ++ lib.optionals stdenv.isDarwin [
# can fail when building with other packages
"socket"
];
```
Trying to concatenate the related strings to disable tests in a regular checkPhase
would be much harder to read. This also enables us to comment on why specific tests
are disabled.
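For comparison, a roughly equivalent plain `checkPhase` might look like the following sketch, with the whole predicate squeezed into one string and no room for per-test comments:
```
checkPhase = ''
  pytest tests/ --ignore=tests/integration \
    -k 'not download and not update${lib.optionalString (pythonAtLeast "3.8") " and not async"}${lib.optionalString stdenv.isDarwin " and not socket"}'
'';
```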
#### Using pythonImportsCheck
Although unit tests are highly preferred to validate the correctness of a package,
not all packages have test suites that can be run easily, and some have none at all.
To help ensure the package still works, `pythonImportsCheck` can attempt to import
the listed modules.
```
pythonImportsCheck = [ "requests" "urllib" ];
```
roughly translates to:
```
postCheck = ''
PYTHONPATH=$out/${python.sitePackages}:$PYTHONPATH
python -c "import requests; import urllib"
'';
```
However, this is done in its own phase, and is not dependent on whether `doCheck = true;`.
This can also be useful for verifying that the package doesn't assume that commonly
present packages (e.g. `setuptools`) are available.
### Develop local package
As a Python developer you're likely aware of [development mode](http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode)
(`python setup.py develop`); instead of installing the package this command
@ -1017,7 +1132,7 @@ are used in `buildPythonPackage`.
- `pipBuildHook` to build a wheel using `pip` and PEP 517. Note a build system
(e.g. `setuptools` or `flit`) should still be added as `nativeBuildInput`.
- `pipInstallHook` to install wheels.
- `pytestCheckHook` to run tests with `pytest`.
- `pytestCheckHook` to run tests with `pytest`. See [example usage](#using-pytestcheckhook).
- `pythonCatchConflictsHook` to check whether a Python package is not already existing.
- `pythonImportsCheckHook` to check whether importing the listed modules works.
- `pythonRemoveBinBytecode` to remove bytecode from the `/bin` folder.


@ -254,7 +254,7 @@ let f(h, h + 1, i) = i + h
<variablelist>
<title>Variables specifying dependencies</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsBuildBuild">
<term>
<varname>depsBuildBuild</varname>
</term>
@ -267,7 +267,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-nativeBuildInputs">
<term>
<varname>nativeBuildInputs</varname>
</term>
@ -280,7 +280,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsBuildTarget">
<term>
<varname>depsBuildTarget</varname>
</term>
@ -296,7 +296,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsHostHost">
<term>
<varname>depsHostHost</varname>
</term>
@ -306,7 +306,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-buildInputs">
<term>
<varname>buildInputs</varname>
</term>
@ -319,7 +319,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsTargetTarget">
<term>
<varname>depsTargetTarget</varname>
</term>
@ -329,7 +329,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsBuildBuildPropagated">
<term>
<varname>depsBuildBuildPropagated</varname>
</term>
@ -339,7 +339,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-propagatedNativeBuildInputs">
<term>
<varname>propagatedNativeBuildInputs</varname>
</term>
@ -349,7 +349,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsBuildTargetPropagated">
<term>
<varname>depsBuildTargetPropagated</varname>
</term>
@ -359,7 +359,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsHostHostPropagated">
<term>
<varname>depsHostHostPropagated</varname>
</term>
@ -369,7 +369,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-propagatedBuildInputs">
<term>
<varname>propagatedBuildInputs</varname>
</term>
@ -379,7 +379,7 @@ let f(h, h + 1, i) = i + h
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-depsTargetTargetPropagated">
<term>
<varname>depsTargetTargetPropagated</varname>
</term>
@ -396,7 +396,7 @@ let f(h, h + 1, i) = i + h
<variablelist>
<title>Variables affecting <literal>stdenv</literal> initialisation</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-NIX_DEBUG">
<term>
<varname>NIX_DEBUG</varname>
</term>
@ -410,7 +410,7 @@ let f(h, h + 1, i) = i + h
<variablelist>
<title>Attributes affecting build properties</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-enableParallelBuilding">
<term>
<varname>enableParallelBuilding</varname>
</term>
@ -427,7 +427,7 @@ let f(h, h + 1, i) = i + h
<variablelist>
<title>Special variables</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-passthru">
<term>
<varname>passthru</varname>
</term>
@ -504,7 +504,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
There are a number of variables that control what phases are executed and in what order:
<variablelist>
<title>Variables affecting phase control</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-phases">
<term>
<varname>phases</varname>
</term>
@ -517,7 +517,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-prePhases">
<term>
<varname>prePhases</varname>
</term>
@ -527,7 +527,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preConfigurePhases">
<term>
<varname>preConfigurePhases</varname>
</term>
@ -537,7 +537,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preBuildPhases">
<term>
<varname>preBuildPhases</varname>
</term>
@ -547,7 +547,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preInstallPhases">
<term>
<varname>preInstallPhases</varname>
</term>
@ -557,7 +557,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preFixupPhases">
<term>
<varname>preFixupPhases</varname>
</term>
@ -567,7 +567,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preDistPhases">
<term>
<varname>preDistPhases</varname>
</term>
@ -577,7 +577,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postPhases">
<term>
<varname>postPhases</varname>
</term>
@ -635,7 +635,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
<variablelist>
<title>Variables controlling the unpack phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-src">
<term>
<varname>srcs</varname> / <varname>src</varname>
</term>
@ -645,7 +645,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-sourceRoot">
<term>
<varname>sourceRoot</varname>
</term>
@ -655,7 +655,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-setSourceRoot">
<term>
<varname>setSourceRoot</varname>
</term>
@ -665,7 +665,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preUnpack">
<term>
<varname>preUnpack</varname>
</term>
@ -675,7 +675,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postUnpack">
<term>
<varname>postUnpack</varname>
</term>
@ -685,7 +685,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontUnpack">
<term>
<varname>dontUnpack</varname>
</term>
@ -695,7 +695,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontMakeSourcesWritable">
<term>
<varname>dontMakeSourcesWritable</varname>
</term>
@ -705,7 +705,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-unpackCmd">
<term>
<varname>unpackCmd</varname>
</term>
@ -727,7 +727,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
<variablelist>
<title>Variables controlling the patch phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontPatch">
<term>
<varname>dontPatch</varname>
</term>
@ -737,7 +737,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-patches">
<term>
<varname>patches</varname>
</term>
@ -747,7 +747,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-patchFlags">
<term>
<varname>patchFlags</varname>
</term>
@ -757,7 +757,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-prePatch">
<term>
<varname>prePatch</varname>
</term>
@ -767,7 +767,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postPatch">
<term>
<varname>postPatch</varname>
</term>
@ -789,7 +789,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
<variablelist>
<title>Variables controlling the configure phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-configureScript">
<term>
<varname>configureScript</varname>
</term>
@ -799,7 +799,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-configureFlags">
<term>
<varname>configureFlags</varname>
</term>
@ -809,7 +809,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontConfigure">
<term>
<varname>dontConfigure</varname>
</term>
@ -819,7 +819,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-configureFlagsArray">
<term>
<varname>configureFlagsArray</varname>
</term>
@ -829,7 +829,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontAddPrefix">
<term>
<varname>dontAddPrefix</varname>
</term>
@ -839,7 +839,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-prefix">
<term>
<varname>prefix</varname>
</term>
@ -849,7 +849,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-prefixKey">
<term>
<varname>prefixKey</varname>
</term>
@ -859,7 +859,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontAddDisableDepTrack">
<term>
<varname>dontAddDisableDepTrack</varname>
</term>
@ -869,7 +869,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontFixLibtool">
<term>
<varname>dontFixLibtool</varname>
</term>
@ -885,7 +885,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontDisableStatic">
<term>
<varname>dontDisableStatic</varname>
</term>
@ -898,7 +898,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-configurePlatforms">
<term>
<varname>configurePlatforms</varname>
</term>
@ -913,7 +913,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preConfigure">
<term>
<varname>preConfigure</varname>
</term>
@ -923,7 +923,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postConfigure">
<term>
<varname>postConfigure</varname>
</term>
@ -945,7 +945,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
<variablelist>
<title>Variables controlling the build phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontBuild">
<term>
<varname>dontBuild</varname>
</term>
@ -955,7 +955,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-makefile">
<term>
<varname>makefile</varname>
</term>
@ -965,7 +965,7 @@ passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ]
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-makeFlags">
<term>
<varname>makeFlags</varname>
</term>
@ -983,7 +983,7 @@ makeFlags = [ "PREFIX=$(out)" ];
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-makeFlagsArray">
<term>
<varname>makeFlagsArray</varname>
</term>
@ -999,7 +999,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-buildFlags">
<term>
<varname>buildFlags</varname> / <varname>buildFlagsArray</varname>
</term>
@ -1009,7 +1009,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preBuild">
<term>
<varname>preBuild</varname>
</term>
@ -1019,7 +1019,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postBuild">
<term>
<varname>postBuild</varname>
</term>
@ -1049,7 +1049,7 @@ preBuild = ''
<variablelist>
<title>Variables controlling the check phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-doCheck">
<term>
<varname>doCheck</varname>
</term>
@ -1067,11 +1067,11 @@ preBuild = ''
</term>
<listitem>
<para>
See the build phase for details.
See the <link xlink:href="#var-stdenv-makeFlags">build phase</link> for details.
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-checkTarget">
<term>
<varname>checkTarget</varname>
</term>
@ -1081,7 +1081,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-checkFlags">
<term>
<varname>checkFlags</varname> / <varname>checkFlagsArray</varname>
</term>
@ -1091,7 +1091,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-checkInputs">
<term>
<varname>checkInputs</varname>
</term>
@ -1101,7 +1101,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preCheck">
<term>
<varname>preCheck</varname>
</term>
@ -1111,7 +1111,7 @@ preBuild = ''
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postCheck">
<term>
<varname>postCheck</varname>
</term>
@ -1133,7 +1133,7 @@ preBuild = ''
<variablelist>
<title>Variables controlling the install phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontInstall">
<term>
<varname>dontInstall</varname>
</term>
@ -1149,11 +1149,11 @@ preBuild = ''
</term>
<listitem>
<para>
See the build phase for details.
See the <link xlink:href="#var-stdenv-makeFlags">build phase</link> for details.
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-installTargets">
<term>
<varname>installTargets</varname>
</term>
@ -1165,7 +1165,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-installFlags">
<term>
<varname>installFlags</varname> / <varname>installFlagsArray</varname>
</term>
@ -1175,7 +1175,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preInstall">
<term>
<varname>preInstall</varname>
</term>
@ -1185,7 +1185,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postInstall">
<term>
<varname>postInstall</varname>
</term>
@ -1229,7 +1229,7 @@ installTargets = "install-bin install-doc";</programlisting>
<variablelist>
<title>Variables controlling the fixup phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontFixup">
<term>
<varname>dontFixup</varname>
</term>
@ -1239,7 +1239,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontStrip">
<term>
<varname>dontStrip</varname>
</term>
@ -1249,7 +1249,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontStripHost">
<term>
<varname>dontStripHost</varname>
</term>
@ -1259,7 +1259,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontStripTarget">
<term>
<varname>dontStripTarget</varname>
</term>
@ -1269,7 +1269,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontMoveSbin">
<term>
<varname>dontMoveSbin</varname>
</term>
@ -1279,7 +1279,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-stripAllList">
<term>
<varname>stripAllList</varname>
</term>
@ -1289,7 +1289,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-stripAllFlags">
<term>
<varname>stripAllFlags</varname>
</term>
@ -1299,7 +1299,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-stripDebugList">
<term>
<varname>stripDebugList</varname>
</term>
@ -1309,7 +1309,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-stripDebugFlags">
<term>
<varname>stripDebugFlags</varname>
</term>
@ -1319,7 +1319,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontPatchELF">
<term>
<varname>dontPatchELF</varname>
</term>
@ -1329,7 +1329,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontPatchShebangs">
<term>
<varname>dontPatchShebangs</varname>
</term>
@ -1339,7 +1339,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontPruneLibtoolFiles">
<term>
<varname>dontPruneLibtoolFiles</varname>
</term>
@ -1349,7 +1349,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-forceShare">
<term>
<varname>forceShare</varname>
</term>
@ -1359,7 +1359,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-setupHook">
<term>
<varname>setupHook</varname>
</term>
@ -1370,7 +1370,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preFixup">
<term>
<varname>preFixup</varname>
</term>
@ -1380,7 +1380,7 @@ installTargets = "install-bin install-doc";</programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postFixup">
<term>
<varname>postFixup</varname>
</term>
@ -1419,7 +1419,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
<variablelist>
<title>Variables controlling the installCheck phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-doInstallCheck">
<term>
<varname>doInstallCheck</varname>
</term>
@ -1431,7 +1431,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-installCheckTarget">
<term>
<varname>installCheckTarget</varname>
</term>
@ -1441,7 +1441,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-installCheckFlags">
<term>
<varname>installCheckFlags</varname> / <varname>installCheckFlagsArray</varname>
</term>
@ -1451,7 +1451,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-installCheckInputs">
<term>
<varname>installCheckInputs</varname>
</term>
@ -1461,7 +1461,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preInstallCheck">
<term>
<varname>preInstallCheck</varname>
</term>
@ -1471,7 +1471,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postInstallCheck">
<term>
<varname>postInstallCheck</varname>
</term>
@ -1493,7 +1493,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
<variablelist>
<title>Variables controlling the distribution phase</title>
<varlistentry>
<varlistentry xml:id="var-stdenv-distTarget">
<term>
<varname>distTarget</varname>
</term>
@ -1503,7 +1503,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-distFlags">
<term>
<varname>distFlags</varname> / <varname>distFlagsArray</varname>
</term>
@ -1513,7 +1513,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-tarballs">
<term>
<varname>tarballs</varname>
</term>
@ -1523,7 +1523,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-dontCopyDist">
<term>
<varname>dontCopyDist</varname>
</term>
@ -1533,7 +1533,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-preDist">
<term>
<varname>preDist</varname>
</term>
@ -1543,7 +1543,7 @@ set debug-file-directory ~/.nix-profile/lib/debug
</para>
</listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="var-stdenv-postDist">
<term>
<varname>postDist</varname>
</term>


@ -85,6 +85,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = ''Beerware License'';
};
blueOak100 = spdx {
spdxId = "BlueOak-1.0.0";
fullName = "Blue Oak Model License 1.0.0";
};
bsd0 = spdx {
spdxId = "0BSD";
fullName = "BSD Zero Clause License";


@ -115,8 +115,19 @@ rec {
checkUnmatched =
if config._module.check && config._module.freeformType == null && merged.unmatchedDefns != [] then
let inherit (head merged.unmatchedDefns) file prefix;
in throw "The option `${showOption prefix}' defined in `${file}' does not exist."
let
firstDef = head merged.unmatchedDefns;
baseMsg = "The option `${showOption (prefix ++ firstDef.prefix)}' defined in `${firstDef.file}' does not exist.";
in
if attrNames options == [ "_module" ]
then throw ''
${baseMsg}
However there are no options defined in `${showOption prefix}'. Are you sure you've
declared your options properly? This can happen if you e.g. declared your options in `types.submodule'
under `config' rather than `options'.
''
else throw baseMsg
else null;
result = builtins.seq checkUnmatched {
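The new hint in this error message refers to a common mistake when writing submodules: declaring options under `config` instead of `options`. A minimal sketch of the wrong and corrected forms (hypothetical option names, for illustration only):
```
# Wrong: the submodule declares no options, so definitions made for it
# (e.g. `foo.port = 80;`) are reported as unmatched.
type = lib.types.submodule {
  config.port = lib.mkOption { type = lib.types.port; };
};

# Right: declare the option under `options`.
type = lib.types.submodule {
  options.port = lib.mkOption { type = lib.types.port; };
};
```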


@ -26,6 +26,13 @@
`handle == github` is strongly preferred whenever `github` is an acceptable attribute name and is short and convenient.
If `github` begins with a numeral, `handle` should be prefixed with an underscore.
```nix
_1example = {
github = "1example";
};
```
Add PGP/GPG keys only if you actually use them to sign commits and/or mail.
To get the required PGP/GPG values for a key run
@ -41,7 +48,7 @@
See `./scripts/check-maintainer-github-handles.sh` for an example on how to work with this data.
*/
{
"0x4A6F" = {
_0x4A6F = {
email = "mail-maintainer@0x4A6F.dev";
name = "Joachim Ernst";
github = "0x4A6F";
@ -51,7 +58,7 @@
fingerprint = "F466 A548 AD3F C1F1 8C88 4576 8702 7528 B006 D66D";
}];
};
"1000101" = {
_1000101 = {
email = "b1000101@pm.me";
github = "1000101";
githubId = 791309;
@ -247,6 +254,12 @@
githubId = 732652;
name = "Andreas Herrmann";
};
ahrzb = {
email = "ahrzb5@gmail.com";
github = "ahrzb";
githubId = 5220438;
name = "AmirHossein Roozbahani";
};
ahuzik = {
email = "ales.guzik@gmail.com";
github = "alesguzik";
@ -459,6 +472,12 @@
githubId = 858965;
name = "Andrew Morsillo";
};
andehen = {
email = "git@andehen.net";
github = "andehen";
githubId = 754494;
name = "Anders Asheim Hennum";
};
andersk = {
email = "andersk@mit.edu";
github = "andersk";
@ -1884,7 +1903,7 @@
githubId = 4971975;
name = "Janne Heß";
};
"dasj19" = {
dasj19 = {
email = "daniel@serbanescu.dk";
github = "dasj19";
githubId = 7589338;
@ -2460,6 +2479,12 @@
githubId = 97852;
name = "Ellis Whitehead";
};
elkowar = {
email = "thereal.elkowar@gmail.com";
github = "elkowar";
githubId = 5300871;
name = "Leon Kowarschick";
};
elohmeier = {
email = "elo-nixos@nerdworks.de";
github = "elohmeier";
@ -3330,6 +3355,12 @@
githubId = 131599;
name = "Martin Weinelt";
};
hh = {
email = "hh@m-labs.hk";
github = "HarryMakes";
githubId = 66358631;
name = "Harry Ho";
};
hhm = {
email = "heehooman+nixpkgs@gmail.com";
github = "hhm0";
@ -3702,6 +3733,12 @@
}];
name = "Jiri Daněk";
};
jdbaldry = {
email = "jack.baldry@grafana.com";
github = "jdbaldry";
githubId = 4599384;
name = "Jack Baldry";
};
jdehaas = {
email = "qqlq@nullptr.club";
github = "jeroendehaas";
@ -3822,6 +3859,12 @@
githubId = 51518420;
name = "jitwit";
};
jjjollyjim = {
email = "jamie@kwiius.com";
github = "JJJollyjim";
githubId = 691552;
name = "Jamie McClymont";
};
jk = {
email = "hello+nixpkgs@j-k.io";
github = "06kellyjac";
@ -4177,6 +4220,12 @@
githubId = 87115;
name = "Wael Nasreddine";
};
kalekseev = {
email = "mail@kalekseev.com";
github = "kalekseev";
githubId = 367259;
name = "Konstantin Alekseev";
};
kamadorueda = {
name = "Kevin Amado";
email = "kamadorueda@gmail.com";
@ -4414,6 +4463,12 @@
githubId = 524268;
name = "Koral";
};
koslambrou = {
email = "koslambrou@gmail.com";
github = "koslambrou";
githubId = 2037002;
name = "Konstantinos";
};
kovirobi = {
email = "kovirobi@gmail.com";
github = "kovirobi";
@ -5181,6 +5236,12 @@
githubId = 35892750;
name = "Maxine Aubrey";
};
maxxk = {
email = "maxim.krivchikov@gmail.com";
github = "maxxk";
githubId = 1191859;
name = "Maxim Krivchikov";
};
mbakke = {
email = "mbakke@fastmail.com";
github = "mbakke";
@ -5944,6 +6005,12 @@
githubId = 1224006;
name = "Roberto Abdelkader Martínez Pérez";
};
nilsirl = {
email = "nils@nilsand.re";
github = "NilsIrl";
githubId = 26231126;
name = "Nils ANDRÉ-CHANG";
};
ninjatrappeur = {
email = "felix@alternativebit.fr";
github = "ninjatrappeur";
@ -6328,6 +6395,12 @@
githubId = 157610;
name = "Piotr Bogdan";
};
pblkt = {
email = "pebblekite@gmail.com";
github = "pblkt";
githubId = 6498458;
name = "pebble kite";
};
pcarrier = {
email = "pc@rrier.ca";
github = "pcarrier";
@ -6712,6 +6785,12 @@
githubId = 115877;
name = "Kenny Shen";
};
quentini = {
email = "quentini@airmail.cc";
github = "QuentinI";
githubId = 18196237;
name = "Quentin Inkling";
};
qyliss = {
email = "hi@alyssa.is";
github = "alyssais";
@ -7948,6 +8027,12 @@
githubId = 332289;
name = "Rafał Łasocha";
};
syberant = {
email = "sybrand@neuralcoding.com";
github = "syberant";
githubId = 20063502;
name = "Sybrand Aarnoutse";
};
symphorien = {
email = "symphorien_nixpkgs@xlumurb.eu";
github = "symphorien";
@ -8084,6 +8169,12 @@
githubId = 863327;
name = "Tyler Benster";
};
tcbravo = {
email = "tomas.bravo@protonmail.ch";
github = "tcbravo";
githubId = 66133083;
name = "Tomas Bravo";
};
tckmn = {
email = "andy@tck.mn";
github = "tckmn";
@ -8192,7 +8283,7 @@
githubId = 8547242;
name = "Stefan Rohrbacher";
};
"thelegy" = {
thelegy = {
email = "mail+nixos@0jb.de";
github = "thelegy";
githubId = 3105057;
@ -8410,6 +8501,12 @@
githubId = 207457;
name = "Matthieu Chevrier";
};
trepetti = {
email = "trepetti@cs.columbia.edu";
github = "trepetti";
githubId = 25440339;
name = "Tom Repetti";
};
trevorj = {
email = "nix@trevor.joynson.io";
github = "akatrevorjay";
@ -8468,6 +8565,12 @@
githubId = 699403;
name = "Tomas Vestelind";
};
tviti = {
email = "tviti@hawaii.edu";
github = "tviti";
githubId = 2251912;
name = "Taylor Viti";
};
tvorog = {
email = "marszaripov@gmail.com";
github = "tvorog";
@ -8737,6 +8840,14 @@
githubId = 13259982;
name = "Vanessa McHale";
};
voidless = {
email = "julius.schmitt@yahoo.de";
github = "voidIess";
githubId = 45292658;
name = "Julius Schmitt";
};
volhovm = {
email = "volhovm.cs@gmail.com";
github = "volhovm";
@ -9069,6 +9180,16 @@
fingerprint = "85F8 E850 F8F2 F823 F934 535B EC50 6589 9AEA AF4C";
}];
};
yusdacra = {
email = "y.bera003.06@protonmail.com";
github = "yusdacra";
githubId = 19897088;
name = "Yusuf Bera Ertan";
keys = [{
longkeyid = "rsa2048/0x61807181F60EFCB2";
fingerprint = "9270 66BD 8125 A45B 4AC4 0326 6180 7181 F60E FCB2";
}];
};
yvesf = {
email = "yvesf+nix@xapek.org";
github = "yvesf";
@ -9341,4 +9462,10 @@
github = "fzakaria";
githubId = 605070;
};
yevhenshymotiuk = {
name = "Yevhen Shymotiuk";
email = "yevhenshymotiuk@gmail.com";
github = "yevhenshymotiuk";
githubId = 44244245;
};
}


@ -70,35 +70,12 @@ Platform Vendor Advanced Micro Devices, Inc.</screen>
Core Next</link> (GCN) GPUs are supported through the
<package>rocm-opencl-icd</package> package. Adding this package to
<xref linkend="opt-hardware.opengl.extraPackages"/> enables OpenCL
support. However, OpenCL Image support is provided through the
non-free <package>rocm-runtime-ext</package> package. This package can
be added to the same configuration option, but requires that
<varname>allowUnfree</varname> option is is enabled for nixpkgs. Full
OpenCL support on supported AMD GPUs is thus enabled as follows:
support:
<programlisting><xref linkend="opt-hardware.opengl.extraPackages"/> = [
rocm-opencl-icd
rocm-runtime-ext
];</programlisting>
</para>
<para>
It is also possible to use the OpenCL Image extension without a
system-wide installation of the <package>rocm-runtime-ext</package>
package by setting the <varname>ROCR_EXT_DIR</varname> environment
variable to the directory that contains the extension:
<screen><prompt>$</prompt> export \
ROCR_EXT_DIR=`nix-build '&lt;nixpkgs&gt;' --no-out-link -A rocm-runtime-ext`/lib/rocm-runtime-ext</screen>
</para>
<para>
With either approach, you can verify that OpenCL Image support
is indeed working with the <command>clinfo</command> command:
<screen><prompt>$</prompt> clinfo | grep Image
Image support Yes</screen>
</para>
</section>
<section xml:id="sec-gpu-accel-opencl-intel">


@ -136,7 +136,7 @@
<filename>/mnt</filename>:
</para>
<screen>
# nixos-enter /mnt
# nixos-enter --root /mnt
</screen>
<para>
Run a shell command:


@ -128,7 +128,7 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
</listitem>
<listitem>
<para>
Two new option <link linkend="opt-documentation.man.generateCaches">documentation.man.generateCaches</link>
The new option <link linkend="opt-documentation.man.generateCaches">documentation.man.generateCaches</link>
has been added to automatically generate the <literal>man-db</literal> caches, which are needed by utilities
like <command>whatis</command> and <command>apropos</command>. The caches are generated during the build of
the NixOS configuration: since this can be expensive when a large number of packages are installed, the
@ -137,7 +137,7 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
</listitem>
<listitem>
<para>
<varname>services.postfix.sslCACert</varname> was replaced by <varname>services.postfix.tlsTrustedAuthorities</varname> which now defaults to system certifcate authorities.
<varname>services.postfix.sslCACert</varname> was replaced by <varname>services.postfix.tlsTrustedAuthorities</varname> which now defaults to system certificate authorities.
</para>
</listitem>
<listitem>
@ -156,6 +156,54 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
Support for built-in LCDs in various pieces of Logitech hardware (keyboards and USB speakers). <varname>hardware.logitech.lcd.enable</varname> enables support for all hardware supported by the g15daemon project.
</para>
</listitem>
<listitem>
<para>
Zabbix now defaults to 5.0, updated from 4.4. Please carefully read through
<link xlink:href="https://www.zabbix.com/documentation/current/manual/installation/upgrade/sources">the upgrade guide</link>
and apply any changes required. Be sure to take special note of the section on
<link xlink:href="https://www.zabbix.com/documentation/current/manual/installation/upgrade_notes_500#enabling_extended_range_of_numeric_float_values">enabling extended range of numeric (float) values</link>
as you will need to apply this database migration manually.
</para>
<para>
If you are using Zabbix Server with a MySQL or MariaDB database you should note that using a character set of <literal>utf8</literal> and a collate of <literal>utf8_bin</literal> has become mandatory with
this release. See the upstream <link xlink:href="https://support.zabbix.com/browse/ZBX-17357">issue</link> for further discussion. Before upgrading you should check the character set and collation used by
your database and ensure they are correct:
<programlisting>
SELECT
default_character_set_name,
default_collation_name
FROM
information_schema.schemata
WHERE
schema_name = 'zabbix';
</programlisting>
If these values are not correct you should take a backup of your database and convert the character set and collation as required. Here is an
<link xlink:href="https://www.zabbix.com/forum/zabbix-help/396573-reinstall-after-upgrade?p=396891#post396891">example</link> of how to do so, taken from
the Zabbix forums:
<programlisting>
ALTER DATABASE `zabbix` DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;
-- the following will produce a list of SQL commands you should subsequently execute
SELECT CONCAT("ALTER TABLE ", TABLE_NAME," CONVERT TO CHARACTER SET utf8 COLLATE utf8_bin;") AS ExecuteTheString
FROM information_schema.`COLUMNS`
WHERE table_schema = "zabbix" AND COLLATION_NAME = "utf8_general_ci";
</programlisting>
</para>
</listitem>
<listitem>
<para>
The NixOS module system now supports freeform modules as a mix between <literal>types.attrsOf</literal> and <literal>types.submodule</literal>. These allow you to explicitly declare a subset of options while still permitting definitions without an associated option. See <xref linkend='sec-freeform-modules'/> for how to use them.
</para>
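<para>
A minimal sketch of an option using a freeform submodule, assuming <literal>lib</literal> is in scope (see the linked section for the full details):
<programlisting>
settings = lib.mkOption {
  type = lib.types.submodule {
    freeformType = with lib.types; attrsOf (either str int);
    # Options can still be declared explicitly and are checked as usual.
    options.port = lib.mkOption {
      type = lib.types.port;
      default = 8080;
    };
  };
  default = {};
};
</programlisting>
</para>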
</listitem>
<listitem>
<para>
The GRUB module gained support for basic password protection, which
allows restricting non-default entries in the boot menu to one or more
users. The users and passwords are defined via the option
<option>boot.loader.grub.users</option>.
Note: Password support is only available in GRUB version 2.
</para>
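<para>
A minimal sketch with a hypothetical user name, assuming a file containing a PBKDF2 password hash is referenced (check the option documentation for the exact attribute names):
<programlisting>
boot.loader.grub.users.alice = {
  # assumed attribute; a plain-text password attribute may also be available
  hashedPasswordFile = "/etc/nixos/grub.pbkdf2";
};
</programlisting>
</para>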
</listitem>
</itemizedlist>
</section>
@ -199,12 +247,10 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
in the source tree for downloaded modules instead of using go's <link
xlink:href="https://golang.org/cmd/go/#hdr-Module_proxy_protocol">module
proxy protocol</link>. This storage format is simpler and therefore less
likekly to break with future versions of go. As a result
likely to break with future versions of go. As a result
<literal>buildGoModule</literal> switched from
<literal>modSha256</literal> to the <literal>vendorSha256</literal>
attribute to pin fetched version data. <literal>buildGoModule</literal>
still accepts <literal>modSha256</literal> with a warning, but support will
be removed in the next release.
attribute to pin fetched version data.
</para>
</listitem>
<listitem>
@ -213,7 +259,7 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
<link xlink:href="https://grafana.com/docs/grafana/latest/guides/whats-new-in-v6-4/">deprecated in Grafana</link>
and the <package>phantomjs</package> project is
<link xlink:href="https://github.com/ariya/phantomjs/issues/15344#issue-302015362">currently unmaintained</link>.
It can still be enabled by providing <literal>phantomJsSupport = true</literal> to the package instanciation:
It can still be enabled by providing <literal>phantomJsSupport = true</literal> to the package instantiation:
<programlisting>{
services.grafana.package = pkgs.grafana.overrideAttrs (oldAttrs: rec {
phantomJsSupport = false;
@ -225,7 +271,7 @@ GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'localhost' WITH GRANT OPTION;
<para>
The <link linkend="opt-services.supybot.enable">supybot</link> module now uses <literal>/var/lib/supybot</literal>
as its default <link linkend="opt-services.supybot.stateDir">stateDir</link> path if <literal>stateVersion</literal>
is 20.09 or higher. It also enables number of
is 20.09 or higher. It also enables a number of
<link xlink:href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html#Sandboxing">systemd sandboxing options</link>
which may possibly interfere with some plugins. If this is the case you can disable the options through attributes in
<option>systemd.services.supybot.serviceConfig</option>.
@ -678,11 +724,19 @@ services.dokuwiki."mywiki" = {
<listitem>
<para>
The <xref linkend="opt-services.postgresql.dataDir"/> option is now set to <literal>"/var/lib/postgresql/${cfg.package.psqlSchema}"</literal> regardless of your
<xref linkend="opt-system.stateVersion"/>. Users with an existing postgresql install that have a <xref linkend="opt-system.stateVersion"/> of <literal>17.09</literal> or below
<xref linkend="opt-system.stateVersion"/>. Users with an existing postgresql install that have a <xref linkend="opt-system.stateVersion"/> of <literal>17.03</literal> or below
should double check what the value of their <xref linkend="opt-services.postgresql.dataDir"/> option is (<literal>/var/db/postgresql</literal>) and then explicitly
set this value to maintain compatibility:
<programlisting>
services.postgresql.dataDir = "/var/db/postgresql";
</programlisting>
</para>
<para>
The postgresql module now expects there to be a database super user account called <literal>postgres</literal> regardless of your <xref linkend="opt-system.stateVersion"/>. Users
with an existing postgresql install that have a <xref linkend="opt-system.stateVersion"/> of <literal>17.03</literal> or below should run the following SQL statements as a
database super admin user before upgrading:
<programlisting>
CREATE ROLE postgres LOGIN SUPERUSER;
</programlisting>
</para>
</listitem>
@ -691,6 +745,13 @@ services.postgresql.dataDir = "/var/db/postgresql";
The USBGuard module now removes options and instead hardcodes values for <literal>IPCAccessControlFiles</literal>, <literal>ruleFiles</literal>, and <literal>auditFilePath</literal>. Audit logs can be found in the journal.
</para>
</listitem>
<listitem>
<para>
The NixOS module system now evaluates option definitions more strictly, allowing it to detect a larger set of problems.
As a result, what previously evaluated may not do so anymore.
See <link xlink:href="https://github.com/NixOS/nixpkgs/pull/82743#issuecomment-674520472">the PR that changed this</link> for more info.
</para>
</listitem>
</itemizedlist>
</section>
@ -911,6 +972,14 @@ services.transmission.settings.rpc-bind-address = "0.0.0.0";
<para>
Nginx module <literal>nginxModules.fastcgi-cache-purge</literal> renamed to official name <literal>nginxModules.cache-purge</literal>.
Nginx module <literal>nginxModules.ngx_aws_auth</literal> renamed to official name <literal>nginxModules.aws-auth</literal>.
The packages <package>perl</package>, <package>rsync</package> and <package>strace</package> were removed from <option>systemPackages</option>. If you need them, install them again with <code><xref linkend="opt-environment.systemPackages"/> = with pkgs; [ perl rsync strace ];</code> in your <filename>configuration.nix</filename>.
</para>
</listitem>
<listitem>
<para>
The <literal>undervolt</literal> option no longer needs to apply its
settings every 30s. If they still become undone, open an issue and restore
the previous behaviour using <literal>undervolt.useTimer</literal>.
</para>
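<para>
For example (a sketch, assuming the option lives under <option>services.undervolt</option>):
<programlisting>
services.undervolt.useTimer = true;
</programlisting>
</para>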
</listitem>
</itemizedlist>


@ -24,11 +24,11 @@
check ? true
, prefix ? []
, lib ? import ../../lib
, extraModules ? let e = builtins.getEnv "NIXOS_EXTRA_MODULE_PATH";
in if e == "" then [] else [(import e)]
}:
let extraArgs_ = extraArgs; pkgs_ = pkgs;
extraModules = let e = builtins.getEnv "NIXOS_EXTRA_MODULE_PATH";
in if e == "" then [] else [(import e)];
in
let


@ -424,15 +424,18 @@ class Machine:
output += out
return output
def fail(self, *commands: str) -> None:
def fail(self, *commands: str) -> str:
"""Execute each command and check that it fails."""
output = ""
for command in commands:
with self.nested("must fail: {}".format(command)):
status, output = self.execute(command)
(status, out) = self.execute(command)
if status == 0:
raise Exception(
"command `{}` unexpectedly succeeded".format(command)
)
output += out
return output
def wait_until_succeeds(self, command: str) -> str:
"""Wait until a command returns success and return its output.


@ -63,8 +63,8 @@ in {
fsType = "ext4";
configFile = pkgs.writeText "configuration.nix"
''
{
imports = [ <nixpkgs/nixos/modules/virtualisation/amazon-image.nix> ];
{ modulesPath, ... }: {
imports = [ "''${modulesPath}/virtualisation/amazon-image.nix" ];
${optionalString config.ec2.hvm ''
ec2.hvm = true;
''}


@ -29,7 +29,7 @@ log() {
echo "$@" >&2
}
if [ -z "$1" ]; then
if [ "$#" -ne 1 ]; then
log "Usage: ./upload-amazon-image.sh IMAGE_OUTPUT"
exit 1
fi


@ -1,292 +0,0 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.fonts.fontconfig;
fcBool = x: "<bool>" + (boolToString x) + "</bool>";
# back-supported fontconfig version and package
# version is used for font cache generation
supportVersion = "210";
supportPkg = pkgs."fontconfig_${supportVersion}";
# latest fontconfig version and package
# version is used for configuration folder name, /etc/fonts/VERSION/
# note: format differs from supportVersion and can not be used with makeCacheConf
latestVersion = pkgs.fontconfig.configVersion;
latestPkg = pkgs.fontconfig;
# supported version fonts.conf
supportFontsConf = pkgs.makeFontsConf { fontconfig = supportPkg; fontDirectories = config.fonts.fonts; };
# configuration file to read fontconfig cache
# version dependent
# priority 0
cacheConfSupport = makeCacheConf { version = supportVersion; };
cacheConfLatest = makeCacheConf {};
# generate the font cache setting file for a fontconfig version
# use latest when no version is passed
makeCacheConf = { version ? null }:
let
fcPackage = if version == null
then "fontconfig"
else "fontconfig_${version}";
makeCache = fontconfig: pkgs.makeFontsCache { inherit fontconfig; fontDirectories = config.fonts.fonts; };
cache = makeCache pkgs.${fcPackage};
cache32 = makeCache pkgs.pkgsi686Linux.${fcPackage};
in
pkgs.writeText "fc-00-nixos-cache.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<!-- Font directories -->
${concatStringsSep "\n" (map (font: "<dir>${font}</dir>") config.fonts.fonts)}
<!-- Pre-generated font caches -->
<cachedir>${cache}</cachedir>
${optionalString (pkgs.stdenv.isx86_64 && cfg.cache32Bit) ''
<cachedir>${cache32}</cachedir>
''}
</fontconfig>
'';
# local configuration file
localConf = pkgs.writeText "fc-local.conf" cfg.localConf;
# rendering settings configuration files
# priority 10
hintingConf = pkgs.writeText "fc-10-hinting.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<!-- Default rendering settings -->
<match target="pattern">
<edit mode="append" name="hinting">
${fcBool cfg.hinting.enable}
</edit>
<edit mode="append" name="autohint">
${fcBool cfg.hinting.autohint}
</edit>
<edit mode="append" name="hintstyle">
<const>hintslight</const>
</edit>
</match>
</fontconfig>
'';
antialiasConf = pkgs.writeText "fc-10-antialias.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<!-- Default rendering settings -->
<match target="pattern">
<edit mode="append" name="antialias">
${fcBool cfg.antialias}
</edit>
</match>
</fontconfig>
'';
subpixelConf = pkgs.writeText "fc-10-subpixel.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<!-- Default rendering settings -->
<match target="pattern">
<edit mode="append" name="rgba">
<const>${cfg.subpixel.rgba}</const>
</edit>
<edit mode="append" name="lcdfilter">
<const>lcd${cfg.subpixel.lcdfilter}</const>
</edit>
</match>
</fontconfig>
'';
dpiConf = pkgs.writeText "fc-11-dpi.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<match target="pattern">
<edit name="dpi" mode="assign">
<double>${toString cfg.dpi}</double>
</edit>
</match>
</fontconfig>
'';
# default fonts configuration file
# priority 52
defaultFontsConf =
let genDefault = fonts: name:
optionalString (fonts != []) ''
<alias>
<family>${name}</family>
<prefer>
${concatStringsSep ""
(map (font: ''
<family>${font}</family>
'') fonts)}
</prefer>
</alias>
'';
in
pkgs.writeText "fc-52-nixos-default-fonts.conf" ''
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<!-- Default fonts -->
${genDefault cfg.defaultFonts.sansSerif "sans-serif"}
${genDefault cfg.defaultFonts.serif "serif"}
${genDefault cfg.defaultFonts.monospace "monospace"}
</fontconfig>
'';
# reject Type 1 fonts
# priority 53
rejectType1 = pkgs.writeText "fc-53-nixos-reject-type1.conf" ''
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Reject Type 1 fonts -->
<selectfont>
<rejectfont>
<pattern>
<patelt name="fontformat"><string>Type 1</string></patelt>
</pattern>
</rejectfont>
</selectfont>
</fontconfig>
'';
# The configuration to be included in /etc/font/
penultimateConf = pkgs.runCommand "fontconfig-penultimate-conf" {
preferLocalBuild = true;
} ''
support_folder=$out/etc/fonts/conf.d
latest_folder=$out/etc/fonts/${latestVersion}/conf.d
mkdir -p $support_folder
mkdir -p $latest_folder
# fonts.conf
ln -s ${supportFontsConf} $support_folder/../fonts.conf
ln -s ${latestPkg.out}/etc/fonts/fonts.conf \
$latest_folder/../fonts.conf
# fontconfig-penultimate various configuration files
ln -s ${pkgs.fontconfig-penultimate}/etc/fonts/conf.d/*.conf \
$support_folder
ln -s ${pkgs.fontconfig-penultimate}/etc/fonts/conf.d/*.conf \
$latest_folder
ln -s ${cacheConfSupport} $support_folder/00-nixos-cache.conf
ln -s ${cacheConfLatest} $latest_folder/00-nixos-cache.conf
rm $support_folder/10-antialias.conf $latest_folder/10-antialias.conf
ln -s ${antialiasConf} $support_folder/10-antialias.conf
ln -s ${antialiasConf} $latest_folder/10-antialias.conf
rm $support_folder/10-hinting.conf $latest_folder/10-hinting.conf
ln -s ${hintingConf} $support_folder/10-hinting.conf
ln -s ${hintingConf} $latest_folder/10-hinting.conf
${optionalString cfg.useEmbeddedBitmaps ''
rm $support_folder/10-no-embedded-bitmaps.conf
rm $latest_folder/10-no-embedded-bitmaps.conf
''}
rm $support_folder/10-subpixel.conf $latest_folder/10-subpixel.conf
ln -s ${subpixelConf} $support_folder/10-subpixel.conf
ln -s ${subpixelConf} $latest_folder/10-subpixel.conf
${optionalString (cfg.dpi != 0) ''
ln -s ${dpiConf} $support_folder/11-dpi.conf
ln -s ${dpiConf} $latest_folder/11-dpi.conf
''}
# 50-user.conf
${optionalString (!cfg.includeUserConf) ''
rm $support_folder/50-user.conf
rm $latest_folder/50-user.conf
''}
# 51-local.conf
rm $latest_folder/51-local.conf
substitute \
${pkgs.fontconfig-penultimate}/etc/fonts/conf.d/51-local.conf \
$latest_folder/51-local.conf \
--replace local.conf /etc/fonts/${latestVersion}/local.conf
# local.conf (indirect priority 51)
${optionalString (cfg.localConf != "") ''
ln -s ${localConf} $support_folder/../local.conf
ln -s ${localConf} $latest_folder/../local.conf
''}
# 52-nixos-default-fonts.conf
ln -s ${defaultFontsConf} $support_folder/52-nixos-default-fonts.conf
ln -s ${defaultFontsConf} $latest_folder/52-nixos-default-fonts.conf
# 53-no-bitmaps.conf
${optionalString cfg.allowBitmaps ''
rm $support_folder/53-no-bitmaps.conf
rm $latest_folder/53-no-bitmaps.conf
''}
${optionalString (!cfg.allowType1) ''
# 53-nixos-reject-type1.conf
ln -s ${rejectType1} $support_folder/53-nixos-reject-type1.conf
ln -s ${rejectType1} $latest_folder/53-nixos-reject-type1.conf
''}
'';
in
{
options = {
fonts = {
fontconfig = {
penultimate = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable fontconfig-penultimate settings to supplement the
NixOS defaults by providing per-font rendering defaults and
metric aliases.
'';
};
};
};
};
};
config = mkIf (config.fonts.fontconfig.enable && config.fonts.fontconfig.penultimate.enable) {
fonts.fontconfig.confPackages = [ penultimateConf ];
};
}


@ -190,13 +190,6 @@ let
ln -s ${pkg.out}/etc/fonts/conf.d/*.conf \
$dst/
# update 51-local.conf path to look at local.conf
rm $dst/51-local.conf
substitute ${pkg.out}/etc/fonts/conf.d/51-local.conf \
$dst/51-local.conf \
--replace local.conf /etc/fonts/${pkg.configVersion}/local.conf
# 00-nixos-cache.conf
ln -s ${cacheConf} $dst/00-nixos-cache.conf
@ -204,8 +197,10 @@ let
ln -s ${renderConf} $dst/10-nixos-rendering.conf
# 50-user.conf
${optionalString (!cfg.includeUserConf) ''
rm $dst/50-user.conf
# Since latest fontconfig looks for default files inside the package,
# we had to move this one elsewhere to be able to exclude it here.
${optionalString cfg.includeUserConf ''
ln -s ${pkg.out}/etc/fonts/conf.d.bak/50-user.conf $dst/50-user.conf
''}
# local.conf (indirect priority 51)
@ -455,7 +450,7 @@ in
environment.systemPackages = [ pkgs.fontconfig ];
environment.etc.fonts.source = "${fontconfigEtc}/etc/fonts/";
})
(mkIf (cfg.enable && !cfg.penultimate.enable) {
(mkIf cfg.enable {
fonts.fontconfig.confPackages = [ confPkg ];
})
];


@ -27,6 +27,7 @@ with lib;
fonts.fontconfig.enable = false;
nixpkgs.overlays = singleton (const (super: {
cairo = super.cairo.override { x11Support = false; };
dbus = super.dbus.override { x11Support = false; };
networkmanager-fortisslvpn = super.networkmanager-fortisslvpn.override { withGnome = false; };
networkmanager-l2tp = super.networkmanager-l2tp.override { withGnome = false; };
@ -35,6 +36,7 @@ with lib;
networkmanager-vpnc = super.networkmanager-vpnc.override { withGnome = false; };
networkmanager-iodine = super.networkmanager-iodine.override { withGnome = false; };
gobject-introspection = super.gobject-introspection.override { x11Support = false; };
qemu = super.qemu.override { gtkSupport = false; spiceSupport = false; sdlSupport = false; };
}));
};
}

View file

@ -33,14 +33,11 @@ let
pkgs.ncurses
pkgs.netcat
config.programs.ssh.package
pkgs.perl
pkgs.procps
pkgs.rsync
pkgs.strace
pkgs.su
pkgs.time
pkgs.utillinux
pkgs.which # 88K size
pkgs.which
pkgs.zstd
];

View file

@ -26,7 +26,7 @@ with lib;
####### implementation
config = mkIf config.hardware.onlykey.enable {
services.udev.extraRules = builtin.readFile ./onlykey.udev;
services.udev.extraRules = builtins.readFile ./onlykey.udev;
};

View file

@ -1,4 +1,4 @@
#! @shell@ -e
#! @runtimeShell@ -e
# Shows the usage of this command to the user

View file

@ -1,4 +1,4 @@
#! @shell@
#! @runtimeShell@
set -e

View file

@ -1,4 +1,4 @@
#! @shell@
#! @runtimeShell@
set -e
shopt -s nullglob

View file

@ -1,6 +1,6 @@
#! @shell@
#! @runtimeShell@
if [ -x "@shell@" ]; then export SHELL="@shell@"; fi;
if [ -x "@runtimeShell@" ]; then export SHELL="@runtimeShell@"; fi;
set -e
set -o pipefail

View file

@ -1,4 +1,4 @@
#! @shell@
#! @runtimeShell@
case "$1" in
-h|--help)

View file

@ -14,11 +14,13 @@ let
nixos-build-vms = makeProg {
name = "nixos-build-vms";
src = ./nixos-build-vms/nixos-build-vms.sh;
inherit (pkgs) runtimeShell;
};
nixos-install = makeProg {
name = "nixos-install";
src = ./nixos-install.sh;
inherit (pkgs) runtimeShell;
nix = config.nix.package.out;
path = makeBinPath [ nixos-enter ];
};
@ -28,6 +30,7 @@ let
makeProg {
name = "nixos-rebuild";
src = ./nixos-rebuild.sh;
inherit (pkgs) runtimeShell;
nix = config.nix.package.out;
nix_x86_64_linux = fallback.x86_64-linux;
nix_i686_linux = fallback.i686-linux;
@ -50,6 +53,7 @@ let
nixos-version = makeProg {
name = "nixos-version";
src = ./nixos-version.sh;
inherit (pkgs) runtimeShell;
inherit (config.system.nixos) version codeName revision;
inherit (config.system) configurationRevision;
json = builtins.toJSON ({
@ -64,6 +68,7 @@ let
nixos-enter = makeProg {
name = "nixos-enter";
src = ./nixos-enter.sh;
inherit (pkgs) runtimeShell;
};
in

View file

@ -198,7 +198,7 @@ in
bosun = 161;
kubernetes = 162;
peerflix = 163;
chronos = 164;
#chronos = 164; # removed 2020-08-15
gitlab = 165;
tox-bootstrapd = 166;
cadvisor = 167;
@ -247,7 +247,7 @@ in
bepasty = 215;
# pumpio = 216; # unused, removed 2018-02-24
nm-openvpn = 217;
mathics = 218;
# mathics = 218; # unused, removed 2020-08-15
ejabberd = 219;
postsrsd = 220;
opendkim = 221;
@ -321,7 +321,7 @@ in
monetdb = 290;
restic = 291;
openvpn = 292;
meguca = 293;
# meguca = 293; # removed 2020-08-21
yarn = 294;
hdfs = 295;
mapred = 296;
@ -622,7 +622,7 @@ in
monetdb = 290;
restic = 291;
openvpn = 292;
meguca = 293;
# meguca = 293; # removed 2020-08-21
yarn = 294;
hdfs = 295;
mapred = 296;

View file

@ -1,7 +1,6 @@
[
./config/debug-info.nix
./config/fonts/fontconfig.nix
./config/fonts/fontconfig-penultimate.nix
./config/fonts/fontdir.nix
./config/fonts/fonts.nix
./config/fonts/ghostscript.nix
@ -466,14 +465,11 @@
./services/misc/leaps.nix
./services/misc/lidarr.nix
./services/misc/mame.nix
./services/misc/mathics.nix
./services/misc/matrix-appservice-discord.nix
./services/misc/matrix-synapse.nix
./services/misc/mautrix-telegram.nix
./services/misc/mbpfan.nix
./services/misc/mediatomb.nix
./services/misc/mesos-master.nix
./services/misc/mesos-slave.nix
./services/misc/metabase.nix
./services/misc/mwlib.nix
./services/misc/nix-daemon.nix
@ -786,10 +782,8 @@
./services/networking/znc/default.nix
./services/printing/cupsd.nix
./services/scheduling/atd.nix
./services/scheduling/chronos.nix
./services/scheduling/cron.nix
./services/scheduling/fcron.nix
./services/scheduling/marathon.nix
./services/search/elasticsearch.nix
./services/search/elasticsearch-curator.nix
./services/search/hound.nix
@ -871,6 +865,7 @@
./services/web-apps/moinmoin.nix
./services/web-apps/restya-board.nix
./services/web-apps/sogo.nix
./services/web-apps/rss-bridge.nix
./services/web-apps/tt-rss.nix
./services/web-apps/trac.nix
./services/web-apps/trilium.nix
@ -891,7 +886,6 @@
./services/web-servers/lighttpd/collectd.nix
./services/web-servers/lighttpd/default.nix
./services/web-servers/lighttpd/gitweb.nix
./services/web-servers/meguca.nix
./services/web-servers/mighttpd2.nix
./services/web-servers/minio.nix
./services/web-servers/molly-brown.nix

View file

@ -26,6 +26,7 @@
pkgs.fuse
pkgs.fuse3
pkgs.sshfs-fuse
pkgs.rsync
pkgs.socat
pkgs.screen

View file

@ -17,8 +17,12 @@ with lib;
(mkAliasOptionModule [ "environment" "checkConfigurationOptions" ] [ "_module" "check" ])
# Completely removed modules
(mkRemovedOptionModule [ "fonts" "fontconfig" "penultimate" ] "The corresponding package has been removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "chronos" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "firefox" "syncserver" "user" ] "")
(mkRemovedOptionModule [ "services" "firefox" "syncserver" "group" ] "")
(mkRemovedOptionModule [ "services" "marathon" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "mesos" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "winstone" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "networking" "vpnc" ] "Use environment.etc.\"vpnc/service.conf\" instead.")
(mkRemovedOptionModule [ "environment" "blcr" "enable" ] "The BLCR module has been removed")
@ -28,6 +32,7 @@ with lib;
(mkRemovedOptionModule [ "services" "osquery" ] "The osquery module has been removed")
(mkRemovedOptionModule [ "services" "fourStore" ] "The fourStore module has been removed")
(mkRemovedOptionModule [ "services" "fourStoreEndpoint" ] "The fourStoreEndpoint module has been removed")
(mkRemovedOptionModule [ "services" "mathics" ] "The Mathics module has been removed")
(mkRemovedOptionModule [ "programs" "way-cooler" ] ("way-cooler is abandoned by its author: " +
"https://way-cooler.org/blog/2020/01/09/way-cooler-post-mortem.html"))
(mkRemovedOptionModule [ "services" "xserver" "multitouch" ] ''
@ -43,6 +48,7 @@ with lib;
instead, or any other display manager in NixOS as they all support auto-login.
'')
(mkRemovedOptionModule [ "services" "dnscrypt-proxy" ] "Use services.dnscrypt-proxy2 instead")
(mkRemovedOptionModule [ "services" "meguca" ] "The meguca module has been removed from nixpkgs.")
(mkRemovedOptionModule ["hardware" "brightnessctl" ] ''
The brightnessctl module was removed because newer versions of
brightnessctl don't require the udev rules anymore (they can use the

View file

@ -150,6 +150,14 @@ let
'';
};
extraLegoFlags = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Additional global flags to pass to all lego commands.
'';
};
extraLegoRenewFlags = mkOption {
type = types.listOf types.str;
default = [];
@ -157,6 +165,14 @@ let
Additional flags to pass to lego renew.
'';
};
extraLegoRunFlags = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Additional flags to pass to lego run.
'';
};
};
};
@ -313,9 +329,10 @@ in
++ optionals (data.dnsProvider != null && !data.dnsPropagationCheck) [ "--dns.disable-cp" ]
++ concatLists (mapAttrsToList (name: root: [ "-d" name ]) data.extraDomains)
++ (if data.dnsProvider != null then [ "--dns" data.dnsProvider ] else [ "--http" "--http.webroot" data.webroot ])
++ optionals (cfg.server != null || data.server != null) ["--server" (if data.server == null then cfg.server else data.server)];
++ optionals (cfg.server != null || data.server != null) ["--server" (if data.server == null then cfg.server else data.server)]
++ data.extraLegoFlags;
certOpts = optionals data.ocspMustStaple [ "--must-staple" ];
runOpts = escapeShellArgs (globalOpts ++ [ "run" ] ++ certOpts);
runOpts = escapeShellArgs (globalOpts ++ [ "run" ] ++ certOpts ++ data.extraLegoRunFlags);
renewOpts = escapeShellArgs (globalOpts ++
[ "renew" "--days" (toString cfg.validMinDays) ] ++
certOpts ++ data.extraLegoRenewFlags);
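The hunk above adds three per-certificate options — `extraLegoFlags`, `extraLegoRunFlags` and `extraLegoRenewFlags` — and threads them into the generated lego command lines. A brief sketch of how a certificate might use them; the domain, provider and flag value are illustrative and assume a lego build that accepts `--dns.resolvers`:

```
security.acme.certs."example.org" = {
  dnsProvider = "cloudflare";
  credentialsFile = "/var/lib/secrets/cloudflare.env";
  # appended to every lego invocation for this certificate
  extraLegoFlags = [ "--dns.resolvers" "ns1.example.org:53" ];
};
```

`extraLegoRunFlags` and `extraLegoRenewFlags` take the same list-of-strings form but are appended only to `lego run` and `lego renew` respectively.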

View file

@ -160,8 +160,11 @@ in
config = {
security.wrappers = {
# These are mount related wrappers that require the +s permission.
fusermount.source = "${pkgs.fuse}/bin/fusermount";
fusermount3.source = "${pkgs.fuse3}/bin/fusermount3";
mount.source = "${lib.getBin pkgs.utillinux}/bin/mount";
umount.source = "${lib.getBin pkgs.utillinux}/bin/umount";
};
boot.specialFileSystems.${parentWrapperDir} = {

View file

@ -225,14 +225,15 @@ in
Contents of the <filename>recovery.conf</filename> file.
'';
};
superUser = mkOption {
type = types.str;
default= if versionAtLeast config.system.stateVersion "17.09" then "postgres" else "root";
default = "postgres";
internal = true;
readOnly = true;
description = ''
NixOS traditionally used 'root' as superuser, most other distros use 'postgres'.
From 17.09 we also try to follow this standard. Internal since changing this value
would lead to breakage while setting up databases.
PostgreSQL superuser account to use for various operations. Internal since changing
this value would lead to breakage while setting up databases.
'';
};
};
@ -310,6 +311,35 @@ in
''}
'';
# Wait for PostgreSQL to be ready to accept connections.
postStart =
''
PSQL="psql --port=${toString cfg.port}"
while ! $PSQL -d postgres -c "" 2> /dev/null; do
if ! kill -0 "$MAINPID"; then exit 1; fi
sleep 0.1
done
if test -e "${cfg.dataDir}/.first_startup"; then
${optionalString (cfg.initialScript != null) ''
$PSQL -f "${cfg.initialScript}" -d postgres
''}
rm -f "${cfg.dataDir}/.first_startup"
fi
'' + optionalString (cfg.ensureDatabases != []) ''
${concatMapStrings (database: ''
$PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${database}'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "${database}"'
'') cfg.ensureDatabases}
'' + ''
${concatMapStrings (user: ''
$PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='${user.name}'" | grep -q 1 || $PSQL -tAc 'CREATE USER "${user.name}"'
${concatStringsSep "\n" (mapAttrsToList (database: permission: ''
$PSQL -tAc 'GRANT ${permission} ON ${database} TO "${user.name}"'
'') user.ensurePermissions)}
'') cfg.ensureUsers}
'';
serviceConfig = mkMerge [
{ ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = "postgres";
@ -329,40 +359,6 @@ in
TimeoutSec = 120;
ExecStart = "${postgresql}/bin/postgres";
# Wait for PostgreSQL to be ready to accept connections.
ExecStartPost =
let
setupScript = pkgs.writeScript "postgresql-setup" (''
#!${pkgs.runtimeShell} -e
PSQL="${pkgs.utillinux}/bin/runuser -u ${cfg.superUser} -- psql --port=${toString cfg.port}"
while ! $PSQL -d postgres -c "" 2> /dev/null; do
if ! kill -0 "$MAINPID"; then exit 1; fi
sleep 0.1
done
if test -e "${cfg.dataDir}/.first_startup"; then
${optionalString (cfg.initialScript != null) ''
$PSQL -f "${cfg.initialScript}" -d postgres
''}
rm -f "${cfg.dataDir}/.first_startup"
fi
'' + optionalString (cfg.ensureDatabases != []) ''
${concatMapStrings (database: ''
$PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${database}'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "${database}"'
'') cfg.ensureDatabases}
'' + ''
${concatMapStrings (user: ''
$PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='${user.name}'" | grep -q 1 || $PSQL -tAc 'CREATE USER "${user.name}"'
${concatStringsSep "\n" (mapAttrsToList (database: permission: ''
$PSQL -tAc 'GRANT ${permission} ON ${database} TO "${user.name}"'
'') user.ensurePermissions)}
'') cfg.ensureUsers}
'');
in
"+${setupScript}";
}
(mkIf (cfg.dataDir == "/var/lib/postgresql/${cfg.package.psqlSchema}") {
StateDirectory = "postgresql postgresql/${cfg.package.psqlSchema}";
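The provisioning logic moves from an `ExecStartPost` wrapper (removed above) into a plain `postStart` script that runs as the `postgres` service user; the declarative interface it serves is unchanged. A sketch of the configuration it acts on, with illustrative database and role names:

```
services.postgresql = {
  enable = true;
  ensureDatabases = [ "myapp" ];
  ensureUsers = [
    {
      name = "myapp";
      # applied on every start, once the server accepts connections
      ensurePermissions = { "DATABASE myapp" = "ALL PRIVILEGES"; };
    }
  ];
};
```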

View file

@ -15,26 +15,27 @@ let
fi
'';
desktopApplicationFile = pkgs.writeTextFile {
name = "emacsclient.desktop";
destination = "/share/applications/emacsclient.desktop";
text = ''
[Desktop Entry]
Name=Emacsclient
GenericName=Text Editor
Comment=Edit text
MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
Exec=emacseditor %F
Icon=emacs
Type=Application
Terminal=false
Categories=Development;TextEditor;
StartupWMClass=Emacs
Keywords=Text;Editor;
'';
};
desktopApplicationFile = pkgs.writeTextFile {
name = "emacsclient.desktop";
destination = "/share/applications/emacsclient.desktop";
text = ''
[Desktop Entry]
Name=Emacsclient
GenericName=Text Editor
Comment=Edit text
MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
Exec=emacseditor %F
Icon=emacs
Type=Application
Terminal=false
Categories=Development;TextEditor;
StartupWMClass=Emacs
Keywords=Text;Editor;
'';
};
in {
in
{
options.services.emacs = {
enable = mkOption {
@ -86,10 +87,10 @@ in {
description = "Emacs: the extensible, self-documenting text editor";
serviceConfig = {
Type = "forking";
Type = "forking";
ExecStart = "${pkgs.bash}/bin/bash -c 'source ${config.system.build.setEnvironment}; exec ${cfg.package}/bin/emacs --daemon'";
ExecStop = "${cfg.package}/bin/emacsclient --eval (kill-emacs)";
Restart = "always";
ExecStop = "${cfg.package}/bin/emacsclient --eval (kill-emacs)";
Restart = "always";
};
} // optionalAttrs cfg.enable { wantedBy = [ "default.target" ]; };

View file

@ -53,11 +53,11 @@
<varname>emacs</varname>
</term>
<term>
<varname>emacs25</varname>
<varname>emacs</varname>
</term>
<listitem>
<para>
The latest stable version of Emacs 25 using the
The latest stable version of Emacs using the
<link
xlink:href="http://www.gtk.org">GTK 2</link>
widget toolkit.
@ -66,11 +66,11 @@
</varlistentry>
<varlistentry>
<term>
<varname>emacs25-nox</varname>
<varname>emacs-nox</varname>
</term>
<listitem>
<para>
Emacs 25 built without any dependency on X11 libraries.
Emacs built without any dependency on X11 libraries.
</para>
</listitem>
</varlistentry>
@ -79,11 +79,11 @@
<varname>emacsMacport</varname>
</term>
<term>
<varname>emacs25Macport</varname>
<varname>emacsMacport</varname>
</term>
<listitem>
<para>
Emacs 25 with the "Mac port" patches, providing a more native look and
Emacs with the "Mac port" patches, providing a more native look and
feel under macOS.
</para>
</listitem>

View file

@ -103,6 +103,17 @@ in
The temperature target on battery power in Celsius degrees.
'';
};
useTimer = mkOption {
type = types.bool;
default = false;
description = ''
Whether to set a timer that applies the undervolt settings every 30s.
This will cause spam in the journal but might be required for some
hardware under specific conditions.
Enable this if your undervolt settings don't hold.
'';
};
};
config = mkIf cfg.enable {
@ -114,6 +125,11 @@ in
path = [ pkgs.undervolt ];
description = "Intel Undervolting Service";
# Apply undervolt on boot, nixos generation switch and resume
wantedBy = [ "multi-user.target" "post-resume.target" ];
after = [ "post-resume.target" ]; # Not sure why but it won't work without this
serviceConfig = {
Type = "oneshot";
Restart = "no";
@ -121,7 +137,7 @@ in
};
};
systemd.timers.undervolt = {
systemd.timers.undervolt = mkIf cfg.useTimer {
description = "Undervolt timer to ensure voltage settings are always applied";
partOf = [ "undervolt.service" ];
wantedBy = [ "multi-user.target" ];
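With the timer gated behind the new `useTimer` switch, hosts whose offsets get reset (for example around suspend/resume) can opt back into the periodic re-apply. A sketch with purely illustrative values:

```
services.undervolt = {
  enable = true;
  coreOffset = -50;   # illustrative; safe values depend entirely on the CPU
  useTimer = true;    # re-apply the settings every 30 seconds
};
```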

View file

@ -5,54 +5,93 @@ with lib;
let
cfg = config.services.logrotate;
pathOptions = {
pathOpts = {
options = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Whether to enable log rotation for this path. This can be used to explicitly disable
logging that has been configured by NixOS.
'';
};
path = mkOption {
type = types.str;
description = "The path to log files to be rotated";
description = ''
The path to log files to be rotated.
'';
};
user = mkOption {
type = types.str;
description = "The user account to use for rotation";
type = with types; nullOr str;
default = null;
description = ''
The user account to use for rotation.
'';
};
group = mkOption {
type = types.str;
description = "The group to use for rotation";
type = with types; nullOr str;
default = null;
description = ''
The group to use for rotation.
'';
};
frequency = mkOption {
type = types.enum [
"daily" "weekly" "monthly" "yearly"
];
type = types.enum [ "daily" "weekly" "monthly" "yearly" ];
default = "daily";
description = "How often to rotate the logs";
description = ''
How often to rotate the logs.
'';
};
keep = mkOption {
type = types.int;
default = 20;
description = "How many rotations to keep";
description = ''
How many rotations to keep.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Extra logrotate config options for this path";
description = ''
Extra logrotate config options for this path. Refer to
<link xlink:href="https://linux.die.net/man/8/logrotate"/> for details.
'';
};
priority = mkOption {
type = types.int;
default = 1000;
description = ''
Order of this logrotate block in relation to the others. The semantics are
the same as with `lib.mkOrder`. Smaller values have a greater priority.
'';
};
};
};
pathConfig = options: ''
"${options.path}" {
su ${options.user} ${options.group}
${options.frequency}
config.extraConfig = ''
missingok
notifempty
rotate ${toString options.keep}
${options.extraConfig}
'';
};
mkConf = pathOpts: ''
# generated by NixOS using the `services.logrotate.paths.${pathOpts.name}` attribute set
"${pathOpts.path}" {
${optionalString (pathOpts.user != null || pathOpts.group != null) "su ${pathOpts.user} ${pathOpts.group}"}
${pathOpts.frequency}
rotate ${toString pathOpts.keep}
${pathOpts.extraConfig}
}
'';
configFile = pkgs.writeText "logrotate.conf" (
(concatStringsSep "\n" ((map pathConfig cfg.paths) ++ [cfg.extraConfig]))
);
paths = sortProperties (mapAttrsToList (name: pathOpts: pathOpts // { name = name; }) (filterAttrs (_: pathOpts: pathOpts.enable) cfg.paths));
configFile = pkgs.writeText "logrotate.conf" (concatStringsSep "\n" ((map mkConf paths) ++ [ cfg.extraConfig ]));
in
{
@ -65,41 +104,66 @@ in
enable = mkEnableOption "the logrotate systemd service";
paths = mkOption {
type = types.listOf (types.submodule pathOptions);
default = [];
description = "List of attribute sets with paths to rotate";
example = {
"/var/log/myapp/*.log" = {
user = "myuser";
group = "mygroup";
rotate = "weekly";
keep = 5;
};
};
type = with types; attrsOf (submodule pathOpts);
default = {};
description = ''
Attribute set of paths to rotate. The order each block appears in the generated configuration file
can be controlled by the <link linkend="opt-services.logrotate.paths._name_.priority">priority</link> option
using the same semantics as `lib.mkOrder`. Smaller values have a greater priority.
'';
example = literalExample ''
{
httpd = {
path = "/var/log/httpd/*.log";
user = config.services.httpd.user;
group = config.services.httpd.group;
keep = 7;
};
myapp = {
path = "/var/log/myapp/*.log";
user = "myuser";
group = "mygroup";
frequency = "weekly";
keep = 5;
priority = 1;
};
}
'';
};
extraConfig = mkOption {
default = "";
type = types.lines;
description = ''
Extra contents to add to the logrotate config file.
See https://linux.die.net/man/8/logrotate
Extra contents to append to the logrotate configuration file. Refer to
<link xlink:href="https://linux.die.net/man/8/logrotate"/> for details.
'';
};
};
};
config = mkIf cfg.enable {
systemd.services.logrotate = {
description = "Logrotate Service";
wantedBy = [ "multi-user.target" ];
startAt = "*-*-* *:05:00";
assertions = mapAttrsToList (name: pathOpts:
{ assertion = (pathOpts.user != null) == (pathOpts.group != null);
message = ''
If either of `services.logrotate.paths.${name}.user` or `services.logrotate.paths.${name}.group` are specified then *both* must be specified.
'';
}
) cfg.paths;
serviceConfig.Restart = "no";
serviceConfig.User = "root";
systemd.services.logrotate = {
description = "Logrotate Service";
wantedBy = [ "multi-user.target" ];
startAt = "*-*-* *:05:00";
script = ''
exec ${pkgs.logrotate}/sbin/logrotate ${configFile}
'';
serviceConfig = {
Restart = "no";
User = "root";
};
};
};
}
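Because `paths` is now an attribute set whose entries carry `enable` and `priority`, a rotation block defined by another module (such as the `httpd` block added later in this commit) can be adjusted or disabled by name. A brief sketch, assuming the attribute name `httpd`:

```
# drop the block that the httpd module generates
services.logrotate.paths.httpd.enable = false;
```

Setting `services.logrotate.paths.httpd.priority` instead would keep the block but reorder it relative to the others.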

View file

@ -4,13 +4,9 @@ with lib;
let
cfg = config.services.logstash;
pluginPath = lib.concatStringsSep ":" cfg.plugins;
havePluginPath = lib.length cfg.plugins > 0;
ops = lib.optionalString;
verbosityFlag = "--log.level " + cfg.logLevel;
pluginsPath = "--path.plugins ${pluginPath}";
logstashConf = pkgs.writeText "logstash.conf" ''
input {
${cfg.inputConfig}
@ -173,7 +169,7 @@ in
ExecStart = concatStringsSep " " (filter (s: stringLength s != 0) [
"${cfg.package}/bin/logstash"
"-w ${toString cfg.filterWorkers}"
(ops havePluginPath pluginsPath)
(concatMapStringsSep " " (x: "--path.plugins ${x}") cfg.plugins)
"${verbosityFlag}"
"-f ${logstashConf}"
"--path.settings ${logstashSettingsDir}"

View file

@ -1,4 +1,4 @@
{ config, lib, pkgs, ... }:
{ options, config, lib, pkgs, ... }:
with lib;
@ -83,11 +83,11 @@ let
)
(
optionalString (cfg.mailboxes != []) ''
optionalString (cfg.mailboxes != {}) ''
protocol imap {
namespace inbox {
inbox=yes
${concatStringsSep "\n" (map mailboxConfig cfg.mailboxes)}
${concatStringsSep "\n" (map mailboxConfig (attrValues cfg.mailboxes))}
}
}
''
@ -131,12 +131,13 @@ let
special_use = \${toString mailbox.specialUse}
'' + "}";
mailboxes = { ... }: {
mailboxes = { name, ... }: {
options = {
name = mkOption {
type = types.nullOr (types.strMatching ''[^"]+'');
type = types.strMatching ''[^"]+'';
example = "Spam";
default = null;
default = name;
readOnly = true;
description = "The name of the mailbox.";
};
auto = mkOption {
@ -335,19 +336,11 @@ in
};
mailboxes = mkOption {
type = with types; let m = submodule mailboxes; in either (listOf m) (attrsOf m);
type = with types; coercedTo
(listOf unspecified)
(list: listToAttrs (map (entry: { name = entry.name; value = removeAttrs entry ["name"]; }) list))
(attrsOf (submodule mailboxes));
default = {};
apply = x:
if isList x then warn "Declaring `services.dovecot2.mailboxes' as a list is deprecated and will break eval in 21.03!" x
else mapAttrsToList (name: value:
if value.name != null
then throw ''
When specifying dovecot2 mailboxes as attributes, declaring
a `name'-attribute is prohibited! The name ${value.name} should
be the attribute key!
''
else value // { inherit name; }
) x;
example = literalExample ''
{
Spam = { specialUse = "Junk"; auto = "create"; };
@ -471,6 +464,10 @@ in
environment.systemPackages = [ dovecotPkg ];
warnings = mkIf (any isList options.services.dovecot2.mailboxes.definitions) [
"Declaring `services.dovecot2.mailboxes' as a list is deprecated and will break eval in 21.03! See the release notes for migration details."
];
assertions = [
{
assertion = intersectLists cfg.protocols [ "pop3" "imap" ] != [];
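`services.dovecot2.mailboxes` becomes an attribute set keyed by mailbox name; the old list form is only accepted through the `coercedTo` shim and triggers the deprecation warning above. A sketch of the new syntax (mailbox names illustrative):

```
services.dovecot2.mailboxes = {
  Spam  = { specialUse = "Junk";  auto = "create"; };
  Sent  = { specialUse = "Sent";  auto = "subscribe"; };
  Trash = { specialUse = "Trash"; };
};
```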

View file

@ -54,7 +54,7 @@ let
'') gitlabConfig.production.repositories.storages))}
'';
gitlabShellConfig = {
gitlabShellConfig = flip recursiveUpdate cfg.extraShellConfig {
user = cfg.user;
gitlab_url = "http+unix://${pathUrlQuote gitlabSocket}";
http_settings.self_signed_cert = false;
@ -517,6 +517,12 @@ in {
'';
};
extraShellConfig = mkOption {
type = types.attrs;
default = {};
description = "Extra configuration to merge into shell-config.yml";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
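A sketch of the new `extraShellConfig` option, which is merged recursively on top of the generated shell-config.yml; the key shown is an assumption about gitlab-shell's configuration format rather than something taken from this commit:

```
services.gitlab.extraShellConfig = {
  log_level = "DEBUG";
};
```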

View file

@ -50,6 +50,12 @@ in
description = "Parse and interpret emoji tags";
};
h1-title = mkOption {
type = types.bool;
default = false;
description = "Use the first h1 as page title";
};
branch = mkOption {
type = types.str;
default = "master";
@ -102,6 +108,7 @@ in
--ref ${cfg.branch} \
${optionalString cfg.mathjax "--mathjax"} \
${optionalString cfg.emoji "--emoji"} \
${optionalString cfg.h1-title "--h1-title"} \
${optionalString (cfg.allowUploads != null) "--allow-uploads ${cfg.allowUploads}"} \
${cfg.stateDir}
'';

View file

@ -16,6 +16,14 @@ in
description = "User account under which Jellyfin runs.";
};
package = mkOption {
type = types.package;
example = literalExample "pkgs.jellyfin";
description = ''
Jellyfin package to use.
'';
};
group = mkOption {
type = types.str;
default = "jellyfin";
@ -35,11 +43,16 @@ in
Group = cfg.group;
StateDirectory = "jellyfin";
CacheDirectory = "jellyfin";
ExecStart = "${pkgs.jellyfin}/bin/jellyfin --datadir '/var/lib/${StateDirectory}' --cachedir '/var/cache/${CacheDirectory}'";
ExecStart = "${cfg.package}/bin/jellyfin --datadir '/var/lib/${StateDirectory}' --cachedir '/var/cache/${CacheDirectory}'";
Restart = "on-failure";
};
};
services.jellyfin.package = mkDefault (
if versionAtLeast config.system.stateVersion "20.09" then pkgs.jellyfin
else pkgs.jellyfin_10_5
);
users.users = mkIf (cfg.user == "jellyfin") {
jellyfin = {
group = cfg.group;
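The new `package` option defaults via `system.stateVersion`: systems at 20.09 or newer get `pkgs.jellyfin`, older ones stay on `pkgs.jellyfin_10_5`. An older system can still opt into the current release explicitly:

```
services.jellyfin = {
  enable = true;
  package = pkgs.jellyfin;   # override the stateVersion-based default
};
```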

View file

@ -1,54 +0,0 @@
{ pkgs, lib, config, ... }:
with lib;
let
cfg = config.services.mathics;
in {
options = {
services.mathics = {
enable = mkEnableOption "Mathics notebook service";
external = mkOption {
type = types.bool;
default = false;
description = "Listen on all interfaces, rather than just localhost?";
};
port = mkOption {
type = types.int;
default = 8000;
description = "TCP port to listen on.";
};
};
};
config = mkIf cfg.enable {
users.users.mathics = {
group = config.users.groups.mathics.name;
description = "Mathics user";
home = "/var/lib/mathics";
createHome = true;
uid = config.ids.uids.mathics;
};
users.groups.mathics.gid = config.ids.gids.mathics;
systemd.services.mathics = {
description = "Mathics notebook server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
User = config.users.users.mathics.name;
Group = config.users.groups.mathics.name;
ExecStart = concatStringsSep " " [
"${pkgs.mathics}/bin/mathicsserver"
"--port" (toString cfg.port)
(if cfg.external then "--external" else "")
];
};
};
};
}

View file

@ -1,125 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mesos.master;
in {
options.services.mesos = {
master = {
enable = mkOption {
description = "Whether to enable the Mesos Master.";
default = false;
type = types.bool;
};
ip = mkOption {
description = "IP address to listen on.";
default = "0.0.0.0";
type = types.str;
};
port = mkOption {
description = "Mesos Master port";
default = 5050;
type = types.int;
};
advertiseIp = mkOption {
description = "IP address advertised to reach this master.";
default = null;
type = types.nullOr types.str;
};
advertisePort = mkOption {
description = "Port advertised to reach this Mesos master.";
default = null;
type = types.nullOr types.int;
};
zk = mkOption {
description = ''
ZooKeeper URL (used for leader election amongst masters).
May be one of:
zk://host1:port1,host2:port2,.../mesos
zk://username:password@host1:port1,host2:port2,.../mesos
'';
type = types.str;
};
workDir = mkOption {
description = "The Mesos work directory.";
default = "/var/lib/mesos/master";
type = types.str;
};
extraCmdLineOptions = mkOption {
description = ''
Extra command line options for Mesos Master.
See https://mesos.apache.org/documentation/latest/configuration/
'';
default = [ "" ];
type = types.listOf types.str;
example = [ "--credentials=VALUE" ];
};
quorum = mkOption {
description = ''
The size of the quorum of replicas when using 'replicated_log' based
registry. It is imperative to set this value to be a majority of
masters i.e., quorum > (number of masters)/2.
If 0 will fall back to --registry=in_memory.
'';
default = 0;
type = types.int;
};
logLevel = mkOption {
description = ''
The logging level used. Possible values:
'INFO', 'WARNING', 'ERROR'
'';
default = "INFO";
type = types.str;
};
};
};
config = mkIf cfg.enable {
systemd.tmpfiles.rules = [
"d '${cfg.workDir}' 0700 - - - -"
];
systemd.services.mesos-master = {
description = "Mesos Master";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
ExecStart = ''
${pkgs.mesos}/bin/mesos-master \
--ip=${cfg.ip} \
--port=${toString cfg.port} \
${optionalString (cfg.advertiseIp != null) "--advertise_ip=${cfg.advertiseIp}"} \
${optionalString (cfg.advertisePort != null) "--advertise_port=${toString cfg.advertisePort}"} \
${if cfg.quorum == 0
then "--registry=in_memory"
else "--zk=${cfg.zk} --registry=replicated_log --quorum=${toString cfg.quorum}"} \
--work_dir=${cfg.workDir} \
--logging_level=${cfg.logLevel} \
${toString cfg.extraCmdLineOptions}
'';
Restart = "on-failure";
};
};
};
}

View file

@ -1,220 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mesos.slave;
mkAttributes =
attrs: concatStringsSep ";" (mapAttrsToList
(k: v: "${k}:${v}")
(filterAttrs (k: v: v != null) attrs));
attribsArg = optionalString (cfg.attributes != {})
"--attributes=${mkAttributes cfg.attributes}";
containerizersArg = concatStringsSep "," (
lib.unique (
cfg.containerizers ++ (optional cfg.withDocker "docker")
)
);
imageProvidersArg = concatStringsSep "," (
lib.unique (
cfg.imageProviders ++ (optional cfg.withDocker "docker")
)
);
isolationArg = concatStringsSep "," (
lib.unique (
cfg.isolation ++ (optionals cfg.withDocker [ "filesystem/linux" "docker/runtime"])
)
);
in {
options.services.mesos = {
slave = {
enable = mkOption {
description = "Whether to enable the Mesos Slave.";
default = false;
type = types.bool;
};
ip = mkOption {
description = "IP address to listen on.";
default = "0.0.0.0";
type = types.str;
};
port = mkOption {
description = "Port to listen on.";
default = 5051;
type = types.int;
};
advertiseIp = mkOption {
description = "IP address advertised to reach this agent.";
default = null;
type = types.nullOr types.str;
};
advertisePort = mkOption {
description = "Port advertised to reach this agent.";
default = null;
type = types.nullOr types.int;
};
containerizers = mkOption {
description = ''
List of containerizer implementations to compose in order to provide
containerization. Available options are mesos and docker.
The order the containerizers are specified is the order they are tried.
'';
default = [ "mesos" ];
type = types.listOf types.str;
};
imageProviders = mkOption {
description = "List of supported image providers, e.g., APPC,DOCKER.";
default = [ ];
type = types.listOf types.str;
};
imageProvisionerBackend = mkOption {
description = ''
Strategy for provisioning container rootfs from images,
e.g., aufs, bind, copy, overlay.
'';
default = "copy";
type = types.str;
};
isolation = mkOption {
description = ''
Isolation mechanisms to use, e.g., posix/cpu,posix/mem, or
cgroups/cpu,cgroups/mem, or network/port_mapping, or `gpu/nvidia` for nvidia
specific gpu isolation.
'';
default = [ "posix/cpu" "posix/mem" ];
type = types.listOf types.str;
};
master = mkOption {
description = ''
May be one of:
zk://host1:port1,host2:port2,.../path
zk://username:password@host1:port1,host2:port2,.../path
'';
type = types.str;
};
withHadoop = mkOption {
description = "Add the HADOOP_HOME to the slave.";
default = false;
type = types.bool;
};
withDocker = mkOption {
description = "Enable the docker containerizer.";
default = config.virtualisation.docker.enable;
type = types.bool;
};
dockerRegistry = mkOption {
description = ''
The default url for pulling Docker images.
It could either be a Docker registry server url,
or a local path in which Docker image archives are stored.
'';
default = null;
type = types.nullOr (types.either types.str types.path);
};
workDir = mkOption {
description = "The Mesos work directory.";
default = "/var/lib/mesos/slave";
type = types.str;
};
extraCmdLineOptions = mkOption {
description = ''
Extra command line options for Mesos Slave.
See https://mesos.apache.org/documentation/latest/configuration/
'';
default = [ "" ];
type = types.listOf types.str;
example = [ "--gc_delay=3days" ];
};
logLevel = mkOption {
description = ''
The logging level used. Possible values:
'INFO', 'WARNING', 'ERROR'
'';
default = "INFO";
type = types.str;
};
attributes = mkOption {
description = ''
Machine attributes for the slave instance.
Use caution when changing this; you may need to manually reset slave
metadata before the slave can re-register.
'';
default = {};
type = types.attrsOf types.str;
example = { rack = "aa";
host = "aabc123";
os = "nixos"; };
};
executorEnvironmentVariables = mkOption {
description = ''
The environment variables that should be passed to the executor, and thus subsequently task(s).
'';
default = {
PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin";
};
type = types.attrsOf types.str;
};
};
};
config = mkIf cfg.enable {
systemd.tmpfiles.rules = [
"d '${cfg.workDir}' 0701 - - - -"
];
systemd.services.mesos-slave = {
description = "Mesos Slave";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ] ++ optionals cfg.withDocker [ "docker.service" ] ;
path = [ pkgs.runtimeShellPackage ];
serviceConfig = {
ExecStart = ''
${pkgs.mesos}/bin/mesos-slave \
--containerizers=${containerizersArg} \
--image_providers=${imageProvidersArg} \
--image_provisioner_backend=${cfg.imageProvisionerBackend} \
--isolation=${isolationArg} \
--ip=${cfg.ip} \
--port=${toString cfg.port} \
${optionalString (cfg.advertiseIp != null) "--advertise_ip=${cfg.advertiseIp}"} \
${optionalString (cfg.advertisePort != null) "--advertise_port=${toString cfg.advertisePort}"} \
--master=${cfg.master} \
--work_dir=${cfg.workDir} \
--logging_level=${cfg.logLevel} \
${attribsArg} \
${optionalString cfg.withHadoop "--hadoop-home=${pkgs.hadoop}"} \
${optionalString cfg.withDocker "--docker=${pkgs.docker}/libexec/docker/docker"} \
${optionalString (cfg.dockerRegistry != null) "--docker_registry=${cfg.dockerRegistry}"} \
--executor_environment_variables=${lib.escapeShellArg (builtins.toJSON cfg.executorEnvironmentVariables)} \
${toString cfg.extraCmdLineOptions}
'';
};
};
};
}

View file

@ -29,13 +29,15 @@ in {
config = mkIf cfg.enable {
systemd.services.ssm-agent = {
users.extraUsers.ssm-user = {};
inherit (cfg.package.meta) description;
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [ fake-lsb-release ];
path = [ fake-lsb-release pkgs.coreutils ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/agent";
ExecStart = "${cfg.package}/bin/amazon-ssm-agent";
KillMode = "process";
Restart = "on-failure";
RestartSec = "15min";

View file

@ -4,19 +4,29 @@ with lib;
let
cfg = config.services.monit;
extraConfig = pkgs.writeText "monitConfig" cfg.extraConfig;
in
{
imports = [
(mkRenamedOptionModule [ "services" "monit" "config" ] ["services" "monit" "extraConfig" ])
];
options.services.monit = {
enable = mkEnableOption "Monit";
config = mkOption {
type = types.lines;
default = "";
description = "monitrc content";
configFiles = mkOption {
type = types.listOf types.path;
default = [];
description = "List of paths to be included in the monitrc file";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Additional monit config as string";
};
};
config = mkIf cfg.enable {
@ -24,7 +34,7 @@ in
environment.systemPackages = [ pkgs.monit ];
environment.etc.monitrc = {
text = cfg.config;
text = concatMapStringsSep "\n" (path: "include ${path}") (cfg.configFiles ++ [extraConfig]);
mode = "0400";
};
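The former `config` string is renamed to `extraConfig` (the `mkRenamedOptionModule` keeps old configurations evaluating) and a `configFiles` list is added; both end up as `include` lines in the generated monitrc. A sketch with illustrative paths:

```
services.monit = {
  enable = true;
  configFiles = [ "/etc/monit.d/nginx" ];
  extraConfig = ''
    set daemon 60
  '';
};
```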

View file

@ -46,7 +46,7 @@ let
cmdlineArgs = cfg.extraFlags ++ [
"--storage.tsdb.path=${workingDir}/data/"
"--config.file=${prometheusYml}"
"--web.listen-address=${cfg.listenAddress}"
"--web.listen-address=${cfg.listenAddress}:${builtins.toString cfg.port}"
"--alertmanager.notification-queue-capacity=${toString cfg.alertmanagerNotificationQueueCapacity}"
"--alertmanager.timeout=${toString cfg.alertmanagerTimeout}s"
] ++
@ -489,9 +489,17 @@ in {
'';
};
port = mkOption {
type = types.port;
default = 9090;
description = ''
Port to listen on.
'';
};
listenAddress = mkOption {
type = types.str;
default = "0.0.0.0:9090";
default = "0.0.0.0";
description = ''
Address to listen on for the web interface, API, and telemetry.
'';
@ -619,6 +627,21 @@ in {
};
config = mkIf cfg.enable {
assertions = [
( let
legacy = builtins.match "(.*):(.*)" cfg.listenAddress;
in {
assertion = legacy == null;
message = ''
Do not specify the port for Prometheus to listen on in the
listenAddress option; use the port option instead:
services.prometheus.listenAddress = ${builtins.elemAt legacy 0};
services.prometheus.port = ${builtins.elemAt legacy 1};
'';
}
)
];
users.groups.prometheus.gid = config.ids.gids.prometheus;
users.users.prometheus = {
description = "Prometheus daemon user";
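`listenAddress` no longer carries the port; a `host:port` value now trips the new assertion. Migrating an existing configuration means splitting the two:

```
services.prometheus = {
  enable = true;
  # before: listenAddress = "0.0.0.0:9090";
  listenAddress = "0.0.0.0";
  port = 9090;
};
```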

View file

@ -20,7 +20,7 @@ let
${pkgs.coreutils}/bin/cat << EOF
From: smartd on ${host} <${nm.sender}>
To: undisclosed-recipients:;
Subject: SMART error on $SMARTD_DEVICESTRING: $SMARTD_FAILTYPE
Subject: $SMARTD_SUBJECT
$SMARTD_FULLMESSAGE
EOF
@ -239,11 +239,7 @@ in
systemd.services.smartd = {
description = "S.M.A.R.T. Daemon";
wantedBy = [ "multi-user.target" ];
path = [ pkgs.nettools ]; # for hostname and dnsdomainname calls in smartd
serviceConfig.ExecStart = "${pkgs.smartmontools}/sbin/smartd ${lib.concatStringsSep " " cfg.extraOptions} --no-fork --configfile=${smartdConf}";
};

View file

@ -5,8 +5,8 @@ let
pgsql = config.services.postgresql;
mysql = config.services.mysql;
inherit (lib) mkDefault mkEnableOption mkIf mkMerge mkOption;
inherit (lib) attrValues concatMapStringsSep literalExample optional optionalAttrs optionalString types;
inherit (lib) mkAfter mkDefault mkEnableOption mkIf mkMerge mkOption;
inherit (lib) attrValues concatMapStringsSep getName literalExample optional optionalAttrs optionalString types;
inherit (lib.generators) toKeyValue;
user = "zabbix";
@ -232,14 +232,15 @@ in
services.mysql = optionalAttrs mysqlLocal {
enable = true;
package = mkDefault pkgs.mariadb;
ensureDatabases = [ cfg.database.name ];
ensureUsers = [
{ name = cfg.database.user;
ensurePermissions = { "${cfg.database.name}.*" = "ALL PRIVILEGES"; };
}
];
};
systemd.services.mysql.postStart = mkAfter (optionalString mysqlLocal ''
( echo "CREATE DATABASE IF NOT EXISTS \`${cfg.database.name}\` CHARACTER SET utf8 COLLATE utf8_bin;"
echo "CREATE USER IF NOT EXISTS '${cfg.database.user}'@'localhost' IDENTIFIED WITH ${if (getName config.services.mysql.package == getName pkgs.mariadb) then "unix_socket" else "auth_socket"};"
echo "GRANT ALL PRIVILEGES ON \`${cfg.database.name}\`.* TO '${cfg.database.user}'@'localhost';"
) | ${config.services.mysql.package}/bin/mysql -N
'');
services.postgresql = optionalAttrs pgsqlLocal {
enable = true;
ensureDatabases = [ cfg.database.name ];

View file

@ -5,8 +5,8 @@ let
pgsql = config.services.postgresql;
mysql = config.services.mysql;
inherit (lib) mkDefault mkEnableOption mkIf mkMerge mkOption;
inherit (lib) attrValues concatMapStringsSep literalExample optional optionalAttrs optionalString types;
inherit (lib) mkAfter mkDefault mkEnableOption mkIf mkMerge mkOption;
inherit (lib) attrValues concatMapStringsSep getName literalExample optional optionalAttrs optionalString types;
inherit (lib.generators) toKeyValue;
user = "zabbix";
@ -220,14 +220,15 @@ in
services.mysql = optionalAttrs mysqlLocal {
enable = true;
package = mkDefault pkgs.mariadb;
ensureDatabases = [ cfg.database.name ];
ensureUsers = [
{ name = cfg.database.user;
ensurePermissions = { "${cfg.database.name}.*" = "ALL PRIVILEGES"; };
}
];
};
systemd.services.mysql.postStart = mkAfter (optionalString mysqlLocal ''
( echo "CREATE DATABASE IF NOT EXISTS \`${cfg.database.name}\` CHARACTER SET utf8 COLLATE utf8_bin;"
echo "CREATE USER IF NOT EXISTS '${cfg.database.user}'@'localhost' IDENTIFIED WITH ${if (getName config.services.mysql.package == getName pkgs.mariadb) then "unix_socket" else "auth_socket"};"
echo "GRANT ALL PRIVILEGES ON \`${cfg.database.name}\`.* TO '${cfg.database.user}'@'localhost';"
) | ${config.services.mysql.package}/bin/mysql -N
'');
services.postgresql = optionalAttrs pgsqlLocal {
enable = true;
ensureDatabases = [ cfg.database.name ];

View file

@ -256,6 +256,6 @@ in
};
meta.maintainers = with maintainers; [ maintainers."1000101" ];
meta.maintainers = with maintainers; [ _1000101 ];
}

View file

@ -270,6 +270,6 @@ in
nameValuePair "${cfg.group}" { })) eachBlockbook;
};
meta.maintainers = with maintainers; [ maintainers."1000101" ];
meta.maintainers = with maintainers; [ _1000101 ];
}

View file

@ -129,13 +129,17 @@ in {
systemd.services."kresd@".serviceConfig = {
ExecStart = "${package}/bin/kresd --noninteractive "
+ "-c ${package}/lib/knot-resolver/distro-preconfig.lua -c ${configFile}";
# Ensure correct ownership in case UID or GID changes.
# Ensure /run/knot-resolver exists
RuntimeDirectory = "knot-resolver";
RuntimeDirectoryMode = "0770";
# Ensure /var/lib/knot-resolver exists
StateDirectory = "knot-resolver";
StateDirectoryMode = "0770";
# Ensure /var/cache/knot-resolver exists
CacheDirectory = "knot-resolver";
CacheDirectoryMode = "0750";
CacheDirectoryMode = "0770";
};
systemd.tmpfiles.packages = [ package ];
# Try cleaning up the previously default location of cache file.
# Note that /var/cache/* should always be safe to remove.
# TODO: remove later, probably between 20.09 and 21.03

View file

@ -108,7 +108,6 @@ in
};
};
meta.maintainers = with maintainers; [ maintainers."1000101" ];
meta.maintainers = with maintainers; [ _1000101 ];
}

View file

@ -90,7 +90,7 @@ in
config = mkIf cfg.enable (
mkMerge [
{
meta.maintainers = [ lib.maintainers."0x4A6F" ];
meta.maintainers = with lib.maintainers; [ _0x4A6F ];
systemd.services.xandikos = {
description = "A Simple Calendar and Contact Server";

View file

@ -1,54 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.chronos;
in {
###### interface
options.services.chronos = {
enable = mkOption {
description = "Whether to enable the Chronos job scheduler.";
default = false;
type = types.bool;
};
httpPort = mkOption {
description = "Chronos listening port";
default = 4400;
type = types.int;
};
master = mkOption {
description = "Chronos mesos master zookeeper address";
default = "zk://${head cfg.zookeeperHosts}/mesos";
type = types.str;
};
zookeeperHosts = mkOption {
description = "Chronos mesos zookeeper addresses";
default = [ "localhost:2181" ];
type = types.listOf types.str;
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.chronos = {
description = "Chronos Service";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "zookeeper.service" ];
serviceConfig = {
ExecStart = "${pkgs.chronos}/bin/chronos --master ${cfg.master} --zk_hosts ${concatStringsSep "," cfg.zookeeperHosts} --http_port ${toString cfg.httpPort}";
User = "chronos";
};
};
users.users.chronos.uid = config.ids.uids.chronos;
};
}

View file

@ -1,98 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.marathon;
in {
###### interface
options.services.marathon = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the marathon mesos framework.
'';
};
master = mkOption {
type = types.str;
default = "zk://${concatStringsSep "," cfg.zookeeperHosts}/mesos";
example = "zk://1.2.3.4:2181,2.3.4.5:2181,3.4.5.6:2181/mesos";
description = ''
Mesos master address. See <link xlink:href="https://mesosphere.github.io/marathon/docs/"/> for details.
'';
};
zookeeperHosts = mkOption {
type = types.listOf types.str;
default = [ "localhost:2181" ];
example = [ "1.2.3.4:2181" "2.3.4.5:2181" "3.4.5.6:2181" ];
description = ''
ZooKeeper hosts' addresses.
'';
};
user = mkOption {
type = types.str;
default = "marathon";
example = "root";
description = ''
The user that the Marathon framework will be launched as. If the user doesn't exist it will be created.
If you want to run apps that require root access or you want to launch apps using arbitrary users, that
is using the `--mesos_user` flag then you need to change this to `root`.
'';
};
httpPort = mkOption {
type = types.int;
default = 8080;
description = ''
Marathon listening port for HTTP connections.
'';
};
extraCmdLineOptions = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--https_port=8443" "--zk_timeout=10000" "--marathon_store_timeout=2000" ];
description = ''
Extra command line options to pass to Marathon.
See <link xlink:href="https://mesosphere.github.io/marathon/docs/command-line-flags.html"/> for all possible flags.
'';
};
environment = mkOption {
default = { };
type = types.attrs;
example = { JAVA_OPTS = "-Xmx512m"; MESOSPHERE_HTTP_CREDENTIALS = "username:password"; };
description = ''
Environment variables passed to Marathon.
'';
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.marathon = {
description = "Marathon Service";
environment = cfg.environment;
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "zookeeper.service" "mesos-master.service" "mesos-slave.service" ];
serviceConfig = {
ExecStart = "${pkgs.marathon}/bin/marathon --master ${cfg.master} --zk zk://${concatStringsSep "," cfg.zookeeperHosts}/marathon --http_port ${toString cfg.httpPort} ${concatStringsSep " " cfg.extraCmdLineOptions}";
User = cfg.user;
Restart = "always";
RestartSec = "2";
};
};
users.users.${cfg.user}.isSystemUser = true;
};
}

View file

@ -11,6 +11,7 @@ let
settingsDir = ".config/transmission-daemon";
downloadsDir = "Downloads";
incompleteDir = ".incomplete";
watchDir = "watchdir";
# TODO: switch to configGen.json once RFC0042 is implemented
settingsFile = pkgs.writeText "settings.json" (builtins.toJSON cfg.settings);
in
@ -35,6 +36,8 @@ in
download-dir = "${cfg.home}/${downloadsDir}";
incomplete-dir = "${cfg.home}/${incompleteDir}";
incomplete-dir-enabled = true;
watch-dir = "${cfg.home}/${watchDir}";
watch-dir-enabled = false;
message-level = 1;
peer-port = 51413;
peer-port-random-high = 65535;
@ -161,6 +164,9 @@ in
{ assertion = types.path.check cfg.settings.incomplete-dir;
message = "`services.transmission.settings.incomplete-dir' must be an absolute path.";
}
{ assertion = types.path.check cfg.settings.watch-dir;
message = "`services.transmission.settings.watch-dir' must be an absolute path.";
}
{ assertion = cfg.settings.script-torrent-done-filename == "" || types.path.check cfg.settings.script-torrent-done-filename;
message = "`services.transmission.settings.script-torrent-done-filename' must be an absolute path.";
}
@ -220,14 +226,16 @@ in
cfg.settings.download-dir
] ++
optional cfg.settings.incomplete-dir-enabled
cfg.settings.incomplete-dir;
cfg.settings.incomplete-dir
++
optional cfg.settings.watch-dir-enabled
cfg.settings.watch-dir
;
BindReadOnlyPaths = [
# No confinement done of /nix/store here like in systemd-confinement.nix,
# an AppArmor profile is provided to get a confinement based upon paths and rights.
builtins.storeDir
"-/etc/hosts"
"-/etc/ld-nix.so.preload"
"-/etc/localtime"
"/etc"
] ++
optional (cfg.settings.script-torrent-done-enabled &&
cfg.settings.script-torrent-done-filename != "")
@ -410,11 +418,17 @@ in
${optionalString cfg.settings.incomplete-dir-enabled ''
rw ${cfg.settings.incomplete-dir}/**,
''}
${optionalString cfg.settings.watch-dir-enabled ''
rw ${cfg.settings.watch-dir}/**,
''}
profile dirs {
rw ${cfg.settings.download-dir}/**,
${optionalString cfg.settings.incomplete-dir-enabled ''
rw ${cfg.settings.incomplete-dir}/**,
''}
${optionalString cfg.settings.watch-dir-enabled ''
rw ${cfg.settings.watch-dir}/**,
''}
}
${optionalString (cfg.settings.script-torrent-done-enabled &&

View file

@ -383,6 +383,6 @@ in
};
};
meta.maintainers = with maintainers; [ maintainers."1000101" ];
meta.maintainers = with maintainers; [ _1000101 ];
}

View file

@ -47,8 +47,18 @@ let
in {
imports = [
( mkRemovedOptionModule [ "services" "nextcloud" "nginx" "enable" ]
"The nextcloud module dropped support for other webservers than nginx.")
(mkRemovedOptionModule [ "services" "nextcloud" "nginx" "enable" ] ''
The nextcloud module supports `nginx` as reverse-proxy by default and doesn't
support other reverse-proxies officially.
However it's possible to use an alternative reverse-proxy by
* disabling nginx
* setting `listen.owner` & `listen.group` in the phpfpm-pool to a different value
Further details about this can be found in the `Nextcloud`-section of the NixOS-manual
(which can be opened e.g. by running `nixos-help`).
'')
];
options.services.nextcloud = {
@ -544,36 +554,40 @@ in {
'';
};
"/" = {
priority = 200;
extraConfig = "rewrite ^ /index.php;";
priority = 900;
extraConfig = "try_files $uri $uri/ /index.php$request_uri;";
};
"~ ^/store-apps" = {
priority = 201;
extraConfig = "root ${cfg.home};";
};
"= /.well-known/carddav" = {
"^~ /.well-known" = {
priority = 210;
extraConfig = "return 301 $scheme://$host/remote.php/dav;";
extraConfig = ''
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
try_files $uri $uri/ =404;
'';
};
"= /.well-known/caldav" = {
priority = 210;
extraConfig = "return 301 $scheme://$host/remote.php/dav;";
};
"~ ^\\/(?:build|tests|config|lib|3rdparty|templates|data)\\/" = {
priority = 300;
extraConfig = "deny all;";
};
"~ ^\\/(?:\\.|autotest|occ|issue|indie|db_|console)" = {
priority = 300;
extraConfig = "deny all;";
};
"~ ^\\/(?:index|remote|public|cron|core/ajax\\/update|status|ocs\\/v[12]|updater\\/.+|ocs-provider\\/.+|ocm-provider\\/.+)\\.php(?:$|\\/)" = {
"~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)".extraConfig = ''
return 404;
'';
"~ ^/(?:\\.|autotest|occ|issue|indie|db_|console)".extraConfig = ''
return 404;
'';
"~ \\.php(?:$|/)" = {
priority = 500;
extraConfig = ''
include ${config.services.nginx.package}/conf/fastcgi.conf;
fastcgi_split_path_info ^(.+\.php)(\\/.*)$;
fastcgi_split_path_info ^(.+?\.php)(\\/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS ${if cfg.https then "on" else "off"};
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
@ -583,28 +597,24 @@ in {
fastcgi_read_timeout 120s;
'';
};
"~ \\.(?:css|js|svg|gif|map)$".extraConfig = ''
try_files $uri /index.php$request_uri;
expires 6M;
access_log off;
'';
"~ \\.woff2?$".extraConfig = ''
try_files $uri /index.php$request_uri;
expires 7d;
access_log off;
'';
"~ ^\\/(?:updater|ocs-provider|ocm-provider)(?:$|\\/)".extraConfig = ''
try_files $uri/ =404;
index index.php;
'';
"~ \\.(?:css|js|woff2?|svg|gif)$".extraConfig = ''
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Frame-Options sameorigin;
add_header Referrer-Policy no-referrer;
access_log off;
'';
"~ \\.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$".extraConfig = ''
try_files $uri /index.php$request_uri;
access_log off;
'';
};
extraConfig = ''
index index.php index.html /index.php$request_uri;
expires 1m;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
@ -613,8 +623,6 @@ in {
add_header X-Frame-Options sameorigin;
add_header Referrer-Policy no-referrer;
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
client_max_body_size ${cfg.maxUploadSize};
fastcgi_buffers 64 4K;
fastcgi_hide_header X-Powered-By;

View file

@ -123,6 +123,61 @@
</para>
</section>
<section xml:id="module-services-nextcloud-httpd">
<title>Using an alternative webserver as reverse-proxy (e.g. <literal>httpd</literal>)</title>
<para>
By default, <package>nginx</package> is used as reverse-proxy for <package>nextcloud</package>.
However, it's possible to use e.g. <package>httpd</package> by explicitly disabling
<package>nginx</package> using <xref linkend="opt-services.nginx.enable" /> and fixing the
settings <literal>listen.owner</literal> &amp; <literal>listen.group</literal> in the
<link linkend="opt-services.phpfpm.pools">corresponding <literal>phpfpm</literal> pool</link>.
</para>
<para>
An example configuration may look like this:
<programlisting>{ config, lib, pkgs, ... }: {
<link linkend="opt-services.nginx.enable">services.nginx.enable</link> = false;
services.nextcloud = {
<link linkend="opt-services.nextcloud.enable">enable</link> = true;
<link linkend="opt-services.nextcloud.hostName">hostName</link> = "localhost";
/* further, required options */
};
<link linkend="opt-services.phpfpm.pools._name_.settings">services.phpfpm.pools.nextcloud.settings</link> = {
"listen.owner" = config.services.httpd.user;
"listen.group" = config.services.httpd.group;
};
services.httpd = {
<link linkend="opt-services.httpd.enable">enable</link> = true;
<link linkend="opt-services.httpd.adminAddr">adminAddr</link> = "webmaster@localhost";
<link linkend="opt-services.httpd.extraModules">extraModules</link> = [ "proxy_fcgi" ];
virtualHosts."localhost" = {
<link linkend="opt-services.httpd.virtualHosts._name_.documentRoot">documentRoot</link> = config.services.nextcloud.package;
<link linkend="opt-services.httpd.virtualHosts._name_.extraConfig">extraConfig</link> = ''
&lt;Directory "${config.services.nextcloud.package}"&gt;
&lt;FilesMatch "\.php$"&gt;
&lt;If "-f %{REQUEST_FILENAME}"&gt;
SetHandler "proxy:unix:${config.services.phpfpm.pools.nextcloud.socket}|fcgi://localhost/"
&lt;/If&gt;
&lt;/FilesMatch&gt;
&lt;IfModule mod_rewrite.c&gt;
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
&lt;/IfModule&gt;
DirectoryIndex index.php
Require all granted
Options +FollowSymLinks
&lt;/Directory&gt;
'';
};
};
}</programlisting>
</para>
</section>
<section xml:id="module-services-nextcloud-maintainer-info">
<title>Maintainer information</title>

View file

@ -0,0 +1,127 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.rss-bridge;
poolName = "rss-bridge";
whitelist = pkgs.writeText "rss-bridge_whitelist.txt"
(concatStringsSep "\n" cfg.whitelist);
in
{
options = {
services.rss-bridge = {
enable = mkEnableOption "rss-bridge";
user = mkOption {
type = types.str;
default = "nginx";
example = "nginx";
description = ''
User account under which both the service and the web-application run.
'';
};
group = mkOption {
type = types.str;
default = "nginx";
example = "nginx";
description = ''
Group under which the web-application runs.
'';
};
pool = mkOption {
type = types.str;
default = poolName;
description = ''
Name of the existing phpfpm pool that is used to run the web-application.
If not specified, a pool will be created automatically with
default values.
'';
};
dataDir = mkOption {
type = types.str;
default = "/var/lib/rss-bridge";
description = ''
Location in which cache directory will be created.
You can put <literal>config.ini.php</literal> in here.
'';
};
virtualHost = mkOption {
type = types.nullOr types.str;
default = "rss-bridge";
description = ''
Name of the nginx virtualhost to use and set up. If null, do not set up any virtualhost.
'';
};
whitelist = mkOption {
type = types.listOf types.str;
default = [];
example = literalExample ''
[
"Facebook"
"Instagram"
"Twitter"
]
'';
description = ''
List of bridges to be whitelisted.
If the list is empty, rss-bridge will use whitelist.default.txt.
Use <literal>[ "*" ]</literal> to whitelist all.
'';
};
};
};
config = mkIf cfg.enable {
services.phpfpm.pools = mkIf (cfg.pool == poolName) {
${poolName} = {
user = cfg.user;
settings = mapAttrs (name: mkDefault) {
"listen.owner" = cfg.user;
"listen.group" = cfg.user;
"listen.mode" = "0600";
"pm" = "dynamic";
"pm.max_children" = 75;
"pm.start_servers" = 10;
"pm.min_spare_servers" = 5;
"pm.max_spare_servers" = 20;
"pm.max_requests" = 500;
"catch_workers_output" = 1;
};
};
};
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}/cache' 0750 ${cfg.user} ${cfg.group} - -"
(mkIf (cfg.whitelist != []) "L+ ${cfg.dataDir}/whitelist.txt - - - - ${whitelist}")
"z '${cfg.dataDir}/config.ini.php' 0750 ${cfg.user} ${cfg.group} - -"
];
services.nginx = mkIf (cfg.virtualHost != null) {
enable = true;
virtualHosts = {
${cfg.virtualHost} = {
root = "${pkgs.rss-bridge}";
locations."/" = {
tryFiles = "$uri /index.php$is_args$args";
};
locations."~ ^/index.php(/|$)" = {
extraConfig = ''
include ${pkgs.nginx}/conf/fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:${config.services.phpfpm.pools.${cfg.pool}.socket};
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param RSSBRIDGE_DATA ${cfg.dataDir};
'';
};
};
};
};
};
}
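A minimal sketch of the new module, serving rss-bridge behind the nginx virtual host it sets up by default; the host name and bridge list are illustrative:

```
services.rss-bridge = {
  enable = true;
  virtualHost = "rss.example.org";
  whitelist = [ "Twitter" ];
};
```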

View file

@ -10,6 +10,12 @@ let
pkg = cfg.package.out;
apachectl = pkgs.runCommand "apachectl" { meta.priority = -1; } ''
mkdir -p $out/bin
cp ${pkg}/bin/apachectl $out/bin/apachectl
sed -i $out/bin/apachectl -e 's|$HTTPD -t|$HTTPD -t -f ${httpdConf}|'
'';
httpdConf = cfg.configFile;
php = cfg.phpPackage.override { apacheHttpd = pkg; };
@ -650,10 +656,29 @@ in
postRun = "systemctl reload httpd.service";
}) (filterAttrs (name: hostOpts: hostOpts.enableACME) cfg.virtualHosts);
environment.systemPackages = [ pkg ];
environment.systemPackages = [
apachectl
pkg
];
# required for "apachectl configtest"
environment.etc."httpd/httpd.conf".source = httpdConf;
services.logrotate = optionalAttrs (cfg.logFormat != "none") {
enable = mkDefault true;
paths.httpd = {
path = "${cfg.logDir}/*.log";
user = cfg.user;
group = cfg.group;
frequency = "daily";
keep = 28;
extraConfig = ''
sharedscripts
compress
delaycompress
postrotate
systemctl reload httpd.service > /dev/null 2>/dev/null || true
endscript
'';
};
};
services.httpd.phpOptions =
''

View file

@ -1,174 +0,0 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.meguca;
postgres = config.services.postgresql;
in with lib; {
options.services.meguca = {
enable = mkEnableOption "meguca";
dataDir = mkOption {
type = types.path;
default = "/var/lib/meguca";
example = "/home/okina/meguca";
description = "Location where meguca stores it's database and links.";
};
password = mkOption {
type = types.str;
default = "meguca";
example = "dumbpass";
description = "Password for the meguca database.";
};
passwordFile = mkOption {
type = types.path;
default = "/run/keys/meguca-password-file";
example = "/home/okina/meguca/keys/pass";
description = "Password file for the meguca database.";
};
reverseProxy = mkOption {
type = types.nullOr types.str;
default = null;
example = "192.168.1.5";
description = "Reverse proxy IP.";
};
sslCertificate = mkOption {
type = types.nullOr types.str;
default = null;
example = "/home/okina/meguca/ssl.cert";
description = "Path to the SSL certificate.";
};
listenAddress = mkOption {
type = types.nullOr types.str;
default = null;
example = "127.0.0.1:8000";
description = "Listen on a specific IP address and port.";
};
cacheSize = mkOption {
type = types.nullOr types.int;
default = null;
example = 256;
description = "Cache size in MB.";
};
postgresArgs = mkOption {
type = types.str;
example = "user=meguca password=dumbpass dbname=meguca sslmode=disable";
description = "Postgresql connection arguments.";
};
postgresArgsFile = mkOption {
type = types.path;
default = "/run/keys/meguca-postgres-args";
example = "/home/okina/meguca/keys/postgres";
description = "Postgresql connection arguments file.";
};
compressTraffic = mkOption {
type = types.bool;
default = false;
description = "Compress all traffic with gzip.";
};
assumeReverseProxy = mkOption {
type = types.bool;
default = false;
description = "Assume the server is behind a reverse proxy, when resolving client IPs.";
};
httpsOnly = mkOption {
type = types.bool;
default = false;
description = "Serve and listen only through HTTPS.";
};
videoPaths = mkOption {
type = types.listOf types.path;
default = [];
example = [ "/home/okina/Videos/tehe_pero.webm" ];
description = "Videos that will be symlinked into www/videos.";
};
};
config = mkIf cfg.enable {
security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable;
services.postgresql.package = pkgs.postgresql_11;
services.meguca.passwordFile = mkDefault (pkgs.writeText "meguca-password-file" cfg.password);
services.meguca.postgresArgsFile = mkDefault (pkgs.writeText "meguca-postgres-args" cfg.postgresArgs);
services.meguca.postgresArgs = mkDefault "user=meguca password=${cfg.password} dbname=meguca sslmode=disable";
systemd.services.meguca = {
description = "meguca";
after = [ "network.target" "postgresql.service" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
# Ensure folder exists or create it and links and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}/www
rm -rf ${escapeShellArg cfg.dataDir}/www/videos
ln -sf ${pkgs.meguca}/share/meguca/www/* ${escapeShellArg cfg.dataDir}/www
unlink ${escapeShellArg cfg.dataDir}/www/videos
mkdir -p ${escapeShellArg cfg.dataDir}/www/videos
for vid in ${escapeShellArg cfg.videoPaths}; do
ln -sf $vid ${escapeShellArg cfg.dataDir}/www/videos
done
chmod 750 ${escapeShellArg cfg.dataDir}
chown -R meguca:meguca ${escapeShellArg cfg.dataDir}
# Ensure the database is correct or create it
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \
-SDR meguca || true
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \
-T template0 -E UTF8 -O meguca meguca || true
${pkgs.sudo}/bin/sudo -u meguca ${postgres.package}/bin/psql \
-c "ALTER ROLE meguca WITH PASSWORD '$(cat ${escapeShellArg cfg.passwordFile})';" || true
'';
script = ''
cd ${escapeShellArg cfg.dataDir}
${pkgs.meguca}/bin/meguca -d "$(cat ${escapeShellArg cfg.postgresArgsFile})"''
+ optionalString (cfg.reverseProxy != null) " -R ${cfg.reverseProxy}"
+ optionalString (cfg.sslCertificate != null) " -S ${cfg.sslCertificate}"
+ optionalString (cfg.listenAddress != null) " -a ${cfg.listenAddress}"
+ optionalString (cfg.cacheSize != null) " -c ${toString cfg.cacheSize}"
+ optionalString (cfg.compressTraffic) " -g"
+ optionalString (cfg.assumeReverseProxy) " -r"
+ optionalString (cfg.httpsOnly) " -s" + " start";
serviceConfig = {
PermissionsStartOnly = true;
Type = "forking";
User = "meguca";
Group = "meguca";
ExecStop = "${pkgs.meguca}/bin/meguca stop";
};
};
users = {
groups.meguca.gid = config.ids.gids.meguca;
users.meguca = {
description = "meguca server service user";
home = cfg.dataDir;
createHome = true;
group = "meguca";
uid = config.ids.uids.meguca;
};
};
};
imports = [
(mkRenamedOptionModule [ "services" "meguca" "baseDir" ] [ "services" "meguca" "dataDir" ])
];
meta.maintainers = with maintainers; [ chiiruno ];
}

View file

@ -120,9 +120,12 @@ in {
ProtectHome = true;
PrivateTmp = true;
PrivateDevices = true;
PrivateUsers = false;
ProtectHostname = true;
ProtectClock = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectKernelLogs = true;
ProtectControlGroups = true;
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
LockPersonality = true;

View file

@ -61,7 +61,8 @@ in
"--kill"
] ++ cfg.extraOptions);
ExecStop = "${pkgs.procps}/bin/pkill imwheel";
Restart = "on-failure";
RestartSec = 3;
Restart = "always";
};
};
};

View file

@ -82,12 +82,11 @@ in
services.xserver.windowManager = {
session = [{
name = "xmonad";
start = if (cfg.config != null) then ''
${xmonadBin}
waitPID=$!
'' else ''
systemd-cat -t xmonad ${xmonad}/bin/xmonad &
waitPID=$!
start = let
xmonadCommand = if (cfg.config != null) then xmonadBin else "${xmonad}/bin/xmonad";
in ''
systemd-cat -t xmonad ${xmonadCommand} &
waitPID=$!
'';
}];
};

View file

@ -378,12 +378,14 @@ mountFS() {
mkdir -p "/mnt-root$mountPoint"
# For CIFS mounts, retry a few times before giving up.
# For ZFS and CIFS mounts, retry a few times before giving up.
# We do this for ZFS as a workaround for issue NixOS/nixpkgs#25383.
local n=0
while true; do
mount "/mnt-root$mountPoint" && break
if [ "$fsType" != cifs -o "$n" -ge 10 ]; then fail; break; fi
if [ \( "$fsType" != cifs -a "$fsType" != zfs \) -o "$n" -ge 10 ]; then fail; break; fi
echo "retrying..."
sleep 1
n=$((n + 1))
done

View file

@ -25,7 +25,7 @@ let
"nss-lookup.target"
"nss-user-lookup.target"
"time-sync.target"
#"cryptsetup.target"
"cryptsetup.target"
"sigpwr.target"
"timers.target"
"paths.target"
@ -81,10 +81,6 @@ let
"systemd-coredump.socket"
"systemd-coredump@.service"
# SysV init compatibility.
"systemd-initctl.socket"
"systemd-initctl.service"
# Kernel module loading.
"systemd-modules-load.service"
"kmod-static-nodes.service"
@ -1012,18 +1008,18 @@ in
"sysctl.d/50-coredump.conf".source = "${systemd}/example/sysctl.d/50-coredump.conf";
"sysctl.d/50-default.conf".source = "${systemd}/example/sysctl.d/50-default.conf";
"tmpfiles.d".source = (pkgs.symlinkJoin {
"tmpfiles.d".source = pkgs.symlinkJoin {
name = "tmpfiles.d";
paths = cfg.tmpfiles.packages;
paths = map (p: p + "/lib/tmpfiles.d") cfg.tmpfiles.packages;
postBuild = ''
for i in $(cat $pathsPath); do
(test -d $i/lib/tmpfiles.d && test $(ls $i/lib/tmpfiles.d/*.conf | wc -l) -ge 1) || (
echo "ERROR: The path $i was passed to systemd.tmpfiles.packages but either does not contain the folder lib/tmpfiles.d or if it contains that folder, there are no files ending in .conf in it."
(test -d "$i" && test $(ls "$i"/*.conf | wc -l) -ge 1) || (
echo "ERROR: The path '$i' from systemd.tmpfiles.packages contains no *.conf files."
exit 1
)
done
'';
}) + "/lib/tmpfiles.d";
};
"systemd/system-generators" = { source = hooks "generators" cfg.generators; };
"systemd/system-shutdown" = { source = hooks "shutdown" cfg.shutdown; };

View file

@ -1,22 +1,13 @@
# This module allows the test driver to connect to the virtual machine
# via a root shell attached to port 514.
{ config, lib, pkgs, ... }:
{ options, config, lib, pkgs, ... }:
with lib;
with import ../../lib/qemu-flags.nix { inherit pkgs; };
{
# This option is a dummy that if used in conjunction with
# modules/virtualisation/qemu-vm.nix gets merged with the same option defined
# there and only is declared here because some modules use
# test-instrumentation.nix but not qemu-vm.nix.
#
# One particular example are the boot tests where we want instrumentation
# within the images but not other stuff like setting up 9p filesystems.
options.virtualisation.qemu = { };
config = {
systemd.services.backdoor =
@ -55,7 +46,12 @@ with import ../../lib/qemu-flags.nix { inherit pkgs; };
systemd.services."serial-getty@hvc0".enable = false;
# Only use a serial console, no TTY.
virtualisation.qemu.consoles = [ qemuSerialDevice ];
# NOTE: optionalAttrs
# test-instrumentation.nix appears to be used without qemu-vm.nix, so
# we avoid defining consoles if not possible.
# TODO: refactor such that test-instrumentation can import qemu-vm
# or declare virtualisation.qemu.console option in a module that's always imported
virtualisation = lib.optionalAttrs (options ? virtualisation.qemu.consoles) { qemu.consoles = [ qemuSerialDevice ]; };
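# (illustration) lib.optionalAttrs cond attrs returns attrs when cond holds and { } otherwise,
# e.g. lib.optionalAttrs false { qemu.consoles = [ qemuSerialDevice ]; } evaluates to { },
# so the definition above simply disappears on machines where qemu-vm.nix is not imported.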
boot.initrd.preDeviceCommands =
''

View file

@ -85,7 +85,7 @@ in
environment.etc."crictl.yaml".source = copyFile "${pkgs.cri-o-unwrapped.src}/crictl.yaml";
environment.etc."crio/crio.conf".text = ''
environment.etc."crio/crio.conf.d/00-default.conf".text = ''
[crio]
storage_driver = "${cfg.storageDriver}"
@ -100,6 +100,7 @@ in
cgroup_manager = "systemd"
log_level = "${cfg.logLevel}"
manage_ns_lifecycle = true
pinns_path = "${cfg.package}/bin/pinns"
${optionalString (cfg.runtime != null) ''
default_runtime = "${cfg.runtime}"
@ -109,6 +110,7 @@ in
'';
environment.etc."cni/net.d/10-crio-bridge.conf".source = copyFile "${pkgs.cri-o-unwrapped.src}/contrib/cni/10-crio-bridge.conf";
environment.etc."cni/net.d/99-loopback.conf".source = copyFile "${pkgs.cri-o-unwrapped.src}/contrib/cni/99-loopback.conf";
# Enable common /etc/containers configuration
virtualisation.containers.enable = true;

View file

@ -1,134 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
with builtins;
let
cfg = config.virtualisation;
sanitizeImageName = image: replaceStrings ["/"] ["-"] image.imageName;
hash = drv: head (split "-" (baseNameOf drv.outPath));
# The label of an ext4 FS is limited to 16 bytes
labelFromImage = image: substring 0 16 (hash image);
# The Docker image is loaded and some files from /var/lib/docker/
# are written into a qcow image.
preload = image: pkgs.vmTools.runInLinuxVM (
pkgs.runCommand "docker-preload-image-${sanitizeImageName image}" {
buildInputs = with pkgs; [ docker e2fsprogs utillinux curl kmod ];
preVM = pkgs.vmTools.createEmptyImage {
size = cfg.dockerPreloader.qcowSize;
fullName = "docker-deamon-image.qcow2";
};
}
''
mkfs.ext4 /dev/vda
e2label /dev/vda ${labelFromImage image}
mkdir -p /var/lib/docker
mount -t ext4 /dev/vda /var/lib/docker
modprobe overlay
# from https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
dockerd -H tcp://127.0.0.1:5555 -H unix:///var/run/docker.sock &
until $(curl --output /dev/null --silent --connect-timeout 2 http://127.0.0.1:5555); do
printf '.'
sleep 1
done
docker load -i ${image}
kill %1
find /var/lib/docker/ -maxdepth 1 -mindepth 1 -not -name "image" -not -name "overlay2" | xargs rm -rf
'');
preloadedImages = map preload cfg.dockerPreloader.images;
in
{
options.virtualisation.dockerPreloader = {
images = mkOption {
default = [ ];
type = types.listOf types.package;
description =
''
A list of Docker images to preload (in the /var/lib/docker directory).
'';
};
qcowSize = mkOption {
default = 1024;
type = types.int;
description =
''
The size (MB) of qcow files.
'';
};
};
config = mkIf (cfg.dockerPreloader.images != []) {
assertions = [{
# If docker.storageDriver is null, Docker choose the storage
# driver. So, in this case, we cannot be sure overlay2 is used.
assertion = cfg.docker.storageDriver == "overlay2"
|| cfg.docker.storageDriver == "overlay"
|| cfg.docker.storageDriver == null;
message = "The Docker image Preloader only works with overlay2 storage driver!";
}];
virtualisation.qemu.options =
map (path: "-drive if=virtio,file=${path}/disk-image.qcow2,readonly,media=cdrom,format=qcow2")
preloadedImages;
# All attached QCOW files are mounted and their contents are linked
# to /var/lib/docker/ in order to make image available.
systemd.services.docker-preloader = {
description = "Preloaded Docker images";
wantedBy = ["docker.service"];
after = ["network.target"];
path = with pkgs; [ mount rsync jq ];
script = ''
mkdir -p /var/lib/docker/overlay2/l /var/lib/docker/image/overlay2
echo '{}' > /tmp/repositories.json
for i in ${concatStringsSep " " (map labelFromImage cfg.dockerPreloader.images)}; do
mkdir -p /mnt/docker-images/$i
# The ext4 label is limited to 16 bytes
mount /dev/disk/by-label/$(echo $i | cut -c1-16) -o ro,noload /mnt/docker-images/$i
find /mnt/docker-images/$i/overlay2/ -maxdepth 1 -mindepth 1 -not -name l\
-exec ln -s '{}' /var/lib/docker/overlay2/ \;
cp -P /mnt/docker-images/$i/overlay2/l/* /var/lib/docker/overlay2/l/
rsync -a /mnt/docker-images/$i/image/ /var/lib/docker/image/
# Accumulate image definitions
cp /tmp/repositories.json /tmp/repositories.json.tmp
jq -s '.[0] * .[1]' \
/tmp/repositories.json.tmp \
/mnt/docker-images/$i/image/overlay2/repositories.json \
> /tmp/repositories.json
done
mv /tmp/repositories.json /var/lib/docker/image/overlay2/repositories.json
'';
serviceConfig = {
Type = "oneshot";
};
};
};
}

View file

@ -264,7 +264,6 @@ in
{
imports = [
../profiles/qemu-guest.nix
./docker-preloader.nix
];
options = {

View file

@ -34,6 +34,7 @@ in
bind = handleTest ./bind.nix {};
bitcoind = handleTest ./bitcoind.nix {};
bittorrent = handleTest ./bittorrent.nix {};
bitwarden = handleTest ./bitwarden.nix {};
blockbook-frontend = handleTest ./blockbook-frontend.nix {};
buildkite-agents = handleTest ./buildkite-agents.nix {};
boot = handleTestOn ["x86_64-linux"] ./boot.nix {}; # syslinux is unsupported on aarch64
@ -65,11 +66,13 @@ in
containers-macvlans = handleTest ./containers-macvlans.nix {};
containers-physical_interfaces = handleTest ./containers-physical_interfaces.nix {};
containers-portforward = handleTest ./containers-portforward.nix {};
containers-reloadable = handleTest ./containers-reloadable.nix {};
containers-restart_networking = handleTest ./containers-restart_networking.nix {};
containers-tmpfs = handleTest ./containers-tmpfs.nix {};
convos = handleTest ./convos.nix {};
corerad = handleTest ./corerad.nix {};
couchdb = handleTest ./couchdb.nix {};
cri-o = handleTestOn ["x86_64-linux"] ./cri-o.nix {};
deluge = handleTest ./deluge.nix {};
dhparams = handleTest ./dhparams.nix {};
dnscrypt-proxy2 = handleTestOn ["x86_64-linux"] ./dnscrypt-proxy2.nix {};
@ -78,15 +81,13 @@ in
docker = handleTestOn ["x86_64-linux"] ./docker.nix {};
oci-containers = handleTestOn ["x86_64-linux"] ./oci-containers.nix {};
docker-edge = handleTestOn ["x86_64-linux"] ./docker-edge.nix {};
docker-preloader = handleTestOn ["x86_64-linux"] ./docker-preloader.nix {};
docker-registry = handleTest ./docker-registry.nix {};
docker-tools = handleTestOn ["x86_64-linux"] ./docker-tools.nix {};
docker-tools-overlay = handleTestOn ["x86_64-linux"] ./docker-tools-overlay.nix {};
documize = handleTest ./documize.nix {};
dokuwiki = handleTest ./dokuwiki.nix {};
dovecot = handleTest ./dovecot.nix {};
# ec2-config doesn't work in a sandbox as the simulated ec2 instance needs network access
#ec2-config = (handleTestOn ["x86_64-linux"] ./ec2.nix {}).boot-ec2-config or {};
ec2-config = (handleTestOn ["x86_64-linux"] ./ec2.nix {}).boot-ec2-config or {};
ec2-nixops = (handleTestOn ["x86_64-linux"] ./ec2.nix {}).boot-ec2-nixops or {};
ecryptfs = handleTest ./ecryptfs.nix {};
ejabberd = handleTest ./xmpp/ejabberd.nix {};
@ -195,12 +196,10 @@ in
mailcatcher = handleTest ./mailcatcher.nix {};
mariadb-galera-mariabackup = handleTest ./mysql/mariadb-galera-mariabackup.nix {};
mariadb-galera-rsync = handleTest ./mysql/mariadb-galera-rsync.nix {};
mathics = handleTest ./mathics.nix {};
matomo = handleTest ./matomo.nix {};
matrix-synapse = handleTest ./matrix-synapse.nix {};
mediawiki = handleTest ./mediawiki.nix {};
memcached = handleTest ./memcached.nix {};
mesos = handleTest ./mesos.nix {};
metabase = handleTest ./metabase.nix {};
miniflux = handleTest ./miniflux.nix {};
minio = handleTest ./minio.nix {};

View file

@ -1,7 +1,7 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "bitcoind";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
maintainers = with maintainers; [ _1000101 ];
};
machine = { ... }: {

188
nixos/tests/bitwarden.nix Normal file
View file

@ -0,0 +1,188 @@
{ system ? builtins.currentSystem
, config ? { }
, pkgs ? import ../.. { inherit system config; }
}:
# These tests will:
# * Set up a bitwarden-rs server
# * Have Firefox use the web vault to create an account, log in, and save a password to the vault
# * Have the bw cli log in and read that password from the vault
#
# Note that Firefox must be on the same machine as the server for WebCrypto APIs to be available (or HTTPS must be configured)
#
# The same tests should work without modification on the official bitwarden server, if we ever package that.
with import ../lib/testing-python.nix { inherit system pkgs; };
with pkgs.lib;
let
backends = [ "sqlite" "mysql" "postgresql" ];
dbPassword = "please_dont_hack";
userEmail = "meow@example.com";
userPassword = "also_super_secret_ZJWpBKZi668QGt"; # Must be complex to avoid interstitial warning on the signup page
storedPassword = "seeeecret";
makeBitwardenTest = backend: makeTest {
name = "bitwarden_rs-${backend}";
meta = {
maintainers = with pkgs.stdenv.lib.maintainers; [ jjjollyjim ];
};
nodes = {
server = { pkgs, ... }:
let backendConfig = {
mysql = {
services.mysql = {
enable = true;
initialScript = pkgs.writeText "mysql-init.sql" ''
CREATE DATABASE bitwarden;
CREATE USER 'bitwardenuser'@'localhost' IDENTIFIED BY '${dbPassword}';
GRANT ALL ON `bitwarden`.* TO 'bitwardenuser'@'localhost';
FLUSH PRIVILEGES;
'';
package = pkgs.mysql;
};
services.bitwarden_rs.config.databaseUrl = "mysql://bitwardenuser:${dbPassword}@localhost/bitwarden";
systemd.services.bitwarden_rs.after = [ "mysql.service" ];
};
postgresql = {
services.postgresql = {
enable = true;
initialScript = pkgs.writeText "postgresql-init.sql" ''
CREATE DATABASE bitwarden;
CREATE USER bitwardenuser WITH PASSWORD '${dbPassword}';
GRANT ALL PRIVILEGES ON DATABASE bitwarden TO bitwardenuser;
'';
};
services.bitwarden_rs.config.databaseUrl = "postgresql://bitwardenuser:${dbPassword}@localhost/bitwarden";
systemd.services.bitwarden_rs.after = [ "postgresql.service" ];
};
sqlite = { };
};
in
mkMerge [
backendConfig.${backend}
{
services.bitwarden_rs = {
enable = true;
dbBackend = backend;
config.rocketPort = 80;
};
networking.firewall.allowedTCPPorts = [ 80 ];
environment.systemPackages =
let
testRunner = pkgs.writers.writePython3Bin "test-runner"
{
libraries = [ pkgs.python3Packages.selenium ];
} ''
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument('--headless')
driver = Firefox(options=options)
driver.implicitly_wait(20)
driver.get('http://localhost/#/register')
wait = WebDriverWait(driver, 10)
wait.until(EC.title_contains("Create Account"))
driver.find_element_by_css_selector('input#email').send_keys(
'${userEmail}'
)
driver.find_element_by_css_selector('input#name').send_keys(
'A Cat'
)
driver.find_element_by_css_selector('input#masterPassword').send_keys(
'${userPassword}'
)
driver.find_element_by_css_selector('input#masterPasswordRetype').send_keys(
'${userPassword}'
)
driver.find_element_by_xpath("//button[contains(., 'Submit')]").click()
wait.until_not(EC.title_contains("Create Account"))
driver.find_element_by_css_selector('input#masterPassword').send_keys(
'${userPassword}'
)
driver.find_element_by_xpath("//button[contains(., 'Log In')]").click()
wait.until(EC.title_contains("My Vault"))
driver.find_element_by_xpath("//button[contains(., 'Add Item')]").click()
driver.find_element_by_css_selector('input#name').send_keys(
'secrets'
)
driver.find_element_by_css_selector('input#loginPassword').send_keys(
'${storedPassword}'
)
driver.find_element_by_xpath("//button[contains(., 'Save')]").click()
'';
in
[ pkgs.firefox-unwrapped pkgs.geckodriver testRunner ];
virtualisation.memorySize = 768;
}
];
client = { pkgs, ... }:
{
environment.systemPackages = [ pkgs.bitwarden-cli ];
};
};
testScript = ''
start_all()
server.wait_for_unit("bitwarden_rs.service")
server.wait_for_open_port(80)
with subtest("configure the cli"):
client.succeed("bw --nointeraction config server http://server")
with subtest("can't login to nonexistant account"):
client.fail(
"bw --nointeraction --raw login ${userEmail} ${userPassword}"
)
with subtest("use the web interface to sign up, log in, and save a password"):
server.succeed("PYTHONUNBUFFERED=1 test-runner | systemd-cat -t test-runner")
with subtest("log in with the cli"):
key = client.succeed(
"bw --nointeraction --raw login ${userEmail} ${userPassword}"
).strip()
with subtest("sync with the cli"):
client.succeed(f"bw --nointeraction --raw --session {key} sync -f")
with subtest("get the password with the cli"):
password = client.succeed(
f"bw --nointeraction --raw --session {key} list items | ${pkgs.jq}/bin/jq -r .[].login.password"
)
assert password.strip() == "${storedPassword}"
'';
};
in
builtins.listToAttrs (
map
(backend: { name = backend; value = makeBitwardenTest backend; })
backends
)

View file

@ -1,7 +1,7 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "blockbook-frontend";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
maintainers = with maintainers; [ _1000101 ];
};
machine = { ... }: {

View file

@ -20,30 +20,44 @@ with pkgs.lib;
in makeTest {
name = "ec2-" + name;
nodes = {};
testScript =
''
my $imageDir = ($ENV{'TMPDIR'} // "/tmp") . "/vm-state-machine";
mkdir $imageDir, 0700;
my $diskImage = "$imageDir/machine.qcow2";
system("qemu-img create -f qcow2 -o backing_file=${image} $diskImage") == 0 or die;
system("qemu-img resize $diskImage 10G") == 0 or die;
testScript = ''
import os
import subprocess
import tempfile
# Note: we use net=169.0.0.0/8 rather than
# net=169.254.0.0/16 to prevent dhcpcd from getting horribly
# confused. (It would get a DHCP lease in the 169.254.*
# range, which it would then configure and prompty delete
# again when it deletes link-local addresses.) Ideally we'd
# turn off the DHCP server, but qemu does not have an option
# to do that.
my $startCommand = "qemu-kvm -m 1024";
$startCommand .= " -device virtio-net-pci,netdev=vlan0";
$startCommand .= " -netdev 'user,id=vlan0,net=169.0.0.0/8,guestfwd=tcp:169.254.169.254:80-cmd:${pkgs.micro-httpd}/bin/micro_httpd ${metaData}'";
$startCommand .= " -drive file=$diskImage,if=virtio,werror=report";
$startCommand .= " \$QEMU_OPTS";
image_dir = os.path.join(
os.environ.get("TMPDIR", tempfile.gettempdir()), "tmp", "vm-state-machine"
)
os.makedirs(image_dir, mode=0o700, exist_ok=True)
disk_image = os.path.join(image_dir, "machine.qcow2")
subprocess.check_call(
[
"qemu-img",
"create",
"-f",
"qcow2",
"-o",
"backing_file=${image}",
disk_image,
]
)
subprocess.check_call(["qemu-img", "resize", disk_image, "10G"])
my $machine = createMachine({ startCommand => $startCommand });
# Note: we use net=169.0.0.0/8 rather than
# net=169.254.0.0/16 to prevent dhcpcd from getting horribly
# confused. (It would get a DHCP lease in the 169.254.*
# range, which it would then configure and promptly delete
# again when it deletes link-local addresses.) Ideally we'd
# turn off the DHCP server, but qemu does not have an option
# to do that.
start_command = (
"qemu-kvm -m 1024"
+ " -device virtio-net-pci,netdev=vlan0"
+ " -netdev 'user,id=vlan0,net=169.0.0.0/8,guestfwd=tcp:169.254.169.254:80-cmd:${pkgs.micro-httpd}/bin/micro_httpd ${metaData}'"
+ f" -drive file={disk_image},if=virtio,werror=report"
+ " $QEMU_OPTS"
)
${script}
'';
machine = create_machine({"startCommand": start_command})
'' + script;
};
}

View file

@ -9,13 +9,13 @@ let
};
};
# prevent make-test.nix to change IP
# prevent make-test-python.nix to change IP
networking.interfaces = {
eth1.ipv4.addresses = lib.mkOverride 0 [ ];
};
};
in {
name = "cotnainers-reloadable";
name = "containers-reloadable";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ danbst ];
};

19
nixos/tests/cri-o.nix Normal file
View file

@ -0,0 +1,19 @@
# This test runs CRI-O and verifies via critest
import ./make-test-python.nix ({ pkgs, ... }: {
name = "cri-o";
maintainers = with pkgs.stdenv.lib.maintainers; teams.podman.members;
nodes = {
crio = {
virtualisation.cri-o.enable = true;
};
};
testScript = ''
start_all()
crio.wait_for_unit("crio.service")
crio.succeed(
"critest --ginkgo.focus='Runtime info' --runtime-endpoint unix:///var/run/crio/crio.sock"
)
'';
})

View file

@ -1,27 +0,0 @@
import ./make-test.nix ({ pkgs, ...} : {
name = "docker-preloader";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ lewo ];
};
nodes = {
docker =
{ pkgs, ... }:
{
virtualisation.docker.enable = true;
virtualisation.dockerPreloader.images = [ pkgs.dockerTools.examples.nix pkgs.dockerTools.examples.bash ];
services.openssh.enable = true;
services.openssh.permitRootLogin = "yes";
services.openssh.extraConfig = "PermitEmptyPasswords yes";
users.extraUsers.root.password = "";
};
};
testScript = ''
startAll;
$docker->waitForUnit("sockets.target");
$docker->succeed("docker run nix nix-store --version");
$docker->succeed("docker run bash bash --version");
'';
})

View file

@ -33,7 +33,7 @@ let
in {
name = "dokuwiki";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
maintainers = with maintainers; [ _1000101 ];
};
machine = { ... }: {
services.dokuwiki."site1.local" = {

View file

@ -3,58 +3,58 @@
pkgs ? import ../.. { inherit system config; }
}:
with import ../lib/testing.nix { inherit system pkgs; };
with import ../lib/testing-python.nix { inherit system pkgs; };
with pkgs.lib;
with import common/ec2.nix { inherit makeTest pkgs; };
let
imageCfg =
(import ../lib/eval-config.nix {
inherit system;
modules = [
../maintainers/scripts/ec2/amazon-image.nix
../modules/testing/test-instrumentation.nix
../modules/profiles/qemu-guest.nix
{ ec2.hvm = true;
imageCfg = (import ../lib/eval-config.nix {
inherit system;
modules = [
../maintainers/scripts/ec2/amazon-image.nix
../modules/testing/test-instrumentation.nix
../modules/profiles/qemu-guest.nix
{
ec2.hvm = true;
# Hack to make the partition resizing work in QEMU.
boot.initrd.postDeviceCommands = mkBefore
''
ln -s vda /dev/xvda
ln -s vda1 /dev/xvda1
'';
# Hack to make the partition resizing work in QEMU.
boot.initrd.postDeviceCommands = mkBefore ''
ln -s vda /dev/xvda
ln -s vda1 /dev/xvda1
'';
# Needed by nixos-rebuild due to the lack of network
# access. Determined by trial and error.
system.extraDependencies =
with pkgs; (
[
# Needed for a nixos-rebuild.
busybox
stdenv
stdenvNoCC
mkinitcpio-nfs-utils
unionfs-fuse
cloud-utils
desktop-file-utils
texinfo
libxslt.bin
xorg.lndir
# Needed by nixos-rebuild due to the lack of network
# access. Determined by trial and error.
system.extraDependencies = with pkgs; ( [
# Needed for a nixos-rebuild.
busybox
cloud-utils
desktop-file-utils
libxslt.bin
mkinitcpio-nfs-utils
stdenv
stdenvNoCC
texinfo
unionfs-fuse
xorg.lndir
# These are used in the configure-from-userdata tests
# for EC2. Httpd and valgrind are requested by the
# configuration.
apacheHttpd apacheHttpd.doc apacheHttpd.man valgrind.doc
]
);
}
];
}).config;
# These are used in the configure-from-userdata tests
# for EC2. Httpd and valgrind are requested by the
# configuration.
apacheHttpd
apacheHttpd.doc
apacheHttpd.man
valgrind.doc
]);
}
];
}).config;
image = "${imageCfg.system.build.amazonImage}/${imageCfg.amazonImage.name}.vhd";
sshKeys = import ./ssh-keys.nix pkgs;
snakeOilPrivateKey = sshKeys.snakeOilPrivateKey.text;
snakeOilPrivateKeyFile = pkgs.writeText "private-key" snakeOilPrivateKey;
snakeOilPublicKey = sshKeys.snakeOilPublicKey;
in {
@ -68,43 +68,47 @@ in {
SSH_HOST_ED25519_KEY:${replaceStrings ["\n"] ["|"] snakeOilPrivateKey}
'';
script = ''
$machine->start;
$machine->waitForFile("/etc/ec2-metadata/user-data");
$machine->waitForUnit("sshd.service");
machine.start()
machine.wait_for_file("/etc/ec2-metadata/user-data")
machine.wait_for_unit("sshd.service")
$machine->succeed("grep unknown /etc/ec2-metadata/ami-manifest-path");
machine.succeed("grep unknown /etc/ec2-metadata/ami-manifest-path")
# We have no keys configured on the client side yet, so this should fail
$machine->fail("ssh -o BatchMode=yes localhost exit");
machine.fail("ssh -o BatchMode=yes localhost exit")
# Let's install our client private key
$machine->succeed("mkdir -p ~/.ssh");
machine.succeed("mkdir -p ~/.ssh")
$machine->succeed("echo '${snakeOilPrivateKey}' > ~/.ssh/id_ed25519");
$machine->succeed("chmod 600 ~/.ssh/id_ed25519");
machine.copy_from_host_via_shell(
"${snakeOilPrivateKeyFile}", "~/.ssh/id_ed25519"
)
machine.succeed("chmod 600 ~/.ssh/id_ed25519")
# We haven't configured the host key yet, so this should still fail
$machine->fail("ssh -o BatchMode=yes localhost exit");
machine.fail("ssh -o BatchMode=yes localhost exit")
# Add the host key; ssh should finally succeed
$machine->succeed("echo localhost,127.0.0.1 ${snakeOilPublicKey} > ~/.ssh/known_hosts");
$machine->succeed("ssh -o BatchMode=yes localhost exit");
machine.succeed(
"echo localhost,127.0.0.1 ${snakeOilPublicKey} > ~/.ssh/known_hosts"
)
machine.succeed("ssh -o BatchMode=yes localhost exit")
# Test whether the root disk was resized.
my $blocks = $machine->succeed("stat -c %b -f /");
my $bsize = $machine->succeed("stat -c %S -f /");
my $size = $blocks * $bsize;
die "wrong free space $size" if $size < 9.7 * 1024 * 1024 * 1024 || $size > 10 * 1024 * 1024 * 1024;
blocks, block_size = map(int, machine.succeed("stat -c %b:%S -f /").split(":"))
GB = 1024 ** 3
assert 9.7 * GB <= blocks * block_size <= 10 * GB
# Just to make sure resizing is idempotent.
$machine->shutdown;
$machine->start;
$machine->waitForFile("/etc/ec2-metadata/user-data");
machine.shutdown()
machine.start()
machine.wait_for_file("/etc/ec2-metadata/user-data")
'';
};
boot-ec2-config = makeEc2Test {
name = "config-userdata";
meta.broken = true; # amazon-init wants to download from the internet while building the system
inherit image;
sshPublicKey = snakeOilPublicKey;
@ -133,17 +137,17 @@ in {
}
'';
script = ''
$machine->start;
machine.start()
# amazon-init must succeed. if it fails, make the test fail
# immediately instead of timing out in waitForFile.
$machine->waitForUnit('amazon-init.service');
# immediately instead of timing out in wait_for_file.
machine.wait_for_unit("amazon-init.service")
$machine->waitForFile("/etc/testFile");
$machine->succeed("cat /etc/testFile | grep -q 'whoa'");
machine.wait_for_file("/etc/testFile")
assert "whoa" in machine.succeed("cat /etc/testFile")
$machine->waitForUnit("httpd.service");
$machine->succeed("curl http://localhost | grep Valgrind");
machine.wait_for_unit("httpd.service")
assert "Valgrind" in machine.succeed("curl http://localhost")
'';
};
}

View file

@ -23,6 +23,13 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : {
services.xserver.desktopManager.gnome3.enable = true;
services.xserver.desktopManager.gnome3.debug = true;
environment.systemPackages = [
(pkgs.makeAutostartItem {
name = "org.gnome.Terminal";
package = pkgs.gnome3.gnome-terminal;
})
];
virtualisation.memorySize = 1024;
};
@ -65,9 +72,6 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : {
)
with subtest("Open Gnome Terminal"):
machine.succeed(
"${gnomeTerminalCommand}"
)
# correct output should be (true, '"gnome-terminal-server"')
machine.wait_until_succeeds(
"${wmClass} | grep -q 'gnome-terminal-server'"

View file

@ -1,4 +1,4 @@
import ./make-test.nix ({ pkgs, latestKernel ? false, ... } : {
import ./make-test-python.nix ({ pkgs, latestKernel ? false, ... } : {
name = "hardened";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ joachifm ];
@ -47,84 +47,88 @@ import ./make-test.nix ({ pkgs, latestKernel ? false, ... } : {
};
in
''
$machine->waitForUnit("multi-user.target");
machine.wait_for_unit("multi-user.target")
with subtest("AppArmor profiles are loaded"):
machine.succeed("systemctl status apparmor.service")
subtest "apparmor-loaded", sub {
$machine->succeed("systemctl status apparmor.service");
};
# AppArmor securityfs
subtest "apparmor-securityfs", sub {
$machine->succeed("mountpoint -q /sys/kernel/security");
$machine->succeed("cat /sys/kernel/security/apparmor/profiles");
};
with subtest("AppArmor securityfs is mounted"):
machine.succeed("mountpoint -q /sys/kernel/security")
machine.succeed("cat /sys/kernel/security/apparmor/profiles")
# Test loading out-of-tree modules
subtest "extra-module-packages", sub {
$machine->succeed("grep -Fq wireguard /proc/modules");
};
with subtest("Out-of-tree modules can be loaded"):
machine.succeed("grep -Fq wireguard /proc/modules")
# Test hidepid
subtest "hidepid", sub {
$machine->succeed("grep -Fq hidepid=2 /proc/mounts");
with subtest("hidepid=2 option is applied and works"):
machine.succeed("grep -Fq hidepid=2 /proc/mounts")
# cannot use pgrep -u here, it segfaults when access to process info is denied
$machine->succeed("[ `su - sybil -c 'ps --no-headers --user root | wc -l'` = 0 ]");
$machine->succeed("[ `su - alice -c 'ps --no-headers --user root | wc -l'` != 0 ]");
};
machine.succeed("[ `su - sybil -c 'ps --no-headers --user root | wc -l'` = 0 ]")
machine.succeed("[ `su - alice -c 'ps --no-headers --user root | wc -l'` != 0 ]")
# Test kernel module hardening
subtest "lock-modules", sub {
with subtest("No more kernel modules can be loaded"):
# note: this had better be a module we normally wouldn't load ...
$machine->fail("modprobe dccp");
};
machine.fail("modprobe dccp")
# Test userns
subtest "userns", sub {
$machine->succeed("unshare --user true");
$machine->fail("su -l alice -c 'unshare --user true'");
};
with subtest("User namespaces are restricted"):
machine.succeed("unshare --user true")
machine.fail("su -l alice -c 'unshare --user true'")
# Test dmesg restriction
subtest "dmesg", sub {
$machine->fail("su -l alice -c dmesg");
};
with subtest("Regular users cannot access dmesg"):
machine.fail("su -l alice -c dmesg")
# Test access to kcore
subtest "kcore", sub {
$machine->fail("cat /proc/kcore");
};
with subtest("Kcore is inaccessible as root"):
machine.fail("cat /proc/kcore")
# Test deferred mount
subtest "mount", sub {
$machine->fail("mountpoint -q /efi"); # was deferred
$machine->execute("mkdir -p /efi");
$machine->succeed("mount /dev/disk/by-label/EFISYS /efi");
$machine->succeed("mountpoint -q /efi"); # now mounted
};
with subtest("Deferred mounts work"):
machine.fail("mountpoint -q /efi") # was deferred
machine.execute("mkdir -p /efi")
machine.succeed("mount /dev/disk/by-label/EFISYS /efi")
machine.succeed("mountpoint -q /efi") # now mounted
# Test Nix dæmon usage
subtest "nix-daemon", sub {
$machine->fail("su -l nobody -s /bin/sh -c 'nix ping-store'");
$machine->succeed("su -l alice -c 'nix ping-store'") =~ "OK";
};
with subtest("nix-daemon cannot be used by all users"):
machine.fail("su -l nobody -s /bin/sh -c 'nix ping-store'")
machine.succeed("su -l alice -c 'nix ping-store'")
# Test kernel image protection
subtest "kernelimage", sub {
$machine->fail("systemctl hibernate");
$machine->fail("systemctl kexec");
};
with subtest("The kernel image is protected"):
machine.fail("systemctl hibernate")
machine.fail("systemctl kexec")
# Test hardened memory allocator
sub runMallocTestProg {
my ($progName, $errorText) = @_;
my $text = "fatal allocator error: " . $errorText;
$machine->fail("${hardened-malloc-tests}/bin/" . $progName) =~ $text;
};
def runMallocTestProg(prog_name, error_text):
text = "fatal allocator error: " + error_text
if not text in machine.fail(
"${hardened-malloc-tests}/bin/"
+ prog_name
+ " 2>&1"
):
raise Exception("Hardened malloc does not work for {}".format(error_text))
subtest "hardenedmalloc", sub {
runMallocTestProg("double_free_large", "invalid free");
runMallocTestProg("unaligned_free_small", "invalid unaligned free");
runMallocTestProg("write_after_free_small", "detected write after free");
};
with subtest("The hardened memory allocator works"):
runMallocTestProg("double_free_large", "invalid free")
runMallocTestProg("unaligned_free_small", "invalid unaligned free")
runMallocTestProg("write_after_free_small", "detected write after free")
'';
})

View file

@ -1,15 +1,16 @@
import ../make-test.nix ({ pkgs, ...} : {
import ../make-test-python.nix ({ pkgs, ...} : {
name = "test-hocker-fetchdocker";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ ixmatus ];
broken = true; # tries to download from registry-1.docker.io - how did this ever work?
};
machine = import ./machine.nix;
testScript = ''
startAll;
start_all()
$machine->waitForUnit("sockets.target");
$machine->waitUntilSucceeds("docker run registry-1.docker.io/v2/library/hello-world:latest");
machine.wait_for_unit("sockets.target")
machine.wait_until_succeeds("docker run registry-1.docker.io/v2/library/hello-world:latest")
'';
})

View file

@ -799,7 +799,7 @@ in {
"btrfs subvol create /mnt/badpath/boot",
"btrfs subvol create /mnt/nixos",
"btrfs subvol set-default "
+ "$(btrfs subvol list /mnt | grep 'nixos' | awk '{print \$2}') /mnt",
+ "$(btrfs subvol list /mnt | grep 'nixos' | awk '{print $2}') /mnt",
"umount /mnt",
"mount -o defaults LABEL=root /mnt",
"mkdir -p /mnt/badpath/boot", # Help ensure the detection mechanism

View file

@ -1,20 +0,0 @@
import ./make-test.nix ({ pkgs, ... }: {
name = "mathics";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ benley ];
};
nodes = {
machine = { ... }: {
services.mathics.enable = true;
services.mathics.port = 8888;
};
};
testScript = ''
startAll;
$machine->waitForUnit("mathics.service");
$machine->waitForOpenPort(8888);
$machine->succeed("curl http://localhost:8888/");
'';
})

View file

@ -1,92 +0,0 @@
import ./make-test.nix ({ pkgs, ...} : rec {
name = "mesos";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ offline kamilchm cstrahan ];
};
nodes = {
master = { ... }: {
networking.firewall.enable = false;
services.zookeeper.enable = true;
services.mesos.master = {
enable = true;
zk = "zk://master:2181/mesos";
};
};
slave = { ... }: {
networking.firewall.enable = false;
networking.nat.enable = true;
virtualisation.docker.enable = true;
services.mesos = {
slave = {
enable = true;
master = "master:5050";
dockerRegistry = registry;
executorEnvironmentVariables = {
PATH = "/run/current-system/sw/bin";
};
};
};
};
};
simpleDocker = pkgs.dockerTools.buildImage {
name = "echo";
tag = "latest";
contents = [ pkgs.stdenv.shellPackage pkgs.coreutils ];
config = {
Env = [
# When shell=true, mesos invokes "sh -c '<cmd>'", so make sure "sh" is
# on the PATH.
"PATH=${pkgs.stdenv.shellPackage}/bin:${pkgs.coreutils}/bin"
];
Entrypoint = [ "echo" ];
};
};
registry = pkgs.runCommand "registry" { } ''
mkdir -p $out
cp ${simpleDocker} $out/echo:latest.tar
'';
testFramework = pkgs.pythonPackages.buildPythonPackage {
name = "mesos-tests";
propagatedBuildInputs = [ pkgs.mesos ];
catchConflicts = false;
src = ./mesos_test.py;
phases = [ "installPhase" "fixupPhase" ];
installPhase = ''
install -Dvm 0755 $src $out/bin/mesos_test.py
echo "done" > test.result
tar czf $out/test.tar.gz test.result
'';
};
testScript =
''
startAll;
$master->waitForUnit("zookeeper.service");
$master->waitForUnit("mesos-master.service");
$slave->waitForUnit("docker.service");
$slave->waitForUnit("mesos-slave.service");
$master->waitForOpenPort(2181);
$master->waitForOpenPort(5050);
$slave->waitForOpenPort(5051);
# is slave registered?
$master->waitUntilSucceeds("curl -s --fail http://master:5050/master/slaves".
" | grep -q \"\\\"hostname\\\":\\\"slave\\\"\"");
# try to run docker image
$master->succeed("${pkgs.mesos}/bin/mesos-execute --master=master:5050".
" --resources=\"cpus:0.1;mem:32\" --name=simple-docker".
" --containerizer=mesos --docker_image=echo:latest".
" --shell=true --command=\"echo done\" | grep -q TASK_FINISHED");
# simple command with .tar.gz uri
$master->succeed("${testFramework}/bin/mesos_test.py master ".
"${testFramework}/test.tar.gz");
'';
})

View file

@ -1,72 +0,0 @@
#!/usr/bin/env python
import uuid
import time
import subprocess
import os
import sys
from mesos.interface import Scheduler
from mesos.native import MesosSchedulerDriver
from mesos.interface import mesos_pb2
def log(msg):
process = subprocess.Popen("systemd-cat", stdin=subprocess.PIPE)
(out,err) = process.communicate(msg)
class NixosTestScheduler(Scheduler):
def __init__(self):
self.master_ip = sys.argv[1]
self.download_uri = sys.argv[2]
def resourceOffers(self, driver, offers):
log("XXX got resource offer")
offer = offers[0]
task = self.new_task(offer)
uri = task.command.uris.add()
uri.value = self.download_uri
task.command.value = "cat test.result"
driver.launchTasks(offer.id, [task])
def statusUpdate(self, driver, update):
log("XXX status update")
if update.state == mesos_pb2.TASK_FAILED:
log("XXX test task failed with message: " + update.message)
driver.stop()
sys.exit(1)
elif update.state == mesos_pb2.TASK_FINISHED:
driver.stop()
sys.exit(0)
def new_task(self, offer):
task = mesos_pb2.TaskInfo()
id = uuid.uuid4()
task.task_id.value = str(id)
task.slave_id.value = offer.slave_id.value
task.name = "task {}".format(str(id))
cpus = task.resources.add()
cpus.name = "cpus"
cpus.type = mesos_pb2.Value.SCALAR
cpus.scalar.value = 0.1
mem = task.resources.add()
mem.name = "mem"
mem.type = mesos_pb2.Value.SCALAR
mem.scalar.value = 32
return task
if __name__ == '__main__':
log("XXX framework started")
framework = mesos_pb2.FrameworkInfo()
framework.user = "root"
framework.name = "nixos-test-framework"
driver = MesosSchedulerDriver(
NixosTestScheduler(),
framework,
sys.argv[1] + ":5050"
)
driver.run()

View file

@ -20,12 +20,24 @@ import ./make-test-python.nix ({ pkgs, ...} : rec {
{ fsType = "tmpfs";
options = [ "mode=1777" "noauto" ];
};
# Tests https://discourse.nixos.org/t/how-to-make-a-derivations-executables-have-the-s-permission/8555
"/user-mount/point" = {
device = "/user-mount/source";
fsType = "none";
options = [ "bind" "rw" "user" "noauto" ];
};
"/user-mount/denied-point" = {
device = "/user-mount/denied-source";
fsType = "none";
options = [ "bind" "rw" "noauto" ];
};
};
systemd.automounts = singleton
{ wantedBy = [ "multi-user.target" ];
where = "/tmp2";
};
users.users.sybil = { isNormalUser = true; group = "wheel"; };
users.users.alice = { isNormalUser = true; };
security.sudo = { enable = true; wheelNeedsPassword = false; };
boot.kernel.sysctl."vm.swappiness" = 1;
boot.kernelParams = [ "vsyscall=emulate" ];
@ -112,6 +124,26 @@ import ./make-test-python.nix ({ pkgs, ...} : rec {
machine.succeed("touch /tmp2/x")
machine.succeed("grep '/tmp2 tmpfs' /proc/mounts")
with subtest(
"Whether mounting by a user is possible with the `user` option in fstab (#95444)"
):
machine.succeed("mkdir -p /user-mount/source")
machine.succeed("touch /user-mount/source/file")
machine.succeed("chmod -R a+Xr /user-mount/source")
machine.succeed("mkdir /user-mount/point")
machine.succeed("chown alice:users /user-mount/point")
machine.succeed("su - alice -c 'mount /user-mount/point'")
machine.succeed("su - alice -c 'ls /user-mount/point/file'")
with subtest(
"Whether mounting by a user is denied without the `user` option in fstab"
):
machine.succeed("mkdir -p /user-mount/denied-source")
machine.succeed("touch /user-mount/denied-source/file")
machine.succeed("chmod -R a+Xr /user-mount/denied-source")
machine.succeed("mkdir /user-mount/denied-point")
machine.succeed("chown alice:users /user-mount/denied-point")
machine.fail("su - alice -c 'mount /user-mount/denied-point'")
with subtest("shell-vars"):
machine.succeed('[ -n "$NIX_PATH" ]')

View file

@ -172,20 +172,6 @@ import ./../make-test-python.nix ({ pkgs, ...} : {
"echo 'use testdb; select test_id from tests;' | sudo -u testuser mysql -u testuser -N | grep 42"
)
# Check if TokuDB plugin works
mariadb.succeed(
"echo 'use testdb; create table tokudb (test_id INT, PRIMARY KEY (test_id)) ENGINE = TokuDB;' | sudo -u testuser mysql -u testuser"
)
mariadb.succeed(
"echo 'use testdb; insert into tokudb values (25);' | sudo -u testuser mysql -u testuser"
)
mariadb.succeed(
"echo 'use testdb; select test_id from tokudb;' | sudo -u testuser mysql -u testuser -N | grep 25"
)
mariadb.succeed(
"echo 'use testdb; drop table tokudb;' | sudo -u testuser mysql -u testuser"
)
# Check if RocksDB plugin works
mariadb.succeed(
"echo 'use testdb; create table rocksdb (test_id INT, PRIMARY KEY (test_id)) ENGINE = RocksDB;' | sudo -u testuser mysql -u testuser"
@ -199,5 +185,19 @@ import ./../make-test-python.nix ({ pkgs, ...} : {
mariadb.succeed(
"echo 'use testdb; drop table rocksdb;' | sudo -u testuser mysql -u testuser"
)
'' + pkgs.stdenv.lib.optionalString pkgs.stdenv.isx86_64 ''
# Check if TokuDB plugin works
mariadb.succeed(
"echo 'use testdb; create table tokudb (test_id INT, PRIMARY KEY (test_id)) ENGINE = TokuDB;' | sudo -u testuser mysql -u testuser"
)
mariadb.succeed(
"echo 'use testdb; insert into tokudb values (25);' | sudo -u testuser mysql -u testuser"
)
mariadb.succeed(
"echo 'use testdb; select test_id from tokudb;' | sudo -u testuser mysql -u testuser -N | grep 25"
)
mariadb.succeed(
"echo 'use testdb; drop table tokudb;' | sudo -u testuser mysql -u testuser"
)
'';
})

View file

@ -3,30 +3,30 @@
pkgs ? import ../.. { inherit system config; }
}:
with import ../lib/testing.nix { inherit system pkgs; };
with import ../lib/testing-python.nix { inherit system pkgs; };
with pkgs.lib;
with import common/ec2.nix { inherit makeTest pkgs; };
let
image =
(import ../lib/eval-config.nix {
inherit system;
modules = [
../maintainers/scripts/openstack/openstack-image.nix
../modules/testing/test-instrumentation.nix
../modules/profiles/qemu-guest.nix
{
# Needed by nixos-rebuild due to lack of network access.
system.extraDependencies = with pkgs; [
stdenv
];
}
];
}).config.system.build.openstackImage + "/nixos.qcow2";
image = (import ../lib/eval-config.nix {
inherit system;
modules = [
../maintainers/scripts/openstack/openstack-image.nix
../modules/testing/test-instrumentation.nix
../modules/profiles/qemu-guest.nix
{
# Needed by nixos-rebuild due to lack of network access.
system.extraDependencies = with pkgs; [
stdenv
];
}
];
}).config.system.build.openstackImage + "/nixos.qcow2";
sshKeys = import ./ssh-keys.nix pkgs;
snakeOilPrivateKey = sshKeys.snakeOilPrivateKey.text;
snakeOilPrivateKeyFile = pkgs.writeText "private-key" snakeOilPrivateKey;
snakeOilPublicKey = sshKeys.snakeOilPublicKey;
in {
@ -39,32 +39,36 @@ in {
SSH_HOST_ED25519_KEY:${replaceStrings ["\n"] ["|"] snakeOilPrivateKey}
'';
script = ''
$machine->start;
$machine->waitForFile("/etc/ec2-metadata/user-data");
$machine->waitForUnit("sshd.service");
machine.start()
machine.wait_for_file("/etc/ec2-metadata/user-data")
machine.wait_for_unit("sshd.service")
$machine->succeed("grep unknown /etc/ec2-metadata/ami-manifest-path");
machine.succeed("grep unknown /etc/ec2-metadata/ami-manifest-path")
# We have no keys configured on the client side yet, so this should fail
$machine->fail("ssh -o BatchMode=yes localhost exit");
machine.fail("ssh -o BatchMode=yes localhost exit")
# Let's install our client private key
$machine->succeed("mkdir -p ~/.ssh");
machine.succeed("mkdir -p ~/.ssh")
$machine->succeed("echo '${snakeOilPrivateKey}' > ~/.ssh/id_ed25519");
$machine->succeed("chmod 600 ~/.ssh/id_ed25519");
machine.copy_from_host_via_shell(
"${snakeOilPrivateKeyFile}", "~/.ssh/id_ed25519"
)
machine.succeed("chmod 600 ~/.ssh/id_ed25519")
# We haven't configured the host key yet, so this should still fail
$machine->fail("ssh -o BatchMode=yes localhost exit");
machine.fail("ssh -o BatchMode=yes localhost exit")
# Add the host key; ssh should finally succeed
$machine->succeed("echo localhost,127.0.0.1 ${snakeOilPublicKey} > ~/.ssh/known_hosts");
$machine->succeed("ssh -o BatchMode=yes localhost exit");
machine.succeed(
"echo localhost,127.0.0.1 ${snakeOilPublicKey} > ~/.ssh/known_hosts"
)
machine.succeed("ssh -o BatchMode=yes localhost exit")
# Just to make sure resizing is idempotent.
$machine->shutdown;
$machine->start;
$machine->waitForFile("/etc/ec2-metadata/user-data");
machine.shutdown()
machine.start()
machine.wait_for_file("/etc/ec2-metadata/user-data")
'';
};
@ -86,9 +90,9 @@ in {
}
'';
script = ''
$machine->start;
$machine->waitForFile("/etc/testFile");
$machine->succeed("cat /etc/testFile | grep -q 'whoa'");
machine.start()
machine.wait_for_file("/etc/testFile")
assert "whoa" in machine.succeed("cat /etc/testFile")
'';
};
}

View file

@ -158,7 +158,10 @@ in import ./make-test-python.nix {
s3 = { pkgs, ... } : {
# Minio requires at least 1GiB of free disk space to run.
virtualisation.diskSize = 2 * 1024;
virtualisation = {
diskSize = 2 * 1024;
memorySize = 1024;
};
networking.firewall.allowedTCPPorts = [ minioPort ];
services.minio = {
@ -235,7 +238,7 @@ in import ./make-test-python.nix {
# Test if the Thanos bucket command is able to retrieve blocks from the S3 bucket
# and check if the blocks have the correct labels:
store.succeed(
"thanos bucket ls "
"thanos tools bucket ls "
+ "--objstore.config-file=${nodes.store.config.services.thanos.store.objstore.config-file} "
+ "--output=json | "
+ "jq .thanos.labels.some_label | "

View file

@ -4,7 +4,10 @@ import ./make-test-python.nix ({ pkgs, ... }: {
machine = { lib, ... }: {
imports = [ common/user-account.nix common/x11.nix ];
virtualisation.emptyDiskImages = [ 512 ];
virtualisation.emptyDiskImages = [ 512 512 ];
virtualisation.memorySize = 1024;
environment.systemPackages = [ pkgs.cryptsetup ];
fileSystems = lib.mkVMOverride {
"/test-x-initrd-mount" = {
@ -144,5 +147,25 @@ import ./make-test-python.nix ({ pkgs, ... }: {
assert "RuntimeWatchdogUSec=30s" in output
assert "RebootWatchdogUSec=10m" in output
assert "KExecWatchdogUSec=5m" in output
# Test systemd cryptsetup support
with subtest("systemd successfully reads /etc/crypttab and unlocks volumes"):
# create a luks volume and put a filesystem on it
machine.succeed(
"echo -n supersecret | cryptsetup luksFormat -q /dev/vdc -",
"echo -n supersecret | cryptsetup luksOpen --key-file - /dev/vdc foo",
"mkfs.ext3 /dev/mapper/foo",
)
# create a keyfile and /etc/crypttab
machine.succeed("echo -n supersecret > /var/lib/luks-keyfile")
machine.succeed("chmod 600 /var/lib/luks-keyfile")
machine.succeed("echo 'luks1 /dev/vdc /var/lib/luks-keyfile luks' > /etc/crypttab")
# after a reboot, systemd should unlock the volume and we should be able to mount it
machine.shutdown()
machine.succeed("systemctl status systemd-cryptsetup@luks1.service")
machine.succeed("mkdir -p /tmp/luks1")
machine.succeed("mount /dev/mapper/luks1 /tmp/luks1")
'';
})
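The crypttab entry exercised above could also be provided declaratively on a real system; a minimal sketch, reusing the device, key file, and mapping name from the test (all of them specific to this VM):
```
{
  environment.etc.crypttab.text = ''
    luks1 /dev/vdc /var/lib/luks-keyfile luks
  '';
  # systemd then generates systemd-cryptsetup@luks1.service from this at boot
}
```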

View file

@ -9,6 +9,8 @@ import ./make-test-python.nix ({ pkgs, ...} : {
networking.firewall.allowedTCPPorts = [ 9091 ];
security.apparmor.enable = true;
services.transmission.enable = true;
};

View file

@ -1,7 +1,7 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "trezord";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ mmahut maintainers."1000101" ];
maintainers = with maintainers; [ mmahut _1000101 ];
};
nodes = {
machine = { ... }: {

View file

@ -1,7 +1,7 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "trickster";
meta = with pkgs.stdenv.lib; {
maintainers = with maintainers; [ maintainers."1000101" ];
maintainers = with maintainers; [ _1000101 ];
};
nodes = {

View file

@ -4,7 +4,7 @@ import ./make-test-python.nix (
{
name = "xandikos";
meta.maintainers = [ lib.maintainers."0x4A6F" ];
meta.maintainers = with lib.maintainers; [ _0x4A6F ];
nodes = {
xandikos_client = {};

Some files were not shown because too many files have changed in this diff.