Merge branch 'master' into closure-size

Beware that stdenv doesn't build. It seems something more will be needed
than just resolution of merge conflicts.
Vladimír Čunát 2016-04-01 10:06:01 +02:00
commit ab15a62c68
1108 changed files with 76254 additions and 11297 deletions

View file

@ -1,7 +1,10 @@
###### Things done:
- [ ] Tested using sandboxing (`nix-build --option build-use-chroot true` or [nix.useChroot](http://nixos.org/nixos/manual/options.html#opt-nix.useChroot) on NixOS)
- [ ] Built on platform(s): NixOS / OSX / Linux
- Built on platform(s)
- [ ] NixOS
- [ ] OS X
- [ ] Linux
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

View file

@ -6,4 +6,4 @@ if ! builtins ? nixVersion || builtins.compareVersions requiredVersion builtins.
else
import ./pkgs/top-level/all-packages.nix
import ./pkgs/top-level

View file

@ -47,6 +47,10 @@ stdenv.mkDerivation {
outputFile = "introduction.xml";
useChapters = true;
}
+ toDocbook {
inputFile = ./languages-frameworks/python.md;
outputFile = "./languages-frameworks/python.xml";
}
+ toDocbook {
inputFile = ./haskell-users-guide.md;
outputFile = "haskell-users-guide.xml";

View file

@ -0,0 +1,244 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-bower">
<title>Bower</title>
<para>
<link xlink:href="http://bower.io">Bower</link> is a package manager
for web site front-end components. Bower packages (comprising
build artefacts and sometimes sources) are stored in
<command>git</command> repositories, typically on GitHub. The
package registry is run by the Bower team with package metadata
coming from the <filename>bower.json</filename> file within each
package.
</para>
<para>
The end result of running Bower is a
<filename>bower_components</filename> directory which can be included
in the web app's build process.
</para>
<para>
Bower can be run interactively, by installing
<varname>nodePackages.bower</varname>. More interestingly, the Bower
components can be declared in a Nix derivation, with the help of
<varname>nodePackages.bower2nix</varname>.
</para>
<section xml:id="ssec-bower2nix-usage">
<title><command>bower2nix</command> usage</title>
<para>
Suppose you have a <filename>bower.json</filename> with the following contents:
<example xml:id="ex-bowerJson"><title><filename>bower.json</filename></title>
<programlisting language="json">
<![CDATA[{
"name": "my-web-app",
"dependencies": {
"angular": "~1.5.0",
"bootstrap": "~3.3.6"
}
}]]>
</programlisting>
</example>
</para>
<para>
Running <command>bower2nix</command> will produce something like the
following output:
<programlisting language="nix">
<![CDATA[{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
(fetchbower "angular" "1.5.3" "~1.5.0" "1749xb0firxdra4rzadm4q9x90v6pzkbd7xmcyjk6qfza09ykk9y")
(fetchbower "bootstrap" "3.3.6" "~3.3.6" "1vvqlpbfcy0k5pncfjaiskj3y6scwifxygfqnw393sjfxiviwmbv")
(fetchbower "jquery" "2.2.2" "1.9.1 - 2" "10sp5h98sqwk90y4k6hbdviwqzvzwqf47r3r51pakch5ii2y7js1")
]; }]]>
</programlisting>
</para>
<para>
Using the <command>bower2nix</command> command line arguments, the
output can be redirected to a file. A name like
<filename>bower-packages.nix</filename> would be fine.
</para>
<para>
The resulting derivation is a union of all the downloaded Bower
packages (and their dependencies). To use it, they still need to be
linked together by Bower, which is where
<varname>buildBowerComponents</varname> is useful.
</para>
</section>
<section xml:id="ssec-build-bower-components"><title><varname>buildBowerComponents</varname> function</title>
<para>
The function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/bower-modules/generic/default.nix">
<filename>pkgs/development/bower-modules/generic/default.nix</filename></link>.
Example usage:
<example xml:id="ex-buildBowerComponents"><title>buildBowerComponents</title>
<programlisting language="nix">
bowerComponents = buildBowerComponents {
name = "my-web-app";
generated = ./bower-packages.nix; <co xml:id="ex-buildBowerComponents-1" />
src = myWebApp; <co xml:id="ex-buildBowerComponents-2" />
};
</programlisting>
</example>
</para>
<para>
In <xref linkend="ex-buildBowerComponents" />, the following arguments
are of special significance to the function:
<calloutlist>
<callout arearefs="ex-buildBowerComponents-1">
<para>
<varname>generated</varname> specifies the file which was created by <command>bower2nix</command>.
</para>
</callout>
<callout arearefs="ex-buildBowerComponents-2">
<para>
<varname>src</varname> is your project's sources. It needs to
contain a <filename>bower.json</filename> file.
</para>
</callout>
</calloutlist>
</para>
<para>
<varname>buildBowerComponents</varname> will run Bower to link
together the output of <command>bower2nix</command>, resulting in a
<filename>bower_components</filename> directory which can be used.
</para>
<para>
Here is an example of a web frontend build process using
<command>gulp</command>. You might use <command>grunt</command>, or
anything else.
</para>
<example xml:id="ex-bowerGulpFile"><title>Example build script (<filename>gulpfile.js</filename>)</title>
<programlisting language="javascript">
<![CDATA[var gulp = require('gulp');
gulp.task('default', [], function () {
gulp.start('build');
});
gulp.task('build', [], function () {
console.log("Just a dummy gulp build");
gulp
.src(["./bower_components/**/*"])
.pipe(gulp.dest("./gulpdist/"));
});]]>
</programlisting>
</example>
<example xml:id="ex-buildBowerComponentsDefaultNix">
<title>Full example — <filename>default.nix</filename></title>
<programlisting language="nix">
{ myWebApp ? { outPath = ./.; name = "myWebApp"; }
, pkgs ? import &lt;nixpkgs&gt; {}
}:
pkgs.stdenv.mkDerivation {
name = "my-web-app-frontend";
src = myWebApp;
buildInputs = [ pkgs.nodePackages.gulp ];
bowerComponents = pkgs.buildBowerComponents { <co xml:id="ex-buildBowerComponentsDefault-1" />
name = "my-web-app";
generated = ./bower-packages.nix;
src = myWebApp;
};
buildPhase = ''
cp --reflink=auto --no-preserve=mode -R $bowerComponents/bower_components . <co xml:id="ex-buildBowerComponentsDefault-2" />
export HOME=$PWD <co xml:id="ex-buildBowerComponentsDefault-3" />
${pkgs.nodePackages.gulp}/bin/gulp build <co xml:id="ex-buildBowerComponentsDefault-4" />
'';
installPhase = "mv gulpdist $out";
}
</programlisting>
</example>
<para>
A few notes about <xref linkend="ex-buildBowerComponentsDefaultNix" />:
<calloutlist>
<callout arearefs="ex-buildBowerComponentsDefault-1">
<para>
The result of <varname>buildBowerComponents</varname> is an
input to the frontend build.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-2">
<para>
Whether to symlink or copy the
<filename>bower_components</filename> directory depends on the
build tool in use. In this case a copy is used to avoid
<command>gulp</command> silliness with permissions.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-3">
<para>
<command>gulp</command> requires <varname>HOME</varname> to
refer to a writeable directory.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-4">
<para>
The actual build command. Other tools could be used.
</para>
</callout>
</calloutlist>
</para>
</section>
<section xml:id="ssec-bower2nix-troubleshooting">
<title>Troubleshooting</title>
<variablelist>
<varlistentry>
<term>
<literal>ENOCACHE</literal> errors from
<varname>buildBowerComponents</varname>
</term>
<listitem>
<para>
This means that Bower was looking for a package version which
doesn't exist in the generated
<filename>bower-packages.nix</filename>.
</para>
<para>
If <filename>bower.json</filename> has been updated, then run
<command>bower2nix</command> again.
</para>
<para>
It could also be a bug in <command>bower2nix</command> or
<command>fetchbower</command>. If possible, try reformulating
the version specification in <filename>bower.json</filename>.
</para>
</listitem>
</varlistentry>
</variablelist>
</section>
</section>

View file

@ -23,22 +23,9 @@ such as Perl or Haskell. These are described in this chapter.</para>
<xi:include href="idris.xml" /> <!-- generated from ../../pkgs/development/idris-modules/README.md -->
<xi:include href="r.xml" /> <!-- generated from ../../pkgs/development/r-modules/README.md -->
<xi:include href="qt.xml" />
<!--
<section><title>Haskell</title>
<para>TODO</para>
</section>
<section><title>TeX / LaTeX</title>
<para>* Special support for building TeX documents</para>
</section>
-->
<xi:include href="texlive.xml" />
<xi:include href="bower.xml" />
</chapter>

View file

@ -0,0 +1,714 @@
# Python
## User Guide
Several versions of Python are available on Nix, as well as a large number of
packages. The default interpreter is CPython 2.7.
### Using Python
#### Installing Python and packages
It is important to make a distinction between Python packages that are
used as libraries, and applications that are written in Python.
Applications on Nix are typically installed into your user
profile imperatively using `nix-env -i`, and on NixOS declaratively by adding the
package name to `environment.systemPackages` in `/etc/nixos/configuration.nix`.
Dependencies such as libraries are automatically installed and should not be
installed explicitly.
The same goes for Python applications and libraries. Python applications can be
installed in your profile, but Python libraries you would like to develop against
cannot. If you do install libraries in your profile, then you will end up with
import errors.
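For instance, a minimal sketch of installing a Python application declaratively on NixOS (the application chosen here is arbitrary):
```nix
# /etc/nixos/configuration.nix (fragment)
{ pkgs, ... }:

{
  environment.systemPackages = with pkgs; [
    youtube-dl   # a Python application; its library dependencies are pulled in automatically
  ];
}
```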
#### Python environments using `nix-shell`
The recommended method for creating Python environments for development is with
`nix-shell`. Executing
```sh
$ nix-shell -p python35Packages.numpy python35Packages.toolz
```
opens a Nix shell in which the requested packages and their dependencies are available.
Now you can launch the Python interpreter (which is itself a dependency)
```sh
[nix-shell:~] python3
```
If the packages were not available yet in the Nix store, Nix would download or
build them automatically. A convenient option with `nix-shell` is the `--run`
option, with which you can execute a command in the `nix-shell`. Let's say we
want the above environment and directly run the Python interpreter
```sh
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
```
In the same way, you can use the `--run` option to run a script directly
```sh
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
```
In fact, for this specific use case there is a more convenient method. You can
add a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) to your script
specifying which dependencies Nix shell needs. With the following shebang, you
can use `nix-shell myscript.py` and it will make all dependencies available and
run the script in the `python3` shell.
```py
#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python3Packages.numpy
import numpy
print(numpy.__version__)
```
Likely you do not want to type your dependencies each and every time. What you
can do is write a simple Nix expression which sets up an environment for you,
requiring you only to type `nix-shell`. Say we want to have Python 3.5, `numpy`
and `toolz`, like before, in an environment. With a `shell.nix` file
containing
```nix
with import <nixpkgs> {};
(pkgs.python35.buildEnv.override {
extraLibs = with pkgs.python35Packages; [ numpy toolz ];
}).env
```
executing `nix-shell` gives you again a Nix shell from which you can run Python.
What's happening here?
1. We begin by importing the Nix Packages collection. `import <nixpkgs>` imports the `<nixpkgs>` function, `{}` calls it, and the `with` statement brings all attributes of `nixpkgs` into the local scope. Therefore we can now use `pkgs`.
2. Then we create a Python 3.5 environment with the interpreter's `buildEnv` function. Because we want to use it with a custom set of Python packages, we override it.
3. The `extraLibs` argument of the original `buildEnv` function can be used to specify which packages should be included. We want `numpy` and `toolz`. Again, we use the `with` statement to bring a set of attributes into the local scope.
4. And finally, for interactive use we return the environment.
### Developing with Python
Now that you know how to get a working Python environment on Nix, it is time to go forward and start actually developing with Python.
We will first have a look at how Python packages are packaged on Nix. Then, we will look at how you can use development mode with your code.
#### Python packaging on Nix
On Nix all packages are built by functions. The main function in Nix for building Python packages is [`buildPythonPackage`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/generic/default.nix).
Let's see how we would build the `toolz` package. According to [`python-packages.nix`](https://raw.githubusercontent.com/NixOS/nixpkgs/master/pkgs/top-level/python-packages.nix) `toolz` is built using
```nix
toolz = buildPythonPackage rec{
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl{
url = "https://pypi.python.org/packages/source/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
};
```
What happens here? The function `buildPythonPackage` is called and as argument
it accepts a set. In this case the set is a recursive set ([`rec`](http://nixos.org/nix/manual/#sec-constructs)).
One of the arguments is the name of the package, which consists of a basename
(generally following the name on PyPI) and a version. Another argument, `src`,
specifies the source, which in this case is fetched from a URL. `fetchurl` not
only downloads the target file, but also validates its hash. Furthermore, we
specify some (optional) [meta information](http://nixos.org/nixpkgs/manual/#chap-meta).
The output of the function is a derivation, which is an attribute with the name
`toolz` of the set `pythonPackages`. Actually, sets are created for all interpreter versions,
so `python27Packages`, `python34Packages`, `python35Packages` and `pypyPackages`.
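As a sketch, this means the same definition can be built against several interpreters simply by picking it from the corresponding set (assuming `toolz` is available in each of them):
```nix
with import <nixpkgs> {};

[
  python27Packages.toolz   # toolz built against CPython 2.7
  python35Packages.toolz   # toolz built against CPython 3.5
]
```
Running `nix-build` on such an expression builds both variants.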
The above example works when you're directly working on
`pkgs/top-level/python-packages.nix` in the Nixpkgs repository. Often though,
you will want to test a Nix expression outside of the Nixpkgs tree. If you
create a `shell.nix` file with the following contents
```nix
with import <nixpkgs> {};
pkgs.python35Packages.buildPythonPackage rec {
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl{
url = "https://pypi.python.org/packages/source/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = with pkgs.lib; {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
}
```
and then executing `nix-shell` will result in an environment in which you can use
Python 3.5 and the `toolz` package. As you can see, we had to explicitly mention
for which Python version we want to build the package.
The above example considered only a single package. Generally you will want to use multiple packages.
If we create a `shell.nix` file with the following contents
```nix
with import <nixpkgs> {};
( let
toolz = pkgs.python35Packages.buildPythonPackage rec {
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl{
url = "https://pypi.python.org/packages/source/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = with pkgs.lib; {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
};
in pkgs.python35.buildEnv.override rec {
extraLibs = [ pkgs.python35Packages.numpy toolz ];
}
).env
```
and again execute `nix-shell`, then we get a Python 3.5 environment with our
locally defined package as well as `numpy`, which is built according to the
definition in Nixpkgs. What did we do here? Well, we took the Nix expression
that we used earlier to build a Python environment, and said that we wanted to
include our own version of `toolz`. To introduce our own package into the scope of
`buildEnv.override` we used a
[`let`](http://nixos.org/nix/manual/#sec-constructs) expression.
### Handling dependencies
Our example, `toolz`, doesn't have any dependencies on other Python
packages or system libraries. According to the manual, `buildPythonPackage`
uses the arguments `buildInputs` and `propagatedBuildInputs` to specify dependencies. If something is
exclusively a build-time dependency, then the dependency should be included as a
`buildInput`, but if it is (also) a runtime dependency, then it should be added
to `propagatedBuildInputs`. Test dependencies are considered build-time dependencies.
The following example shows which arguments are given to `buildPythonPackage` in
order to build [`datashape`](https://github.com/blaze/datashape).
```nix
datashape = buildPythonPackage rec {
name = "datashape-${version}";
version = "0.4.7";
src = pkgs.fetchurl {
url = "https://pypi.python.org/packages/source/D/DataShape/${name}.tar.gz";
sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278";
};
buildInputs = with self; [ pytest ];
propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ];
meta = {
homepage = https://github.com/ContinuumIO/datashape;
description = "A data description language";
license = licenses.bsd2;
maintainers = with maintainers; [ fridh ];
};
};
```
We can see several runtime dependencies, `numpy`, `multipledispatch`, and
`dateutil`. Furthermore, we have one `buildInput`: `pytest`. `pytest` is a
test runner that is only used during the `checkPhase` and is therefore not added
to `propagatedBuildInputs`.
In the previous case we had only dependencies on other Python packages to consider.
Occasionally you also have system libraries to consider. E.g., `lxml` provides
Python bindings to `libxml2` and `libxslt`. These libraries are only required
when building the bindings and are therefore added as `buildInputs`.
```nix
lxml = buildPythonPackage rec {
name = "lxml-3.4.4";
src = pkgs.fetchurl {
url = "http://pypi.python.org/packages/source/l/lxml/${name}.tar.gz";
sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk";
};
buildInputs = with self; [ pkgs.libxml2 pkgs.libxslt ];
meta = {
description = "Pythonic binding for the libxml2 and libxslt libraries";
homepage = http://lxml.de;
license = licenses.bsd3;
maintainers = with maintainers; [ sjourdois ];
};
};
```
In this example `lxml` and Nix are able to work out exactly where the relevant
files of the dependencies are. This is not always the case.
The example below shows bindings to The Fastest Fourier Transform in the West, commonly known as
FFTW. On Nix we have separate packages of FFTW for the different types of floats
(`"single"`, `"double"`, `"long-double"`). The bindings need all three types,
and therefore we add all three as `buildInputs`. The bindings don't expect to
find each of them in a different folder, and therefore we have to set `LDFLAGS`
and `CFLAGS`.
```nix
pyfftw = buildPythonPackage rec {
name = "pyfftw-${version}";
version = "0.9.2";
src = pkgs.fetchurl {
url = "https://pypi.python.org/packages/source/p/pyFFTW/pyFFTW-${version}.tar.gz";
sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074";
};
buildInputs = [ pkgs.fftw pkgs.fftwFloat pkgs.fftwLongDouble];
propagatedBuildInputs = with self; [ numpy scipy ];
# Tests cannot import pyfftw. pyfftw works fine though.
doCheck = false;
LDFLAGS="-L${pkgs.fftw}/lib -L${pkgs.fftwFloat}/lib -L${pkgs.fftwLongDouble}/lib"
CFLAGS="-I${pkgs.fftw}/include -I${pkgs.fftwFloat}/include -I${pkgs.fftwLongDouble}/include"
'';
meta = {
description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms";
homepage = http://hgomersall.github.com/pyFFTW/;
license = with licenses; [ bsd2 bsd3 ];
maintainers = with maintainers; [ fridh ];
};
};
```
Note also the line `doCheck = false;`: we explicitly disabled running the test suite.
#### Develop local package
As a Python developer you're likely aware of [development mode](http://pythonhosted.org/setuptools/setuptools.html#development-mode) (`python setup.py develop`);
instead of installing the package this command creates a special link to the project code.
That way, you can run updated code without having to reinstall after each and every change you make.
Development mode is also available on Nix as [explained](http://nixos.org/nixpkgs/manual/#ssec-python-development) in the Nixpkgs manual.
Let's see how you can use it.
In the previous Nix expression the source was fetched from a URL. We can also refer to a local source instead, using
```nix
src = ./path/to/source/tree;
```
If we create a `shell.nix` file which calls `buildPythonPackage`, and if `src`
is a local source, and if the local source has a `setup.py`, then development
mode is activated.
In the following example we create a simple environment that
has a Python 3.5 version of our package in it, as well as its dependencies and
other packages we like to have in the environment, all specified with `propagatedBuildInputs`.
Indeed, we can just add any package we like to have in our environment to `propagatedBuildInputs`.
```nix
with import <nixpkgs> {};
with pkgs.python35Packages;
buildPythonPackage rec {
name = "mypackage";
src = ./path/to/package/source;
propagatedBuildInputs = [ pytest numpy pkgs.libsndfile ];
}
```
It is important to note that due to how development mode is implemented on Nix it is not possible to have multiple packages simultaneously in development mode.
### Organising your packages
So far we discussed how you can use Python on Nix, and how you can develop with
it. We've looked at how you write expressions to package Python packages, and we
looked at how you can create environments in which specified packages are
available.
At some point you'll likely have multiple packages which you would
like to be able to use in different projects. In order to minimise unnecessary
duplication, we now look at how you can maintain a repository with your
own packages. The important functions here are `import` and `callPackage`.
### Including a derivation using `callPackage`
Earlier we created a Python environment using `buildEnv`, and included the
`toolz` package via a `let` expression.
Let's split the package definition from the environment definition.
We first create a function that builds `toolz` in `~/path/to/toolz/release.nix`
```nix
{ pkgs, buildPythonPackage }:
buildPythonPackage rec {
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl{
url = "https://pypi.python.org/packages/source/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = with pkgs.lib; {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
}
```
It takes two arguments, `pkgs` and `buildPythonPackage`.
We now call this function using `callPackage` in the definition of our environment
```nix
with import <nixpkgs> {};
( let
toolz = pkgs.callPackage ~/path/to/toolz/release.nix { pkgs=pkgs; buildPythonPackage=pkgs.python35Packages.buildPythonPackage; };
in pkgs.python35.buildEnv.override rec {
extraLibs = [ pkgs.python35Packages.numpy toolz ];
}
).env
```
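For comparison, a sketch of the same environment using plain `import`, where every argument has to be passed explicitly (the path is hypothetical):
```nix
with import <nixpkgs> {};

( let
    toolz = import ~/path/to/toolz/release.nix {
      pkgs = pkgs;
      buildPythonPackage = pkgs.python35Packages.buildPythonPackage;
    };
  in pkgs.python35.buildEnv.override {
    extraLibs = [ toolz ];
  }
).env
```
`callPackage` saves us the explicit arguments that it can fill in itself.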
It is important to remember that the Python version for which the package is built
depends on the `python` derivation that is passed to `buildPythonPackage`. Nix
tries to automatically pass arguments when possible, which is why generally you
don't explicitly define which `python` derivation should be used. In the above
example we use `buildPythonPackage` that is part of the set `python35Packages`,
and in this case the `python35` interpreter is automatically used.
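As a sketch of the consequence, passing the `buildPythonPackage` of another set to the very same `release.nix` yields a build for that interpreter instead (the path is again hypothetical):
```nix
with import <nixpkgs> {};

( let
    toolz27 = pkgs.callPackage ~/path/to/toolz/release.nix {
      buildPythonPackage = pkgs.python27Packages.buildPythonPackage;
    };
  in pkgs.python27.buildEnv.override {
    extraLibs = [ toolz27 ];
  }
).env
```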
## Reference
### Interpreters
Versions 2.6, 2.7, 3.3, 3.4 and 3.5 of the CPython interpreter are available on
Nix as `python26`, `python27`, `python33`, `python34` and
`python35`. The PyPy interpreter is also available as `pypy`. Currently, the
aliases `python` and `python3` correspond to `python27` and `python35`,
respectively. The Nix expressions for the interpreters can be found in
`pkgs/development/interpreters/python`.
#### Missing standard library modules
The interpreters `python26` and `python27` do not include modules that
require external dependencies. This is done in order to reduce the closure size.
The following modules need to be added explicitly as a `buildInput`:
* `python.modules.bsddb`
* `python.modules.curses`
* `python.modules.curses_panel`
* `python.modules.crypt`
* `python.modules.gdbm`
* `python.modules.sqlite3`
* `python.modules.tkinter`
* `python.modules.readline`
For convenience `python27Full` and `python26Full` are provided with all
modules included.
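For example, a minimal sketch of adding one of these modules, assuming the expression lives in `python-packages.nix` where `python` is in scope (the package itself is hypothetical):
```nix
example = buildPythonPackage rec {
  name = "example-1.0";
  src = ./.;
  buildInputs = [ python.modules.sqlite3 ];   # make the sqlite3 module available
};
```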
All packages depending on any Python interpreter get `$out/${python.sitePackages}`
appended to `$PYTHONPATH` if such a directory exists.
#### Attributes on interpreter packages
Each interpreter has the following attributes:
- `libPrefix`. Name of the folder in `${python}/lib/` for the corresponding interpreter.
- `interpreter`. Alias for `${python}/bin/${executable}`.
- `buildEnv`. Function to build python interpreter environments with extra packages bundled together. See section *python.buildEnv function* for usage and documentation.
- `sitePackages`. Alias for `lib/${libPrefix}/site-packages`.
- `executable`. Name of the interpreter executable, e.g. `python3.4`.
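A small sketch showing these attributes in use; the values in the comments are indicative:
```nix
with import <nixpkgs> {};

{
  prefix      = python35.libPrefix;      # "python3.5"
  site        = python35.sitePackages;   # "lib/python3.5/site-packages"
  executable  = python35.executable;     # "python3.5"
  interpreter = python35.interpreter;    # "/nix/store/...-python3-3.5.x/bin/python3.5"
}
```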
### Building packages and applications
Python packages (libraries) and applications that use `setuptools` or
`distutils` are typically built with the `buildPythonPackage` and
`buildPythonApplication` functions, respectively.
All Python packages reside in `pkgs/top-level/python-packages.nix` and all
applications elsewhere. Some packages are also defined in
`pkgs/development/python-modules`. It is important that these packages are
called in `pkgs/top-level/python-packages.nix` and not elsewhere, to guarantee
the right version of the package is built.
Based on the packages defined in `pkgs/top-level/python-packages.nix` an
attribute set is created for each available Python interpreter. The available
sets are
* `pkgs.python26Packages`
* `pkgs.python27Packages`
* `pkgs.python33Packages`
* `pkgs.python34Packages`
* `pkgs.python35Packages`
* `pkgs.pypyPackages`
and the aliases
* `pkgs.pythonPackages` pointing to `pkgs.python27Packages`
* `pkgs.python3Packages` pointing to `pkgs.python35Packages`
#### `buildPythonPackage` function
The `buildPythonPackage` function is implemented in
`pkgs/development/python-modules/generic/default.nix`
and can be used as:
```nix
twisted = buildPythonPackage {
  name = "twisted-8.1.0";
  src = pkgs.fetchurl {
    url = http://tmrc.mit.edu/mirror/twisted/Twisted/8.1/Twisted-8.1.0.tar.bz2;
    sha256 = "0q25zbr4xzknaghha72mq57kh53qw1bf8csgp63pm9sfi72qhirl";
  };
  propagatedBuildInputs = [ self.ZopeInterface ];
  meta = {
    homepage = http://twistedmatrix.com/;
    description = "Twisted, an event-driven networking engine written in Python";
    license = stdenv.lib.licenses.mit;
  };
};
```
The `buildPythonPackage` function mainly does four things:
* In the `buildPhase`, it calls `${python.interpreter} setup.py bdist_wheel` to build a wheel binary zipfile.
* In the `installPhase`, it installs the wheel file using `pip install *.whl`.
* In the `postFixup` phase, the `wrapPythonPrograms` bash function is called to wrap all programs in the `$out/bin/*` directory to include `$PYTHONPATH` and `$PATH` environment variables.
* In the `installCheck` phase, `${python.interpreter} setup.py test` is run.
As in Perl, dependencies on other Python packages can be specified in the
`buildInputs` and `propagatedBuildInputs` attributes. If something is
exclusively a build-time dependency, use `buildInputs`; if it's (also) a runtime
dependency, use `propagatedBuildInputs`.
By default tests are run because `doCheck = true`. Test dependencies, such
as the test runner, should be added to `buildInputs`.
By default `meta.platforms` is set to the same value
as the interpreter unless overridden.
##### `buildPythonPackage` parameters
All parameters of the `mkDerivation` function are still supported.
* `namePrefix`: Text prepended to the `${name}` parameter. Defaults to `"python3.3-"` for Python 3.3, etc. Set it to `""` if you're packaging an application or a command line tool.
* `disabled`: If `true`, the package is not built for the particular Python interpreter version. Grep around `pkgs/top-level/python-packages.nix` for examples.
* `setupPyBuildFlags`: List of flags passed to `setup.py build_ext` command.
* `pythonPath`: List of packages to be added into `$PYTHONPATH`. Packages in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
* `preShellHook`: Hook to execute commands before `shellHook`.
* `postShellHook`: Hook to execute commands after `shellHook`.
* `makeWrapperArgs`: A list of strings. Arguments to be passed to `makeWrapper`, which wraps generated binaries. By default, the arguments to `makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling the binary. Additional arguments here can allow a developer to set environment variables which will be available when the binary is run. For example, `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
* `installFlags`: A list of strings. Arguments to be passed to `pip install`. To pass options to `python setup.py install`, use `--install-option`. E.g., `installFlags = ["--install-option='--cpp_implementation'"]`.
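A hedged sketch combining several of these parameters, assuming it sits in `pkgs/top-level/python-packages.nix` where helpers such as `isPyPy` and the package set `self` are in scope (the package itself is hypothetical):
```nix
example = buildPythonPackage rec {
  name = "example-${version}";
  version = "1.0";
  src = ./.;
  disabled = isPyPy;                      # assume the package does not work on PyPy
  namePrefix = "";                        # a command line tool, so drop the interpreter prefix
  pythonPath = [ self.requests ];         # on $PYTHONPATH, but not propagated
  makeWrapperArgs = [ "--set FOO BAR" ];  # extra environment for the wrapped binaries
};
```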
#### `buildPythonApplication` function
The `buildPythonApplication` function is practically the same as `buildPythonPackage`.
The difference is that `buildPythonPackage` by default prefixes the names of the packages with the version of the interpreter.
Because with an application we're not interested in multiple versions per interpreter, the prefix is dropped.
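A sketch of a hypothetical application, analogous to the library examples above:
```nix
myapp = buildPythonApplication rec {
  name = "myapp-${version}";     # no interpreter prefix is prepended
  version = "1.0";
  src = ./.;                     # hypothetical source
  propagatedBuildInputs = with self; [ requests ];
};
```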
#### python.buildEnv function
Python environments can be created using the low-level `pkgs.buildEnv` function.
This example shows how to create an environment that has the Pyramid Web Framework.
Saving the following as `default.nix`
```nix
with import <nixpkgs> {};

python.buildEnv.override {
  extraLibs = [ pkgs.pythonPackages.pyramid ];
  ignoreCollisions = true;
}
```
and running `nix-build` will create

```
/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env
```

with wrapped binaries in `bin/`.
You can also use the `env` attribute to create local environments with needed
packages installed. This is somewhat comparable to `virtualenv`. For example,
running `nix-shell` with the following `shell.nix`
```nix
with import <nixpkgs> {};

(python3.buildEnv.override {
  extraLibs = with python3Packages; [ numpy requests ];
}).env
```
will drop you into a shell where Python will have the
specified packages in its path.
##### `python.buildEnv` arguments
* `extraLibs`: List of packages installed inside the environment.
* `postBuild`: Shell command executed after the build of environment.
* `ignoreCollisions`: Ignore file collisions inside the environment (default is `false`).
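A sketch using all three arguments together (packages chosen arbitrarily):
```nix
with import <nixpkgs> {};

python35.buildEnv.override {
  extraLibs = with python35Packages; [ numpy toolz ];
  ignoreCollisions = true;
  postBuild = "echo 'python environment ready' > $out/env-note";
}
```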
### Development mode
Development or editable mode is supported. To develop Python packages,
`buildPythonPackage` has additional logic inside `shellPhase` to run
`pip install -e . --prefix $TMPDIR/` for the package.
Warning: `shellPhase` is executed only if `setup.py` exists.
Given a `default.nix`:
```nix
with import <nixpkgs> {};

buildPythonPackage {
  name = "myproject";
  buildInputs = with pkgs.pythonPackages; [ pyramid ];
  src = ./.;
}
```
Running `nix-shell` with no arguments should give you
the environment in which the package would be built with
`nix-build`.
A shortcut to set up environments with C headers/libraries and Python packages:

```sh
$ nix-shell -p pythonPackages.pyramid zlib libjpeg git
```
Note: There is a boolean value `lib.inNixShell` set to `true` if nix-shell is invoked.
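For example, a sketch that only pulls in extra tools when `nix-shell` is used:
```nix
with import <nixpkgs> {};

buildPythonPackage {
  name = "myproject";
  src = ./.;
  buildInputs = with pkgs.pythonPackages; [ pyramid ]
    ++ pkgs.lib.optionals pkgs.lib.inNixShell [ pkgs.git ];   # git only inside nix-shell
}
```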
### Tools
Packages inside Nixpkgs are written by hand. However, many tools exist in the
community to help save time. No tool is preferred at the moment.
- [python2nix](https://github.com/proger/python2nix) by Vladimir Kirillov
- [pypi2nix](https://github.com/garbas/pypi2nix) by Rok Garbas
- [pypi2nix](https://github.com/offlinehacker/pypi2nix) by Jaka Hudoklin
## FAQ
### How to solve circular dependencies?
Consider the packages `A` and `B` that depend on each other. When packaging `B`,
a solution is to override package `A` not to depend on `B` as an input. The same
should also be done when packaging `A`.
### How to override a Python package?
Recursively updating a package can be done with `pkgs.overridePackages` as explained in the Nixpkgs manual.
Python attribute sets are created for each interpreter version. We will therefore override the attribute set for the interpreter version we're interested in.
In the following example we change the name of the package `pandas` to `foo`.
```
newpkgs = pkgs.overridePackages(self: super: rec {
python35Packages = super.python35Packages.override {
self = python35Packages // { pandas = python35Packages.pandas.override{name="foo";};};
};
});
```
This can be tested with
```
with import <nixpkgs> {};
(let
newpkgs = pkgs.overridePackages(self: super: rec {
python35Packages = super.python35Packages.override {
self = python35Packages // { pandas = python35Packages.pandas.override{name="foo";};};
};
});
in newpkgs.python35.buildEnv.override{
extraLibs = [newpkgs.python35Packages.blaze ];
}).env
```
A typical use case is to switch to another version of a certain package. For example, in the Nixpkgs repository we have multiple versions of `django` and `scipy`.
In the following example we use a different version of `scipy`. All packages in `newpkgs` will now use the updated `scipy` version.
```
with import <nixpkgs> {};
(let
newpkgs = pkgs.overridePackages(self: super: rec {
python35Packages = super.python35Packages.override {
self = python35Packages // { scipy = python35Packages.scipy_0_16;};
};
});
in pkgs.python35.buildEnv.override{
extraLibs = [newpkgs.python35Packages.blaze ];
}).env
```
The requested package `blaze` depends upon `pandas` which itself depends on `scipy`.
### `install_data` / `data_files` problems
If you get the following error:

```
could not create '/nix/store/6l1bvljpy8gazlsw2aw9skwwp4pmvyxw-python-2.7.8/etc': Permission denied
```
This is a [known bug](https://bitbucket.org/pypa/setuptools/issue/130/install_data-doesnt-respect-prefix) in setuptools.
Setuptools `install_data` does not respect `--prefix`. An example of a package using this feature is `pkgs/tools/X11/xpra/default.nix`.
As a workaround, install the data as an extra `preInstall` step:

```sh
${python.interpreter} setup.py install_data --install-dir=$out --root=$out
sed -i '/ = data_files/d' setup.py
```
### Rationale of non-existent global site-packages
On most operating systems a global `site-packages` is maintained. This however
becomes problematic if you want to run multiple Python versions or have multiple
versions of certain libraries for your projects. Generally, you would solve such
issues by creating virtual environments using `virtualenv`.
On Nix each package has an isolated dependency tree which, in the case of
Python, guarantees the right versions of the interpreter and libraries or
packages are available. There is therefore no need to maintain a global `site-packages`.
If you want to create a Python environment for development, then the recommended
method is to use `nix-shell`, either with or without the `python.buildEnv`
function.
## Contributing
### Contributing guidelines
The following rules should be respected:
* Make sure the package builds for all Python interpreters. Use the `disabled` argument to `buildPythonPackage` to mark unsupported interpreters.
* If tests need to be disabled for a package, make sure you leave a comment explaining why.
* Packages in `pkgs/top-level/python-packages.nix` are sorted quasi-alphabetically to avoid merge conflicts.
* Python libraries are supposed to be in `python-packages.nix` and packaged with `buildPythonPackage`. Python applications live outside of `python-packages.nix` and are packaged with `buildPythonApplication`.

View file

@ -1,447 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-python">
<title>Python</title>
<para>
Currently supported interpreters are <varname>python26</varname>, <varname>python27</varname>,
<varname>python33</varname>, <varname>python34</varname>, <varname>python35</varname>
and <varname>pypy</varname>.
</para>
<para>
<varname>python</varname> is an alias to <varname>python27</varname> and <varname>python3</varname> is an alias to <varname>python34</varname>.
</para>
<para>
<varname>python26</varname> and <varname>python27</varname> do not include modules that require
external dependencies (to reduce dependency bloat). Following modules need to be added as
<varname>buildInput</varname> explicitly:
</para>
<itemizedlist>
<listitem><para><varname>python.modules.bsddb</varname></para></listitem>
<listitem><para><varname>python.modules.curses</varname></para></listitem>
<listitem><para><varname>python.modules.curses_panel</varname></para></listitem>
<listitem><para><varname>python.modules.crypt</varname></para></listitem>
<listitem><para><varname>python.modules.gdbm</varname></para></listitem>
<listitem><para><varname>python.modules.sqlite3</varname></para></listitem>
<listitem><para><varname>python.modules.tkinter</varname></para></listitem>
<listitem><para><varname>python.modules.readline</varname></para></listitem>
</itemizedlist>
<para>For convenience <varname>python27Full</varname> and <varname>python26Full</varname>
are provided with all modules included.</para>
<para>
Python packages that
use <link xlink:href="http://pypi.python.org/pypi/setuptools/"><literal>setuptools</literal></link> or <literal>distutils</literal>,
can be built using the <varname>buildPythonPackage</varname> function as documented below.
</para>
<para>
All packages depending on any Python interpreter get appended <varname>$out/${python.sitePackages}</varname>
to <literal>$PYTHONPATH</literal> if such directory exists.
</para>
<variablelist>
<title>
Useful attributes on interpreters packages:
</title>
<varlistentry>
<term><varname>libPrefix</varname></term>
<listitem><para>
Name of the folder in <literal>${python}/lib/</literal> for corresponding interpreter.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>interpreter</varname></term>
<listitem><para>
Alias for <literal>${python}/bin/${executable}.</literal>
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>buildEnv</varname></term>
<listitem><para>
Function to build python interpreter environments with extra packages bundled together.
See <xref linkend="ssec-python-build-env" /> for usage and documentation.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>sitePackages</varname></term>
<listitem><para>
Alias for <literal>lib/${libPrefix}/site-packages</literal>.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>executable</varname></term>
<listitem><para>
Name of the interpreter executable, ie <literal>python3.4</literal>.
</para></listitem>
</varlistentry>
</variablelist>
<section xml:id="ssec-build-python-package"><title><varname>buildPythonPackage</varname> function</title>
<para>
The function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/generic/default.nix">
<filename>pkgs/development/python-modules/generic/default.nix</filename></link>.
Example usage:
<programlisting language="nix">
twisted = buildPythonPackage {
name = "twisted-8.1.0";
src = pkgs.fetchurl {
url = http://tmrc.mit.edu/mirror/twisted/Twisted/8.1/Twisted-8.1.0.tar.bz2;
sha256 = "0q25zbr4xzknaghha72mq57kh53qw1bf8csgp63pm9sfi72qhirl";
};
propagatedBuildInputs = [ self.ZopeInterface ];
meta = {
homepage = http://twistedmatrix.com/;
description = "Twisted, an event-driven networking engine written in Python";
license = stdenv.lib.licenses.mit;
};
};
</programlisting>
Most of Python packages that use <varname>buildPythonPackage</varname> are defined
in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/python-packages.nix"><filename>pkgs/top-level/python-packages.nix</filename></link>
and generated for each python interpreter separately into attribute sets <varname>python26Packages</varname>,
<varname>python27Packages</varname>, <varname>python35Packages</varname>, <varname>python33Packages</varname>,
<varname>python34Packages</varname> and <varname>pypyPackages</varname>.
</para>
<para>
<function>buildPythonPackage</function> mainly does four things:
<orderedlist>
<listitem><para>
In the <varname>buildPhase</varname>, it calls
<literal>${python.interpreter} setup.py bdist_wheel</literal> to build a wheel binary zipfile.
</para></listitem>
<listitem><para>
In the <varname>installPhase</varname>, it installs the wheel file using
<literal>pip install *.whl</literal>.
</para></listitem>
<listitem><para>
In the <varname>postFixup</varname> phase, <literal>wrapPythonPrograms</literal>
bash function is called to wrap all programs in <filename>$out/bin/*</filename>
directory to include <literal>$PYTHONPATH</literal> and <literal>$PATH</literal>
environment variables.
</para></listitem>
<listitem><para>
In the <varname>installCheck</varname> phase, <literal>${python.interpreter} setup.py test</literal>
is run.
</para></listitem>
</orderedlist>
</para>
<para>By default <varname>doCheck = true</varname> is set</para>
<para>
As in Perl, dependencies on other Python packages can be specified in the
<varname>buildInputs</varname> and
<varname>propagatedBuildInputs</varname> attributes. If something is
exclusively a build-time dependency, use
<varname>buildInputs</varname>; if it's (also) a runtime dependency,
use <varname>propagatedBuildInputs</varname>.
</para>
<para>
By default <varname>meta.platforms</varname> is set to the same value
as the interpreter unless overridden.
</para>
<variablelist>
<title>
<varname>buildPythonPackage</varname> parameters
(all parameters from <varname>mkDerivation</varname> function are still supported)
</title>
<varlistentry>
<term><varname>namePrefix</varname></term>
<listitem><para>
Prepended text to <varname>${name}</varname> parameter.
Defaults to <literal>"python3.3-"</literal> for Python 3.3, etc. Set it to
<literal>""</literal>
if you're packaging an application or a command line tool.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>disabled</varname></term>
<listitem><para>
If <varname>true</varname>, the package is not built for
particular python interpreter version. Grep around
<filename>pkgs/top-level/python-packages.nix</filename>
for examples.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>setupPyBuildFlags</varname></term>
<listitem><para>
List of flags passed to <command>setup.py build_ext</command> command.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>pythonPath</varname></term>
<listitem><para>
List of packages to be added into <literal>$PYTHONPATH</literal>.
Packages in <varname>pythonPath</varname> are not propagated
(contrary to <varname>propagatedBuildInputs</varname>).
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>preShellHook</varname></term>
<listitem><para>
Hook to execute commands before <varname>shellHook</varname>.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>postShellHook</varname></term>
<listitem><para>
Hook to execute commands after <varname>shellHook</varname>.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>makeWrapperArgs</varname></term>
<listitem><para>
A list of strings. Arguments to be passed to
<varname>makeWrapper</varname>, which wraps generated binaries. By
default, the arguments to <varname>makeWrapper</varname> set
<varname>PATH</varname> and <varname>PYTHONPATH</varname> environment
variables before calling the binary. Additional arguments here can
allow a developer to set environment variables which will be
available when the binary is run. For example,
<varname>makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]</varname>.
</para></listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="ssec-python-build-env"><title><function>python.buildEnv</function> function</title>
<para>
Create Python environments using low-level <function>pkgs.buildEnv</function> function. Example <filename>default.nix</filename>:
<programlisting language="nix">
<![CDATA[with import <nixpkgs> {};
python.buildEnv.override {
extraLibs = [ pkgs.pythonPackages.pyramid ];
ignoreCollisions = true;
}]]>
</programlisting>
Running <command>nix-build</command> will create
<filename>/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env</filename>
with wrapped binaries in <filename>bin/</filename>.
</para>
<para>
You can also use <varname>env</varname> attribute to create local
environments with needed packages installed (somewhat comparable to
<literal>virtualenv</literal>). For example, with the following
<filename>shell.nix</filename>:
<programlisting language="nix">
<![CDATA[with import <nixpkgs> {};
(python3.buildEnv.override {
extraLibs = with python3Packages;
[ numpy
requests
];
}).env]]>
</programlisting>
Running <command>nix-shell</command> will drop you into a shell where
<command>python</command> will have specified packages in its path.
</para>
<variablelist>
<title>
<function>python.buildEnv</function> arguments
</title>
<varlistentry>
<term><varname>extraLibs</varname></term>
<listitem><para>
List of packages installed inside the environment.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>postBuild</varname></term>
<listitem><para>
Shell command executed after the build of environment.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>ignoreCollisions</varname></term>
<listitem><para>
Ignore file collisions inside the environment (default is <varname>false</varname>).
</para></listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="ssec-python-tools"><title>Tools</title>
<para>Packages inside nixpkgs are written by hand. However many tools
exist in community to help save time. No tool is preferred at the moment.
</para>
<itemizedlist>
<listitem><para>
<link xlink:href="https://github.com/proger/python2nix">python2nix</link>
by Vladimir Kirillov
</para></listitem>
<listitem><para>
<link xlink:href="https://github.com/garbas/pypi2nix">pypi2nix</link>
by Rok Garbas
</para></listitem>
<listitem><para>
<link xlink:href="https://github.com/offlinehacker/pypi2nix">pypi2nix</link>
by Jaka Hudoklin
</para></listitem>
</itemizedlist>
</section>
<section xml:id="ssec-python-development"><title>Development</title>
<para>
To develop Python packages <function>buildPythonPackage</function> has
additional logic inside <varname>shellPhase</varname> to run
<command>pip install -e . --prefix $TMPDIR/</command> for the package.
</para>
<warning><para><varname>shellPhase</varname> is executed only if <filename>setup.py</filename>
exists.</para></warning>
<para>
Given a <filename>default.nix</filename>:
<programlisting language="nix">
<![CDATA[with import <nixpkgs> {};
buildPythonPackage {
name = "myproject";
buildInputs = with pkgs.pythonPackages; [ pyramid ];
src = ./.;
}]]>
</programlisting>
Running <command>nix-shell</command> with no arguments should give you
the environment in which the package would be build with
<command>nix-build</command>.
</para>
<para>
Shortcut to setup environments with C headers/libraries and python packages:
<programlisting language="bash">$ nix-shell -p pythonPackages.pyramid zlib libjpeg git</programlisting>
</para>
<note><para>
There is a boolean value <varname>lib.inNixShell</varname> set to
<varname>true</varname> if nix-shell is invoked.
</para></note>
</section>
<section xml:id="ssec-python-faq"><title>FAQ</title>
<variablelist>
<varlistentry>
<term>How to solve circular dependencies?</term>
<listitem><para>
If you have packages <varname>A</varname> and <varname>B</varname> that
depend on each other, when packaging <varname>B</varname> override package
<varname>A</varname> not to depend on <varname>B</varname> as input
(and also the other way around).
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>install_data / data_files</varname> problems resulting into <literal>error: could not create '/nix/store/6l1bvljpy8gazlsw2aw9skwwp4pmvyxw-python-2.7.8/etc': Permission denied</literal></term>
<listitem><para>
<link xlink:href="https://bitbucket.org/pypa/setuptools/issue/130/install_data-doesnt-respect-prefix">
Known bug in setuptools <varname>install_data</varname> does not respect --prefix</link>. Example of
such package using the feature is <filename>pkgs/tools/X11/xpra/default.nix</filename>. As workaround
install it as an extra <varname>preInstall</varname> step:
<programlisting>${python.interpreter} setup.py install_data --install-dir=$out --root=$out
sed -i '/ = data_files/d' setup.py</programlisting>
</para></listitem>
</varlistentry>
<varlistentry>
<term>Rationale of non-existent global site-packages</term>
<listitem><para>
There is no need to have global site-packages in Nix. Each package has isolated
dependency tree and installing any python package will only populate <varname>$PATH</varname>
inside user environment. See <xref linkend="ssec-python-build-env" /> to create self-contained
interpreter with a set of packages.
</para></listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="ssec-python-contrib"><title>Contributing guidelines</title>
<para>
Following rules are desired to be respected:
</para>
<itemizedlist>
<listitem><para>
Make sure package builds for all python interpreters. Use <varname>disabled</varname> argument to
<function>buildPythonPackage</function> to set unsupported interpreters.
</para></listitem>
<listitem><para>
If tests need to be disabled for a package, make sure you leave a comment about reasoning.
</para></listitem>
<listitem><para>
Packages in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/python-packages.nix"><filename>pkgs/top-level/python-packages.nix</filename></link>
are sorted quasi-alphabetically to avoid merge conflicts.
</para></listitem>
</itemizedlist>
</section>
</section>

View file

@ -12,25 +12,26 @@
<screen>
<![CDATA[$ cd pkgs/servers/monitoring
$ mkdir sensu
$ cd sensu
$ cat > Gemfile
source 'https://rubygems.org'
gem 'sensu'
$ bundler package --path /tmp/vendor/bundle
$ nix-shell -p bundler --command "bundler package --path /tmp/vendor/bundle"
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
$ cat > default.nix
{ lib, bundlerEnv, ruby }:
bundlerEnv {
name = "sensu-0.17.1";
bundlerEnv rec {
name = "sensu-${version}";
version = (import gemset).sensu.version;
inherit ruby;
gemfile = ./Gemfile;
lockfile = ./Gemfile.lock;
gemset = ./gemset.nix;
meta = with lib; {
description = "A monitoring framework that aims to be simple, malleable,
and scalable.";
description = "A monitoring framework that aims to be simple, malleable, and scalable";
homepage = http://sensuapp.org/;
license = with licenses; mit;
maintainers = with maintainers; [ theuni ];

View file

@ -0,0 +1,59 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-texlive">
<title>TeX Live</title>
<para>Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute <varname>texlive</varname>.</para>
<section><title>User's guide</title>
<itemizedlist>
<listitem><para>
For basic usage just pull <varname>texlive.combined.scheme-basic</varname> for an environment with basic LaTeX support.</para></listitem>
<listitem><para>
It typically won't work to use separately installed packages together.
Instead, you can build a custom set of packages like this:
<programlisting>
texlive.combine {
inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
}
</programlisting>
All the schemes, collections and a few thousand packages are available, as defined upstream (perhaps with tiny differences).
</para></listitem>
<listitem><para>
By default you only get executables and files needed during runtime, and a little documentation for the core packages. To change that, you need to pass a <varname>pkgFilter</varname> function to <varname>combine</varname>.
<programlisting>
texlive.combine {
# inherit (texlive) whatever-you-want;
pkgFilter = pkg:
pkg.tlType == "run" || pkg.tlType == "bin" || pkg.pname == "cm-super";
# elem tlType [ "run" "bin" "doc" "source" ]
# there are also other attributes: version, name
}
</programlisting>
</para></listitem>
<listitem><para>
You can list packages, e.g. with <command>nix-repl</command>.
<programlisting>
$ nix-repl
nix-repl> texlive.collection-&lt;TAB>
</programlisting>
</para></listitem>
</itemizedlist>
</section>
<section><title>Known problems</title>
<itemizedlist>
<listitem><para>
Some tools are still missing, e.g. luajittex;</para></listitem>
<listitem><para>
some apps aren't packaged/tested yet (asymptote, biber, etc.);</para></listitem>
<listitem><para>
feature/bug: when a package is rejected by <varname>pkgFilter</varname>, its dependencies are still propagated;</para></listitem>
<listitem><para>
in case of any bugs or feature requests, file a GitHub issue or, better, a pull request and /cc @vcunat.</para></listitem>
</itemizedlist>
</section>
</section>

View file

@ -12,9 +12,15 @@ rec {
inherit (builtins) attrNames listToAttrs hasAttr isAttrs getAttr;
/* Return an attribute from nested attribute sets. For instance
["x" "y"] applied to some set e returns e.x.y, if it exists. The
default value is returned otherwise. */
/* Return an attribute from nested attribute sets.
Example:
x = { a = { b = 3; }; }
attrByPath ["a" "b"] 6 x
=> 3
attrByPath ["z" "z"] 6 x
=> 6
*/
attrByPath = attrPath: default: e:
let attr = head attrPath;
in
@ -24,8 +30,15 @@ rec {
else default;
/* Return if an attribute from nested attribute set exists.
For instance ["x" "y"] applied to some set e returns true, if e.x.y exists. False
is returned otherwise. */
Example:
x = { a = { b = 3; }; }
hasAttrByPath ["a" "b"] x
=> true
hasAttrByPath ["z" "z"] x
=> false
*/
hasAttrByPath = attrPath: e:
let attr = head attrPath;
in
@ -35,14 +48,28 @@ rec {
else false;
/* Return nested attribute set in which an attribute is set. For instance
["x" "y"] applied with some value v returns `x.y = v;' */
/* Return nested attribute set in which an attribute is set.
Example:
setAttrByPath ["a" "b"] 3
=> { a = { b = 3; }; }
*/
setAttrByPath = attrPath: value:
if attrPath == [] then value
else listToAttrs
[ { name = head attrPath; value = setAttrByPath (tail attrPath) value; } ];
/* Like `getAttrPath' without a default value. If it doesn't find the
path it will throw.
Example:
x = { a = { b = 3; }; }
getAttrFromPath ["a" "b"] x
=> 3
getAttrFromPath ["z" "z"] x
=> error: cannot find attribute `z.z'
*/
getAttrFromPath = attrPath: set:
let errorMsg = "cannot find attribute `" + concatStringsSep "." attrPath + "'";
in attrByPath attrPath (abort errorMsg) set;
@ -109,9 +136,11 @@ rec {
) (attrNames set)
);
/* foldAttrs: apply fold functions to values grouped by key. Eg accumulate values as list:
foldAttrs (n: a: [n] ++ a) [] [{ a = 2; } { a = 3; }]
=> { a = [ 2 3 ]; }
/* Apply fold functions to values grouped by key.
Example:
foldAttrs (n: a: [n] ++ a) [] [{ a = 2; } { a = 3; }]
=> { a = [ 2 3 ]; }
*/
foldAttrs = op: nul: list_of_attrs:
fold (n: a:
@ -147,7 +176,12 @@ rec {
/* Utility function that creates a {name, value} pair as expected by
builtins.listToAttrs. */
builtins.listToAttrs.
Example:
nameValuePair "some" 6
=> { name = "some"; value = 6; }
*/
nameValuePair = name: value: { inherit name value; };
@ -248,11 +282,19 @@ rec {
listToAttrs (map (n: nameValuePair n (f n)) names);
/* Check whether the argument is a derivation. */
/* Check whether the argument is a derivation. Any set with
{ type = "derivation"; } counts as a derivation.
Example:
nixpkgs = import <nixpkgs> {}
isDerivation nixpkgs.ruby
=> true
isDerivation "foobar"
=> false
*/
isDerivation = x: isAttrs x && x ? type && x.type == "derivation";
/* Convert a store path to a fake derivation. */
/* Converts a store path to a fake derivation. */
toDerivation = path:
let path' = builtins.storePath path; in
{ type = "derivation";
@ -262,32 +304,49 @@ rec {
};
/* If the Boolean `cond' is true, return the attribute set `as',
otherwise an empty attribute set. */
/* If `cond' is true, return the attribute set `as',
otherwise an empty attribute set.
Example:
optionalAttrs (true) { my = "set"; }
=> { my = "set"; }
optionalAttrs (false) { my = "set"; }
=> { }
*/
optionalAttrs = cond: as: if cond then as else {};
/* Merge sets of attributes and use the function f to merge attribute
values. */
values.
Example:
zipAttrsWithNames ["a"] (name: vs: vs) [{a = "x";} {a = "y"; b = "z";}]
=> { a = ["x" "y"]; }
*/
zipAttrsWithNames = names: f: sets:
listToAttrs (map (name: {
inherit name;
value = f name (catAttrs name sets);
}) names);
# implementation note: Common names appear multiple times in the list of
# names, hopefully this does not affect the system because the maximal
# laziness avoids computing the same expression twice and listToAttrs does
# not care about duplicated attribute names.
/* Implementation note: Common names appear multiple times in the list of
names, hopefully this does not affect the system because the maximal
laziness avoids computing the same expression twice and listToAttrs does
not care about duplicated attribute names.
Example:
zipAttrsWith (name: values: values) [{a = "x";} {a = "y"; b = "z";}]
=> { a = ["x" "y"]; b = ["z"] }
*/
zipAttrsWith = f: sets: zipAttrsWithNames (concatMap attrNames sets) f sets;
/* Like `zipAttrsWith' with `(name: values: value)' as the function.
Example:
zipAttrs [{a = "x";} {a = "y"; b = "z";}]
=> { a = ["x" "y"]; b = ["z"] }
*/
zipAttrs = zipAttrsWith (name: values: values);
/* backward compatibility */
zipWithNames = zipAttrsWithNames;
zip = builtins.trace "lib.zip is deprecated, use lib.zipAttrsWith instead" zipAttrsWith;
/* Does the same as the update operator '//' except that attributes are
merged until the given predicate is verified. The predicate should
accept 3 arguments which are the path to reach the attribute, a part of
@ -351,6 +410,15 @@ rec {
!(isAttrs lhs && isAttrs rhs)
) lhs rhs;
/* Returns true if the pattern is contained in the set. False otherwise.
FIXME(zimbatm): this example doesn't work !!!
Example:
sys = mkSystem { }
matchAttrs { cpu = { bits = 64; }; } sys
=> true
*/
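/* A working invocation, for illustration (the attribute values below are
chosen for this sketch, not taken from the original docstring):
matchAttrs { cpu = { bits = 64; }; } { cpu = { bits = 64; arch = "x86_64"; }; mem = 8; }
=> true
*/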
matchAttrs = pattern: attrs:
fold or false (attrValues (zipAttrsWithNames (attrNames pattern) (n: values:
let pat = head values; val = head (tail values); in
@ -359,10 +427,23 @@ rec {
else pat == val
) [pattern attrs]));
# override only the attributes that are already present in the old set
# useful for deep-overriding
/* Override only the attributes that are already present in the old set
useful for deep-overriding.
Example:
x = { a = { b = 4; c = 3; }; }
overrideExisting x { a = { b = 6; d = 2; }; }
=> { a = { b = 6; d = 2; }; }
*/
overrideExisting = old: new:
old // listToAttrs (map (attr: nameValuePair attr (attrByPath [attr] old.${attr} new)) (attrNames old));
deepSeqAttrs = x: y: deepSeqList (attrValues x) y;
/*** deprecated stuff ***/
deepSeqAttrs = throw "removed 2016-02-29 because unused and broken";
zipWithNames = zipAttrsWithNames;
zip = builtins.trace
"lib.zip is deprecated, use lib.zipAttrsWith instead" zipAttrsWith;
}

View file

@ -175,6 +175,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Eclipse Public License 1.0";
};
epson = {
fullName = "Seiko Epson Corporation Software License Agreement for Linux";
url = https://download.ebz.epson.net/dsc/du/02/eula/global/LINUX_EN.html;
free = false;
};
fdl12 = spdx {
spdxId = "GFDL-1.2";
fullName = "GNU Free Documentation License v1.2";

View file

@ -6,17 +6,26 @@ rec {
inherit (builtins) head tail length isList elemAt concatLists filter elem genList;
/* Create a list consisting of a single element. `singleton x' is
sometimes more convenient with respect to indentation than `[x]'
when x spans multiple lines.
# Create a list consisting of a single element. `singleton x' is
# sometimes more convenient with respect to indentation than `[x]'
# when x spans multiple lines.
Example:
singleton "foo"
=> [ "foo" ]
*/
singleton = x: [x];
/* "Fold" a binary function `op' between successive elements of
`list' with `nul' as the starting value, i.e., `fold op nul [x_1
x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))'. (This is
Haskell's foldr).
# "Fold" a binary function `op' between successive elements of
# `list' with `nul' as the starting value, i.e., `fold op nul [x_1
# x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))'. (This is
# Haskell's foldr).
Example:
concat = fold (a: b: a + b) "z"
concat [ "a" "b" "c" ]
=> "abcnul"
*/
fold = op: nul: list:
let
len = length list;
@ -26,8 +35,14 @@ rec {
else op (elemAt list n) (fold' (n + 1));
in fold' 0;
# Left fold: `fold op nul [x_1 x_2 ... x_n] == op (... (op (op nul
# x_1) x_2) ... x_n)'.
/* Left fold: `fold op nul [x_1 x_2 ... x_n] == op (... (op (op nul
x_1) x_2) ... x_n)'.
Example:
lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
*/
foldl = op: nul: list:
let
len = length list;
@ -37,13 +52,22 @@ rec {
else op (foldl' (n - 1)) (elemAt list n);
in foldl' (length list - 1);
/* Strict version of foldl.
# Strict version of foldl.
The difference is that evaluation is forced upon access. Usually used
with small whole results (in contrast to lazily-generated lists or large
lists where only a part is consumed).
*/
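/* Illustrative example (not part of the original docstring):
foldl' (acc: x: acc + x) 0 [ 1 2 3 ]
=> 6
*/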
foldl' = builtins.foldl' or foldl;
/* Map with index
# Map with index: `imap (i: v: "${v}-${toString i}") ["a" "b"] ==
# ["a-1" "b-2"]'. FIXME: why does this start to count at 1?
FIXME(zimbatm): why does this start to count at 1?
Example:
imap (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ]
*/
imap =
if builtins ? genList then
f: list: genList (n: f (n + 1) (elemAt list n)) (length list)
@ -57,73 +81,141 @@ rec {
else [ (f (n + 1) (elemAt list n)) ] ++ imap' (n + 1);
in imap' 0;
/* Map and concatenate the result.
# Map and concatenate the result.
Example:
concatMap (x: [x] ++ ["z"]) ["a" "b"]
=> [ "a" "z" "b" "z" ]
*/
concatMap = f: list: concatLists (map f list);
/* Flatten the argument into a single list; that is, nested lists are
spliced into the top-level lists.
# Flatten the argument into a single list; that is, nested lists are
# spliced into the top-level lists. E.g., `flatten [1 [2 [3] 4] 5]
# == [1 2 3 4 5]' and `flatten 1 == [1]'.
Example:
flatten [1 [2 [3] 4] 5]
=> [1 2 3 4 5]
flatten 1
=> [1]
*/
flatten = x:
if isList x
then foldl' (x: y: x ++ (flatten y)) [] x
else [x];
/* Remove elements equal to 'e' from a list. Useful for buildInputs.
# Remove elements equal to 'e' from a list. Useful for buildInputs.
Example:
remove 3 [ 1 3 4 3 ]
=> [ 1 4 ]
*/
remove = e: filter (x: x != e);
/* Find the sole element in the list matching the specified
predicate, returns `default' if no such element exists, or
`multiple' if there are multiple matching elements.
# Find the sole element in the list matching the specified
# predicate, returns `default' if no such element exists, or
# `multiple' if there are multiple matching elements.
Example:
findSingle (x: x == 3) "none" "multiple" [ 1 3 3 ]
=> "multiple"
findSingle (x: x == 3) "none" "multiple" [ 1 3 ]
=> 3
findSingle (x: x == 3) "none" "multiple" [ 1 9 ]
=> "none"
*/
findSingle = pred: default: multiple: list:
let found = filter pred list; len = length found;
in if len == 0 then default
else if len != 1 then multiple
else head found;
/* Find the first element in the list matching the specified
predicate or returns `default' if no such element exists.
# Find the first element in the list matching the specified
# predicate or returns `default' if no such element exists.
Example:
findFirst (x: x > 3) 7 [ 1 6 4 ]
=> 6
findFirst (x: x > 9) 7 [ 1 6 4 ]
=> 7
*/
findFirst = pred: default: list:
let found = filter pred list;
in if found == [] then default else head found;
/* Return true iff function `pred' returns true for at least one element
of `list'.
# Return true iff function `pred' returns true for at least one element
# of `list'.
Example:
any isString [ 1 "a" { } ]
=> true
any isString [ 1 { } ]
=> false
*/
any = builtins.any or (pred: fold (x: y: if pred x then true else y) false);
/* Return true iff function `pred' returns true for all elements of
`list'.
# Return true iff function `pred' returns true for all elements of
# `list'.
Example:
all (x: x < 3) [ 1 2 ]
=> true
all (x: x < 3) [ 1 2 3 ]
=> false
*/
all = builtins.all or (pred: fold (x: y: if pred x then y else false) true);
/* Count how many times function `pred' returns true for the elements
of `list'.
# Count how many times function `pred' returns true for the elements
# of `list'.
Example:
count (x: x == 3) [ 3 2 3 4 6 ]
=> 2
*/
count = pred: foldl' (c: x: if pred x then c + 1 else c) 0;
/* Return a singleton list or an empty list, depending on a boolean
value. Useful when building lists with optional elements
(e.g. `++ optional (system == "i686-linux") flashplayer').
# Return a singleton list or an empty list, depending on a boolean
# value. Useful when building lists with optional elements
# (e.g. `++ optional (system == "i686-linux") flashplayer').
Example:
optional true "foo"
=> [ "foo" ]
optional false "foo"
=> [ ]
*/
optional = cond: elem: if cond then [elem] else [];
/* Return a list or an empty list, depending on a boolean value.
# Return a list or an empty list, depending on a boolean value.
Example:
optionals true [ 2 3 ]
=> [ 2 3 ]
optionals false [ 2 3 ]
=> [ ]
*/
optionals = cond: elems: if cond then elems else [];
# If argument is a list, return it; else, wrap it in a singleton
# list. If you're using this, you should almost certainly
# reconsider if there isn't a more "well-typed" approach.
/* If argument is a list, return it; else, wrap it in a singleton
list. If you're using this, you should almost certainly
reconsider if there isn't a more "well-typed" approach.
Example:
toList [ 1 2 ]
=> [ 1 2 ]
toList "hi"
=> [ "hi "]
*/
toList = x: if isList x then x else [x];
/* Return a list of integers from `first' up to and including `last'.
# Return a list of integers from `first' up to and including `last'.
Example:
range 2 4
=> [ 2 3 4 ]
range 3 2
=> [ ]
*/
range =
if builtins ? genList then
first: last:
@ -136,9 +228,13 @@ rec {
then []
else [first] ++ range (first + 1) last;
/* Splits the elements of a list in two lists, `right' and
`wrong', depending on the evaluation of a predicate.
# Partition the elements of a list in two lists, `right' and
# `wrong', depending on the evaluation of a predicate.
Example:
partition (x: x > 2) [ 5 1 2 3 4 ]
=> { right = [ 5 3 4 ]; wrong = [ 1 2 ]; }
*/
partition = pred:
fold (h: t:
if pred h
@ -146,7 +242,14 @@ rec {
else { right = t.right; wrong = [h] ++ t.wrong; }
) { right = []; wrong = []; };
/* Merges two lists of the same size together. If the sizes aren't the same
the merging stops at the shortest. How both lists are merged is defined
by the first argument.
Example:
zipListsWith (a: b: a + b) ["h" "l"] ["e" "o"]
=> ["he" "lo"]
*/
zipListsWith =
if builtins ? genList then
f: fst: snd: genList (n: f (elemAt fst n) (elemAt snd n)) (min (length fst) (length snd))
@ -161,21 +264,37 @@ rec {
else [];
in zipListsWith' 0;
/* Merges two lists of the same size together. If the sizes aren't the same
the merging stops at the shortest.
Example:
zipLists [ 1 2 ] [ "a" "b" ]
=> [ { fst = 1; snd = "a"; } { fst = 2; snd = "b"; } ]
*/
zipLists = zipListsWith (fst: snd: { inherit fst snd; });
/* Reverse the order of the elements of a list.
# Reverse the order of the elements of a list.
Example:
reverseList [ "b" "o" "j" ]
=> [ "j" "o" "b" ]
*/
reverseList =
if builtins ? genList then
xs: let l = length xs; in genList (n: elemAt xs (l - n - 1)) l
else
fold (e: acc: acc ++ [ e ]) [];
/* Sort a list based on a comparator function which compares two
elements and returns true if the first argument is strictly below
the second argument. The returned list is sorted in an increasing
order. The implementation does a quick-sort.
# Sort a list based on a comparator function which compares two
# elements and returns true if the first argument is strictly below
# the second argument. The returned list is sorted in an increasing
# order. The implementation does a quick-sort.
Example:
sort (a: b: a < b) [ 5 3 7 ]
=> [ 3 5 7 ]
*/
sort = builtins.sort or (
strictLess: list:
let
@ -193,8 +312,14 @@ rec {
if len < 2 then list
else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right));
/* Return the first (at most) N elements of a list.
# Return the first (at most) N elements of a list.
Example:
take 2 [ "a" "b" "c" "d" ]
=> [ "a" "b" ]
take 2 [ ]
=> [ ]
*/
take =
if builtins ? genList then
count: sublist 0 count
@ -209,8 +334,14 @@ rec {
[ (elemAt list n) ] ++ take' (n + 1);
in take' 0;
/* Remove the first (at most) N elements of a list.
# Remove the first (at most) N elements of a list.
Example:
drop 2 [ "a" "b" "c" "d" ]
=> [ "c" "d" ]
drop 2 [ ]
=> [ ]
*/
drop =
if builtins ? genList then
count: list: sublist count (length list) list
@ -225,9 +356,15 @@ rec {
drop' (n - 1) ++ [ (elemAt list n) ];
in drop' (len - 1);
/* Return a list consisting of at most count elements of list,
starting at index start.
# Return a list consisting of at most count elements of list,
# starting at index start.
Example:
sublist 1 3 [ "a" "b" "c" "d" "e" ]
=> [ "b" "c" "d" ]
sublist 1 3 [ ]
=> [ ]
*/
sublist = start: count: list:
let len = length list; in
genList
@ -236,23 +373,36 @@ rec {
else if start + count > len then len - start
else count);
/* Return the last element of a list.
# Return the last element of a list.
Example:
last [ 1 2 3 ]
=> 3
*/
last = list:
assert list != []; elemAt list (length list - 1);
/* Return all elements but the last
# Return all elements but the last
Example:
init [ 1 2 3 ]
=> [ 1 2 ]
*/
init = list: assert list != []; take (length list - 1) list;
deepSeqList = xs: y: if any (x: deepSeq x false) xs then y else y;
/* FIXME(zimbatm) Not used anywhere
*/
crossLists = f: foldl (fs: args: concatMap (f: map f args) fs) [f];
# Remove duplicate elements from the list. O(n^2) complexity.
/* Remove duplicate elements from the list. O(n^2) complexity.
Example:
unique [ 3 2 3 4 ]
=> [ 3 2 4 ]
*/
unique = list:
if list == [] then
[]
@ -262,12 +412,24 @@ rec {
xs = unique (drop 1 list);
in [x] ++ remove x xs;
/* Intersects list 'e' and another list. O(nm) complexity.
# Intersects list 'e' and another list. O(nm) complexity.
Example:
intersectLists [ 1 2 3 ] [ 6 3 2 ]
=> [ 3 2 ]
*/
intersectLists = e: filter (x: elem x e);
/* Subtracts list 'e' from another list. O(nm) complexity.
# Subtracts list 'e' from another list. O(nm) complexity.
Example:
subtractLists [ 3 2 ] [ 1 2 3 4 5 3 ]
=> [ 1 4 5 ]
*/
subtractLists = e: filter (x: !(elem x e));
/*** deprecated stuff ***/
deepSeqList = throw "removed 2016-02-29 because unused and broken";
}

View file

@ -12,6 +12,7 @@
abbradar = "Nikolay Amiantov <ab@fmap.me>";
aboseley = "Adam Boseley <adam.boseley@gmail.com>";
adev = "Adrien Devresse <adev@adev.name>";
Adjective-Object = "Maxwell Huang-Hobbs <mhuan13@gmail.com>";
aespinosa = "Allan Espinosa <allan.espinosa@outlook.com>";
aflatter = "Alexander Flatter <flatter@fastmail.fm>";
aforemny = "Alexander Foremny <alexanderforemny@googlemail.com>";
@ -59,6 +60,7 @@
bodil = "Bodil Stokke <nix@bodil.org>";
boothead = "Ben Ford <ben@perurbis.com>";
bosu = "Boris Sukholitko <boriss@gmail.com>";
bradediger = "Brad Ediger <brad@bradediger.com>";
bramd = "Bram Duvigneau <bram@bramd.nl>";
bstrik = "Berno Strik <dutchman55@gmx.com>";
bzizou = "Bruno Bzeznik <Bruno@bzizou.net>";
@ -123,6 +125,7 @@
fpletz = "Franz Pletz <fpletz@fnordicwalking.de>";
fps = "Florian Paul Schmidt <mista.tapas@gmx.net>";
fridh = "Frederik Rietdijk <fridh@fridh.nl>";
frlan = "Frank Lanitz <frank@frank.uvena.de>";
fro_ozen = "fro_ozen <fro_ozen@gmx.de>";
ftrvxmtrx = "Siarhei Zirukin <ftrvxmtrx@gmail.com>";
funfunctor = "Edward O'Callaghan <eocallaghan@alterapraxis.com>";
@ -152,7 +155,6 @@
iElectric = "Domen Kozar <domen@dev.si>";
igsha = "Igor Sharonov <igor.sharonov@gmail.com>";
ikervagyok = "Balázs Lengyel <ikervagyok@gmail.com>";
iyzsong = "Song Wenwu <iyzsong@gmail.com>";
j-keck = "Jürgen Keck <jhyphenkeck@gmail.com>";
jagajaga = "Arseniy Seroka <ars.seroka@gmail.com>";
javaguirre = "Javier Aguirre <contacto@javaguirre.net>";
@ -208,10 +210,12 @@
malyn = "Michael Alyn Miller <malyn@strangeGizmo.com>";
manveru = "Michael Fellinger <m.fellinger@gmail.com>";
marcweber = "Marc Weber <marco-oweber@gmx.de>";
markus1189 = "Markus Hauck <markus1189@gmail.com>";
markWot = "Markus Wotringer <markus@wotringer.de>";
matejc = "Matej Cotman <cotman.matej@gmail.com>";
mathnerd314 = "Mathnerd314 <mathnerd314.gph+hs@gmail.com>";
matthiasbeyer = "Matthias Beyer <mail@beyermatthias.de>";
mbauer = "Matthew Bauer <mjbauer95@gmail.com>";
maurer = "Matthew Maurer <matthew.r.maurer+nix@gmail.com>";
mbakke = "Marius Bakke <ymse@tuta.io>";
mbe = "Brandon Edens <brandonedens@gmail.com>";
@ -248,6 +252,7 @@
olcai = "Erik Timan <dev@timan.info>";
orbitz = "Malcolm Matalka <mmatalka@gmail.com>";
osener = "Ozan Sener <ozan@ozansener.com>";
otwieracz = "Slawomir Gonet <slawek@otwiera.cz>";
oxij = "Jan Malakhovski <oxij@oxij.org>";
page = "Carles Pagès <page@cubata.homelinux.net>";
paholg = "Paho Lurie-Gregg <paho@paholg.com>";
@ -255,6 +260,7 @@
palo = "Ingolf Wanger <palipalo9@googlemail.com>";
pashev = "Igor Pashev <pashev.igor@gmail.com>";
pesterhazy = "Paulus Esterhazy <pesterhazy@gmail.com>";
peterhoeg = "Peter Hoeg <peter@hoeg.com>";
philandstuff = "Philip Potter <philip.g.potter@gmail.com>";
phile314 = "Philipp Hausmann <nix@314.ch>";
Phlogistique = "Noé Rubinstein <noe.rubinstein@gmail.com>";
@ -273,6 +279,7 @@
psibi = "Sibi <sibi@psibi.in>";
pSub = "Pascal Wittmann <mail@pascal-wittmann.de>";
puffnfresh = "Brian McKenna <brian@brianmckenna.org>";
pxc = "Patrick Callahan <patrick.callahan@latitudeengineering.com>";
qknight = "Joachim Schiele <js@lastlog.de>";
ragge = "Ragnar Dahlen <r.dahlen@gmail.com>";
raskin = "Michael Raskin <7c6f434c@mail.ru>";
@ -293,13 +300,16 @@
rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>";
rvlander = "Gaëtan André <rvlander@gaetanandre.eu>";
ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>";
ryantm = "Ryan Mulligan <ryan@ryantm.com>";
rycee = "Robert Helgesson <robert@rycee.net>";
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>";
schristo = "Scott Christopher <schristopher@konputa.com>";
scolobb = "Sergiu Ivanov <sivanov@colimite.fr>";
sepi = "Raffael Mancini <raffael@mancini.lu>";
sheenobu = "Sheena Artrip <sheena.artrip@gmail.com>";
sheganinans = "Aistis Raulinaitis <sheganinans@gmail.com>";
shell = "Shell Turner <cam.turn@gmail.com>";
shlevy = "Shea Levy <shea@shealevy.com>";
@ -342,6 +352,7 @@
tv = "Tomislav Viljetić <tv@shackspace.de>";
tvestelind = "Tomas Vestelind <tomas.vestelind@fripost.org>";
twey = "James Twey Kay <twey@twey.co.uk>";
uralbash = "Svintsov Dmitry <root@uralbash.ru>";
urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>";
vandenoever = "Jos van den Oever <jos@vandenoever.info>";
vanzef = "Ivan Solyankin <vanzef@gmail.com>";

View file

@ -15,7 +15,7 @@ Usage:
Attention:
let
pkgs = (import /etc/nixos/nixpkgs/pkgs/top-level/all-packages.nix) {};
pkgs = (import <nixpkgs>) {};
in let
inherit (pkgs.stringsWithDeps) fullDepEntry packEntry noDepEntry textClosureMap;
inherit (pkgs.lib) id;

View file

@ -10,67 +10,149 @@ rec {
inherit (builtins) stringLength substring head tail isString replaceStrings;
/* Concatenate a list of strings.
# Concatenate a list of strings.
Example:
concatStrings ["foo" "bar"]
=> "foobar"
*/
concatStrings =
if builtins ? concatStringsSep then
builtins.concatStringsSep ""
else
lib.foldl' (x: y: x + y) "";
/* Map a function over a list and concatenate the resulting strings.
# Map a function over a list and concatenate the resulting strings.
Example:
concatMapStrings (x: "a" + x) ["foo" "bar"]
=> "afooabar"
*/
concatMapStrings = f: list: concatStrings (map f list);
/* Like `concatMapStrings' except that the function f also gets the
position as a parameter.
Example:
concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"
*/
concatImapStrings = f: list: concatStrings (lib.imap f list);
/* Place an element between each element of a list
# Place an element between each element of a list, e.g.,
# `intersperse "," ["a" "b" "c"]' returns ["a" "," "b" "," "c"].
Example:
intersperse "/" ["usr" "local" "bin"]
=> ["usr" "/" "local" "/" "bin"].
*/
intersperse = separator: list:
if list == [] || length list == 1
then list
else tail (lib.concatMap (x: [separator x]) list);
/* Concatenate a list of strings with a separator between each element
# Concatenate a list of strings with a separator between each element, e.g.
# concatStringsSep " " ["foo" "bar" "xyzzy"] == "foo bar xyzzy"
Example:
concatStringsSep "/" ["usr" "local" "bin"]
=> "usr/local/bin"
*/
concatStringsSep = builtins.concatStringsSep or (separator: list:
concatStrings (intersperse separator list));
/* First maps over the list and then concatenates it.
Example:
concatMapStringsSep "-" (x: toUpper x) ["foo" "bar" "baz"]
=> "FOO-BAR-BAZ"
*/
concatMapStringsSep = sep: f: list: concatStringsSep sep (map f list);
/* First imaps over the list and then concatenates it.
Example:
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2"
*/
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap f list);
/* Construct a Unix-style search path consisting of each `subDir"
directory of the given list of packages.
# Construct a Unix-style search path consisting of each `subDir"
# directory of the given list of packages. For example,
# `makeSearchPath "bin" ["x" "y" "z"]' returns "x/bin:y/bin:z/bin".
Example:
makeSearchPath "bin" ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
makeSearchPath "bin" ["/"]
=> "//bin"
*/
makeSearchPath = subDir: packages:
concatStringsSep ":" (map (path: path + "/" + subDir) packages);
/* Construct a library search path (such as RPATH) containing the
libraries for a set of packages
# Construct a library search path (such as RPATH) containing the
# libraries for a set of packages, e.g. "${pkg1}/lib:${pkg2}/lib:...".
Example:
makeLibraryPath [ "/usr" "/usr/local" ]
=> "/usr/lib:/usr/local/lib"
pkgs = import <nixpkgs> { }
makeLibraryPath [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r/lib:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/lib"
*/
makeLibraryPath = pkgs: makeSearchPath "lib"
# try to guess the right output of each pkg
(map (pkg: pkg.lib or (pkg.out or pkg)) pkgs);
# Construct a binary search path (such as $PATH) containing the
# binaries for a set of packages, e.g. "${pkg1}/bin:${pkg2}/bin:...".
/* Construct a binary search path (such as $PATH) containing the
binaries for a set of packages.
Example:
makeBinPath ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
*/
makeBinPath = makeSearchPath "bin";
# Idem for Perl search paths.
/* Construct a perl search path (such as $PERL5LIB)
FIXME(zimbatm): this should be moved in perl-specific code
Example:
pkgs = import <nixpkgs> { }
makePerlPath [ pkgs.perlPackages.NetSMTP ]
=> "/nix/store/n0m1fk9c960d8wlrs62sncnadygqqc6y-perl-Net-SMTP-1.25/lib/perl5/site_perl"
*/
makePerlPath = makeSearchPath "lib/perl5/site_perl";
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.
# Depending on the boolean `cond', return either the given string
# or the empty string.
Example:
optionalString true "some-string"
=> "some-string"
optionalString false "some-string"
=> ""
*/
optionalString = cond: string: if cond then string else "";
/* Determine whether a string has given prefix.
# Determine whether a string has given prefix/suffix.
Example:
hasPrefix "foo" "foobar"
=> true
hasPrefix "foo" "barfoo"
=> false
*/
hasPrefix = pref: str:
substring 0 (stringLength pref) str == pref;
/* Determine whether a string has given suffix.
Example:
hasSuffix "foo" "foobar"
=> false
hasSuffix "foo" "barfoo"
=> true
*/
hasSuffix = suff: str:
let
lenStr = stringLength str;
@ -78,36 +160,55 @@ rec {
in lenStr >= lenSuff &&
substring (lenStr - lenSuff) lenStr str == suff;
/* Convert a string to a list of characters (i.e. singleton strings).
This allows you to, e.g., map a function over each character. However,
note that this will likely be horribly inefficient; Nix is not a
general purpose programming language. Complex string manipulations
should, if appropriate, be done in a derivation.
Also note that Nix treats strings as a list of bytes and thus doesn't
handle unicode.
# Convert a string to a list of characters (i.e. singleton strings).
# For instance, "abc" becomes ["a" "b" "c"]. This allows you to,
# e.g., map a function over each character. However, note that this
# will likely be horribly inefficient; Nix is not a general purpose
# programming language. Complex string manipulations should, if
# appropriate, be done in a derivation.
Example:
stringToCharacters ""
=> [ ]
stringToCharacters "abc"
=> [ "a" "b" "c" ]
stringToCharacters "💩"
=> [ "<EFBFBD>" "<EFBFBD>" "<EFBFBD>" "<EFBFBD>" ]
*/
stringToCharacters = s:
map (p: substring p 1 s) (lib.range 0 (stringLength s - 1));
/* Manipulate a string character by character and replace them by
strings before concatenating the results.
# Manipulate a string character by character and replace them by
# strings before concatenating the results.
Example:
stringAsChars (x: if x == "a" then "i" else x) "nax"
=> "nix"
*/
stringAsChars = f: s:
concatStrings (
map f (stringToCharacters s)
);
/* Escape occurrence of the elements of list in string by
prefixing it with a backslash.
# Escape occurrence of the elements of list in string by
# prefixing it with a backslash. For example, escape ["(" ")"]
# "(foo)" returns the string \(foo\).
Example:
escape ["(" ")"] "(foo)"
=> "\\(foo\\)"
*/
escape = list: replaceChars list (map (c: "\\${c}") list);
/* Escape all characters that have special meaning in the Bourne shell.
# Escape all characters that have special meaning in the Bourne shell.
Example:
escapeShellArg "so([<>])me"
=> "so\\(\\[\\<\\>\\]\\)me"
*/
escapeShellArg = lib.escape (stringToCharacters "\\ ';$`()|<>\t*[]");
# Obsolete - use replaceStrings instead.
/* Obsolete - use replaceStrings instead. */
replaceChars = builtins.replaceStrings or (
del: new: s:
let
@ -121,21 +222,52 @@ rec {
in
stringAsChars subst s);
# Case conversion utilities.
lowerChars = stringToCharacters "abcdefghijklmnopqrstuvwxyz";
upperChars = stringToCharacters "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
/* Converts an ASCII string to lower-case.
Example:
toLower "HOME"
=> "home"
*/
toLower = replaceChars upperChars lowerChars;
/* Converts an ASCII string to upper-case.
Example:
toLower "home"
=> "HOME"
*/
toUpper = replaceChars lowerChars upperChars;
/* Appends string context from another string. This is an implementation
detail of Nix.
# Appends string context from another string.
Strings in Nix carry an invisible `context' which is a list of strings
representing store paths. If the string is later used in a derivation
attribute, the derivation will properly populate the inputDrvs and
inputSrcs.
Example:
pkgs = import <nixpkgs> { };
addContextFrom pkgs.coreutils "bar"
=> "bar"
*/
addContextFrom = a: b: substring 0 0 a + b;
/* Cut a string with a separator and produce a list of strings which
were separated by this separator.
# Cut a string with a separator and produce a list of strings which
# were separated by this separator; e.g., `splitString "."
# "foo.bar.baz"' returns ["foo" "bar" "baz"].
NOTE: this function is not performant and should be avoided
Example:
splitString "." "foo.bar.baz"
=> [ "foo" "bar" "baz" ]
splitString "/" "/usr/local/bin"
=> [ "" "usr" "local" "bin" ]
*/
splitString = _sep: _s:
let
sep = addContextFrom _s _sep;
@ -159,10 +291,15 @@ rec {
in
recurse 0 0;
/* Return the suffix of the second argument if the first argument matches
its prefix.
# return the suffix of the second argument if the first argument match its
# prefix. e.g.,
# `removePrefix "foo." "foo.bar.baz"' returns "bar.baz".
Example:
removePrefix "foo." "foo.bar.baz"
=> "bar.baz"
removePrefix "xxx" "foo.bar.baz"
=> "foo.bar.baz"
*/
removePrefix = pre: s:
let
preLen = stringLength pre;
@ -173,6 +310,15 @@ rec {
else
s;
/* Return the prefix of the second argument if the first argument matches
its suffix.
Example:
removeSuffix "front" "homefront"
=> "home"
removeSuffix "xxx" "homefront"
=> "homefront"
*/
removeSuffix = suf: s:
let
sufLen = stringLength suf;
@ -183,25 +329,49 @@ rec {
else
s;
# Return true iff string v1 denotes a version older than v2.
/* Return true iff string v1 denotes a version older than v2.
Example:
versionOlder "1.1" "1.2"
=> true
versionOlder "1.1" "1.1"
=> false
*/
versionOlder = v1: v2: builtins.compareVersions v2 v1 == 1;
/* Return true iff string v1 denotes a version equal to or newer than v2.
# Return true iff string v1 denotes a version equal to or newer than v2.
Example:
versionAtLeast "1.1" "1.0"
=> true
versionAtLeast "1.1" "1.1"
=> true
versionAtLeast "1.1" "1.2"
=> false
*/
versionAtLeast = v1: v2: !versionOlder v1 v2;
/* This function takes an argument that's either a derivation or a
derivation's "name" attribute and extracts the version part from that
argument.
# This function takes an argument that's either a derivation or a
# derivation's "name" attribute and extracts the version part from that
# argument. For example:
#
# lib.getVersion "youtube-dl-2016.01.01" ==> "2016.01.01"
# lib.getVersion pkgs.youtube-dl ==> "2016.01.01"
Example:
getVersion "youtube-dl-2016.01.01"
=> "2016.01.01"
getVersion pkgs.youtube-dl
=> "2016.01.01"
*/
getVersion = x: (builtins.parseDrvName (x.name or x)).version;
/* Extract name with version from URL. The specified separator marks
the start of the extension.
# Extract name with version from URL. The specified separator marks
# the start of the extension.
Example:
nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "-"
=> "nix"
nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "_"
=> "nix-1.7-x86"
*/
nameFromURL = url: sep:
let
components = splitString "/" url;
@ -209,14 +379,24 @@ rec {
name = builtins.head (splitString sep filename);
in assert name != filename; name;
/* Create an --{enable,disable}-<feat> string that can be passed to
standard GNU Autoconf scripts.
# Create an --{enable,disable}-<feat> string that can be passed to
# standard GNU Autoconf scripts.
Example:
enableFeature true "shared"
=> "--enable-shared"
enableFeature false "shared"
=> "--disable-shared"
*/
enableFeature = enable: feat: "--${if enable then "enable" else "disable"}-${feat}";
/* Create a fixed width string with additional prefix to match
required width.
# Create a fixed width string with additional prefix to match
# required width.
Example:
fixedWidthString 5 "0" (toString 15)
=> "00015"
*/
fixedWidthString = width: filler: str:
let
strw = lib.stringLength str;
@ -225,25 +405,58 @@ rec {
assert strw <= width;
if strw == width then str else filler + fixedWidthString reqWidth filler str;
/* Format a number adding leading zeroes up to fixed width.
# Format a number adding leading zeroes up to fixed width.
Example:
fixedWidthNumber 5 15
=> "00015"
*/
fixedWidthNumber = width: n: fixedWidthString width "0" (toString n);
/* Check whether a value is a store path.
# Check whether a value is a store path.
Example:
isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11/bin/python"
=> false
isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11/"
=> true
isStorePath pkgs.python
=> true
*/
isStorePath = x: builtins.substring 0 1 (toString x) == "/" && dirOf (builtins.toPath x) == builtins.storeDir;
# Convert string to int
# Obviously, it is a bit hacky to use fromJSON that way.
/* Convert string to int
Obviously, it is a bit hacky to use fromJSON that way.
Example:
toInt "1337"
=> 1337
toInt "-4"
=> -4
toInt "3.14"
=> error: floating point JSON numbers are not supported
*/
toInt = str:
let may_be_int = builtins.fromJSON str; in
if builtins.isInt may_be_int
then may_be_int
else throw "Could not convert ${str} to int.";
# Read a list of paths from `file', relative to the `rootPath'. Lines
# beginning with `#' are treated as comments and ignored. Whitespace
# is significant.
/* Read a list of paths from `file', relative to the `rootPath'. Lines
beginning with `#' are treated as comments and ignored. Whitespace
is significant.
NOTE: this function is not performant and should be avoided
Example:
readPathsFromFile /prefix
./pkgs/development/libraries/qt-5/5.4/qtbase/series
=> [ "/prefix/dlopen-resolv.patch" "/prefix/tzdir.patch"
"/prefix/dlopen-libXcursor.patch" "/prefix/dlopen-openssl.patch"
"/prefix/dlopen-dbus.patch" "/prefix/xdg-config-dirs.patch"
"/prefix/nix-profiles-library-paths.patch"
"/prefix/compose-search-path.patch" ]
*/
readPathsFromFile = rootPath: file:
let
root = toString rootPath;
@ -255,5 +468,4 @@ rec {
absolutePaths = builtins.map (path: builtins.toPath (root + "/" + path)) relativePaths;
in
absolutePaths;
}

View file

@ -187,6 +187,7 @@ in rec {
--param man.output.in.separate.dir 1 \
--param man.output.base.dir "'$out/share/man/'" \
--param man.endnotes.are.numbered 0 \
--param man.break.after.slash 1 \
${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \
./man-pages.xml
'';

View file

@ -247,6 +247,47 @@ $TTL 1800
</programlisting>
</listitem>
<listitem>
<para>
The <literal>services.syncthing.dataDir</literal> option now has to point
to the exact folder that syncthing writes to. An example configuration
looks something like:
</para>
<programlisting>
services.syncthing = {
enable = true;
dataDir = "/home/somebody/.syncthing";
user = "somebody";
};
</programlisting>
</listitem>
<listitem>
<para>
<literal>networking.firewall.allowPing</literal> is now enabled by
default. Users are encouraged to configure an appropriate rate limit for
their machines using the kernel interface at
<filename>/proc/sys/net/ipv4/icmp_ratelimit</filename> and
<filename>/proc/sys/net/ipv6/icmp/ratelimit</filename> or using the
firewall itself, i.e. by setting the NixOS option
<literal>networking.firewall.pingLimit</literal>.
</para>
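<para>
For example, a rate-limited configuration could look like the following
sketch (the limit values are illustrative, not prescribed by this note):
</para>
<programlisting>
networking.firewall.allowPing = true;
networking.firewall.pingLimit = "--limit 1/minute --limit-burst 5";
</programlisting>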
</listitem>
<listitem>
<para>
Systems with some Broadcom cards used to result in a generated config
that is no longer accepted. If you get errors like
<screen>error: path /nix/store/*-broadcom-sta-* does not exist and cannot be created</screen>
you should either re-run <command>nixos-generate-config</command> or manually replace
<literal>"${config.boot.kernelPackages.broadcom_sta}"</literal>
by
<literal>config.boot.kernelPackages.broadcom_sta</literal>
in your <filename>/etc/nixos/hardware-configuration.nix</filename>.
More discussion can be found in <link xlink:href="https://github.com/NixOS/nixpkgs/pull/12595">
the GitHub pull request</link>.
</para>
</listitem>
</itemizedlist>

View file

@ -1,6 +1,6 @@
{ system, minimal ? false }:
{ system, minimal ? false, config ? {} }:
let pkgs = import ../.. { config = {}; inherit system; }; in
let pkgs = import ../.. { inherit system config; }; in
with pkgs.lib;
with import ../lib/qemu-flags.nix;

View file

@ -22,12 +22,13 @@
, # Shell code executed after the VM has finished.
postVM ? ""
, name ? "nixos-disk-image"
}:
with lib;
pkgs.vmTools.runInLinuxVM (
pkgs.runCommand "nixos-disk-image"
pkgs.runCommand name
{ preVM =
''
mkdir $out

View file

@ -39,7 +39,6 @@
, # The volume ID.
volumeID ? ""
}:
assert bootable -> bootImage != "";
@ -47,7 +46,7 @@ assert efiBootable -> efiBootImage != "";
assert usbBootable -> isohybridMbrImage != "";
stdenv.mkDerivation {
name = "iso9660-image";
name = isoName;
builder = ./make-iso9660-image.sh;
buildInputs = [perl xorriso syslinux];

View file

@ -133,3 +133,4 @@ fi
mkdir -p $out/nix-support
echo $system > $out/nix-support/system
echo "file iso $out/iso/$isoName" >> $out/nix-support/hydra-build-products

View file

@ -1,6 +1,6 @@
{ system, minimal ? false }:
{ system, minimal ? false, config ? {} }:
with import ./build-vms.nix { inherit system minimal; };
with import ./build-vms.nix { inherit system minimal config; };
with pkgs;
rec {

View file

@ -1,11 +1,8 @@
#! /bin/sh -e
BUCKET_NAME=${BUCKET_NAME:-nixos}
export NIX_PATH=nixpkgs=../../../..
export NIXOS_CONFIG=$(dirname $(readlink -f $0))/../../../modules/virtualisation/azure-image.nix
export TIMESTAMP=$(date +%Y%m%d%H%M)
nix-build '<nixpkgs/nixos>' \
-A config.system.build.azureImage --argstr system x86_64-linux -o azure --option extra-binary-caches http://hydra.nixos.org -j 10
azure vm image create nixos-test --location "West Europe" --md5-skip -v --os Linux azure/disk.vhd
-A config.system.build.azureImage --argstr system x86_64-linux -o azure --option extra-binary-caches https://hydra.nixos.org -j 10

View file

@ -0,0 +1,22 @@
#! /bin/sh -e
export STORAGE=${STORAGE:-nixos}
export THREADS=${THREADS:-8}
azure-vhd-utils-for-go upload --localvhdpath azure/disk.vhd --stgaccountname "$STORAGE" --stgaccountkey "$KEY" \
--containername images --blobname nixos-unstable-nixops-updated.vhd --parallelism "$THREADS" --overwrite

View file

@ -37,7 +37,6 @@ with lib;
services.openssh.enable = false;
services.lshd.enable = true;
programs.ssh.startAgent = false;
services.xserver.startGnuPGAgent = true;
# TODO: GNU dico.
# TODO: GNU Inetutils' inetd.

View file

@ -32,7 +32,7 @@ in
kdc = mkOption {
default = "kerberos.mit.edu";
description = "Kerberos Domain Controller.";
description = "Key Distribution Center";
};
kerberosAdminServer = mkOption {

View file

@ -103,7 +103,7 @@ in
hardware.opengl.extraPackages32 = mkOption {
type = types.listOf types.package;
default = [];
example = literalExample "with pkgs; [ vaapiIntel libvdpau-va-gl vaapiVdpau ]";
example = literalExample "with pkgs.pkgsi686Linux; [ vaapiIntel libvdpau-va-gl vaapiVdpau ]";
description = ''
Additional packages to add to 32-bit OpenGL drivers on
64-bit systems. Used when <option>driSupport32Bit</option> is

View file

@ -14,6 +14,8 @@ let
nvidiaForKernel = kernelPackages:
if elem "nvidia" drivers then
kernelPackages.nvidia_x11
else if elem "nvidiaBeta" drivers then
kernelPackages.nvidia_x11_beta
else if elem "nvidiaLegacy173" drivers then
kernelPackages.nvidia_x11_legacy173
else if elem "nvidiaLegacy304" drivers then

View file

@ -176,7 +176,6 @@
seeks = 148;
prosody = 149;
i2pd = 150;
dnscrypt-proxy = 151;
systemd-network = 152;
systemd-resolve = 153;
systemd-timesync = 154;
@ -254,6 +253,10 @@
octoprint = 230;
avahi-autoipd = 231;
nntp-proxy = 232;
mjpg-streamer = 233;
radicale = 234;
hydra-queue-runner = 235;
hydra-www = 236;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -410,7 +413,6 @@
seeks = 148;
prosody = 149;
i2pd = 150;
dnscrypt-proxy = 151;
systemd-network = 152;
systemd-resolve = 153;
systemd-timesync = 154;
@ -482,6 +484,7 @@
cfdyndns = 227;
pdnsd = 229;
octoprint = 230;
radicale = 234;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View file

@ -104,7 +104,7 @@ in
nixosVersion = mkDefault (maybeEnv "NIXOS_VERSION" (cfg.nixosRelease + cfg.nixosVersionSuffix));
# Note: code names must only increase in alphabetical order.
nixosCodeName = "Emu";
nixosCodeName = "Flounder";
};
# Generate /etc/os-release. See

View file

@ -77,6 +77,7 @@
./programs/shell.nix
./programs/ssh.nix
./programs/ssmtp.nix
./programs/tmux.nix
./programs/venus.nix
./programs/wvdial.nix
./programs/xfs_quota.nix
@ -114,6 +115,7 @@
./services/backup/rsnapshot.nix
./services/backup/sitecopy-backup.nix
./services/backup/tarsnap.nix
./services/backup/znapzend.nix
./services/cluster/fleet.nix
./services/cluster/kubernetes.nix
./services/cluster/panamax.nix
@ -176,6 +178,7 @@
./services/hardware/udisks2.nix
./services/hardware/upower.nix
./services/hardware/thermald.nix
./services/logging/awstats.nix
./services/logging/fluentd.nix
./services/logging/klogd.nix
./services/logging/logcheck.nix
@ -219,6 +222,7 @@
./services/misc/gitolite.nix
./services/misc/gpsd.nix
./services/misc/ihaskell.nix
./services/misc/mantisbt.nix
./services/misc/mathics.nix
./services/misc/matrix-synapse.nix
./services/misc/mbpfan.nix
@ -329,6 +333,7 @@
./services/networking/lambdabot.nix
./services/networking/libreswan.nix
./services/networking/mailpile.nix
./services/networking/mjpg-streamer.nix
./services/networking/minidlna.nix
./services/networking/miniupnpd.nix
./services/networking/mstpd.nix
@ -439,6 +444,7 @@
./services/web-servers/varnish/default.nix
./services/web-servers/winstone.nix
./services/web-servers/zope2.nix
./services/x11/colord.nix
./services/x11/unclutter.nix
./services/x11/desktop-managers/default.nix
./services/x11/display-managers/auto.nix

View file

@ -17,7 +17,6 @@
pkgs.ddrescue
pkgs.ccrypt
pkgs.cryptsetup # needed for dm-crypt volumes
pkgs.which # 88K size
# Some networking tools.
pkgs.fuse

View file

@ -56,7 +56,7 @@ in
*/
shellAliases = mkOption {
default = config.environment.shellAliases;
default = config.environment.shellAliases // { which = "type -P"; };
description = ''
Set of aliases for bash shell. See <option>environment.shellAliases</option>
for an option format description.

View file

@ -0,0 +1,35 @@
{ config, pkgs, lib, ... }:
let
inherit (lib) mkOption mkEnableOption mkIf mkMerge types;
cfg = config.programs.tmux;
in
{
###### interface
options = {
programs.tmux = {
enable = mkEnableOption "<command>tmux</command> - a <command>screen</command> replacement.";
tmuxconf = mkOption {
default = "";
description = ''
The contents of /etc/tmux.conf
'';
type = types.lines;
};
};
};
###### implementation
config = mkIf cfg.enable {
environment = {
systemPackages = [ pkgs.tmux ];
etc."tmux.conf".text = cfg.tmuxconf;
};
};
}
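# Illustrative usage from a system configuration (the tmux settings shown
# are assumptions for this sketch, not part of the module):
#
#   programs.tmux.enable = true;
#   programs.tmux.tmuxconf = ''
#     set-option -g prefix C-a
#   '';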

View file

@ -1,8 +1,8 @@
let
msg = "Importing <nixpkgs/nixos/modules/programs/virtualbox.nix> is "
+ "deprecated, please use `services.virtualboxHost.enable = true' "
+ "deprecated, please use `virtualisation.virtualbox.host.enable = true' "
+ "instead.";
in {
config.warnings = [ msg ];
config.services.virtualboxHost.enable = true;
config.virtualisation.virtualbox.host.enable = true;
}

View file

@ -98,6 +98,9 @@ with lib;
(mkRenamedOptionModule [ "services" "hostapd" "extraCfg" ] [ "services" "hostapd" "extraConfig" ])
# Enlightenment
(mkRenamedOptionModule [ "services" "xserver" "desktopManager" "e19" "enable" ] [ "services" "xserver" "desktopManager" "enlightenment" "enable" ])
# Options that are obsolete and have no replacement.
(mkRemovedOptionModule [ "boot" "initrd" "luks" "enable" ])
(mkRemovedOptionModule [ "programs" "bash" "enable" ])
@ -108,6 +111,7 @@ with lib;
(mkRemovedOptionModule [ "services" "openvpn" "enable" ])
(mkRemovedOptionModule [ "services" "printing" "cupsFilesConf" ])
(mkRemovedOptionModule [ "services" "printing" "cupsdConf" ])
(mkRemovedOptionModule [ "services" "xserver" "startGnuPGAgent" ])
];
}

View file

@ -26,19 +26,11 @@ in
'';
};
stable = mkOption {
type = types.bool;
default = false;
kernelPatch = mkOption {
type = types.attrs;
example = lib.literalExample "pkgs.kernelPatches.grsecurity_4_1";
description = ''
Enable the stable grsecurity patch, based on Linux 3.14.
'';
};
testing = mkOption {
type = types.bool;
default = false;
description = ''
Enable the testing grsecurity patch, based on Linux 4.0.
Grsecurity patch to use.
'';
};
@ -219,16 +211,7 @@ in
config = mkIf cfg.enable {
assertions =
[ { assertion = cfg.stable || cfg.testing;
message = ''
If grsecurity is enabled, you must select either the
stable patch (with kernel 3.14), or the testing patch (with
kernel 4.0) to continue.
'';
}
{ assertion = !(cfg.stable && cfg.testing);
message = "Select either one of the stable or testing patch";
}
[
{ assertion = (cfg.config.restrictProc -> !cfg.config.restrictProcWithGroup) ||
(cfg.config.restrictProcWithGroup -> !cfg.config.restrictProc);
message = "You cannot enable both restrictProc and restrictProcWithGroup";
@ -247,6 +230,8 @@ in
}
];
security.grsecurity.kernelPatch = lib.mkDefault pkgs.kernelPatches.grsecurity_latest;
systemd.services.grsec-lock = mkIf cfg.config.sysctl {
description = "grsecurity sysctl-lock Service";
requires = [ "systemd-sysctl.service" ];

View file

@ -48,6 +48,14 @@ with lib;
ensureDir ${crashplan.vardir}/cache 700
ensureDir ${crashplan.vardir}/backupArchives 700
ensureDir ${crashplan.vardir}/log 777
cp -avn ${crashplan}/conf.template/* ${crashplan.vardir}/conf
for x in app.asar bin EULA.txt install.vars lang lib libjniwrap64.so libjniwrap.so libjtux64.so libjtux.so libmd564.so libmd5.so share skin upgrade; do
if [ -e $x ]; then
true;
else
ln -s ${crashplan}/$x ${crashplan.vardir}/$x;
fi;
done
'';
serviceConfig = {

View file

@ -293,7 +293,7 @@ in
# make sure that the tarsnap server is reachable after systemd starts up
# the service - therefore we sleep in a loop until we can ping the
# endpoint.
preStart = "while ! ping -q -c 1 betatest-server.tarsnap.com &> /dev/null; do sleep 3; done";
preStart = "while ! ping -q -c 1 v1-0-0-server.tarsnap.com &> /dev/null; do sleep 3; done";
scriptArgs = "%i";
script = ''
mkdir -p -m 0755 ${dirOf cfg.cachedir}

View file

@ -0,0 +1,36 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.znapzend;
in
{
options = {
services.znapzend = {
enable = mkEnableOption "ZnapZend daemon";
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.znapzend ];
systemd.services = {
"znapzend" = {
description = "ZnapZend - ZFS Backup System";
after = [ "zfs.target" ];
path = with pkgs; [ znapzend zfs mbuffer openssh ];
script = ''
znapzend
'';
reload = ''
/bin/kill -HUP $MAINPID
'';
};
};
};
}

View file

@ -0,0 +1,123 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.awstats;
package = pkgs.awstats;
in
{
options.services.awstats = {
enable = mkOption {
type = types.bool;
default = cfg.service.enable;
description = ''
Enable the awstats program (but not service).
Currently only simple httpd (Apache) configs are supported,
and awstats plugins may not work correctly.
'';
};
vardir = mkOption {
type = types.path;
default = "/var/lib/awstats";
description = "The directory where variable awstats data will be stored.";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Extra configuration to be appendend to awstats.conf.";
};
updateAt = mkOption {
type = types.nullOr types.string;
default = null;
example = "hourly";
description = ''
Specification of the time at which awstats will get updated.
(in the format described by <citerefentry>
<refentrytitle>systemd.time</refentrytitle>
<manvolnum>5</manvolnum></citerefentry>)
'';
};
service = {
enable = mkOption {
type = types.bool;
default = false;
description = ''Enable the awstats web service. This switches on httpd.'';
};
urlPrefix = mkOption {
type = types.string;
default = "/awstats";
description = "The URL prefix under which the awstats service appears.";
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ package.bin ];
/* TODO:
- heed config.services.httpd.logPerVirtualHost, etc.
- Can't AllowToUpdateStatsFromBrowser, as CGI scripts don't have permission
to read the logs, and our httpd config apparently doesn't have an option for that.
*/
environment.etc."awstats/awstats.conf".source = pkgs.runCommand "awstats.conf"
{ preferLocalBuild = true; }
( let
cfg-httpd = config.services.httpd;
logFormat =
if cfg-httpd.logFormat == "combined" then "1" else
if cfg-httpd.logFormat == "common" then "4" else
throw "awstats service doesn't support Apache log format `${cfg-httpd.logFormat}`";
in
''
sed \
-e 's|^\(DirData\)=.*$|\1="${cfg.vardir}"|' \
-e 's|^\(DirIcons\)=.*$|\1="icons"|' \
-e 's|^\(CreateDirDataIfNotExists\)=.*$|\1=1|' \
-e 's|^\(SiteDomain\)=.*$|\1="${cfg-httpd.hostName}"|' \
-e 's|^\(LogFile\)=.*$|\1="${cfg-httpd.logDir}/access_log"|' \
-e 's|^\(LogFormat\)=.*$|\1=${logFormat}|' \
< '${package.out}/wwwroot/cgi-bin/awstats.model.conf' > "$out"
echo '${cfg.extraConfig}' >> "$out"
'');
# The httpd sub-service showing awstats.
services.httpd.enable = mkIf cfg.service.enable true;
services.httpd.extraSubservices = mkIf cfg.service.enable [ { function = { serverInfo, ... }: {
extraConfig =
''
Alias ${cfg.service.urlPrefix}/classes "${package.out}/wwwroot/classes/"
Alias ${cfg.service.urlPrefix}/css "${package.out}/wwwroot/css/"
Alias ${cfg.service.urlPrefix}/icons "${package.out}/wwwroot/icon/"
ScriptAlias ${cfg.service.urlPrefix}/ "${package.out}/wwwroot/cgi-bin/"
<Directory "${package.out}/wwwroot">
Options None
AllowOverride None
Order allow,deny
Allow from all
</Directory>
'';
startupScript =
let
inherit (serverInfo.serverConfig) user group;
in pkgs.writeScript "awstats_startup.sh"
''
mkdir -p '${cfg.vardir}'
chown '${user}:${group}' '${cfg.vardir}'
'';
};}];
systemd.services.awstats-update = mkIf (cfg.updateAt != null) {
description = "awstats log collector";
script = "exec '${package.bin}/bin/awstats' -update -config=awstats.conf";
startAt = cfg.updateAt;
};
};
}
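# Illustrative usage (the values shown are assumptions for this sketch,
# not defaults of the module):
#
#   services.awstats = {
#     enable = true;
#     service.enable = true;  # also serve the generated statistics via httpd
#     updateAt = "hourly";
#   };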

View file

@ -98,8 +98,8 @@ in
package = mkOption {
type = types.package;
default = pkgs.dovecot22;
defaultText = "pkgs.dovecot22";
default = pkgs.dovecot;
defaultText = "pkgs.dovecot";
description = "Dovecot package to use.";
};

View file

@ -104,6 +104,7 @@ in {
systemd.services.dspam = {
description = "dspam spam filtering daemon";
wantedBy = [ "multi-user.target" ];
after = [ "postgresql.service" ];
restartTriggers = [ cfgfile ];
serviceConfig = {
@ -114,7 +115,7 @@ in {
RuntimeDirectoryMode = optional (cfg.domainSocket == defaultSock) "0750";
PermissionsStartOnly = true;
# DSPAM segfaults on just about every error
Restart = "on-failure";
Restart = "on-abort";
RestartSec = "1s";
};

View file

@ -12,9 +12,9 @@ with lib;
sendmailSetuidWrapper = mkOption {
default = null;
internal = true;
description = ''
Configuration for the sendmail setuid wrwapper (like an element of
security.setuidOwners)";
Configuration for the sendmail setuid wrapper.
'';
};

View file

@ -27,7 +27,7 @@ let
mainCf =
''
compatibility_level = 2
compatibility_level = 9999
mail_owner = ${user}
default_privs = nobody

View file

@ -79,6 +79,11 @@ in
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
preStart = ''
# There should be only one autofs service managed by systemd, so this should be safe.
rm -f /tmp/autofs-running
'';
serviceConfig = {
ExecStart = "${pkgs.autofs5}/sbin/automount ${if cfg.debug then "-d" else ""} -f -t ${builtins.toString cfg.timeout} ${autoMaster} ${if cfg.debug then "-l7" else ""}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";

View file

@ -114,6 +114,7 @@ in {
}) // (mapAttrs' (n: v: nameValuePair "ETCD_${n}" v) cfg.extraConf);
serviceConfig = {
Type = "notify";
ExecStart = "${pkgs.etcd}/bin/etcd";
User = "etcd";
PermissionsStartOnly = true;

View file

@ -206,12 +206,6 @@ in {
description = "Gitlab database user.";
};
emailFrom = mkOption {
type = types.str;
default = "example@example.org";
description = "The source address for emails sent by gitlab.";
};
host = mkOption {
type = types.str;
default = config.networking.hostName;
@ -328,7 +322,7 @@ in {
Group = cfg.group;
TimeoutSec = "300";
WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
ExecStart="${bundler}/bin/bundle exec \"sidekiq -q post_receive -q mailer -q system_hook -q project_web_hook -q gitlab_shell -q common -q default -e production -P ${cfg.statePath}/tmp/sidekiq.pid\"";
ExecStart="${bundler}/bin/bundle exec \"sidekiq -q post_receive -q mailers -q system_hook -q project_web_hook -q gitlab_shell -q common -q default -e production -P ${cfg.statePath}/tmp/sidekiq.pid\"";
};
};

View file

@ -0,0 +1,68 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mantisbt;
freshInstall = cfg.extraConfig == "";
# combined code+config directory
mantisbt = let
config_inc = pkgs.writeText "config_inc.php" ("<?php\n" + cfg.extraConfig);
src = pkgs.fetchurl {
url = "mirror://sourceforge/mantisbt/${name}.tar.gz";
sha256 = "1pl6xn793p3mxc6ibpr2bhg85vkdlcf57yk7pfc399g47l8x4508";
};
name = "mantisbt-1.2.19";
in
# We have to copy every time; otherwise config won't be found.
pkgs.runCommand name
{ preferLocalBuild = true; allowSubstitutes = false; }
(''
mkdir -p "$out"
cd "$out"
tar -xf '${src}' --strip-components=1
ln -s '${config_inc}' config_inc.php
''
+ lib.optionalString (!freshInstall) "rm -r admin/"
);
in
{
options.services.mantisbt = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable the mantisbt web service.
This switches on httpd with PHP and database.
'';
};
urlPrefix = mkOption {
type = types.string;
default = "/mantisbt";
description = "The URL prefix under which the mantisbt service appears.";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
The contents of config_inc.php, without leading &lt;?php.
If left empty, the admin directory will be accessible.
'';
};
};
config = mkIf cfg.enable {
services.mysql.enable = true;
services.httpd.enable = true;
services.httpd.enablePHP = true;
# The httpd sub-service showing mantisbt.
services.httpd.extraSubservices = [ { function = { ... }: {
extraConfig =
''
Alias ${cfg.urlPrefix} "${mantisbt}"
'';
};}];
};
}
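# Illustrative usage (the config_inc.php contents are assumptions for this
# sketch, not supplied by the module):
#
#   services.mantisbt.enable = true;
#   services.mantisbt.extraConfig = ''
#     $g_hostname = "localhost";
#     $g_db_type = "mysql";
#   '';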

View file

@ -39,7 +39,7 @@ let
build-users-group = nixbld
build-max-jobs = ${toString (cfg.maxJobs)}
build-cores = ${toString (cfg.buildCores)}
build-use-chroot = ${if cfg.useChroot then "true" else "false"}
build-use-chroot = ${if (builtins.isBool cfg.useChroot) then (if cfg.useChroot then "true" else "false") else cfg.useChroot}
build-chroot-dirs = ${toString cfg.chrootDirs} /bin/sh=${sh} $(echo $extraPaths)
binary-caches = ${toString cfg.binaryCaches}
trusted-binary-caches = ${toString cfg.trustedBinaryCaches}
@ -99,7 +99,7 @@ in
};
useChroot = mkOption {
type = types.bool;
type = types.either types.bool (types.enum ["relaxed"]);
default = false;
description = "
If set, Nix will perform builds in a chroot-environment that it
@ -257,13 +257,11 @@ in
type = types.bool;
default = true;
description = ''
If enabled, Nix will only download binaries from binary
caches if they are cryptographically signed with any of the
keys listed in
<option>nix.binaryCachePublicKeys</option>. If disabled (the
default), signatures are neither required nor checked, so
it's strongly recommended that you use only trustworthy
caches and https to prevent man-in-the-middle attacks.
If enabled (the default), Nix will only download binaries from binary caches if
they are cryptographically signed with any of the keys listed in
<option>nix.binaryCachePublicKeys</option>. If disabled, signatures are neither
required nor checked, so it's strongly recommended that you use only
trustworthy caches and https to prevent man-in-the-middle attacks.
'';
};

View file

@ -6,12 +6,16 @@ let
cfg = config.services.octoprint;
cfgUpdate = pkgs.writeText "octoprint-config.yaml" (builtins.toJSON {
baseConfig = {
plugins.cura.cura_engine = "${pkgs.curaengine}/bin/CuraEngine";
server.host = cfg.host;
server.port = cfg.port;
webcam.ffmpeg = "${pkgs.ffmpeg}/bin/ffmpeg";
});
};
fullConfig = recursiveUpdate cfg.extraConfig baseConfig;
cfgUpdate = pkgs.writeText "octoprint-config.yaml" (builtins.toJSON fullConfig);
pluginsEnv = pkgs.python.buildEnv.override {
extraLibs = cfg.plugins pkgs.octoprint-plugins;
@ -62,13 +66,18 @@ in
};
plugins = mkOption {
#type = types.functionTo (types.listOf types.package);
default = plugins: [];
defaultText = "plugins: []";
example = literalExample "plugins: [ m3d-fio ]";
description = "Additional plugins.";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
description = "Extra options which are added to OctoPrint's YAML configuration file.";
};
};
};

View file

@ -51,7 +51,13 @@ let
'';
carbonEnv = {
PYTHONPATH = "${pkgs.python27Packages.carbon}/lib/python2.7/site-packages";
PYTHONPATH = let
cenv = pkgs.python.buildEnv.override {
extraLibs = [ pkgs.python27Packages.carbon ];
};
cenvPack = "${cenv}/${pkgs.python.sitePackages}";
# opt/graphite/lib contains twisted.plugins.carbon-cache
in "${cenvPack}/opt/graphite/lib:${cenvPack}";
GRAPHITE_ROOT = dataDir;
GRAPHITE_CONF_DIR = configDir;
GRAPHITE_STORAGE_DIR = dataDir;
@ -445,10 +451,21 @@ in {
after = [ "network-interfaces.target" ];
path = [ pkgs.perl ];
environment = {
PYTHONPATH = "${pkgs.python27Packages.graphite_web}/lib/python2.7/site-packages";
PYTHONPATH = let
penv = pkgs.python.buildEnv.override {
extraLibs = [
pkgs.python27Packages.graphite_web
pkgs.python27Packages.pysqlite
];
};
penvPack = "${penv}/${pkgs.python.sitePackages}";
# opt/graphite/webapp contains graphite/settings.py
# explicitly adding pycairo in path because it cannot be imported via buildEnv
in "${penvPack}/opt/graphite/webapp:${penvPack}:${pkgs.pycairo}/${pkgs.python.sitePackages}";
DJANGO_SETTINGS_MODULE = "graphite.settings";
GRAPHITE_CONF_DIR = configDir;
GRAPHITE_STORAGE_DIR = dataDir;
LD_LIBRARY_PATH = "${pkgs.cairo}/lib";
};
serviceConfig = {
ExecStart = ''
@ -486,9 +503,11 @@ in {
wantedBy = [ "multi-user.target" ];
after = [ "network-interfaces.target" ];
environment = {
PYTHONPATH =
"${cfg.api.package}/lib/python2.7/site-packages:" +
concatMapStringsSep ":" (f: f + "/lib/python2.7/site-packages") cfg.api.finders;
PYTHONPATH = let
aenv = pkgs.python.buildEnv.override {
extraLibs = [ cfg.api.package pkgs.cairo ] ++ cfg.api.finders;
};
in "${aenv}/${pkgs.python.sitePackages}";
GRAPHITE_API_CONFIG = graphiteApiConfig;
LD_LIBRARY_PATH = "${pkgs.cairo.out}/lib";
};
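Editor's note: the three graphite hunks above replace hand-built site-packages strings with pkgs.python.buildEnv.override, so PYTHONPATH points at one merged environment. A minimal sketch of the pattern, as a let-binding with an illustrative package list:

  penv = pkgs.python.buildEnv.override {
    extraLibs = [ pkgs.python27Packages.graphite_web pkgs.python27Packages.pysqlite ];
  };
  # then: PYTHONPATH = "${penv}/${pkgs.python.sitePackages}";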

View file

@ -5,13 +5,17 @@ let
apparmorEnabled = config.security.apparmor.enable;
dnscrypt-proxy = pkgs.dnscrypt-proxy;
cfg = config.services.dnscrypt-proxy;
resolverListFile = "${dnscrypt-proxy}/share/dnscrypt-proxy/dnscrypt-resolvers.csv";
localAddress = "${cfg.localAddress}:${toString cfg.localPort}";
daemonArgs =
[ "--local-address=${localAddress}"
(optionalString cfg.tcpOnly "--tcp-only")
(optionalString cfg.ephemeralKeys "-E")
]
++ resolverArgs;
resolverArgs = if (cfg.customResolver != null)
then
[ "--resolver-address=${cfg.customResolver.address}:${toString cfg.customResolver.port}"
@ -27,43 +31,63 @@ in
{
options = {
services.dnscrypt-proxy = {
enable = mkEnableOption ''
Enable dnscrypt-proxy. The proxy relays regular DNS queries to a
DNSCrypt enabled upstream resolver. The traffic between the
client and the upstream resolver is encrypted and authenticated,
which may mitigate the risk of MITM attacks and third-party
enable = mkEnableOption "dnscrypt-proxy" // { description = ''
Whether to enable the DNSCrypt client proxy. The proxy relays
DNS queries to a DNSCrypt enabled upstream resolver. The traffic
between the client and the upstream resolver is encrypted and
authenticated, mitigating the risk of MITM attacks and third-party
snooping (assuming the upstream is trustworthy).
'';
Enabling this option does not alter the system nameserver; to relay
local queries, prepend <literal>127.0.0.1</literal> to
<option>networking.nameservers</option>.
The recommended configuration is to run DNSCrypt proxy as a forwarder
for a caching DNS client, as in
<programlisting>
{
services.dnscrypt-proxy.enable = true;
services.dnscrypt-proxy.localPort = 43;
services.dnsmasq.enable = true;
services.dnsmasq.servers = [ "127.0.0.1#43" ];
services.dnsmasq.resolveLocalQueries = true; # this is the default
}
</programlisting>
''; };
localAddress = mkOption {
default = "127.0.0.1";
type = types.string;
description = ''
Listen for DNS queries on this address.
Listen for DNS queries to relay on this address. The only reason to
change this from its default value is to proxy queries on behalf
of other machines (typically on the local network).
'';
};
localPort = mkOption {
default = 53;
type = types.int;
description = ''
Listen on this port.
Listen for DNS queries to relay on this port. The default value
assumes that the DNSCrypt proxy should relay DNS queries directly.
When running as a forwarder for another DNS client, set this option
to a different value; otherwise leave the default.
'';
};
resolverName = mkOption {
default = "opendns";
default = "dnscrypt.eu-nl";
type = types.nullOr types.string;
description = ''
The name of the upstream DNSCrypt resolver to use. See
<literal>${resolverListFile}</literal> for alternative resolvers
(e.g., if you are concerned about logging and/or server
location).
<filename>${resolverListFile}</filename> for alternative resolvers.
The default resolver is located in Holland, supports DNS security
extensions, and claims to not keep logs.
'';
};
customResolver = mkOption {
default = null;
description = ''
Use a resolver not listed in the upstream list (e.g.,
a private DNSCrypt provider). For advanced users only.
If specified, this option takes precedence.
Use an unlisted resolver (e.g., a private DNSCrypt provider). For
advanced users only. If specified, this option takes precedence.
'';
type = types.nullOr (types.submodule ({ ... }: { options = {
address = mkOption {
@ -80,20 +104,31 @@ in
type = types.str;
description = "Provider fully qualified domain name";
example = "2.dnscrypt-cert.opendns.com";
};
key = mkOption {
type = types.str;
description = "Provider public key";
example = "B735:1140:206F:225D:3E2B:D822:D7FD:691E:A1C3:3CC8:D666:8D0C:BE04:BFAB:CA43:FB79";
}; }; }));
};
key = mkOption {
type = types.str;
description = "Provider public key";
example = "B735:1140:206F:225D:3E2B:D822:D7FD:691E:A1C3:3CC8:D666:8D0C:BE04:BFAB:CA43:FB79";
};
}; }));
};
tcpOnly = mkOption {
default = false;
type = types.bool;
description = ''
Force sending encrypted DNS queries to the upstream resolver
over TCP instead of UDP (on port 443). Enabling this option may
help circumvent filtering, but should not be used otherwise.
Force sending encrypted DNS queries to the upstream resolver over
TCP instead of UDP (on port 443). Use only if the UDP port is blocked.
'';
};
ephemeralKeys = mkOption {
default = false;
type = types.bool;
description = ''
Compute a new key pair for every query. Enabling this option
increases CPU usage, but makes it more difficult for the upstream
resolver to track your usage of their service across IP addresses.
The default is to re-use the public key pair for all queries, making
tracking trivial.
'';
};
};
@ -130,16 +165,20 @@ in
${pkgs.xz.out}/lib/liblzma.so.* mr,
${pkgs.libgcrypt.out}/lib/libgcrypt.so.* mr,
${pkgs.libgpgerror.out}/lib/libgpg-error.so.* mr,
${pkgs.libcap}/lib/libcap.so.* mr,
${pkgs.lz4}/lib/liblz4.so.* mr,
${pkgs.attr}/lib/libattr.so.* mr,
${resolverListFile} r,
}
''));
users.extraUsers.dnscrypt-proxy = {
uid = config.ids.uids.dnscrypt-proxy;
users.users.dnscrypt-proxy = {
description = "dnscrypt-proxy daemon user";
isSystemUser = true;
group = "dnscrypt-proxy";
};
users.extraGroups.dnscrypt-proxy.gid = config.ids.gids.dnscrypt-proxy;
users.groups.dnscrypt-proxy = {};
systemd.sockets.dnscrypt-proxy = {
description = "dnscrypt-proxy listening socket";
@ -152,16 +191,21 @@ in
systemd.services.dnscrypt-proxy = {
description = "dnscrypt-proxy daemon";
after = [ "network.target" ] ++ optional apparmorEnabled "apparmor.service";
requires = [ "dnscrypt-proxy.socket "] ++ optional apparmorEnabled "apparmor.service";
serviceConfig = {
Type = "simple";
NonBlocking = "true";
ExecStart = "${dnscrypt-proxy}/bin/dnscrypt-proxy ${toString daemonArgs}";
User = "dnscrypt-proxy";
Group = "dnscrypt-proxy";
PrivateTmp = true;
PrivateDevices = true;
ProtectHome = true;
};
};
};
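Editor's note: the customResolver submodule above takes an address, port, provider name and public key. A hypothetical fragment using an unlisted resolver; the address and name are placeholders, the key is the example value from the option above:

  services.dnscrypt-proxy = {
    enable = true;
    customResolver = {
      address = "203.0.113.1";                # placeholder IP
      port = 443;
      name = "2.dnscrypt-cert.example.com";   # provider FQDN, placeholder
      key = "B735:1140:206F:225D:3E2B:D822:D7FD:691E:A1C3:3CC8:D666:8D0C:BE04:BFAB:CA43:FB79";
    };
  };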

View file

@ -338,7 +338,7 @@ in
};
networking.firewall.allowPing = mkOption {
default = false;
default = true;
type = types.bool;
description =
''

View file

@ -10,9 +10,10 @@ let
extip = "EXTIP=\$(${pkgs.curl.bin}/bin/curl -sf \"http://jsonip.com\" | ${pkgs.gawk}/bin/awk -F'\"' '{print $4}')";
toOneZero = b: if b then "1" else "0";
toYesNo = b: if b then "yes" else "no";
mkEndpointOpt = name: addr: port: {
enable = mkEnableOption name;
name = mkOption {
type = types.str;
default = name;
@ -63,9 +64,9 @@ let
} // mkEndpointOpt name "127.0.0.1" 0;
i2pdConf = pkgs.writeText "i2pd.conf" ''
ipv6 = ${toOneZero cfg.enableIPv6}
notransit = ${toOneZero cfg.notransit}
floodfill = ${toOneZero cfg.floodfill}
ipv6 = ${toYesNo cfg.enableIPv6}
notransit = ${toYesNo cfg.notransit}
floodfill = ${toYesNo cfg.floodfill}
${if isNull cfg.port then "" else "port = ${toString cfg.port}"}
${flip concatMapStrings
(collect (proto: proto ? port && proto ? address && proto ? name) cfg.proto)
@ -73,6 +74,7 @@ let
[${proto.name}]
address = ${proto.address}
port = ${toString proto.port}
enabled = ${toYesNo proto.enable}
'')
}
'';

View file

@ -64,8 +64,7 @@ in
systemd.services.iodined = {
description = "iodine, ip over dns daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
wantedBy = [ "ip-up.target" ];
serviceConfig.ExecStart = "${pkgs.iodine}/sbin/iodined -f -u ${iodinedUser} ${cfg.extraConfig} ${cfg.ip} ${cfg.domain}";
};

View file

@ -0,0 +1,75 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mjpg-streamer;
in {
options = {
services.mjpg-streamer = {
enable = mkEnableOption "mjpg-streamer webcam streamer";
inputPlugin = mkOption {
type = types.str;
default = "input_uvc.so";
description = ''
Input plugin. See plugins documentation for more information.
'';
};
outputPlugin = mkOption {
type = types.str;
default = "output_http.so -w @www@ -n -p 5050";
description = ''
Output plugin. <literal>@www@</literal> is substituted for default mjpg-streamer www directory.
See plugins documentation for more information.
'';
};
user = mkOption {
type = types.str;
default = "mjpg-streamer";
description = "mjpg-streamer user name.";
};
group = mkOption {
type = types.str;
default = "video";
description = "mjpg-streamer group name.";
};
};
};
config = mkIf cfg.enable {
users.extraUsers = optional (cfg.user == "mjpg-streamer") {
name = "mjpg-streamer";
uid = config.ids.uids.mjpg-streamer;
group = cfg.group;
};
systemd.services.mjpg-streamer = {
description = "mjpg-streamer webcam streamer";
wantedBy = [ "multi-user.target" ];
serviceConfig.User = cfg.user;
serviceConfig.Group = cfg.group;
script = ''
IPLUGIN="${cfg.inputPlugin}"
OPLUGIN="${cfg.outputPlugin}"
OPLUGIN="''${OPLUGIN//@www@/${pkgs.mjpg-streamer}/share/mjpg-streamer/www}"
exec ${pkgs.mjpg-streamer}/bin/mjpg_streamer -i "$IPLUGIN" -o "$OPLUGIN"
'';
};
};
}
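Editor's note: a hedged usage sketch for the new mjpg-streamer module added above; the extra input_uvc.so arguments are illustrative:

  services.mjpg-streamer = {
    enable = true;
    inputPlugin = "input_uvc.so -d /dev/video0";
    outputPlugin = "output_http.so -w @www@ -n -p 5050";   # @www@ is substituted by the module
  };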

View file

@ -35,12 +35,27 @@ in
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.pythonPackages.radicale ];
users.extraUsers = singleton
{ name = "radicale";
uid = config.ids.uids.radicale;
description = "radicale user";
home = "/var/lib/radicale";
createHome = true;
};
users.extraGroups = singleton
{ name = "radicale";
gid = config.ids.gids.radicale;
};
systemd.services.radicale = {
description = "A Simple Calendar and Contact Server";
after = [ "network-interfaces.target" ];
wantedBy = [ "multi-user.target" ];
script = "${pkgs.pythonPackages.radicale}/bin/radicale -C ${confFile} -d";
serviceConfig.Type = "forking";
serviceConfig.User = "radicale";
serviceConfig.Group = "radicale";
};
};
}

View file

@ -85,6 +85,9 @@ let
ssl_enable=YES
rsa_cert_file=${cfg.rsaCertFile}
''}
${optionalString (cfg.rsaKeyFile != null) ''
rsa_private_key_file=${cfg.rsaKeyFile}
''}
${optionalString (cfg.userlistFile != null) ''
userlist_file=${cfg.userlistFile}
''}
@ -147,6 +150,12 @@ in
description = "RSA certificate file.";
};
rsaKeyFile = mkOption {
type = types.nullOr types.path;
default = null;
description = "RSA private key file.";
};
anonymousUmask = mkOption {
type = types.string;
default = "077";

View file

@ -125,10 +125,12 @@ in {
# FIXME: start a separate wpa_supplicant instance per interface.
systemd.services.wpa_supplicant = let
ifaces = cfg.interfaces;
deviceUnit = interface: [ "sys-subsystem-net-devices-${interface}.device" ];
in {
description = "WPA Supplicant";
after = [ "network-interfaces.target" ];
requires = lib.concatMap deviceUnit ifaces;
wantedBy = [ "network.target" ];
path = [ pkgs.wpa_supplicant ];

View file

@ -238,7 +238,8 @@ in
example = literalExample "[ pkgs.splix ]";
description = ''
CUPS drivers to use. Drivers provided by CUPS, cups-filters, Ghostscript
and Samba are added unconditionally.
and Samba are added unconditionally. For adding Gutenprint, see
<literal>gutenprint</literal>.
'';
};
@ -310,7 +311,9 @@ in
[ ! -e "/var/lib/cups/$i" ] && ln -s "${rootdir}/etc/cups/$i" "/var/lib/cups/$i"
done
${optionalString cfg.gutenprint ''
${gutenprint}/bin/cups-genppdupdate -p /etc/cups/ppd
if [ -d /var/lib/cups/ppd ]; then
${gutenprint}/bin/cups-genppdupdate -p /var/lib/cups/ppd
fi
''}
'';
};
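Editor's note: the CUPS hunks above point at the gutenprint option and guard cups-genppdupdate behind the existence of /var/lib/cups/ppd. A hypothetical printing setup using it; the driver list mirrors the option example:

  services.printing = {
    enable = true;               # assumed; not shown in this hunk
    gutenprint = true;           # Gutenprint PPDs, updated by cups-genppdupdate at start
    drivers = [ pkgs.splix ];
  };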

View file

@ -46,7 +46,7 @@ in
};
systemd.services.kdc = {
description = "Kerberos Domain Controller daemon";
description = "Key Distribution Center daemon";
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -m 0755 -p ${stateDir}
@ -55,7 +55,7 @@ in
};
systemd.services.kpasswdd = {
description = "Kerberos Domain Controller daemon";
description = "Kerberos Password Changing daemon";
wantedBy = [ "multi-user.target" ];
script = "${heimdal}/sbin/kpasswdd";
};

View file

@ -128,6 +128,7 @@ in
${pkgs.c-ares.out}/lib/libcares*.so* mr,
${pkgs.libcap.out}/lib/libcap*.so* mr,
${pkgs.attr.out}/lib/libattr*.so* mr,
${pkgs.lz4}/lib/liblz4*.so* mr,
@{PROC}/sys/kernel/random/uuid r,
@{PROC}/sys/vm/overcommit_memory r,

View file

@ -0,0 +1,78 @@
{ config, pkgs, lib, serverInfo, ... }:
let
inherit (pkgs) foswiki;
inherit (serverInfo.serverConfig) user group;
inherit (config) vardir;
in
{
options.vardir = lib.mkOption {
type = lib.types.path;
default = "/var/www/foswiki";
description = "The directory where variable foswiki data will be stored and served from.";
};
# TODO: this will probably need to be better customizable
extraConfig =
let httpd-conf = pkgs.runCommand "foswiki-httpd.conf"
{ preferLocalBuild = true; }
''
substitute '${foswiki}/foswiki_httpd_conf.txt' "$out" \
--replace /var/www/foswiki/ "${vardir}/"
'';
in
''
RewriteEngine on
RewriteRule /foswiki/(.*) ${vardir}/$1
<Directory "${vardir}">
Require all granted
</Directory>
Include ${httpd-conf}
<Directory "${vardir}/pub">
Options FollowSymlinks
</Directory>
'';
/** This handles initial setup and updates.
It will probably need some tweaking, maybe per-site. */
startupScript = pkgs.writeScript "foswiki_startup.sh" (
let storeLink = "${vardir}/package"; in
''
[ -e '${storeLink}' ] || needs_setup=1
mkdir -p '${vardir}'
cd '${vardir}'
ln -sf -T '${foswiki}' '${storeLink}'
if [ -n "$needs_setup" ]; then # do initial setup
mkdir -p bin lib
# setup most of data/ as copies only
cp -r '${foswiki}'/data '${vardir}/'
rm -r '${vardir}'/data/{System,mime.types}
ln -sr -t '${vardir}/data/' '${storeLink}'/data/{System,mime.types}
ln -sr '${storeLink}/locale' .
mkdir pub
ln -sr '${storeLink}/pub/System' pub/
mkdir templates
ln -sr '${storeLink}'/templates/* templates/
ln -sr '${storeLink}/tools' .
mkdir -p '${vardir}'/working/{logs,tmp}
ln -sr '${storeLink}/working/README' working/ # used to check dir validity
chown -R '${user}:${group}' .
chmod +w -R .
fi
# bin/* and lib/* shall always be overwritten, in case files are added
ln -srf '${storeLink}'/bin/* '${vardir}/bin/'
ln -srf '${storeLink}'/lib/* '${vardir}/lib/'
''
/* Symlinking bin/ one-by-one ensures that ${vardir}/lib/LocalSite.cfg
is used instead of ${foswiki}/... */
);
}
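Editor's note: the new foswiki file above follows the Apache sub-service calling convention ({ config, pkgs, lib, serverInfo, ... }). Assuming the classic services.httpd extraSubservices mechanism (not shown in this diff), a hypothetical host configuration might read:

  services.httpd = {
    enable = true;
    adminAddr = "admin@example.org";   # placeholder
    extraSubservices = [
      { serviceType = "foswiki"; vardir = "/var/www/foswiki"; }
    ];
  };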

View file

@ -32,17 +32,27 @@ let
self = pythonPackages;
};
json = builtins.toJSON {
penv = python.buildEnv.override {
extraLibs = (c.pythonPackages or (self: [])) pythonPackages;
};
uwsgiCfg = {
uwsgi =
if c.type == "normal"
then {
inherit plugins;
} // removeAttrs c [ "type" "pythonPackages" ]
// optionalAttrs (python != null) {
pythonpath = "@PYTHONPATH@";
env = (c.env or {}) // {
PATH = optionalString (c ? env.PATH) "${c.env.PATH}:" + "@PATH@";
};
pythonpath = "${penv}/${python.sitePackages}";
env =
# Argh, uwsgi expects list of key-values there instead of a dictionary.
let env' = c.env or [];
getPath =
x: if hasPrefix "PATH=" x
then substring (stringLength "PATH=") (stringLength x) x
else null;
oldPaths = filter (x: x != null) (map getPath env');
in env' ++ [ "PATH=${optionalString (oldPaths != []) "${last oldPaths}:"}${penv}/bin" ];
}
else if c.type == "emperor"
then {
@ -55,35 +65,7 @@ let
else throw "`type` attribute in UWSGI configuration should be either 'normal' or 'emperor'";
};
in
if python == null || c.type != "normal"
then pkgs.writeTextDir "${name}.json" json
else pkgs.stdenv.mkDerivation {
name = "uwsgi-config";
inherit json;
passAsFile = [ "json" ];
nativeBuildInputs = [ pythonPackages.wrapPython ];
pythonInputs = (c.pythonPackages or (self: [])) pythonPackages;
buildCommand = ''
mkdir $out
declare -A pythonPathsSeen=()
program_PYTHONPATH=
program_PATH=
if [ -n "$pythonInputs" ]; then
for i in $pythonInputs; do
_addToPythonPath $i
done
fi
# A hack to replace "@PYTHONPATH@" with a JSON list
if [ -n "$program_PYTHONPATH" ]; then
program_PYTHONPATH="\"''${program_PYTHONPATH//:/\",\"}\""
fi
substitute $jsonPath $out/${name}.json \
--replace '"@PYTHONPATH@"' "[$program_PYTHONPATH]" \
--subst-var-by PATH "$program_PATH"
'';
};
in pkgs.writeTextDir "${name}.json" (builtins.toJSON uwsgiCfg);
in {
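Editor's note: the uwsgi hunks above build the Python environment with python.buildEnv at evaluation time and, as the comment says, pass env as a list of KEY=value strings rather than an attribute set. A hedged instance sketch; the option path and package name are assumptions, not taken from this diff:

  services.uwsgi.instance = {
    type = "normal";
    pythonPackages = self: with self; [ flask ];   # merged into the buildEnv above
    env = [ "FOO=bar" ];                           # list form expected by the JSON config
  };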

View file

@ -0,0 +1,39 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.colord;
in {
options = {
services.colord = {
enable = mkEnableOption "colord, the color management daemon";
};
};
config = mkIf cfg.enable {
services.dbus.packages = [ pkgs.colord ];
services.udev.packages = [ pkgs.colord ];
environment.systemPackages = [ pkgs.colord ];
systemd.services.colord = {
description = "Manage, Install and Generate Color Profiles";
serviceConfig = {
Type = "dbus";
BusName = "org.freedesktop.ColorManager";
ExecStart = "${pkgs.colord}/libexec/colord";
PrivateTmp = true;
};
};
};
}

View file

@ -19,7 +19,7 @@ in
# E.g., if KDE is enabled, it supersedes xterm.
imports = [
./none.nix ./xterm.nix ./xfce.nix ./kde4.nix ./kde5.nix
./e19.nix ./gnome3.nix ./kodi.nix
./enlightenment.nix ./gnome3.nix ./kodi.nix
];
options = {

View file

@ -4,9 +4,9 @@ with lib;
let
e = pkgs.enlightenment;
xcfg = config.services.xserver;
cfg = xcfg.desktopManager.e19;
e19_enlightenment = pkgs.e19.enlightenment.override { set_freqset_setuid = true; };
cfg = xcfg.desktopManager.enlightenment;
GST_PLUGIN_PATH = lib.makeSearchPath "lib/gstreamer-1.0" [
pkgs.gst_all_1.gst-plugins-base
pkgs.gst_all_1.gst-plugins-good
@ -18,10 +18,10 @@ in
{
options = {
services.xserver.desktopManager.e19.enable = mkOption {
services.xserver.desktopManager.enlightenment.enable = mkOption {
default = false;
example = true;
description = "Enable the E19 desktop environment.";
description = "Enable the Enlightenment desktop environment.";
};
};
@ -29,8 +29,8 @@ in
config = mkIf (xcfg.enable && cfg.enable) {
environment.systemPackages = [
pkgs.e19.efl pkgs.e19.evas pkgs.e19.emotion pkgs.e19.elementary e19_enlightenment
pkgs.e19.terminology pkgs.e19.econnman
e.efl e.evas e.emotion e.elementary e.enlightenment
e.terminology e.econnman
pkgs.xorg.xauth # used by kdesu
pkgs.gtk # To get GTK+'s themes.
pkgs.tango-icon-theme
@ -42,7 +42,7 @@ in
environment.pathsToLink = [ "/etc/enlightenment" "/etc/xdg" "/share/enlightenment" "/share/elementary" "/share/applications" "/share/locale" "/share/icons" "/share/themes" "/share/mime" "/share/desktop-directories" ];
services.xserver.desktopManager.session = [
{ name = "E19";
{ name = "Enlightenment";
start = ''
# Set GTK_DATA_PREFIX so that GTK+ can find the themes
export GTK_DATA_PREFIX=${config.system.path}
@ -53,17 +53,16 @@ in
export GST_PLUGIN_PATH="${GST_PLUGIN_PATH}"
# make available for D-BUS user services
#export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}:${config.system.path}/share:${pkgs.e19.efl}/share
#export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}:${config.system.path}/share:${e.efl}/share
# Update user dirs as described in http://freedesktop.org/wiki/Software/xdg-user-dirs/
${pkgs.xdg-user-dirs}/bin/xdg-user-dirs-update
${e19_enlightenment}/bin/enlightenment_start
waitPID=$!
exec ${e.enlightenment}/bin/enlightenment_start
'';
}];
security.setuidPrograms = [ "e19_freqset" ];
security.setuidPrograms = [ "e_freqset" ];
environment.etc = singleton
{ source = "${pkgs.xkeyboard_config}/etc/X11/xkb";
@ -75,13 +74,13 @@ in
services.udisks2.enable = true;
services.upower.enable = config.powerManagement.enable;
#services.dbus.packages = [ pkgs.efl ]; # dbus-1 folder is not in /etc but in /share, so needs fixing first
services.dbus.packages = [ e.efl ];
systemd.user.services.efreet =
{ enable = true;
description = "org.enlightenment.Efreet";
serviceConfig =
{ ExecStart = "${pkgs.e19.efl}/bin/efreetd";
{ ExecStart = "${e.efl}/bin/efreetd";
StandardOutput = "null";
};
};
@ -90,7 +89,7 @@ in
{ enable = true;
description = "org.enlightenment.Ethumb";
serviceConfig =
{ ExecStart = "${pkgs.e19.efl}/bin/ethumbd";
{ ExecStart = "${e.efl}/bin/ethumbd";
StandardOutput = "null";
};
};

View file

@ -128,6 +128,7 @@ in
++ lib.optional config.networking.networkmanager.enable kde5.plasma-nm
++ lib.optional config.hardware.pulseaudio.enable kde5.plasma-pa
++ lib.optional config.powerManagement.enable kde5.powerdevil
++ lib.optional config.services.colord.enable kde5.colord-kde
++ lib.optionals config.services.samba.enable [ kde5.kdenetwork-filesharing pkgs.samba ]
++ lib.optionals cfg.phonon.gstreamer.enable

View file

@ -49,17 +49,6 @@ let
fi
''}
${optionalString cfg.startGnuPGAgent ''
if test -z "$SSH_AUTH_SOCK"; then
# Restart this script as a child of the GnuPG agent.
exec "${pkgs.gnupg}/bin/gpg-agent" \
--enable-ssh-support --daemon \
--pinentry-program "${pkgs.pinentry}/bin/pinentry-gtk-2" \
--write-env-file "$HOME/.gpg-agent-info" \
"$0" "$sessionType"
fi
''}
# Handle being called by kdm.
if test "''${1:0:1}" = /; then eval exec "$1"; fi

View file

@ -10,13 +10,13 @@ in
imports = [
./afterstep.nix
./bspwm.nix
./clfswm.nix
./compiz.nix
./dwm.nix
./exwm.nix
./fluxbox.nix
./herbstluftwm.nix
./i3.nix
./jwm.nix
./metacity.nix
./openbox.nix
./notion.nix

View file

@ -0,0 +1,25 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.xserver.windowManager.jwm;
in
{
###### interface
options = {
services.xserver.windowManager.jwm.enable = mkEnableOption "jwm";
};
###### implementation
config = mkIf cfg.enable {
services.xserver.windowManager.session = singleton {
name = "jwm";
start = ''
${pkgs.jwm}/bin/jwm &
waitPID=$!
'';
};
environment.systemPackages = [ pkgs.jwm ];
};
}
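Editor's note: the new module above wires JWM into the window-manager session list; enabling it is a one-liner:

  services.xserver.windowManager.jwm.enable = true;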

View file

@ -13,9 +13,9 @@ let
# Map video driver names to driver packages. FIXME: move into card-specific modules.
knownVideoDrivers = {
virtualbox = { modules = [ kernelPackages.virtualboxGuestAdditions ]; driverName = "vboxvideo"; };
ati = { modules = [ pkgs.xorg.xf86videoati pkgs.xorg.glamoregl ]; };
intel-testing = { modules = with pkgs.xorg; [ xf86videointel-testing glamoregl ]; driverName = "intel"; };
virtualbox = { modules = [ kernelPackages.virtualboxGuestAdditions ]; driverName = "vboxvideo"; };
ati = { modules = with pkgs.xorg; [ xf86videoati glamoregl ]; };
intel = { modules = with pkgs.xorg; [ xf86videointel glamoregl ]; };
};
fontsForXServer =
@ -160,7 +160,7 @@ in
[ '''
Identifier "Trackpoint Wheel Emulation"
MatchProduct "ThinkPad USB Keyboard with TrackPoint"
Option "EmulateWheel" "true
Option "EmulateWheel" "true"
Option "EmulateWheelButton" "2"
Option "Emulate3Buttons" "false"
'''
@ -219,17 +219,6 @@ in
'';
};
startGnuPGAgent = mkOption {
type = types.bool;
default = false;
description = ''
Whether to start the GnuPG agent when you log in. The GnuPG agent
remembers private keys for you so that you don't have to type in
passphrases every time you make an SSH connection or sign/encrypt
data. Use <command>ssh-add</command> to add a key to the agent.
'';
};
startDbusSession = mkOption {
type = types.bool;
default = true;
@ -444,14 +433,7 @@ in
in optional (driver != null) ({ inherit name; driverName = name; } // driver));
assertions =
[ { assertion = !(config.programs.ssh.startAgent && cfg.startGnuPGAgent);
message =
''
The OpenSSH agent and GnuPG agent cannot be started both. Please
choose between programs.ssh.startAgent and services.xserver.startGnuPGAgent.
'';
}
{ assertion = config.security.polkit.enable;
[ { assertion = config.security.polkit.enable;
message = "X11 requires Polkit to be enabled (security.polkit.enable = true).";
}
];

View file

@ -33,19 +33,24 @@ with lib;
};
config = mkIf config.systemd.coredump.enable {
config = mkMerge [
(mkIf config.systemd.coredump.enable {
environment.etc."systemd/coredump.conf".text =
''
[Coredump]
${config.systemd.coredump.extraConfig}
'';
environment.etc."systemd/coredump.conf".text =
''
[Coredump]
${config.systemd.coredump.extraConfig}
'';
# Have the kernel pass core dumps to systemd's coredump helper binary.
# From systemd's 50-coredump.conf file. See:
# <https://github.com/systemd/systemd/blob/v218/sysctl.d/50-coredump.conf.in>
boot.kernel.sysctl."kernel.core_pattern" = "|${pkgs.systemd}/lib/systemd/systemd-coredump %p %u %g %s %t %e";
# Have the kernel pass core dumps to systemd's coredump helper binary.
# From systemd's 50-coredump.conf file. See:
# <https://github.com/systemd/systemd/blob/v218/sysctl.d/50-coredump.conf.in>
boot.kernel.sysctl."kernel.core_pattern" = "|${pkgs.systemd}/lib/systemd/systemd-coredump %p %u %g %s %t %e";
})
};
(mkIf (!config.systemd.coredump.enable) {
boot.kernel.sysctl."kernel.core_pattern" = mkDefault "core";
})
];
}
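Editor's note: the mkMerge above also resets kernel.core_pattern to a plain "core" default when coredumps are disabled. A hypothetical fragment for the enabled case; the Storage key is an upstream coredump.conf setting, not part of this diff:

  systemd.coredump.enable = true;
  systemd.coredump.extraConfig = ''
    Storage=journal
  '';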

View file

@ -58,6 +58,7 @@ let
# Add RAID mdadm tool.
copy_bin_and_libs ${pkgs.mdadm}/sbin/mdadm
copy_bin_and_libs ${pkgs.mdadm}/sbin/mdmon
# Copy udev.
copy_bin_and_libs ${udev}/lib/systemd/systemd-udevd

View file

@ -93,7 +93,7 @@ let
config = {
mountPoint = mkDefault name;
device = mkIf (config.fsType == "tmpfs") (mkDefault config.fsType);
options = mkIf config.autoResize "x-nixos.autoresize";
options = mkIf config.autoResize [ "x-nixos.autoresize" ];
# -F needed to allow bare block device without partitions
formatOptions = mkIf ((builtins.substring 0 3 config.fsType) == "ext") (mkDefault "-F");
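Editor's note: the fix above makes autoResize contribute a list element instead of a bare string to the filesystem options. An illustrative filesystem entry; the device label is a placeholder:

  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";   # placeholder
    fsType = "ext4";
    autoResize = true;                     # now yields options = [ "x-nixos.autoresize" ]
  };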

View file

@ -882,10 +882,8 @@ in
optionalString hasBonds "options bonding max_bonds=0";
boot.kernel.sysctl = {
"net.net.ipv4.conf.all.promote_secondaries" = true;
"net.ipv6.conf.all.disable_ipv6" = mkDefault (!cfg.enableIPv6);
"net.ipv6.conf.default.disable_ipv6" = mkDefault (!cfg.enableIPv6);
"net.ipv4.conf.all_forwarding" = mkDefault (any (i: i.proxyARP) interfaces);
"net.ipv6.conf.all.forwarding" = mkDefault (any (i: i.proxyARP) interfaces);
} // listToAttrs (concatLists (flip map (filter (i: i.proxyARP) interfaces)
(i: flip map [ "4" "6" ] (v: nameValuePair "net.ipv${v}.conf.${i.name}.proxy_arp" true))

View file

@ -12,4 +12,45 @@
cp -v ${pkgs.mdadm}/lib/udev/rules.d/*.rules $out/
'';
systemd.services.mdadm-shutdown = {
wantedBy = [ "final.target"];
after = [ "umount.target" ];
unitConfig = {
DefaultDependencies = false;
};
serviceConfig = {
Type = "oneshot";
ExecStart = ''${pkgs.mdadm}/bin/mdadm --wait-clean --scan'';
};
};
systemd.services."mdmon@" = {
description = "MD Metadata Monitor on /dev/%I";
unitConfig.DefaultDependencies = false;
serviceConfig = {
Type = "forking";
Environment = "IMSM_NO_PLATFORM=1";
ExecStart = ''${pkgs.mdadm}/bin/mdmon --offroot --takeover %I'';
KillMode = "none";
};
};
systemd.services."mdadm-grow-continue@" = {
description = "Manage MD Reshape on /dev/%I";
unitConfig.DefaultDependencies = false;
serviceConfig = {
ExecStart = ''${pkgs.mdadm}/bin/mdadm --grow --continue /dev/%I'';
StandardInput = "null";
StandardOutput = "null";
StandardError = "null";
KillMode = "none";
};
};
}

View file

@ -40,7 +40,6 @@ let cfg = config.ec2; in
# Force udev to exit to prevent random "Device or resource busy
# while trying to open /dev/xvda" errors from fsck.
udevadm control --exit || true
kill -9 -1
'';
boot.initrd.network.enable = true;

View file

@ -0,0 +1,17 @@
--- a/waagent 2016-03-12 09:58:15.728088851 +0200
+++ a/waagent 2016-03-12 09:58:43.572680025 +0200
@@ -6173,10 +6173,10 @@
Log("MAC address: " + ":".join(["%02X" % Ord(a) for a in mac]))
# Consume Entropy in ACPI table provided by Hyper-V
- try:
- SetFileContents("/dev/random", GetFileContents("/sys/firmware/acpi/tables/OEM0"))
- except:
- pass
+ #try:
+ # SetFileContents("/dev/random", GetFileContents("/sys/firmware/acpi/tables/OEM0"))
+ #except:
+ # pass
Log("Probing for Azure environment.")
self.Endpoint = self.DoDhcpWork()

View file

@ -14,6 +14,9 @@ let
rev = "1b3a8407a95344d9d12a2a377f64140975f1e8e4";
sha256 = "10byzvmpgrmr4d5mdn2kq04aapqb3sgr1admk13wjmy5cd6bwd2x";
};
patches = [ ./azure-agent-entropy.patch ];
buildInputs = [ makeWrapper python pythonPackages.wrapPython ];
runtimeDeps = [ findutils gnugrep gawk coreutils openssl openssh
nettools # for hostname
@ -54,9 +57,15 @@ in
###### interface
options.virtualisation.azure.agent.enable = mkOption {
default = false;
description = "Whether to enable the Windows Azure Linux Agent.";
options.virtualisation.azure.agent = {
enable = mkOption {
default = false;
description = "Whether to enable the Windows Azure Linux Agent.";
};
verboseLogging = mkOption {
default = false;
description = "Whether to enable verbose logging.";
};
};
###### implementation
@ -88,7 +97,7 @@ in
Provisioning.DeleteRootPassword=n
# Generate fresh host key pair.
Provisioning.RegenerateSshHostKeyPair=y
Provisioning.RegenerateSshHostKeyPair=n
# Supported values are "rsa", "dsa" and "ecdsa".
Provisioning.SshHostKeyPairType=ed25519
@ -121,7 +130,7 @@ in
Logs.Console=y
# Enable verbose logging (y|n)
Logs.Verbose=n
Logs.Verbose=${if cfg.verboseLogging then "y" else "n"}
# Root device timeout in seconds.
OS.RootDeviceScsiTimeout=300
@ -146,16 +155,30 @@ in
systemd.targets.provisioned = {
description = "Services Requiring Azure VM provisioning to have finished";
wantedBy = [ "sshd.service" ];
before = [ "sshd.service" ];
};
systemd.services.consume-hypervisor-entropy =
{ description = "Consume entropy in ACPI table provided by Hyper-V";
wantedBy = [ "sshd.service" "waagent.service" ];
before = [ "sshd.service" "waagent.service" ];
after = [ "local-fs.target" ];
path = [ pkgs.coreutils ];
script =
''
echo "Fetching entropy..."
cat /sys/firmware/acpi/tables/OEM0 > /dev/random
'';
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
serviceConfig.StandardError = "journal+console";
serviceConfig.StandardOutput = "journal+console";
};
systemd.services.waagent = {
wantedBy = [ "sshd.service" ];
before = [ "sshd.service" ];
after = [ "ip-up.target" ];
wants = [ "ip-up.target" ];
wantedBy = [ "multi-user.target" ];
after = [ "ip-up.target" "sshd.service" ];
path = [ pkgs.e2fsprogs ];
description = "Windows Azure Agent Service";

View file

@ -2,7 +2,7 @@
with lib;
let
diskSize = "4096";
diskSize = "30720";
in
{
system.build.azureImage =
@ -23,7 +23,7 @@ in
postVM =
''
mkdir -p $out
${pkgs.vmTools.qemu-220}/bin/qemu-img convert -f raw -O vpc -o subformat=fixed $diskImage $out/disk.vhd
${pkgs.vmTools.qemu-220}/bin/qemu-img convert -f raw -O vpc $diskImage $out/disk.vhd
rm $diskImage
'';
diskImageBase = "nixos-image-${config.system.nixosLabel}-${pkgs.stdenv.system}.raw";

View file

@ -22,7 +22,9 @@ in {
config = {
system.build.virtualBoxImage = import ../../lib/make-disk-image.nix {
system.build.virtualBoxOVA = import ../../lib/make-disk-image.nix {
name = "nixos-ova-${config.system.nixosLabel}-${pkgs.stdenv.system}";
inherit pkgs lib config;
partitioned = true;
diskSize = cfg.baseImageSize;
@ -37,37 +39,36 @@ in {
postVM =
''
echo "creating VirtualBox disk image..."
${pkgs.vmTools.qemu}/bin/qemu-img convert -f raw -O vdi $diskImage $out/disk.vdi
${pkgs.vmTools.qemu}/bin/qemu-img convert -f raw -O vdi $diskImage disk.vdi
rm $diskImage
echo "creating VirtualBox VM..."
export HOME=$PWD
export PATH=${pkgs.linuxPackages.virtualbox}/bin:$PATH
vmName="NixOS ${config.system.nixosLabel} (${pkgs.stdenv.system})"
VBoxManage createvm --name "$vmName" --register \
--ostype ${if pkgs.stdenv.system == "x86_64-linux" then "Linux26_64" else "Linux26"}
VBoxManage modifyvm "$vmName" \
--memory 1536 --acpi on --vram 32 \
${optionalString (pkgs.stdenv.system == "i686-linux") "--pae on"} \
--nictype1 virtio --nic1 nat \
--audiocontroller ac97 --audio alsa \
--rtcuseutc on \
--usb on --mouse usbtablet
VBoxManage storagectl "$vmName" --name SATA --add sata --portcount 4 --bootable on --hostiocache on
VBoxManage storageattach "$vmName" --storagectl SATA --port 0 --device 0 --type hdd \
--medium disk.vdi
echo "exporting VirtualBox VM..."
mkdir -p $out
fn="$out/nixos-${config.system.nixosLabel}-${pkgs.stdenv.system}.ova"
VBoxManage export "$vmName" --output "$fn"
mkdir -p $out/nix-support
echo "file ova $fn" >> $out/nix-support/hydra-build-products
'';
};
system.build.virtualBoxOVA = pkgs.runCommand "virtualbox-ova"
{ buildInputs = [ pkgs.linuxPackages.virtualbox ];
vmName = "NixOS ${config.system.nixosLabel} (${pkgs.stdenv.system})";
fileName = "nixos-image-${config.system.nixosLabel}-${pkgs.stdenv.system}.ova";
}
''
echo "creating VirtualBox VM..."
export HOME=$PWD
VBoxManage createvm --name "$vmName" --register \
--ostype ${if pkgs.stdenv.system == "x86_64-linux" then "Linux26_64" else "Linux26"}
VBoxManage modifyvm "$vmName" \
--memory 1536 --acpi on --vram 32 \
${optionalString (pkgs.stdenv.system == "i686-linux") "--pae on"} \
--nictype1 virtio --nic1 nat \
--audiocontroller ac97 --audio alsa \
--rtcuseutc on \
--usb on --mouse usbtablet
VBoxManage storagectl "$vmName" --name SATA --add sata --portcount 4 --bootable on --hostiocache on
VBoxManage storageattach "$vmName" --storagectl SATA --port 0 --device 0 --type hdd \
--medium ${config.system.build.virtualBoxImage}/disk.vdi
echo "exporting VirtualBox VM..."
mkdir -p $out
VBoxManage export "$vmName" --output "$out/$fileName"
'';
fileSystems."/".device = "/dev/disk/by-label/nixos";
boot.loader.grub.device = "/dev/sda";

View file

@ -44,11 +44,11 @@ in rec {
(all nixos.manual)
(all nixos.iso_minimal)
(all nixos.iso_graphical)
(all nixos.ova)
nixos.iso_graphical.x86_64-linux
nixos.ova.x86_64-linux
#(all nixos.tests.containers)
(all nixos.tests.chromium.stable)
#(all nixos.tests.chromium.stable)
(all nixos.tests.firefox)
(all nixos.tests.firewall)
nixos.tests.gnome3.x86_64-linux # FIXME: i686-linux

View file

@ -43,34 +43,14 @@ let
makeIso =
{ module, type, description ? type, maintainers ? ["eelco"], system }:
{ module, type, maintainers ? ["eelco"], system }:
with import nixpkgs { inherit system; };
let
config = (import lib/eval-config.nix {
inherit system;
modules = [ module versionModule { isoImage.isoBaseName = "nixos-${type}"; } ];
}).config;
iso = config.system.build.isoImage;
in
# Declare the ISO as a build product so that it shows up in Hydra.
hydraJob (runCommand "nixos-iso-${config.system.nixosVersion}"
{ meta = {
description = "NixOS installation CD (${description}) - ISO image for ${system}";
maintainers = map (x: lib.maintainers.${x}) maintainers;
};
inherit iso;
passthru = { inherit config; };
preferLocalBuild = true;
}
''
mkdir -p $out/nix-support
echo "file iso" $iso/iso/*.iso* >> $out/nix-support/hydra-build-products
''); # */
hydraJob ((import lib/eval-config.nix {
inherit system;
modules = [ module versionModule { isoImage.isoBaseName = "nixos-${type}"; } ];
}).config.system.build.isoImage);
makeSystemTarball =
@ -130,7 +110,7 @@ in rec {
inherit system;
});
iso_graphical = forAllSystems (system: makeIso {
iso_graphical = genAttrs [ "x86_64-linux" ] (system: makeIso {
module = ./modules/installer/cd-dvd/installation-cd-graphical-kde.nix;
type = "graphical";
inherit system;
@ -138,7 +118,7 @@ in rec {
# A variant with a more recent (but possibly less stable) kernel
# that might support more hardware.
iso_minimal_new_kernel = forAllSystems (system: makeIso {
iso_minimal_new_kernel = genAttrs [ "x86_64-linux" ] (system: makeIso {
module = ./modules/installer/cd-dvd/installation-cd-minimal-new-kernel.nix;
type = "minimal-new-kernel";
inherit system;
@ -146,35 +126,17 @@ in rec {
# A bootable VirtualBox virtual appliance as an OVA file (i.e. packaged OVF).
ova = forAllSystems (system:
ova = genAttrs [ "x86_64-linux" ] (system:
with import nixpkgs { inherit system; };
let
config = (import lib/eval-config.nix {
inherit system;
modules =
[ versionModule
./modules/installer/virtualbox-demo.nix
];
}).config;
in
# Declare the OVA as a build product so that it shows up in Hydra.
hydraJob (runCommand "nixos-ova-${config.system.nixosVersion}-${system}"
{ meta = {
description = "NixOS VirtualBox appliance (${system})";
maintainers = maintainers.eelco;
};
ova = config.system.build.virtualBoxOVA;
preferLocalBuild = true;
}
''
mkdir -p $out/nix-support
fn=$(echo $ova/*.ova)
echo "file ova $fn" >> $out/nix-support/hydra-build-products
'') # */
hydraJob ((import lib/eval-config.nix {
inherit system;
modules =
[ versionModule
./modules/installer/virtualbox-demo.nix
];
}).config.system.build.virtualBoxOVA)
);
@ -240,6 +202,7 @@ in rec {
tests.containers = callTest tests/containers.nix {};
tests.docker = hydraJob (import tests/docker.nix { system = "x86_64-linux"; });
tests.dockerRegistry = hydraJob (import tests/docker-registry.nix { system = "x86_64-linux"; });
tests.dnscrypt-proxy = callTest tests/dnscrypt-proxy.nix { system = "x86_64-linux"; };
tests.etcd = hydraJob (import tests/etcd.nix { system = "x86_64-linux"; });
tests.ec2-nixops = hydraJob (import tests/ec2.nix { system = "x86_64-linux"; }).boot-ec2-nixops;
tests.ec2-config = hydraJob (import tests/ec2.nix { system = "x86_64-linux"; }).boot-ec2-config;

View file

@ -1,4 +1,11 @@
{ system ? builtins.currentSystem }:
{ system ? builtins.currentSystem
, pkgs ? import ../.. { inherit system; }
, channelMap ? {
stable = pkgs.chromium;
beta = pkgs.chromiumBeta;
dev = pkgs.chromiumDev;
}
}:
with import ../lib/testing.nix { inherit system; };
with pkgs.lib;
@ -160,8 +167,4 @@ mapAttrs (channel: chromiumPkg: makeTest rec {
$machine->shutdown;
'';
}) {
stable = pkgs.chromium;
beta = pkgs.chromiumBeta;
dev = pkgs.chromiumDev;
}
}) channelMap

View file

@ -0,0 +1,33 @@
import ./make-test.nix ({ pkgs, ... }: {
name = "dnscrypt-proxy";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ joachifm ];
};
nodes = {
# A client running the recommended setup: DNSCrypt proxy as a forwarder
# for a caching DNS client.
client =
{ config, pkgs, ... }:
let localProxyPort = 43; in
{
security.apparmor.enable = true;
services.dnscrypt-proxy.enable = true;
services.dnscrypt-proxy.localPort = localProxyPort;
services.dnsmasq.enable = true;
services.dnsmasq.servers = [ "127.0.0.1#${toString localProxyPort}" ];
};
};
testScript = ''
$client->start;
$client->waitForUnit("sockets.target");
$client->waitForUnit("dnsmasq");
# The daemon is socket activated; sending a single ping should activate it.
$client->execute("${pkgs.iputils}/bin/ping -c1 example.com");
$client->succeed("systemctl is-active dnscrypt-proxy");
'';
})

View file

@ -20,7 +20,7 @@ import ./make-test.nix ({ pkgs, ...} : {
testScript = ''
startAll;
$docker->waitForUnit("docker.service");
$docker->waitForUnit("sockets.target");
$docker->succeed("tar cv --files-from /dev/null | docker import - scratchimg");
$docker->succeed("docker run -d --name=sleeping -v /nix/store:/nix/store -v /run/current-system/sw/bin:/bin scratchimg /bin/sleep 10");
$docker->succeed("docker ps | grep sleeping");

View file

@ -35,9 +35,9 @@ import ./make-test.nix ( { pkgs, ... } : {
# Local connections should still work.
$walled->succeed("curl -v http://localhost/ >&2");
# Connections to the firewalled machine should fail.
# Connections to the firewalled machine should fail, but ping should succeed.
$attacker->fail("curl --fail --connect-timeout 2 http://walled/ >&2");
$attacker->fail("ping -c 1 walled >&2");
$attacker->succeed("ping -c 1 walled >&2");
# Outgoing connections/pings should still work.
$walled->succeed("curl -v http://attacker/ >&2");

View file

@ -366,8 +366,8 @@ in {
"mkdir /mnt/boot",
"mount LABEL=boot /mnt/boot",
"udevadm settle",
"mdadm -W /dev/md0", # wait for sync to finish; booting off an unsynced device tends to fail
"mdadm -W /dev/md1",
"mdadm --verbose -W /dev/md0", # wait for sync to finish; booting off an unsynced device tends to fail
"mdadm --verbose -W /dev/md1",
);
'';
};

View file

@ -23,6 +23,8 @@ import ./make-test.nix ({ pkgs, ...} : {
{ wantedBy = [ "multi-user.target" ];
where = "/tmp2";
};
users.users.sybil = { isNormalUser = true; group = "wheel"; };
security.sudo = { enable = true; wheelNeedsPassword = false; };
};
testScript =
@ -110,5 +112,10 @@ import ./make-test.nix ({ pkgs, ...} : {
subtest "nix-db", sub {
$machine->succeed("nix-store -qR /run/current-system | grep nixos-");
};
# Test sudo
subtest "sudo", sub {
$machine->succeed("su - sybil -c 'sudo true'");
};
'';
})

View file

@ -7,7 +7,7 @@ import ./make-test.nix {
{
services.riak.enable = true;
services.riak.package = pkgs.riak2;
services.riak.package = pkgs.riak;
};
};

View file

@ -0,0 +1,55 @@
{ pkgs, stdenv, lib, fetchurl, intltool, pkgconfig, gstreamer, gst_plugins_base
, gst_plugins_good, gst_plugins_bad, gst_plugins_ugly, gst_ffmpeg, glib
, mono, mono-addins, dbus-sharp-1_0, dbus-sharp-glib-1_0, notify-sharp, gtk-sharp-2_0
, boo, gdata-sharp, taglib-sharp, sqlite, gnome-sharp, gconf, gtk-sharp-beans, gio-sharp
, libmtp, libgpod, mono-zeroconf }:
stdenv.mkDerivation rec {
name = "banshee-${version}";
version = "2.6.2";
src = fetchurl {
url = "http://ftp.gnome.org/pub/GNOME/sources/banshee/2.6/banshee-${version}.tar.xz";
sha256 = "1y30p8wxx5li39i5gpq2wib0ympy8llz0gyi6ri9bp730ndhhz7p";
};
dontStrip = true;
nativeBuildInputs = [ pkgconfig intltool ];
buildInputs = [
gtk-sharp-2_0.gtk gstreamer gst_plugins_base gst_plugins_good
gst_plugins_bad gst_plugins_ugly gst_ffmpeg
mono dbus-sharp-1_0 dbus-sharp-glib-1_0 mono-addins notify-sharp
gtk-sharp-2_0 boo gdata-sharp taglib-sharp sqlite gnome-sharp gconf gtk-sharp-beans
gio-sharp libmtp libgpod mono-zeroconf
];
makeFlags = [ "PREFIX=$(out)" ];
postPatch = ''
patchShebangs data/desktop-files/update-desktop-file.sh
patchShebangs build/private-icon-theme-installer
sed -i "s,DOCDIR=.*,DOCDIR=$out/lib/monodoc," configure
'';
postInstall = let
ldLibraryPath = lib.makeLibraryPath [ gtk-sharp-2_0.gtk gtk-sharp-2_0 sqlite gconf glib gstreamer ];
monoGACPrefix = lib.concatStringsSep ":" [
mono dbus-sharp-1_0 dbus-sharp-glib-1_0 mono-addins notify-sharp gtk-sharp-2_0
boo gdata-sharp taglib-sharp sqlite gnome-sharp gconf gtk-sharp-beans
gio-sharp libmtp libgpod mono-zeroconf
];
in ''
sed -e '2a export MONO_GAC_PREFIX=${monoGACPrefix}' \
-e 's|LD_LIBRARY_PATH=|LD_LIBRARY_PATH=${ldLibraryPath}:|' \
-e "s|GST_PLUGIN_PATH=|GST_PLUGIN_PATH=$GST_PLUGIN_SYSTEM_PATH:|" \
-e 's| mono | ${mono}/bin/mono |' \
-i $out/bin/banshee
'';
meta = with lib; {
description = "A music player written in C# using GNOME technologies";
platforms = platforms.linux;
maintainers = [ maintainers.zohl ];
};
}

View file

@ -50,6 +50,12 @@ let
name = "clementine-free-${version}";
inherit patches src buildInputs;
enableParallelBuilding = true;
postPatch = ''
sed -i src/CMakeLists.txt \
-e 's,-Werror,,g' \
-e 's,-Wno-unknown-warning-option,,g' \
-e 's,-Wno-unused-private-field,,g'
'';
meta = with stdenv.lib; {
homepage = "http://www.clementine-player.org";
description = "A multiplatform music player";

View file

@ -3,12 +3,12 @@
}:
stdenv.mkDerivation rec {
version = "0.9.8.1";
version = "0.9.9";
name = "drumgizmo-${version}";
src = fetchurl {
url = "http://www.drumgizmo.org/releases/${name}/${name}.tar.gz";
sha256 = "1plfjhwhaz1mr3kgf5imcp3kjflk6ni9sq39gmxjxzya6gn2r6gg";
sha256 = "03dnh2p4s6n107n0r86h9j1jwy85a8qwjkh0288k60qpdqy1c7vp";
};
configureFlags = [ "--enable-lv2" ];
@ -21,7 +21,7 @@ stdenv.mkDerivation rec {
meta = with stdenv.lib; {
description = "An LV2 sample based drum plugin";
homepage = http://www.drumgizmo.org;
license = licenses.gpl3;
license = licenses.lgpl3;
platforms = platforms.linux;
maintainers = [ maintainers.goibhniu maintainers.nico202 ];
};

View file

@ -20,7 +20,6 @@ in stdenv.mkDerivation rec {
license = licenses.gpl2Plus;
platforms = platforms.linux;
hydraPlatforms = [];
maintainers = with maintainers; [ iyzsong ];
};
src = fetchurl {

View file

@ -37,7 +37,8 @@ let
inherit src;
buildInputs = [ makeWrapper llvm emscripten openssl libsndfile pkgconfig libmicrohttpd vim ];
nativeBuildInputs = [ makeWrapper pkgconfig vim ];
buildInputs = [ llvm emscripten openssl libsndfile libmicrohttpd ];
passthru = {
@ -53,6 +54,20 @@ let
# correct system.
unset system
sed -e "232s/LLVM_STATIC_LIBS/LLVMLIBS/" -i compiler/Makefile.unix
# The makefile sets LLVM_<version> depending on the current llvm
# version, but the detection code is quite brittle.
#
# Failing to properly detect the llvm version means that the macro
# LLVM_VERSION ends up being the raw output of `llvm-config --version`, while
# the code assumes that it's set to a symbol like `LLVM_35`. Two problems result:
# * <command-line>:0:1: error: macro names must be identifiers.; and
# * a bunch of undefined reference errors due to conditional definitions relying on
# LLVM_XY being defined.
#
# For now, fix this by 1) pinning the llvm version; 2) manually setting LLVM_VERSION
# to something the makefile will recognize.
sed '52iLLVM_VERSION=3.7.0' -i compiler/Makefile.unix
'';
# Remove most faust2appl scripts since they won't run properly

View file

@ -10,7 +10,7 @@ stdenv.mkDerivation rec {
sha256 = "0jb6g3kbfyr5yf8mvblnciva2bmc01ijpr51m21r27rqmgi8gj5k";
};
patches = [ ./buf_rect.patch ];
patches = [ ./buf_rect.patch ./fix_build_with_gcc-5.patch];
buildInputs =
[ pkgconfig SDL SDL_image libjack2

View file

@ -0,0 +1,31 @@
Description: Fix build with gcc-5
Bug-Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=778003
Author: Jaromír Mikeš <mira.mikes@seznam.cz>
Forwarded: No
Index: meterbridge/src/linedraw.h
===================================================================
--- meterbridge.orig/src/linedraw.h
+++ meterbridge/src/linedraw.h
@@ -1,7 +1,7 @@
#ifndef LINEDRAW_H
#define LINEDRAW_H
-inline void set_rgba(SDL_Surface *surface, Uint32 x, Uint32 y, Uint32 col);
+void set_rgba(SDL_Surface *surface, Uint32 x, Uint32 y, Uint32 col);
void draw_ptr(SDL_Surface *surface, int x1, int y1, int x2, int y2, Uint32 nedle_col, Uint32 aa_col);
Index: meterbridge/src/linedraw.c
===================================================================
--- meterbridge.orig/src/linedraw.c
+++ meterbridge/src/linedraw.c
@@ -4,7 +4,7 @@
/* set a pixel on an SDL_Surface, assumes that the surface is 32bit RGBA,
* ordered ABGR (I think), probably wont work on bigendian systems */
-inline void set_rgba(SDL_Surface *surface, Uint32 x, Uint32 y, Uint32 col)
+void set_rgba(SDL_Surface *surface, Uint32 x, Uint32 y, Uint32 col)
{
Uint32 *bufp = (Uint32 *)surface->pixels + y*surface->pitch/4 + x;
*bufp = col;

View file

@ -19,6 +19,11 @@ pythonPackages.buildPythonApplication rec {
substituteInPlace setup.py --replace "/usr/share" "$out/share"
'';
postInstall = ''
mkdir -p $out/share/applications
cp -v data/pithos.desktop $out/share/applications
'';
buildInputs = [ wrapGAppsHook ];
propagatedBuildInputs =

Some files were not shown because too many files have changed in this diff.