Merging against master - updating smokingpig, rebase was going to be messy

This commit is contained in:
Parnell Springmeyer 2017-01-26 02:00:04 -08:00
commit a26a796d5c
No known key found for this signature in database
GPG key ID: DCCF89258EAD874A
956 changed files with 25853 additions and 24254 deletions

View file

@ -22,3 +22,7 @@ indent_size = 2
[*.{sh,py,pl}]
indent_style = space
indent_size = 4
# Match diffs, avoid trimming trailing whitespace
[*.{diff,patch}]
trim_trailing_whitespace = false

153
doc/cross-compilation.xml Normal file
View file

@ -0,0 +1,153 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-cross">
<title>Cross-compilation</title>
<section xml:id="sec-cross-intro">
<title>Introduction</title>
<para>
"Cross-compilation" means compiling a program on one machine for another type of machine.
For example, a typical use of cross compilation is to compile programs for embedded devices.
These devices often don't have the computing power and memory to compile their own programs.
One might think that cross-compilation is a fairly niche concern, but there are advantages to being rigorous about distinguishing build-time vs run-time environments even when one is developing and deploying on the same machine.
Nixpkgs is increasingly adopting this opinion in that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
</para>
<para>
This chapter will be organized in three parts.
First, it will describe the basics of how to package software in a way that supports cross-compilation.
Second, it will describe how to use Nixpkgs when cross-compiling.
Third, it will describe the internal infrastructure supporting cross-compilation.
</para>
</section>
<!--============================================================-->
<section xml:id="sec-cross-packaging">
<title>Packaging in a cross-friendly manner</title>
<section>
<title>Platform parameters</title>
<para>
The three GNU Autoconf platforms, <wordasword>build</wordasword>, <wordasword>host</wordasword>, and <wordasword>target</wordasword>, are historically the result of much confusion.
<link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html" /> clears this up somewhat, but there is more to be said.
One piece of advice to get out of the way first: unless you are packaging a compiler or other build tool, you only need to worry about the build and host platforms.
Dealing with just two platforms usually better matches people's preconceptions, and in this case it is completely correct.
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All are guaranteed to contain at least a <varname>platform</varname> field, which contains detailed information on the platform.
All three are always defined at the top level, so one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...</programlisting>
</para>
<warning><para>
These platforms should all have the same structure in all scenarios, but that is currently not the case.
When not cross-compiling, each will contain a <literal>system</literal> field with a short, two-part, hyphen-separated string summarizing the platform.
But when cross-compiling, <literal>hostPlatform</literal> and <literal>targetPlatform</literal> may instead contain <literal>config</literal>, a fuller three- or four-part string in the manner of LLVM.
All three platforms should eventually contain both fields, and perhaps <literal>config</literal> should get a better name while we are at it.
</para></warning>
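<para>
As a purely illustrative sketch (the exact values depend on your platforms), a native build on 64-bit Linux might see
<programlisting>buildPlatform.system    # "x86_64-linux"</programlisting>
while a cross build for a 32-bit ARM board might instead see
<programlisting>hostPlatform.config     # "armv7l-unknown-linux-gnueabihf"</programlisting>
</para>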
<variablelist>
<varlistentry>
<term><varname>buildPlatform</varname></term>
<listitem><para>
The "build platform" is the platform on which a package is built.
Once someone has a built package, or a pre-built binary package, the build platform should not matter and can safely be ignored.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>hostPlatform</varname></term>
<listitem><para>
The "host platform" is the platform on which a package is run.
This is the simplest platform to understand, but also the one with the worst name.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>targetPlatform</varname></term>
<listitem>
<para>
The "target platform" is black sheep.
The other two intrinsically apply to all compiled software—or any build process with a notion of "build-time" followed by "run-time".
The target platform only applies to programming tools, and even then only is a good for for some of them.
Briefly, GCC, Binutils, GHC, and certain other tools are written in such a way such that a single build can only compiler code for a single platform.
Thus, when building them, one must think ahead about what platforms they wish to use the tool to produce machine code for, and build binaries for each.
</para>
<para>
There is no fundamental need to think about the target ahead of time like this.
LLVM, for example, was designed from the beginning with cross-compilation in mind, and so a normal LLVM binary will support every architecture that LLVM supports.
If the tool supports modular or pluggable backends, one might imagine specifying a <emphasis>set</emphasis> of target platforms / backends one wishes to support, rather than a single one.
</para>
<para>
The biggest reason for the mess, if there is one, is that many compilers have the bad habit of a build process that builds the compiler and the standard library/runtime together.
Then specifying the target platform is essential, because it determines the host platform of the standard library/runtime.
Nixpkgs tries to avoid this where possible, but because the concept of a target platform is so ingrained in Autoconf and other tools, it is best to support it as is.
Tools like LLVM that don't need an up-front target platform can safely ignore it like normal packages, and doing so does no harm.
</para>
</listitem>
</varlistentry>
</variablelist>
<note><para>
If you dig around nixpkgs, you may notice there is also <varname>stdenv.cross</varname>.
This field is defined as <varname>hostPlatform</varname> when the host and build platforms differ, and is otherwise not defined at all.
This field is obsolete and will soon disappear—please do not use it.
</para></note>
</section>
<section>
<title>Specifying Dependencies</title>
<para>
As mentioned in the introduction to this chapter, one can think about a build-time vs run-time distinction whether cross-compiling or not.
In the case of cross-compilation, this corresponds to whether the derivation produced runs on the native or on the foreign platform.
An interesting thing to think about is how this corresponds with the three Autoconf platforms.
In the run-time case, the depending and depended-on package simply have matching build, host, and target platforms.
But in the build-time case, one can imagine "sliding" the platforms one over.
The depended-on package's host and target platforms (respectively) become the depending package's build and host platforms.
This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the <wordasword>sliding window principle</wordasword>.
In this manner, given the 3 platforms for one package, we can determine the three platforms for all its transitive dependencies.
</para>
<note><para>
The depending package's target platform is unconstrained by the sliding window principle, which makes sense in that one can in principle build cross compilers targeting arbitrary platforms.
</para></note>
<warning><para>
From the above, one would surmise that if a package is being built with a <literal>(build, host, target)</literal> platform triple of <literal>(foo, bar, bar)</literal>, then its build-time dependencies would have a triple of <literal>(foo, foo, bar)</literal>, and <emphasis>those packages'</emphasis> build-time dependencies would have a triple of <literal>(foo, foo, foo)</literal>.
In other words, it should take two "rounds" of following build-time dependency edges before one reaches a fixed point where, by the sliding window principle, the platform triple no longer changes.
Unfortunately, at the moment, we do <emphasis>not</emphasis> implement this correctly: the fixed point is reached after only one round of following build-time dependencies, with the target platform incorrectly kept different from the others.
</para></warning>
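<para>
To make the sliding window concrete, here is a sketch with illustrative platform names (a hypothetical package cross-compiled from <literal>x86_64-linux</literal> to <literal>armv7l-linux</literal>):
<programlisting>the package itself:        (build, host, target) = (x86_64-linux, armv7l-linux, armv7l-linux)
its compiler:              (build, host, target) = (x86_64-linux, x86_64-linux, armv7l-linux)
the compiler's own tools:  (build, host, target) = (x86_64-linux, x86_64-linux, x86_64-linux)</programlisting>
After two rounds of following build-time dependency edges the triple stops changing, which is the fixed point described above.
</para>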
<para>
How does this work in practice? Nixpkgs is now structured so that build-time dependencies are taken from <varname>buildPackages</varname>, whereas run-time dependencies are taken from the top-level attribute set.
For example, <varname>buildPackages.gcc</varname> should be used at build time, while <varname>gcc</varname> should be used at run time.
Now, for most of Nixpkgs's history, there was no <varname>buildPackages</varname>, and most packages have not been refactored to use it explicitly.
Instead, one can use the four attributes used for specifying dependencies as documented in <link linkend="ssec-stdenv-attributes" />.
We "splice" together the run-time and build-time package sets with <varname>callPackage</varname>, and then <varname>mkDerivation</varname> for each of the four attributes pulls the right derivation out.
This splicing can be skipped when not cross-compiling, since the package sets are the same, but it is a bit slow when cross-compiling.
Because of this, a best-of-both-worlds solution that needs no splicing and no explicit use of <varname>buildPackages</varname> is in the works.
For now, feel free to use either method.
</para>
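<para>
A minimal sketch of what this looks like in a package expression (the dependency names <varname>someBuildTool</varname> and <varname>libfoo</varname> are purely illustrative):
<programlisting>{ stdenv, someBuildTool, libfoo }:

stdenv.mkDerivation {
  name = "example-1.0";
  # needed only while building, and runs on the build platform
  nativeBuildInputs = [ someBuildTool ];
  # linked against / used at run time, on the host platform
  buildInputs = [ libfoo ];
  # ...
}</programlisting>
Written this way, the same expression works whether or not one is cross-compiling; the splicing described above picks the right derivation for each attribute.
</para>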
</section>
</section>
<!--============================================================-->
<section xml:id="sec-cross-usage">
<title>Cross-building packages</title>
<para>
To be written.
This is basically unchanged, so see the old wiki for now.
</para>
</section>
<!--============================================================-->
<section xml:id="sec-cross-infra">
<title>Cross-compilation infrastructure</title>
<para>To be written.</para>
<note><para>
If one explores nixpkgs, they will see derivations with names like <literal>gccCross</literal>.
Such <literal>*Cross</literal> derivations are a holdover from before we properly distinguished between the host and target platforms
—the derivation with "Cross" in the name covered the <literal>build = host != target</literal> case, while the other covered the <literal>host = target</literal> case, with the build platform the same or not depending on whether one was using its <literal>.nativeDrv</literal> or <literal>.crossDrv</literal>.
This ugliness will disappear soon.
</para></note>
</section>
</chapter>

View file

@ -17,66 +17,6 @@
derivations or even the whole package set.
</para>
<section xml:id="sec-pkgs-overridePackages">
<title>pkgs.overridePackages</title>
<para>
This function inside the nixpkgs expression (<varname>pkgs</varname>)
can be used to override the set of packages itself.
</para>
<para>
Warning: this function is expensive and must not be used from within
the nixpkgs repository.
</para>
<para>
Example usage:
<programlisting>let
pkgs = import &lt;nixpkgs&gt; {};
newpkgs = pkgs.overridePackages (self: super: {
foo = super.foo.override { ... };
};
in ...</programlisting>
</para>
<para>
The resulting <varname>newpkgs</varname> will have the new <varname>foo</varname>
expression, and all other expressions depending on <varname>foo</varname> will also
use the new <varname>foo</varname> expression.
</para>
<para>
The behavior of this function is similar to <link
linkend="sec-modify-via-packageOverrides">config.packageOverrides</link>.
</para>
<para>
The <varname>self</varname> parameter refers to the final package set with the
applied overrides. Using this parameter may lead to infinite recursion if not
used consciously.
</para>
<para>
The <varname>super</varname> parameter refers to the old package set.
It's equivalent to <varname>pkgs</varname> in the above example.
</para>
<para>
Note that in previous versions of nixpkgs, this method replaced any changes from <link
linkend="sec-modify-via-packageOverrides">config.packageOverrides</link>,
along with that from previous calls if this function was called repeatedly.
Now those previous changes will be preserved so this function can be "chained" meaningfully.
To recover the old behavior, make sure <varname>config.packageOverrides</varname> is unset,
and call this only once off a "freshly" imported nixpkgs:
<programlisting>let
pkgs = import &lt;nixpkgs&gt; { config: {}; };
newpkgs = pkgs.overridePackages ...;
in ...</programlisting>
</para>
</section>
<section xml:id="sec-pkg-override">
<title>&lt;pkg&gt;.override</title>
@ -91,12 +31,12 @@
Example usages:
<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<programlisting>pkgs.overridePackages (self: super: {
<programlisting>import pkgs.path { overlays = [ (self: super: {
foo = super.foo.override { barSupport = true ; };
})</programlisting>
}) ]; }</programlisting>
<programlisting>mypkg = pkgs.callPackage ./mypkg.nix {
mydep = pkgs.mydep.override { ... };
})</programlisting>
}</programlisting>
</para>
<para>

View file

@ -737,18 +737,18 @@ in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.bl
```
The requested package `blaze` depends on `pandas` which itself depends on `scipy`.
If you want the whole of Nixpkgs to use your modifications, then you can use `pkgs.overridePackages`
If you want the whole of Nixpkgs to use your modifications, then you can use `overlays`
as explained in this manual. In the following example we build `inkscape` using a different version of `numpy`.
```
let
pkgs = import <nixpkgs> {};
newpkgs = pkgs.overridePackages ( pkgsself: pkgssuper: {
newpkgs = import pkgs.path { overlays = [ (pkgsself: pkgssuper: {
python27 = let
packageOverrides = self: super: {
numpy = super.numpy_1_10;
};
in pkgssuper.python27.override {inherit packageOverrides;};
} );
} ) ]; };
in newpkgs.inkscape
```
@ -804,6 +804,55 @@ If you want to create a Python environment for development, then the recommended
method is to use `nix-shell`, either with or without the `python.buildEnv`
function.
### How to consume Python modules using pip in a virtualenv like I am used to on other operating systems?
This is an example of a `default.nix` for a `nix-shell`, which allows you to consume a `virtualenv` environment
and install Python modules through `pip` the traditional way.
Create this `default.nix` file together with a `requirements.txt`, and simply execute `nix-shell`.
```
with import <nixpkgs> {};
with pkgs.python27Packages;
stdenv.mkDerivation {
name = "impurePythonEnv";
buildInputs = [
# these packages are required for virtualenv and pip to work:
#
python27Full
python27Packages.virtualenv
python27Packages.pip
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
taglib
openssl
git
libxml2
libxslt
libzip
stdenv
zlib ];
src = null;
shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s)
virtualenv --no-setuptools venv
export PATH=$PWD/venv/bin:$PATH
pip install -r requirements.txt
'';
}
```
Note that the `pip install` is an imperative action: every time `nix-shell`
is executed it will attempt to download the Python modules listed in
`requirements.txt`. However, these will be cached locally within the `virtualenv`
folder and not downloaded again.
## Contributing

View file

@ -26,9 +26,8 @@ bundlerEnv rec {
version = (import gemset).sensu.version;
inherit ruby;
gemfile = ./Gemfile;
lockfile = ./Gemfile.lock;
gemset = ./gemset.nix;
# expects Gemfile, Gemfile.lock and gemset.nix in the same directory
gemdir = ./.;
meta = with lib; {
description = "A monitoring framework that aims to be simple, malleable, and scalable";

View file

@ -13,11 +13,13 @@
<xi:include href="quick-start.xml" />
<xi:include href="stdenv.xml" />
<xi:include href="multiple-output.xml" />
<xi:include href="cross-compilation.xml" />
<xi:include href="configuration.xml" />
<xi:include href="functions.xml" />
<xi:include href="meta.xml" />
<xi:include href="languages-frameworks/index.xml" />
<xi:include href="package-notes.xml" />
<xi:include href="overlays.xml" />
<xi:include href="coding-conventions.xml" />
<xi:include href="submitting-changes.xml" />
<xi:include href="reviewing-contributions.xml" />

View file

@ -61,7 +61,7 @@ stdenv.mkDerivation {
builder = ./builder.sh;
src = fetchurl {
url = http://ftp.nluug.nl/gnu/binutils/binutils-2.16.1.tar.bz2;
md5 = "6a9d529efb285071dad10e1f3d2b2967";
sha256 = "1ian3kwh2vg6hr3ymrv48s04gijs539vzrq62xr76bxbhbwnz2np";
};
inherit noSysDirs;
configureFlags = "--target=arm-linux";
@ -81,11 +81,11 @@ Step 2: build kernel headers for the target architecture
assert stdenv.system == "i686-linux";
stdenv.mkDerivation {
name = "linux-headers-2.6.13.4-arm";
name = "linux-headers-2.6.13.1-arm";
builder = ./builder.sh;
src = fetchurl {
url = http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.13.4.tar.bz2;
md5 = "94768d7eef90a9d8174639b2a7d3f58d";
url = http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.13.1.tar.bz2;
sha256 = "12qxmc827fjhaz53kjy7vyrzsaqcg78amiqsb3qm20z26w705lma";
};
}
---
@ -152,9 +152,7 @@ stdenv.mkDerivation {
builder = ./builder.sh;
src = fetchurl {
url = ftp://ftp.nluug.nl/pub/gnu/gcc/gcc-4.0.2/gcc-core-4.0.2.tar.bz2;
md5 = "f7781398ada62ba255486673e6274b26";
#url = ftp://ftp.nluug.nl/pub/gnu/gcc/gcc-4.0.2/gcc-4.0.2.tar.bz2;
#md5 = "a659b8388cac9db2b13e056e574ceeb0";
sha256 = "02fxh0asflm8825w23l2jq1wvs7hbnam0jayrivg7zdv2ifnc0rc";
};
# !!! apply only if noSysDirs is set
patches = [./no-sys-dirs.patch ./gcc-inhibit.patch];

99
doc/overlays.xml Normal file
View file

@ -0,0 +1,99 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-overlays">
<title>Overlays</title>
<para>This chapter describes how to extend and change Nixpkgs packages using
overlays. Overlays are used to add layers in the fix-point used by Nixpkgs
to compose the set of all packages.</para>
<!--============================================================-->
<section xml:id="sec-overlays-install">
<title>Installing Overlays</title>
<para>The set of overlays is looked for in the following places. The
first one found is used, and all the rest are ignored:
<orderedlist>
<listitem>
<para>As an argument of the imported attribute set. When importing Nixpkgs,
the <varname>overlays</varname> attribute argument can be set to a list of
functions, which is described in <xref linkend="sec-overlays-layout"/>.</para>
</listitem>
<listitem>
<para>In the directory pointed to by the environment variable
<varname>NIXPKGS_OVERLAYS</varname>.</para>
</listitem>
<listitem>
<para>In the directory <filename>~/.nixpkgs/overlays/</filename>.</para>
</listitem>
</orderedlist>
</para>
<para>For the second and third options, the directory should contain Nix expressions defining the
overlays. Each overlay can be a file, a directory containing a
<filename>default.nix</filename>, or a symlink to one of those. The expressions should follow
the syntax described in <xref linkend="sec-overlays-layout"/>.</para>
<para>The order of the overlay layers can influence the recipe of packages if multiple layers override
the same recipe. In the case where overlays are loaded from a directory, they are loaded in
alphabetical order.</para>
<para>To install an overlay using the last option, you can clone the overlay's repository and add
a symbolic link to it in the <filename>~/.nixpkgs/overlays/</filename> directory.</para>
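<para>For example, the first option looks roughly like this (a sketch; the overlay body is whatever overrides you need):
<programlisting>import &lt;nixpkgs&gt; {
  overlays = [
    (self: super: { /* your overrides here */ })
  ];
}</programlisting>
</para>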
</section>
<!--============================================================-->
<section xml:id="sec-overlays-layout">
<title>Overlays Layout</title>
<para>Overlays are expressed as Nix functions which accept 2 arguments and return a set of
packages.</para>
<programlisting>
self: super:
{
boost = super.boost.override {
python = self.python3;
};
rr = super.callPackage ./pkgs/rr {
stdenv = self.stdenv_32bit;
};
}
</programlisting>
<para>The first argument, usually named <varname>self</varname>, corresponds to the final package
set. You should use this set for the dependencies of all packages specified in your
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
from <varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.</para>
<para>The second argument, usually named <varname>super</varname>,
corresponds to the result of the evaluation of the previous stages of
Nixpkgs. It does not contain any of the packages added by the current
overlay nor any of the following overlays. This set should be used either
to refer to packages you wish to override, or to access functions defined
in Nixpkgs. For example, the original recipe of <varname>boost</varname>
in the above example comes from <varname>super</varname>, as does the
<varname>callPackage</varname> function.</para>
<para>The value returned by this function should be a set similar to
<filename>pkgs/top-level/all-packages.nix</filename>, which contains
overridden and/or new packages.</para>
</section>
</chapter>

View file

@ -194,33 +194,52 @@ genericBuild
tools.</para></listitem>
</varlistentry>
</variablelist>
<variablelist>
<title>Variables specifying dependencies</title>
<varlistentry>
<term><varname>nativeBuildInputs</varname></term>
<listitem><para>
A list of dependencies used by the new derivation at <emphasis>build</emphasis>-time.
That is, these dependencies should not make it into the package's run-time closure, though this is currently not checked.
For each dependency <replaceable>dir</replaceable>, the directory <filename><replaceable>dir</replaceable>/bin</filename>, if it exists, is added to the <envar>PATH</envar> environment variable.
Other environment variables are also set up via a pluggable mechanism.
For instance, if <varname>nativeBuildInputs</varname> contains Perl, then the <filename>lib/site_perl</filename> subdirectory of each input is added to the <envar>PERL5LIB</envar> environment variable.
See <xref linkend="ssec-setup-hooks"/> for details.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>buildInputs</varname></term>
<listitem><para>A list of dependencies used by
<literal>stdenv</literal> to set up the environment for the build.
For each dependency <replaceable>dir</replaceable>, the directory
<filename><replaceable>dir</replaceable>/bin</filename>, if it
exists, is added to the <envar>PATH</envar> environment variable.
Other environment variables are also set up via a pluggable
mechanism. For instance, if <varname>buildInputs</varname>
contains Perl, then the <filename>lib/site_perl</filename>
subdirectory of each input is added to the <envar>PERL5LIB</envar>
environment variable. See <xref linkend="ssec-setup-hooks"/> for
details.</para></listitem>
<listitem><para>
A list of dependencies used by the new derivation at <emphasis>run</emphasis>-time.
Currently, the build-time environment is modified in the exact same way as with <varname>nativeBuildInputs</varname>.
This is problematic in that when cross-compiling, foreign executables can clobber native ones on the <envar>PATH</envar>.
Static linking is even more confusing.
A statically-linked library should be listed here because ultimately that generated machine code will be used at run-time, even though a derivation containing the object files or static archives will only be used at build-time.
A less confusing solution to this would be nice.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>propagatedNativeBuildInputs</varname></term>
<listitem><para>
Like <varname>nativeBuildInputs</varname>, but these dependencies are <emphasis>propagated</emphasis>:
that is, the dependencies listed here are added to the <varname>nativeBuildInputs</varname> of any package that uses <emphasis>this</emphasis> package as a dependency.
So if package Y has <literal>propagatedNativeBuildInputs = [X]</literal>, and package Z has <literal>nativeBuildInputs = [Y]</literal>, then package X will appear in Z's build environment automatically.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>propagatedBuildInputs</varname></term>
<listitem><para>Like <varname>buildInputs</varname>, but these
dependencies are <emphasis>propagated</emphasis>: that is, the
dependencies listed here are added to the
<varname>buildInputs</varname> of any package that uses
<emphasis>this</emphasis> package as a dependency. So if package
Y has <literal>propagatedBuildInputs = [X]</literal>, and package
Z has <literal>buildInputs = [Y]</literal>, then package X will
appear in Zs build environment automatically.</para></listitem>
<listitem><para>
Like <varname>buildInputs</varname>, but propagated just like <varname>propagatedNativeBuildInputs</varname>.
This inherits <varname>buildInputs</varname>'s flaws of clobbering native executables when cross-compiling and being confusing for static linking.
</para></listitem>
</varlistentry>
</variablelist>
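<para>
A short sketch of how propagation plays out, using the hypothetical packages X, Y, and Z from the descriptions above:
<programlisting># Y propagates X to its users:
Y = stdenv.mkDerivation { name = "Y"; propagatedBuildInputs = [ X ]; /* ... */ };

# Z lists only Y ...
Z = stdenv.mkDerivation { name = "Z"; buildInputs = [ Y ]; /* ... */ };

# ... yet both X and Y end up in Z's build environment.</programlisting>
</para>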
@ -322,7 +341,7 @@ executed and in what order:
$preInstallPhases installPhase fixupPhase $preDistPhases
distPhase $postPhases</literal>.
</para>
<para>Usually, if you just want to add a few phases, it's more
convenient to set one of the variables below (such as
<varname>preInstallPhases</varname>), as you then don't specify
@ -706,7 +725,7 @@ makeFlagsArray=(CFLAGS="-O0 -g" LDFLAGS="-lfoo -lbar")
</variablelist>
<para>
<para>
You can set flags for <command>make</command> through the
<varname>makeFlags</varname> variable.</para>
@ -773,7 +792,7 @@ doCheck = true;</programlisting>
</variablelist>
</section>
@ -840,12 +859,12 @@ install phase. The default <function>fixupPhase</function> does the
following:
<itemizedlist>
<listitem><para>It moves the <filename>man/</filename>,
<filename>doc/</filename> and <filename>info/</filename>
subdirectories of <envar>$out</envar> to
<filename>share/</filename>.</para></listitem>
<listitem><para>It strips libraries and executables of debug
information.</para></listitem>
@ -1091,13 +1110,13 @@ functions.</para>
<variablelist>
<varlistentry xml:id='fun-substitute'>
<term><function>substitute</function>
<replaceable>infile</replaceable>
<replaceable>outfile</replaceable>
<replaceable>subs</replaceable></term>
<listitem>
<para>Performs string substitution on the contents of
<replaceable>infile</replaceable>, writing the result to
@ -1125,7 +1144,7 @@ functions.</para>
<literal>@<replaceable>...</replaceable>@</literal> in the
template as placeholders.</para></listitem>
</varlistentry>
<varlistentry>
<term><option>--subst-var-by</option>
<replaceable>varName</replaceable>
@ -1134,7 +1153,7 @@ functions.</para>
<literal>@<replaceable>varName</replaceable>@</literal> by
the string <replaceable>s</replaceable>.</para></listitem>
</varlistentry>
</variablelist>
</para>
@ -1162,7 +1181,7 @@ substitute ./foo.in ./foo.out \
</listitem>
</varlistentry>
<varlistentry xml:id='fun-substituteInPlace'>
<term><function>substituteInPlace</function>
@ -1173,7 +1192,7 @@ substitute ./foo.in ./foo.out \
<replaceable>file</replaceable>.</para></listitem>
</varlistentry>
<varlistentry xml:id='fun-substituteAll'>
<term><function>substituteAll</function>
<replaceable>infile</replaceable>
@ -1233,7 +1252,7 @@ echo @foo@
<listitem><para>Strips the directory and hash part of a store
path, outputting the name part to <literal>stdout</literal>.
For example:
<programlisting>
# prints coreutils-8.24
stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
@ -1241,7 +1260,7 @@ stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
If you wish to store the result in another variable, then the
following idiom may be useful:
<programlisting>
name="/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
someVar=$(stripHash $name)
@ -1250,7 +1269,7 @@ someVar=$(stripHash $name)
</para></listitem>
</varlistentry>
</variablelist>
</section>
@ -1401,8 +1420,15 @@ These can be toggled using the <varname>stdenv.mkDerivation</varname> parameters
<varname>hardeningDisable</varname> and <varname>hardeningEnable</varname>.
</para>
<para>The following flags are enabled by default and might require disabling
if the program to package is incompatible.
<para>
Both parameters take a list of flags as strings. The special
<varname>"all"</varname> flag can be passed to <varname>hardeningDisable</varname>
to turn off all hardening. These flags can also be used as environment variables
for testing or development purposes.
</para>
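<para>
For instance, to turn off all hardening for a single package, a minimal sketch (the package name is hypothetical):
<programlisting>stdenv.mkDerivation {
  name = "fragile-package-1.0";
  # ...
  hardeningDisable = [ "all" ];
}</programlisting>
</para>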
<para>The following flags are enabled by default and might require disabling with
<varname>hardeningDisable</varname> if the program to package is incompatible.
</para>
<variablelist>
@ -1563,7 +1589,8 @@ intel_drv.so: undefined symbol: vgaHWFreeHWRec
</variablelist>
<para>The following flags are disabled by default and should be enabled
for packages that take untrusted input, like network services.
with <varname>hardeningEnable</varname> for packages that take untrusted
input like network services.
</para>
<variablelist>
@ -1599,4 +1626,3 @@ Arch Wiki</link>.
</section>
</chapter>

View file

@ -191,6 +191,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
free = false;
};
eupl11 = spdx {
spdxId = "EUPL-1.1";
fullname = "European Union Public License 1.1";
};
fdl12 = spdx {
spdxId = "GFDL-1.2";
fullName = "GNU Free Documentation License v1.2";

View file

@ -27,6 +27,7 @@
akaWolf = "Artjom Vejsel <akawolf0@gmail.com>";
akc = "Anders Claesson <akc@akc.is>";
algorith = "Dries Van Daele <dries_van_daele@telenet.be>";
alibabzo = "Alistair Bill <alistair.bill@gmail.com>";
all = "Nix Committers <nix-commits@lists.science.uu.nl>";
ambrop72 = "Ambroz Bizjak <ambrop7@gmail.com>";
amiddelk = "Arie Middelkoop <amiddelk@gmail.com>";
@ -102,6 +103,7 @@
corngood = "David McFarland <corngood@gmail.com>";
coroa = "Jonas Hörsch <jonas@chaoflow.net>";
couchemar = "Andrey Pavlov <couchemar@yandex.ru>";
cpages = "Carles Pagès <page@ruiec.cat>";
cransom = "Casey Ransom <cransom@hubns.net>";
cryptix = "Henry Bubert <cryptix@riseup.net>";
CrystalGamma = "Jona Stubbe <nixos@crystalgamma.de>";
@ -221,9 +223,11 @@
joamaki = "Jussi Maki <joamaki@gmail.com>";
joelmo = "Joel Moberg <joel.moberg@gmail.com>";
joelteon = "Joel Taylor <me@joelt.io>";
johbo = "Johannes Bornhold <johannes@bornhold.name>";
joko = "Ioannis Koutras <ioannis.koutras@gmail.com>";
jonafato = "Jon Banafato <jon@jonafato.com>";
jpbernardy = "Jean-Philippe Bernardy <jeanphilippe.bernardy@gmail.com>";
jpierre03 = "Jean-Pierre PRUNARET <nix@prunetwork.fr>";
jraygauthier = "Raymond Gauthier <jraygauthier@gmail.com>";
juliendehos = "Julien Dehos <dehos@lisic.univ-littoral.fr>";
jwiegley = "John Wiegley <johnw@newartisans.com>";
@ -247,6 +251,7 @@
ldesgoui = "Lucas Desgouilles <ldesgoui@gmail.com>";
league = "Christopher League <league@contrapunctus.net>";
lebastr = "Alexander Lebedev <lebastr@gmail.com>";
leemachin = "Lee Machin <me@mrl.ee>";
leenaars = "Michiel Leenaars <ml.software@leenaa.rs>";
leonardoce = "Leonardo Cecchi <leonardo.cecchi@gmail.com>";
lethalman = "Luca Bruno <lucabru@src.gnome.org>";
@ -286,6 +291,7 @@
mbbx6spp = "Susan Potter <me@susanpotter.net>";
mbe = "Brandon Edens <brandonedens@gmail.com>";
mboes = "Mathieu Boespflug <mboes@tweag.net>";
mbrgm = "Marius Bergmann <marius@yeai.de>";
mcmtroffaes = "Matthias C. M. Troffaes <matthias.troffaes@gmail.com>";
mdaiter = "Matthew S. Daiter <mdaiter8121@gmail.com>";
meditans = "Carlo Nucera <meditans@gmail.com>";
@ -331,6 +337,7 @@
nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>";
nico202 = "Nicolò Balzarotti <anothersms@gmail.com>";
NikolaMandic = "Ratko Mladic <nikola@mandic.email>";
nixy = "Andrew R. M. <andrewmiller237@gmail.com>";
notthemessiah = "Brian Cohen <brian.cohen.88@gmail.com>";
np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>";
nslqqq = "Nikita Mikhailov <nslqqq@gmail.com>";
@ -347,7 +354,6 @@
osener = "Ozan Sener <ozan@ozansener.com>";
otwieracz = "Slawomir Gonet <slawek@otwiera.cz>";
oxij = "Jan Malakhovski <oxij@oxij.org>";
page = "Carles Pagès <page@cubata.homelinux.net>";
paholg = "Paho Lurie-Gregg <paho@paholg.com>";
pakhfn = "Fedor Pakhomov <pakhfn@gmail.com>";
palo = "Ingolf Wanger <palipalo9@googlemail.com>";

View file

@ -37,6 +37,11 @@
first disable network-manager with
<command>systemctl stop network-manager</command>.</para></listitem>
<listitem><para>If you would like to continue the installation from a different
machine, you need to activate the SSH daemon via <literal>systemctl start sshd</literal>.
In order to be able to log in, you also need to set a password for
<literal>root</literal> using <literal>passwd</literal>.</para></listitem>
<listitem><para>The NixOS installer doesn't do any partitioning or
formatting yet, so you need to do that yourself. Use the following
commands:

View file

@ -11,7 +11,9 @@ has the following highlights: </para>
<itemizedlist>
<listitem>
<para></para>
<para>Nixpkgs is now extensible through overlays. See the <link
xlink:href="https://nixos.org/nixpkgs/manual/#sec-overlays-install">Nixpkgs
manual</link> for more information.</para>
</listitem>
</itemizedlist>
@ -28,6 +30,23 @@ has the following highlights: </para>
following incompatible changes:</para>
<itemizedlist>
<listitem>
<para>
Cross compilation has been rewritten. See the nixpkgs manual for
details. The most obvious breaking change is that derivations lacking a
<literal>.nativeDrv</literal> or <literal>.crossDrv</literal> attribute are now
cross by default, not native.
</para>
</listitem>
<listitem>
<para>
<literal>stdenv.overrides</literal> is now expected to take <literal>self</literal>
and <literal>super</literal> arguments. See <literal>lib.trivial.extends</literal>
for what those parameters represent.
</para>
</listitem>
<listitem>
<para>
<literal>gnome</literal> alias has been removed along with
@ -88,6 +107,45 @@ following incompatible changes:</para>
<literal>networking.timeServers</literal>.
</para>
</listitem>
<listitem>
<para>The <literal>overridePackages</literal> function no longer exists.
It is replaced by <link
xlink:href="https://nixos.org/nixpkgs/manual/#sec-overlays-install">
overlays</link>. For example, the following code:
<programlisting>
let
pkgs = import &lt;nixpkgs&gt; {};
in
pkgs.overridePackages (self: super: ...)
</programlisting>
should be replaced by:
<programlisting>
let
pkgs = import &lt;nixpkgs&gt; {};
in
import pkgs.path { overlays = [ (self: super: ...) ]; }
</programlisting>
</para>
</listitem>
<listitem>
<para>
Autoloading connection tracking helpers is now disabled by default.
This default was also changed in the Linux kernel and is considered
insecure if not configured properly in your firewall. If you need
connection tracking helpers (e.g. for active FTP), please enable
<literal>networking.firewall.autoLoadConntrackHelpers</literal> and
tune <literal>networking.firewall.connectionTrackingModules</literal>
to suit your needs.
</para>
</listitem>
</itemizedlist>

View file

@ -19,7 +19,7 @@ rm -f ec2-amis.nix
types="hvm pv"
stores="ebs s3"
regions="eu-west-1 eu-central-1 us-east-1 us-east-2 us-west-1 us-west-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2 sa-east-1 ap-south-1"
regions="eu-west-1 eu-west-2 eu-central-1 us-east-1 us-east-2 us-west-1 us-west-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2 sa-east-1 ap-south-1"
for type in $types; do
link=$stateDir/$type

View file

@ -13,7 +13,7 @@ let
resolvconfOptions = cfg.resolvconfOptions
++ optional cfg.dnsSingleRequest "single-request"
++ optional cfg.dnsExtensionMechanism "ends0";
++ optional cfg.dnsExtensionMechanism "edns0";
in
{

View file

@ -160,6 +160,13 @@ in {
if activated.
'';
};
config = mkOption {
type = types.attrsOf types.unspecified;
default = {};
description = ''Config of the pulse daemon. See <literal>man pulse-daemon.conf</literal>.'';
example = literalExample ''{ flat-volumes = "no"; }'';
};
};
zeroconf = {
@ -204,10 +211,13 @@ in {
(mkIf cfg.enable {
environment.systemPackages = [ overriddenPackage ];
environment.etc = singleton {
target = "asound.conf";
source = alsaConf;
};
environment.etc = [
{ target = "asound.conf";
source = alsaConf; }
{ target = "pulse/daemon.conf";
source = writeText "daemon.conf" (lib.generators.toKeyValue {} cfg.daemon.config); }
];
# Allow PulseAudio to get realtime priority using rtkit.
security.rtkit.enable = true;

View file

@ -0,0 +1,40 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.hardware.ckb;
in
{
options.hardware.ckb = {
enable = mkEnableOption "the Corsair keyboard/mouse driver";
package = mkOption {
type = types.package;
default = pkgs.ckb;
defaultText = "pkgs.ckb";
description = ''
The package implementing the Corsair keyboard/mouse driver.
'';
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
systemd.services.ckb = {
description = "Corsair Keyboard Daemon";
wantedBy = ["multi-user.target"];
script = "${cfg.package}/bin/ckb-daemon";
serviceConfig = {
Restart = "always";
StandardOutput = "syslog";
};
};
};
meta = {
maintainers = with lib.maintainers; [ kierdavis ];
};
}

View file

@ -96,7 +96,7 @@ in
example = literalExample "with pkgs; [ vaapiIntel libvdpau-va-gl vaapiVdpau ]";
description = ''
Additional packages to add to OpenGL drivers. This can be used
to add additional VA-API/VDPAU drivers.
to add OpenCL drivers, VA-API/VDPAU drivers etc.
'';
};
@ -107,7 +107,7 @@ in
description = ''
Additional packages to add to 32-bit OpenGL drivers on
64-bit systems. Used when <option>driSupport32Bit</option> is
set. This can be used to add additional VA-API/VDPAU drivers.
set. This can be used to add OpenCL drivers, VA-API/VDPAU drivers etc.
'';
};

View file

@ -10,6 +10,11 @@ let
check = x: (lib.types.package.check x) && (attrByPath ["meta" "isIbusEngine"] false x);
};
impanel =
if cfg.panel != null
then "--panel=${cfg.panel}"
else "";
ibusAutostart = pkgs.writeTextFile {
name = "autostart-ibus-daemon";
destination = "/etc/xdg/autostart/ibus-daemon.desktop";
@ -17,7 +22,7 @@ let
[Desktop Entry]
Name=IBus
Type=Application
Exec=${ibusPackage}/bin/ibus-daemon --daemonize --xim
Exec=${ibusPackage}/bin/ibus-daemon --daemonize --xim ${impanel}
'';
};
in
@ -36,6 +41,12 @@ in
in
"Enabled IBus engines. Available engines are: ${engines}.";
};
panel = mkOption {
type = with types; nullOr path;
default = null;
example = literalExample "''${pkgs.kde5.plasma-desktop}/lib/libexec/kimpanel-ibus-panel";
description = "Replace the IBus panel with another panel.";
};
};
};

View file

@ -7,9 +7,4 @@
imports =
[ ./installation-cd-base.nix
];
environment.systemPackages =
[
pkgs.vim
];
}

View file

@ -1,5 +1,5 @@
{
x86_64-linux = "/nix/store/m8z91vpfxyszhjpq4wl8m1zwlqik4fkn-nix-1.11.5";
i686-linux = "/nix/store/vk71likl32igqg6apqsj52ln3vhkq1pa-nix-1.11.5";
x86_64-darwin = "/nix/store/qfwm0b5qkr8v8gsv9dh2z3arky9p1myg-nix-1.11.5";
x86_64-linux = "/nix/store/qdkzm17csr24snk247a1s0c47ikq5sl6-nix-1.11.6";
i686-linux = "/nix/store/hiwp53747lxlniqy5wpbql5izjrs8z0z-nix-1.11.6";
x86_64-darwin = "/nix/store/hca2hqcvwncf23hiqyqgwbsdy8vvl9xv-nix-1.11.6";
}

View file

@ -282,6 +282,10 @@
infinoted = 264;
keystone = 265;
glance = 266;
couchpotato = 267;
gogs = 268;
pdns-recursor = 269;
kresd = 270;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -534,6 +538,9 @@
infinoted = 264;
keystone = 265;
glance = 266;
couchpotato = 267;
gogs = 268;
kresd = 270;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View file

@ -29,11 +29,19 @@ let
};
configType = mkOptionType {
name = "nixpkgs config";
name = "nixpkgs-config";
description = "nixpkgs config";
check = traceValIfNot isConfig;
merge = args: fold (def: mergeConfig def.value) {};
};
overlayType = mkOptionType {
name = "nixpkgs-overlay";
description = "nixpkgs overlay";
check = builtins.isFunction;
merge = lib.mergeOneOption;
};
in
{
@ -43,23 +51,37 @@ in
default = {};
example = literalExample
''
{ firefox.enableGeckoMediaPlayer = true;
packageOverrides = pkgs: {
firefox60Pkgs = pkgs.firefox60Pkgs.override {
enableOfficialBranding = true;
};
};
}
{ firefox.enableGeckoMediaPlayer = true; }
'';
type = configType;
description = ''
The configuration of the Nix Packages collection. (For
details, see the Nixpkgs documentation.) It allows you to set
package configuration options, and to override packages
globally through the <varname>packageOverrides</varname>
option. The latter is a function that takes as an argument
the <emphasis>original</emphasis> Nixpkgs, and must evaluate
to a set of new or overridden packages.
package configuration options.
'';
};
nixpkgs.overlays = mkOption {
default = [];
example = literalExample
''
[ (self: super: {
openssh = super.openssh.override {
hpnSupport = true;
withKerberos = true;
kerberos = self.libkrb5;
};
};
) ]
'';
type = types.listOf overlayType;
description = ''
List of overlays to use with the Nix Packages collection.
(For details, see the Nixpkgs documentation.) It allows
you to override packages globally. Each overlay is a function
that takes two arguments: the first should be used for looking up
dependencies in the final package set, and the second refers to the
previous stages of Nixpkgs and should be used for overriding recipes.
'';
};

View file

@ -26,6 +26,7 @@
./config/vpnc.nix
./config/zram.nix
./hardware/all-firmware.nix
./hardware/ckb.nix
./hardware/cpu/amd-microcode.nix
./hardware/cpu/intel-microcode.nix
./hardware/ksm.nix
@ -66,6 +67,7 @@
./programs/bash/bash.nix
./programs/blcr.nix
./programs/cdemu.nix
./programs/chromium.nix
./programs/command-not-found/command-not-found.nix
./programs/dconf.nix
./programs/environment.nix
@ -210,6 +212,7 @@
./services/logging/awstats.nix
./services/logging/fluentd.nix
./services/logging/graylog.nix
./services/logging/journalbeat.nix
./services/logging/klogd.nix
./services/logging/logcheck.nix
./services/logging/logrotate.nix
@ -241,6 +244,7 @@
./services/misc/cpuminer-cryptonight.nix
./services/misc/cgminer.nix
./services/misc/confd.nix
./services/misc/couchpotato.nix
./services/misc/devmon.nix
./services/misc/dictd.nix
./services/misc/dysnomia.nix
@ -255,6 +259,7 @@
#./services/misc/gitit.nix
./services/misc/gitlab.nix
./services/misc/gitolite.nix
./services/misc/gogs.nix
./services/misc/gpsd.nix
./services/misc/ihaskell.nix
./services/misc/leaps.nix
@ -294,6 +299,7 @@
./services/misc/uhub.nix
./services/misc/zookeeper.nix
./services/monitoring/apcupsd.nix
./services/monitoring/arbtt.nix
./services/monitoring/bosun.nix
./services/monitoring/cadvisor.nix
./services/monitoring/collectd.nix
@ -307,6 +313,7 @@
./services/monitoring/monit.nix
./services/monitoring/munin.nix
./services/monitoring/nagios.nix
./services/monitoring/netdata.nix
./services/monitoring/prometheus/default.nix
./services/monitoring/prometheus/alertmanager.nix
./services/monitoring/prometheus/blackbox-exporter.nix
@ -326,6 +333,7 @@
./services/monitoring/telegraf.nix
./services/monitoring/ups.nix
./services/monitoring/uptime.nix
./services/monitoring/vnstat.nix
./services/monitoring/zabbix-agent.nix
./services/monitoring/zabbix-server.nix
./services/network-filesystems/cachefilesd.nix
@ -364,6 +372,7 @@
./services/networking/dhcpd.nix
./services/networking/dnschain.nix
./services/networking/dnscrypt-proxy.nix
./services/networking/dnscrypt-wrapper.nix
./services/networking/dnsmasq.nix
./services/networking/ejabberd.nix
./services/networking/fan.nix
@ -390,6 +399,7 @@
./services/networking/iodine.nix
./services/networking/ircd-hybrid/default.nix
./services/networking/kippo.nix
./services/networking/kresd.nix
./services/networking/lambdabot.nix
./services/networking/libreswan.nix
./services/networking/logmein-hamachi.nix
@ -420,6 +430,7 @@
./services/networking/pdnsd.nix
./services/networking/polipo.nix
./services/networking/powerdns.nix
./services/networking/pdns-recursor.nix
./services/networking/pptpd.nix
./services/networking/prayer.nix
./services/networking/privoxy.nix

View file

@ -45,8 +45,13 @@ with lib;
"Type `systemctl start display-manager' to\nstart the graphical user interface."}
'';
# Allow sshd to be started manually through "start sshd".
services.openssh.enable = true;
# Allow sshd to be started manually through "systemctl start sshd".
services.openssh = {
enable = true;
# Allow password login to the installation, if the user sets a password via "passwd"
# It is safe as root doesn't have a password by default and SSH is disabled by default
permitRootLogin = "yes";
};
systemd.services.sshd.wantedBy = mkOverride 50 [];
# Enable wpa_supplicant, but don't start it by default.
@ -66,9 +71,8 @@ with lib;
boot.kernel.sysctl."vm.overcommit_memory" = "1";
# To speed up installation a little bit, include the complete
# stdenv in the Nix store on the CD. Archive::Cpio is needed for
# the initrd builder.
system.extraDependencies = [ pkgs.stdenv pkgs.busybox pkgs.perlPackages.ArchiveCpio ];
# stdenv in the Nix store on the CD.
system.extraDependencies = with pkgs; [ stdenv stdenvNoCC busybox ];
# Show all debug messages from the kernel but don't log refused packets
# because we have the firewall enabled. This makes installs from the
@ -76,5 +80,6 @@ with lib;
boot.consoleLogLevel = mkDefault 7;
networking.firewall.logRefusedConnections = mkDefault false;
environment.systemPackages = [ pkgs.vim ];
};
}

View file

@ -0,0 +1,85 @@
{ config, lib, ... }:
with lib;
let
cfg = config.programs.chromium;
defaultProfile = filterAttrs (k: v: v != null) {
HomepageLocation = cfg.homepageLocation;
DefaultSearchProviderSearchURL = cfg.defaultSearchProviderSearchURL;
DefaultSearchProviderSuggestURL = cfg.defaultSearchProviderSuggestURL;
ExtensionInstallForcelist = map (extension:
"${extension};https://clients2.google.com/service/update2/crx"
) cfg.extensions;
};
in
{
###### interface
options = {
programs.chromium = {
enable = mkEnableOption "<command>chromium</command> policies";
extensions = mkOption {
type = types.listOf types.str;
description = ''
List of Chromium extensions to install.
To find an extension's ID, look at the URL of its page on the
<link xlink:href="https://chrome.google.com/webstore/category/extensions">Chrome Web Store</link>.
'';
default = [];
example = literalExample ''
[
"chlffgpmiacpedhhbkiomidkjlcfhogd" # pushbullet
"mbniclmhobmnbdlbpiphghaielnnpgdp" # lightshot
"gcbommkclmclpchllfjekcdonpmejbdp" # https everywhere
]
'';
};
homepageLocation = mkOption {
type = types.nullOr types.str;
description = "Chromium default homepage";
default = null;
example = "https://nixos.org";
};
defaultSearchProviderSearchURL = mkOption {
type = types.nullOr types.str;
description = "Chromium default search provider url.";
default = null;
example =
"https://encrypted.google.com/search?q={searchTerms}&{google:RLZ}{google:originalQueryForSuggestion}{google:assistedQueryStats}{google:searchFieldtrialParameter}{google:
searchClient}{google:sourceId}{google:instantExtendedEnabledParameter}ie={inputEncoding}";
};
defaultSearchProviderSuggestURL = mkOption {
type = types.nullOr types.str;
description = "Chromium default search provider url for suggestions.";
default = null;
example =
"https://encrypted.google.com/complete/search?output=chrome&q={searchTerms}";
};
extraOpts = mkOption {
type = types.attrs;
description = ''
Extra chromium policy options, see
<link xlink:href="https://www.chromium.org/administrators/policy-list-3">https://www.chromium.org/administrators/policy-list-3</link>
for a list of available options
'';
default = {};
};
};
};
###### implementation
config = lib.mkIf cfg.enable {
environment.etc."chromium/policies/managed/default.json".text = builtins.toJSON defaultProfile;
environment.etc."chromium/policies/managed/extra.json".text = builtins.toJSON cfg.extraOpts;
};
}

View file

@ -11,6 +11,7 @@ with lib;
default = true;
description = ''
Whether to enable manual pages and the <command>man</command> command.
This also includes "man" outputs of all <literal>systemPackages</literal>.
'';
};

View file

@ -1,4 +1,4 @@
{ config, lib, ... }:
{ config, lib, pkgs, ... }:
let
cfg = config.programs.nano;
@ -20,16 +20,22 @@ in
example = ''
set nowrap
set tabstospaces
set tabsize 4
set tabsize 2
'';
};
syntaxHighlight = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to enable syntax highlight for various languages.";
};
};
};
###### implementation
config = lib.mkIf (cfg.nanorc != "") {
environment.etc."nanorc".text = cfg.nanorc;
environment.etc."nanorc".text = lib.concatStrings [ cfg.nanorc
(lib.optionalString cfg.syntaxHighlight ''include "${pkgs.nano}/share/nano/*.nanorc"'') ];
};
}

View file

@ -123,11 +123,6 @@ in
setopt HIST_IGNORE_DUPS SHARE_HISTORY HIST_FCNTL_LOCK
${cfge.interactiveShellInit}
${cfg.promptInit}
${zshAliases}
# Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions)
@ -143,6 +138,12 @@ in
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
${zshAliases}
${cfg.promptInit}
${cfge.interactiveShellInit}
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
'';

View file

@ -17,6 +17,7 @@ with lib;
(mkRenamedOptionModule [ "services" "elasticsearch" "host" ] [ "services" "elasticsearch" "listenAddress" ])
(mkRenamedOptionModule [ "services" "graphite" "api" "host" ] [ "services" "graphite" "api" "listenAddress" ])
(mkRenamedOptionModule [ "services" "graphite" "web" "host" ] [ "services" "graphite" "web" "listenAddress" ])
(mkRenamedOptionModule [ "services" "logstash" "address" ] [ "services" "logstash" "listenAddress" ])
(mkRenamedOptionModule [ "services" "kibana" "host" ] [ "services" "kibana" "listenAddress" ])
(mkRenamedOptionModule [ "services" "mpd" "network" "host" ] [ "services" "mpd" "network" "listenAddress" ])
(mkRenamedOptionModule [ "services" "neo4j" "host" ] [ "services" "neo4j" "listenAddress" ])
@ -163,6 +164,9 @@ with lib;
else { addr = value inetAddr; port = value inetPort; }
))
# dhcpd
(mkRenamedOptionModule [ "services" "dhcpd" ] [ "services" "dhcpd4" ])
# Options that are obsolete and have no replacement.
(mkRemovedOptionModule [ "boot" "initrd" "luks" "enable" ] "")
(mkRemovedOptionModule [ "programs" "bash" "enable" ] "")

View file

@ -284,6 +284,8 @@ in
OnCalendar = cfg.renewInterval;
Unit = "acme-${cert}.service";
Persistent = "yes";
AccuracySec = "5m";
RandomizedDelaySec = "1h";
};
})
);

View file

@ -737,6 +737,8 @@ in {
wantedBy = [ "multi-user.target" ];
after = [ "kube-apiserver.service" ];
serviceConfig = {
RestartSec = "30s";
Restart = "on-failure";
ExecStart = ''${cfg.package}/bin/kube-controller-manager \
--address=${cfg.controllerManager.address} \
--port=${toString cfg.controllerManager.port} \

View file

@ -14,6 +14,31 @@ let
read-data=${factorio}/share/factorio/data
write-data=${stateDir}
'';
serverSettings = {
name = cfg.game-name;
description = cfg.description;
visibility = {
public = cfg.public;
lan = cfg.lan;
};
username = cfg.username;
password = cfg.password;
token = cfg.token;
game_password = cfg.game-password;
require_user_verification = true;
max_upload_in_kilobytes_per_second = 0;
minimum_latency_in_ticks = 0;
ignore_player_limit_for_returning_players = false;
allow_commands = "admins-only";
autosave_interval = cfg.autosave-interval;
autosave_slots = 5;
afk_autokick_interval = 0;
auto_pause = true;
only_admins_can_pause_the_game = true;
autosave_only_on_server = true;
admins = [];
};
serverSettingsFile = pkgs.writeText "server-settings.json" (builtins.toJSON (filterAttrsRecursive (n: v: v != null) serverSettings));
modDir = pkgs.factorio-mkModDirDrv cfg.mods;
in
{
@ -67,12 +92,68 @@ in
derivations via nixos-channel. Until then, this is for experts only.
'';
};
game-name = mkOption {
type = types.nullOr types.string;
default = "Factorio Game";
description = ''
Name of the game as it will appear in the game listing.
'';
};
description = mkOption {
type = types.nullOr types.string;
default = "";
description = ''
Description of the game that will appear in the listing.
'';
};
public = mkOption {
type = types.bool;
default = false;
description = ''
Game will be published on the official Factorio matching server.
'';
};
lan = mkOption {
type = types.bool;
default = false;
description = ''
Game will be broadcast on LAN.
'';
};
username = mkOption {
type = types.nullOr types.string;
default = null;
description = ''
Your factorio.com login credentials. Required for games with visibility public.
'';
};
password = mkOption {
type = types.nullOr types.string;
default = null;
description = ''
Your factorio.com login credentials. Required for games with visibility public.
'';
};
token = mkOption {
type = types.nullOr types.string;
default = null;
description = ''
Authentication token. May be used instead of 'password' above.
'';
};
game-password = mkOption {
type = types.nullOr types.string;
default = null;
description = ''
Game password.
'';
};
autosave-interval = mkOption {
type = types.nullOr types.int;
default = null;
example = 2;
example = 10;
description = ''
The time, in minutes, between autosaves.
Autosave interval in minutes.
'';
};
};
@ -120,8 +201,8 @@ in
"--config=${cfg.configFile}"
"--port=${toString cfg.port}"
"--start-server=${mkSavePath cfg.saveName}"
"--server-settings=${serverSettingsFile}"
(optionalString (cfg.mods != []) "--mod-directory=${modDir}")
(optionalString (cfg.autosave-interval != null) "--autosave-interval ${toString cfg.autosave-interval}")
];
};
};

View file

@ -143,7 +143,10 @@ let
done
echo "Generating hwdb database..."
${udev}/bin/udevadm hwdb --update --root=$(pwd)
# hwdb --update doesn't return error code even on errors!
res="$(${udev}/bin/udevadm hwdb --update --root=$(pwd) 2>&1)"
echo "$res"
[ -z "$(echo "$res" | egrep '^Error')" ]
mv etc/udev/hwdb.bin $out
'';

View file

@ -0,0 +1,76 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.journalbeat;
journalbeatYml = pkgs.writeText "journalbeat.yml" ''
name: ${cfg.name}
tags: ${builtins.toJSON cfg.tags}
journalbeat.cursor_state_file: ${cfg.stateDir}/cursor-state
${cfg.extraConfig}
'';
in
{
options = {
services.journalbeat = {
enable = mkEnableOption "journalbeat";
name = mkOption {
type = types.str;
default = "journalbeat";
description = "Name of the beat";
};
tags = mkOption {
type = types.listOf types.str;
default = [];
description = "Tags to place on the shipped log messages";
};
stateDir = mkOption {
type = types.str;
default = "/var/lib/journalbeat";
description = "The state directory. Journalbeat's own logs and other data are stored here.";
};
extraConfig = mkOption {
type = types.lines;
default = ''
journalbeat:
seek_position: cursor
cursor_seek_fallback: tail
write_cursor_state: true
cursor_flush_period: 5s
clean_field_names: true
convert_to_numbers: false
move_metadata_to_field: journal
default_type: journal
'';
description = "Any other configuration options you want to add";
};
};
};
config = mkIf cfg.enable {
systemd.services.journalbeat = with pkgs; {
description = "Journalbeat log shipper";
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -p ${cfg.stateDir}/data
mkdir -p ${cfg.stateDir}/logs
'';
serviceConfig = {
ExecStart = "${pkgs.journalbeat}/bin/journalbeat -c ${journalbeatYml} -path.data ${cfg.stateDir}/data -path.logs ${cfg.stateDir}/logs";
};
};
};
}

View file

@ -63,7 +63,7 @@ in
description = "Enable the logstash web interface.";
};
address = mkOption {
listenAddress = mkOption {
type = types.str;
default = "0.0.0.0";
description = "Address on which to start webserver.";
@ -77,7 +77,7 @@ in
inputConfig = mkOption {
type = types.lines;
default = ''stdin { type => "example" }'';
default = ''generator { }'';
description = "Logstash input configuration.";
example = ''
# Read from journal
@ -90,7 +90,7 @@ in
filterConfig = mkOption {
type = types.lines;
default = ''noop {}'';
default = "";
description = "logstash filter configuration.";
example = ''
if [type] == "syslog" {
@ -108,11 +108,11 @@ in
outputConfig = mkOption {
type = types.lines;
default = ''stdout { debug => true debug_format => "json"}'';
default = ''stdout { codec => rubydebug }'';
description = "Logstash output configuration.";
example = ''
redis { host => "localhost" data_type => "list" key => "logstash" codec => json }
elasticsearch { embedded => true }
redis { host => ["localhost"] data_type => "list" key => "logstash" codec => json }
elasticsearch { }
'';
};
@ -147,7 +147,7 @@ in
${cfg.outputConfig}
}
''} " +
ops cfg.enableWeb "-- web -a ${cfg.address} -p ${cfg.port}";
ops cfg.enableWeb "-- web -a ${cfg.listenAddress} -p ${cfg.port}";
};
};
};
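With the renamed option, a configuration might look like this sketch (the address is illustrative):

  services.logstash = {
    enable = true;
    enableWeb = true;
    listenAddress = "127.0.0.1";   # previously called `address`
  };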

View file

@ -241,6 +241,9 @@ in
RuntimeDirectory = [ "dovecot2" ];
};
# When copying sieve scripts preserve the original time stamp
# (should be 0) so that the compiled sieve script is newer than
# the source file and Dovecot won't try to compile it.
preStart = ''
rm -rf ${stateDir}/sieve
'' + optionalString (cfg.sieveScripts != {}) ''
@ -248,11 +251,11 @@ in
${concatStringsSep "\n" (mapAttrsToList (to: from: ''
if [ -d '${from}' ]; then
mkdir '${stateDir}/sieve/${to}'
cp "${from}/"*.sieve '${stateDir}/sieve/${to}'
cp -p "${from}/"*.sieve '${stateDir}/sieve/${to}'
else
cp '${from}' '${stateDir}/sieve/${to}'
cp -p '${from}' '${stateDir}/sieve/${to}'
fi
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/sieve/${to}'
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/sieve/${to}'
'') cfg.sieveScripts)}
chown -R '${cfg.mailUser}:${cfg.mailGroup}' '${stateDir}/sieve'
'';

View file

@ -38,7 +38,7 @@ in {
brokerId = mkOption {
description = "Broker ID.";
default = 0;
default = -1;
type = types.int;
};

View file

@ -0,0 +1,50 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.couchpotato;
in
{
options = {
services.couchpotato = {
enable = mkEnableOption "CouchPotato Server";
};
};
config = mkIf cfg.enable {
systemd.services.couchpotato = {
description = "CouchPotato Server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -p /var/lib/couchpotato
chown -R couchpotato:couchpotato /var/lib/couchpotato
'';
serviceConfig = {
Type = "simple";
User = "couchpotato";
Group = "couchpotato";
PermissionsStartOnly = "true";
ExecStart = "${pkgs.couchpotato}/bin/couchpotato";
Restart = "on-failure";
};
};
users.extraUsers = singleton
{ name = "couchpotato";
group = "couchpotato";
home = "/var/lib/couchpotato/";
description = "CouchPotato daemon user";
uid = config.ids.uids.couchpotato;
};
users.extraGroups = singleton
{ name = "couchpotato";
gid = config.ids.gids.couchpotato;
};
};
}

View file

@ -0,0 +1,215 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.gogs;
configFile = pkgs.writeText "app.ini" ''
APP_NAME = ${cfg.appName}
RUN_USER = ${cfg.user}
RUN_MODE = prod
[database]
DB_TYPE = ${cfg.database.type}
HOST = ${cfg.database.host}:${toString cfg.database.port}
NAME = ${cfg.database.name}
USER = ${cfg.database.user}
PASSWD = ${cfg.database.password}
PATH = ${cfg.database.path}
[repository]
ROOT = ${cfg.repositoryRoot}
[server]
DOMAIN = ${cfg.domain}
HTTP_ADDR = ${cfg.httpAddress}
HTTP_PORT = ${toString cfg.httpPort}
ROOT_URL = ${cfg.rootUrl}
[security]
SECRET_KEY = #secretkey#
INSTALL_LOCK = true
${cfg.extraConfig}
'';
in
{
options = {
services.gogs = {
enable = mkOption {
default = false;
type = types.bool;
description = "Enable Go Git Service.";
};
useWizard = mkOption {
default = false;
type = types.bool;
description = "Do not generate a configuration and use Gogs' installation wizard instead. The first registered user will be administrator.";
};
stateDir = mkOption {
default = "/var/lib/gogs";
type = types.str;
description = "Gogs data directory.";
};
user = mkOption {
type = types.str;
default = "gogs";
description = "User account under which Gogs runs.";
};
group = mkOption {
type = types.str;
default = "gogs";
description = "Group account under which Gogs runs.";
};
database = {
type = mkOption {
type = types.enum [ "sqlite3" "mysql" "postgres" ];
example = "mysql";
default = "sqlite3";
description = "Database engine to use.";
};
host = mkOption {
type = types.str;
default = "127.0.0.1";
description = "Database host address.";
};
port = mkOption {
type = types.int;
default = 3306;
description = "Database host port.";
};
name = mkOption {
type = types.str;
default = "gogs";
description = "Database name.";
};
user = mkOption {
type = types.str;
default = "gogs";
description = "Database user.";
};
password = mkOption {
type = types.str;
default = "";
description = "Database password.";
};
path = mkOption {
type = types.str;
default = "${cfg.stateDir}/data/gogs.db";
description = "Path to the sqlite3 database file.";
};
};
appName = mkOption {
type = types.str;
default = "Gogs: Go Git Service";
description = "Application name.";
};
repositoryRoot = mkOption {
type = types.str;
default = "${cfg.stateDir}/repositories";
description = "Path to the git repositories.";
};
domain = mkOption {
type = types.str;
default = "localhost";
description = "Domain name of your server.";
};
rootUrl = mkOption {
type = types.str;
default = "http://localhost:3000/";
description = "Full public URL of Gogs server.";
};
httpAddress = mkOption {
type = types.str;
default = "0.0.0.0";
description = "HTTP listen address.";
};
httpPort = mkOption {
type = types.int;
default = 3000;
description = "HTTP listen port.";
};
extraConfig = mkOption {
type = types.str;
default = "";
description = "Configuration lines appended to the generated Gogs configuration file.";
};
};
};
config = mkIf cfg.enable {
systemd.services.gogs = {
description = "Gogs (Go Git Service)";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [ pkgs.gogs.bin ];
preStart = ''
# copy custom configuration and generate a random secret key if needed
${optionalString (cfg.useWizard == false) ''
mkdir -p ${cfg.stateDir}/custom/conf
cp -f ${configFile} ${cfg.stateDir}/custom/conf/app.ini
KEY=$(head -c 16 /dev/urandom | tr -dc A-Za-z0-9)
sed -i "s,#secretkey#,$KEY,g" ${cfg.stateDir}/custom/conf/app.ini
''}
mkdir -p ${cfg.repositoryRoot}
# update all hooks' binary paths
HOOKS=$(find ${cfg.repositoryRoot} -mindepth 4 -maxdepth 4 -type f -wholename "*git/hooks/*")
if [ "$HOOKS" ]
then
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gogs,${pkgs.gogs.bin}/bin/gogs,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/env,${pkgs.coreutils}/bin/env,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/bash,${pkgs.bash}/bin/bash,g' $HOOKS
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/perl,${pkgs.perl}/bin/perl,g' $HOOKS
fi
'';
serviceConfig = {
Type = "simple";
User = cfg.user;
Group = cfg.group;
WorkingDirectory = cfg.stateDir;
ExecStart = "${pkgs.gogs.bin}/bin/gogs web";
Restart = "always";
};
environment = {
USER = cfg.user;
HOME = cfg.stateDir;
GOGS_WORK_DIR = cfg.stateDir;
};
};
users = {
extraUsers.gogs = {
description = "Go Git Service";
uid = config.ids.uids.gogs;
group = "gogs";
home = cfg.stateDir;
createHome = true;
};
extraGroups.gogs.gid = config.ids.gids.gogs;
};
};
}
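A sketch of the new Gogs module with a MySQL backend; the domain and credentials are placeholders:

  services.gogs = {
    enable = true;
    domain = "git.example.org";
    rootUrl = "https://git.example.org/";
    httpPort = 3000;
    database = {
      type = "mysql";
      host = "127.0.0.1";
      name = "gogs";
      user = "gogs";
      password = "<secret>";
    };
  };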

View file

@ -16,12 +16,30 @@ in {
type = types.bool;
};
ip = mkOption {
description = "IP address to listen on.";
default = "0.0.0.0";
type = types.str;
};
port = mkOption {
description = "Mesos Master port";
default = 5050;
type = types.int;
};
advertiseIp = mkOption {
description = "IP address advertised to reach this master.";
default = null;
type = types.nullOr types.str;
};
advertisePort = mkOption {
description = "Port advertised to reach this Mesos master.";
default = null;
type = types.nullOr types.int;
};
zk = mkOption {
description = ''
ZooKeeper URL (used for leader election amongst masters).
@ -84,7 +102,10 @@ in {
serviceConfig = {
ExecStart = ''
${pkgs.mesos}/bin/mesos-master \
--ip=${cfg.ip} \
--port=${toString cfg.port} \
${optionalString (cfg.advertiseIp != null) "--advertise_ip=${cfg.advertiseIp}"} \
${optionalString (cfg.advertisePort != null) "--advertise_port=${toString cfg.advertisePort}"} \
${if cfg.quorum == 0
then "--registry=in_memory"
else "--zk=${cfg.zk} --registry=replicated_log --quorum=${toString cfg.quorum}"} \

View file

@ -12,7 +12,23 @@ let
attribsArg = optionalString (cfg.attributes != {})
"--attributes=${mkAttributes cfg.attributes}";
containerizers = [ "mesos" ] ++ (optional cfg.withDocker "docker");
containerizersArg = concatStringsSep "," (
lib.unique (
cfg.containerizers ++ (optional cfg.withDocker "docker")
)
);
imageProvidersArg = concatStringsSep "," (
lib.unique (
cfg.imageProviders ++ (optional cfg.withDocker "docker")
)
);
isolationArg = concatStringsSep "," (
lib.unique (
cfg.isolation ++ (optionals cfg.withDocker [ "filesystem/linux" "docker/runtime"])
)
);
in {
@ -27,7 +43,7 @@ in {
ip = mkOption {
description = "IP address to listen on.";
default = "0.0.0.0";
type = types.string;
type = types.str;
};
port = mkOption {
@ -36,6 +52,53 @@ in {
type = types.int;
};
advertiseIp = mkOption {
description = "IP address advertised to reach this agent.";
default = null;
type = types.nullOr types.str;
};
advertisePort = mkOption {
description = "Port advertised to reach this agent.";
default = null;
type = types.nullOr types.int;
};
containerizers = mkOption {
description = ''
List of containerizer implementations to compose in order to provide
containerization. Available options are mesos and docker.
The containerizers are tried in the order in which they are specified.
'';
default = [ "mesos" ];
type = types.listOf types.str;
};
imageProviders = mkOption {
description = "List of supported image providers, e.g., APPC,DOCKER.";
default = [ ];
type = types.listOf types.str;
};
imageProvisionerBackend = mkOption {
description = ''
Strategy for provisioning container rootfs from images,
e.g., aufs, bind, copy, overlay.
'';
default = "copy";
type = types.str;
};
isolation = mkOption {
description = ''
Isolation mechanisms to use, e.g., posix/cpu,posix/mem,
cgroups/cpu,cgroups/mem, network/port_mapping, or `gpu/nvidia` for
NVIDIA-specific GPU isolation.
'';
default = [ "posix/cpu" "posix/mem" ];
type = types.listOf types.str;
};
master = mkOption {
description = ''
May be one of:
@ -57,6 +120,16 @@ in {
type = types.bool;
};
dockerRegistry = mkOption {
description = ''
The default URL for pulling Docker images.
It can be either a Docker registry server URL
or a local path in which Docker image archives are stored.
'';
default = null;
type = types.nullOr (types.either types.str types.path);
};
workDir = mkOption {
description = "The Mesos work directory.";
default = "/var/lib/mesos/slave";
@ -96,28 +169,45 @@ in {
host = "aabc123";
os = "nixos"; };
};
executorEnvironmentVariables = mkOption {
description = ''
The environment variables that should be passed to the executor, and thus to the task(s) it launches.
'';
default = {
PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin";
};
type = types.attrsOf types.str;
};
};
};
config = mkIf cfg.enable {
systemd.services.mesos-slave = {
description = "Mesos Slave";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment.MESOS_CONTAINERIZERS = concatStringsSep "," containerizers;
path = [ pkgs.stdenv.shellPackage ];
serviceConfig = {
ExecStart = ''
${pkgs.mesos}/bin/mesos-slave \
--containerizers=${containerizersArg} \
--image_providers=${imageProvidersArg} \
--image_provisioner_backend=${cfg.imageProvisionerBackend} \
--isolation=${isolationArg} \
--ip=${cfg.ip} \
--port=${toString cfg.port} \
${optionalString (cfg.advertiseIp != null) "--advertise_ip=${cfg.advertiseIp}"} \
${optionalString (cfg.advertisePort != null) "--advertise_port=${toString cfg.advertisePort}"} \
--master=${cfg.master} \
--work_dir=${cfg.workDir} \
--logging_level=${cfg.logLevel} \
${attribsArg} \
${optionalString cfg.withHadoop "--hadoop-home=${pkgs.hadoop}"} \
${optionalString cfg.withDocker "--docker=${pkgs.docker}/libexec/docker/docker"} \
${optionalString (cfg.dockerRegistry != null) "--docker_registry=${cfg.dockerRegistry}"} \
--executor_environment_variables=${lib.escapeShellArg (builtins.toJSON cfg.executorEnvironmentVariables)} \
${toString cfg.extraCmdLineOptions}
'';
PermissionsStartOnly = true;
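A sketch combining the new containerizer and image options; the master address and the local registry path are placeholders:

  services.mesos.slave = {
    enable = true;
    master = "zk://10.0.0.1:2181/mesos";
    withDocker = true;
    imageProviders = [ "docker" ];
    imageProvisionerBackend = "overlay";
    isolation = [ "cgroups/cpu" "cgroups/mem" ];
    dockerRegistry = "/var/lib/mesos/images";   # local archive directory, per the option description
    executorEnvironmentVariables = {
      PATH = "/run/current-system/sw/bin";
    };
  };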

View file

@ -0,0 +1,63 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.arbtt;
in {
options = {
services.arbtt = {
enable = mkOption {
type = types.bool;
default = false;
example = true;
description = ''
Enable the arbtt statistics capture service.
'';
};
package = mkOption {
type = types.package;
default = pkgs.haskellPackages.arbtt;
defaultText = "pkgs.haskellPackages.arbtt";
example = literalExample "pkgs.haskellPackages.arbtt";
description = ''
The package to use for the arbtt binaries.
'';
};
logFile = mkOption {
type = types.str;
default = "%h/.arbtt/capture.log";
example = "/home/username/.arbtt-capture.log";
description = ''
The log file for captured samples.
'';
};
sampleRate = mkOption {
type = types.int;
default = 60;
example = 120;
description = ''
The sampling interval in seconds.
'';
};
};
};
config = mkIf cfg.enable {
systemd.user.services.arbtt = {
description = "arbtt statistics capture service";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${cfg.package}/bin/arbtt-capture --logfile=${cfg.logFile} --sample-rate=${toString cfg.sampleRate}";
Restart = "always";
};
};
};
meta.maintainers = [ maintainers.michaelpj ];
}
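A minimal sketch of the new arbtt service; the sample rate shown is illustrative:

  services.arbtt = {
    enable = true;
    sampleRate = 30;   # sample every 30 seconds instead of the 60-second default
  };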

View file

@ -0,0 +1,78 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.netdata;
configFile = pkgs.writeText "netdata.conf" cfg.configText;
defaultUser = "netdata";
in {
options = {
services.netdata = {
enable = mkOption {
default = false;
type = types.bool;
description = "Whether to enable netdata monitoring.";
};
user = mkOption {
type = types.str;
default = "netdata";
description = "User account under which netdata runs.";
};
group = mkOption {
type = types.str;
default = "netdata";
description = "Group under which netdata runs.";
};
configText = mkOption {
type = types.lines;
default = "";
description = "netdata.conf configuration.";
example = ''
[global]
debug log = syslog
access log = syslog
error log = syslog
'';
};
};
};
config = mkIf cfg.enable {
systemd.services.netdata = {
description = "Real time performance monitoring";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = concatStringsSep "\n" (map (dir: ''
mkdir -vp ${dir}
chmod 750 ${dir}
chown -R ${cfg.user}:${cfg.group} ${dir}
'') [ "/var/cache/netdata"
"/var/log/netdata"
"/var/lib/netdata" ]);
serviceConfig = {
User = cfg.user;
Group = cfg.group;
PermissionsStartOnly = true;
ExecStart = "${pkgs.netdata}/bin/netdata -D -c ${configFile}";
TimeoutStopSec = 60;
};
};
users.extraUsers = optional (cfg.user == defaultUser) {
name = defaultUser;
};
users.extraGroups = optional (cfg.group == defaultUser) {
name = defaultUser;
};
};
}
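A usage sketch reusing the module's own configText example:

  services.netdata = {
    enable = true;
    configText = ''
      [global]
        debug log = syslog
        access log = syslog
        error log = syslog
    '';
  };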

View file

@ -5,6 +5,10 @@ with lib;
let
cfg = config.services.prometheus.alertmanager;
mkConfigFile = pkgs.writeText "alertmanager.yml" (builtins.toJSON cfg.configuration);
alertmanagerYml =
if cfg.configText != null then
pkgs.writeText "alertmanager.yml" cfg.configText
else mkConfigFile;
in {
options = {
services.prometheus.alertmanager = {
@ -34,6 +38,17 @@ in {
'';
};
configText = mkOption {
type = types.nullOr types.lines;
default = null;
description = ''
Alertmanager configuration as YAML text. If non-null, this option
defines the text that is written to alertmanager.yml. If null, the
contents of alertmanager.yml are generated from the structured
config options.
'';
};
logFormat = mkOption {
type = types.nullOr types.str;
default = null;
@ -96,7 +111,7 @@ in {
after = [ "network.target" ];
script = ''
${pkgs.prometheus-alertmanager.bin}/bin/alertmanager \
-config.file ${mkConfigFile} \
-config.file ${alertmanagerYml} \
-web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
-log.level ${cfg.logLevel} \
${optionalString (cfg.webExternalUrl != null) ''-web.external-url ${cfg.webExternalUrl} \''}
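A hedged sketch of the new configText option; the YAML routing tree is illustrative and the enable option is assumed to exist in the module:

  services.prometheus.alertmanager = {
    enable = true;    # assumed, not part of this change
    configText = ''
      route:
        receiver: default
      receivers:
        - name: default
    '';
  };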

View file

@ -0,0 +1,43 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.vnstat;
in {
options.services.vnstat = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable update of network usage statistics via vnstatd.
'';
};
};
config = mkIf cfg.enable {
users.extraUsers.vnstatd = {
isSystemUser = true;
description = "vnstat daemon user";
home = "/var/lib/vnstat";
createHome = true;
};
systemd.services.vnstat = {
description = "vnStat network traffic monitor";
path = [ pkgs.coreutils ];
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
unitConfig.documentation = "man:vnstatd(1) man:vnstat(1) man:vnstat.conf(5)";
preStart = "chmod 755 /var/lib/vnstat";
serviceConfig = {
ExecStart = "${pkgs.vnstat}/bin/vnstatd -n";
ExecReload = "kill -HUP $MAINPID";
ProtectHome = true;
PrivateDevices = true;
PrivateTmp = true;
User = "vnstatd";
};
};
};
}

View file

@ -67,6 +67,14 @@ in
'';
};
emptyRepo = mkOption {
type = types.bool;
default = false;
description = ''
If set to true, the repository won't be initialized with help files.
'';
};
extraFlags = mkOption {
type = types.listOf types.str;
description = "Extra flags passed to the IPFS daemon";
@ -103,16 +111,17 @@ in
after = [ "network.target" "local-fs.target" ];
path = [ pkgs.ipfs pkgs.su pkgs.bash ];
preStart =
''
install -m 0755 -o ${cfg.user} -g ${cfg.group} -d ${cfg.dataDir}
if [[ ! -d ${cfg.dataDir}/.ipfs ]]; then
cd ${cfg.dataDir}
${pkgs.su}/bin/su -s ${pkgs.bash}/bin/sh ${cfg.user} -c "${ipfs}/bin/ipfs init"
fi
${pkgs.su}/bin/su -s ${pkgs.bash}/bin/sh ${cfg.user} -c "${ipfs}/bin/ipfs config Addresses.API ${cfg.apiAddress}"
${pkgs.su}/bin/su -s ${pkgs.bash}/bin/sh ${cfg.user} -c "${ipfs}/bin/ipfs config Addresses.Gateway ${cfg.gatewayAddress}"
'';
preStart = ''
install -m 0755 -o ${cfg.user} -g ${cfg.group} -d ${cfg.dataDir}
if [[ ! -d ${cfg.dataDir}/.ipfs ]]; then
cd ${cfg.dataDir}
${pkgs.su}/bin/su -s ${pkgs.bash}/bin/sh ${cfg.user} -c \
"${ipfs}/bin/ipfs init ${if cfg.emptyRepo then "-e" else ""}"
fi
${pkgs.su}/bin/su -s ${pkgs.bash}/bin/sh ${cfg.user} -c \
"${ipfs}/bin/ipfs --local config Addresses.API ${cfg.apiAddress} && \
${ipfs}/bin/ipfs --local config Addresses.Gateway ${cfg.gatewayAddress}"
'';
serviceConfig = {
ExecStart = "${ipfs}/bin/ipfs daemon ${ipfsFlags}";

View file

@ -343,7 +343,7 @@ in
preStart = ''
if [ \! -d ${nodedir} ]; then
mkdir -p /var/db/tahoe-lafs
tahoe create-node ${nodedir}
tahoe create-node --hostname=localhost ${nodedir}
fi
# Tahoe has created a predefined tahoe.cfg which we must now

View file

@ -132,7 +132,8 @@ in
login=${config.services.ddclient.username}
password=${config.services.ddclient.password}
protocol=${config.services.ddclient.protocol}
server=${config.services.ddclient.server}
${let server = config.services.ddclient.server; in
lib.optionalString (server != "") "server=${server}"}
ssl=${if config.services.ddclient.ssl then "yes" else "no"}
wildcard=YES
${config.services.ddclient.domain}

View file

@ -4,11 +4,10 @@ with lib;
let
cfg = config.services.dhcpd;
cfg4 = config.services.dhcpd4;
cfg6 = config.services.dhcpd6;
stateDir = "/var/lib/dhcp"; # Don't use /var/state/dhcp; not FHS-compliant.
configFile = if cfg.configFile != null then cfg.configFile else pkgs.writeText "dhcpd.conf"
writeConfig = cfg: pkgs.writeText "dhcpd.conf"
''
default-lease-time 600;
max-lease-time 7200;
@ -29,6 +28,154 @@ let
}
'';
dhcpdService = postfix: cfg: optionalAttrs cfg.enable {
"dhcpd${postfix}" = {
description = "DHCPv${postfix} server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
preStart = ''
mkdir -m 755 -p ${cfg.stateDir}
touch ${cfg.stateDir}/dhcpd.leases
'';
serviceConfig =
let
configFile = if cfg.configFile != null then cfg.configFile else writeConfig cfg;
args = [ "@${pkgs.dhcp}/sbin/dhcpd" "dhcpd${postfix}" "-${postfix}"
"-pf" "/run/dhcpd${postfix}/dhcpd.pid"
"-cf" "${configFile}"
"-lf" "${cfg.stateDir}/dhcpd.leases"
"-user" "dhcpd" "-group" "nogroup"
] ++ cfg.extraFlags
++ cfg.interfaces;
in {
ExecStart = concatMapStringsSep " " escapeShellArg args;
Type = "forking";
Restart = "always";
RuntimeDirectory = [ "dhcpd${postfix}" ];
PIDFile = "/run/dhcpd${postfix}/dhcpd.pid";
};
};
};
machineOpts = {...}: {
config = {
hostName = mkOption {
type = types.str;
example = "foo";
description = ''
Hostname which is assigned statically to the machine.
'';
};
ethernetAddress = mkOption {
type = types.str;
example = "00:16:76:9a:32:1d";
description = ''
MAC address of the machine.
'';
};
ipAddress = mkOption {
type = types.str;
example = "192.168.1.10";
description = ''
IP address of the machine.
'';
};
};
};
dhcpConfig = postfix: {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the DHCPv${postfix} server.
'';
};
stateDir = mkOption {
type = types.path;
# We use /var/lib/dhcp for DHCPv4 to save backwards compatibility.
default = "/var/lib/dhcp${if postfix == "4" then "" else postfix}";
description = ''
State directory for the DHCP server.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
example = ''
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.5;
option domain-name-servers 130.161.158.4, 130.161.33.17, 130.161.180.1;
option domain-name "example.org";
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.100 192.168.1.200;
}
'';
description = ''
Extra text to be appended to the DHCP server configuration
file. Currently, you almost certainly need to specify something
there, such as the options specifying the subnet mask, DNS servers,
etc.
'';
};
extraFlags = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Additional command line flags to be passed to the dhcpd daemon.
'';
};
configFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
The path of the DHCP server configuration file. If no file
is specified, a file is generated using the other options.
'';
};
interfaces = mkOption {
type = types.listOf types.str;
default = ["eth0"];
description = ''
The interfaces on which the DHCP server should listen.
'';
};
machines = mkOption {
type = types.listOf (types.submodule machineOpts);
default = [];
example = [
{ hostName = "foo";
ethernetAddress = "00:16:76:9a:32:1d";
ipAddress = "192.168.1.10";
}
{ hostName = "bar";
ethernetAddress = "00:19:d1:1d:c4:9a";
ipAddress = "192.168.1.11";
}
];
description = ''
A list mapping Ethernet addresses to IPv${postfix} addresses for the
DHCP server.
'';
};
};
in
{
@ -37,85 +184,15 @@ in
options = {
services.dhcpd = {
enable = mkOption {
default = false;
description = "
Whether to enable the DHCP server.
";
};
extraConfig = mkOption {
type = types.lines;
default = "";
example = ''
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.5;
option domain-name-servers 130.161.158.4, 130.161.33.17, 130.161.180.1;
option domain-name "example.org";
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.100 192.168.1.200;
}
'';
description = "
Extra text to be appended to the DHCP server configuration
file. Currently, you almost certainly need to specify
something here, such as the options specifying the subnet
mask, DNS servers, etc.
";
};
extraFlags = mkOption {
default = "";
example = "-6";
description = "
Additional command line flags to be passed to the dhcpd daemon.
";
};
configFile = mkOption {
default = null;
description = "
The path of the DHCP server configuration file. If no file
is specified, a file is generated using the other options.
";
};
interfaces = mkOption {
default = ["eth0"];
description = "
The interfaces on which the DHCP server should listen.
";
};
machines = mkOption {
default = [];
example = [
{ hostName = "foo";
ethernetAddress = "00:16:76:9a:32:1d";
ipAddress = "192.168.1.10";
}
{ hostName = "bar";
ethernetAddress = "00:19:d1:1d:c4:9a";
ipAddress = "192.168.1.11";
}
];
description = "
A list mapping ethernet addresses to IP addresses for the
DHCP server.
";
};
};
services.dhcpd4 = dhcpConfig "4";
services.dhcpd6 = dhcpConfig "6";
};
###### implementation
config = mkIf config.services.dhcpd.enable {
config = mkIf (cfg4.enable || cfg6.enable) {
users = {
extraUsers.dhcpd = {
@ -124,36 +201,7 @@ in
};
};
systemd.services.dhcpd =
{ description = "DHCP server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ pkgs.dhcp ];
preStart =
''
mkdir -m 755 -p ${stateDir}
touch ${stateDir}/dhcpd.leases
mkdir -m 755 -p /run/dhcpd
chown dhcpd /run/dhcpd
'';
serviceConfig =
{ ExecStart = "@${pkgs.dhcp}/sbin/dhcpd dhcpd"
+ " -pf /run/dhcpd/dhcpd.pid -cf ${configFile}"
+ " -lf ${stateDir}/dhcpd.leases -user dhcpd -group nogroup"
+ " ${cfg.extraFlags}"
+ " ${toString cfg.interfaces}";
Restart = "always";
Type = "forking";
PIDFile = "/run/dhcpd/dhcpd.pid";
};
};
systemd.services = dhcpdService "4" cfg4 // dhcpdService "6" cfg6;
};
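A sketch of the new per-protocol interface; the addresses are placeholders:

  services.dhcpd4 = {
    enable = true;
    interfaces = [ "eth0" ];
    extraConfig = ''
      option routers 192.168.1.1;
      option domain-name-servers 192.168.1.1;
      subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
      }
    '';
    machines = [
      { hostName = "printer";
        ethernetAddress = "00:16:76:9a:32:1d";
        ipAddress = "192.168.1.10";
      }
    ];
  };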

View file

@ -0,0 +1,187 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.dnscrypt-wrapper;
dataDir = "/var/lib/dnscrypt-wrapper";
daemonArgs = with cfg; [
"--listen-address=${address}:${toString port}"
"--resolver-address=${upstream.address}:${toString upstream.port}"
"--provider-name=${providerName}"
"--provider-publickey-file=public.key"
"--provider-secretkey-file=secret.key"
"--provider-cert-file=${providerName}.crt"
"--crypt-secretkey-file=${providerName}.key"
];
genKeys = ''
# generates time-limited keypairs
keyGen() {
dnscrypt-wrapper --gen-crypt-keypair \
--crypt-secretkey-file=${cfg.providerName}.key
dnscrypt-wrapper --gen-cert-file \
--crypt-secretkey-file=${cfg.providerName}.key \
--provider-cert-file=${cfg.providerName}.crt \
--provider-publickey-file=public.key \
--provider-secretkey-file=secret.key \
--cert-file-expire-days=${toString cfg.keys.expiration}
}
cd ${dataDir}
# generate provider keypair (first run only)
if [ ! -f public.key ] || [ ! -f secret.key ]; then
dnscrypt-wrapper --gen-provider-keypair
fi
# generate new keys for rotation
if [ ! -f ${cfg.providerName}.key ] || [ ! -f ${cfg.providerName}.crt ]; then
keyGen
fi
'';
rotateKeys = ''
# check if keys are not expired
keyValid() {
fingerprint=$(dnscrypt-wrapper --show-provider-publickey-fingerprint | awk '{print $(NF)}')
dnscrypt-proxy --test=${toString (cfg.keys.checkInterval + 1)} \
--resolver-address=127.0.0.1:${toString cfg.port} \
--provider-name=${cfg.providerName} \
--provider-key=$fingerprint
}
cd ${dataDir}
# archive old keys and restart the service
if ! keyValid; then
mkdir -p oldkeys
mv ${cfg.providerName}.key oldkeys/${cfg.providerName}-$(date +%F-%T).key
mv ${cfg.providerName}.crt oldkeys/${cfg.providerName}-$(date +%F-%T).crt
systemctl restart dnscrypt-wrapper
fi
'';
in {
###### interface
options.services.dnscrypt-wrapper = {
enable = mkEnableOption "DNSCrypt wrapper";
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = ''
The DNSCrypt wrapper will bind to this IP address.
'';
};
port = mkOption {
type = types.int;
default = 5353;
description = ''
The DNSCrypt wrapper will listen for DNS queries on this port.
'';
};
providerName = mkOption {
type = types.str;
default = "2.dnscrypt-cert.${config.networking.hostName}";
example = "2.dnscrypt-cert.myresolver";
description = ''
The name that will be given to this DNSCrypt resolver.
Note: the resolver name must start with <literal>2.dnscrypt-cert.</literal>.
'';
};
upstream.address = mkOption {
type = types.str;
default = "127.0.0.1";
description = ''
The IP address of the upstream DNS server DNSCrypt will "wrap".
'';
};
upstream.port = mkOption {
type = types.int;
default = 53;
description = ''
The port of the upstream DNS server DNSCrypt will "wrap".
'';
};
keys.expiration = mkOption {
type = types.int;
default = 30;
description = ''
The duration (in days) of the time-limited secret key.
This will be automatically rotated before expiration.
'';
};
keys.checkInterval = mkOption {
type = types.int;
default = 1440;
description = ''
The time interval (in minutes) between key expiration checks.
'';
};
};
###### implementation
config = mkIf cfg.enable {
users.users.dnscrypt-wrapper = {
description = "dnscrypt-wrapper daemon user";
home = "${dataDir}";
createHome = true;
};
users.groups.dnscrypt-wrapper = { };
systemd.services.dnscrypt-wrapper = {
description = "dnscrypt-wrapper daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [ pkgs.dnscrypt-wrapper ];
serviceConfig = {
User = "dnscrypt-wrapper";
WorkingDirectory = dataDir;
Restart = "on-failure";
ExecStart = "${pkgs.dnscrypt-wrapper}/bin/dnscrypt-wrapper ${toString daemonArgs}";
};
preStart = genKeys;
};
systemd.services.dnscrypt-wrapper-rotate = {
after = [ "network.target" ];
requires = [ "dnscrypt-wrapper.service" ];
description = "Rotates DNSCrypt wrapper keys if soon to expire";
path = with pkgs; [ dnscrypt-wrapper dnscrypt-proxy gawk ];
script = rotateKeys;
};
systemd.timers.dnscrypt-wrapper-rotate = {
description = "Periodically check DNSCrypt wrapper keys for expiration";
wantedBy = [ "multi-user.target" ];
timerConfig = {
Unit = "dnscrypt-wrapper-rotate.service";
OnBootSec = "1min";
OnUnitActiveSec = cfg.keys.checkInterval * 60;
};
};
};
}
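A usage sketch of the new dnscrypt-wrapper module; the provider name and upstream are placeholders:

  services.dnscrypt-wrapper = {
    enable = true;
    address = "0.0.0.0";
    port = 5353;
    providerName = "2.dnscrypt-cert.myresolver";
    upstream.address = "127.0.0.1";
    upstream.port = 53;
    keys.expiration = 7;   # certificates valid for 7 days instead of the 30-day default
  };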

View file

@ -4,17 +4,29 @@
networking.firewall.extraCommands. For modularity, the firewall
uses several chains:
- nixos-fw-input is the main chain for input packet processing.
- nixos-fw is the main chain for input packet processing.
- nixos-fw-accept is called for accepted packets. If you want
additional logging, or want to reject certain packets anyway, you
can insert rules at the start of this chain.
- nixos-fw-log-refuse and nixos-fw-refuse are called for
refused packets. (The former jumps to the latter after logging
the packet.) If you want additional logging, or want to accept
certain packets anyway, you can insert rules at the start of
these chain.
this chain.
- nixos-fw-accept is called for accepted packets. If you want
additional logging, or want to reject certain packets anyway, you
can insert rules at the start of this chain.
- nixos-fw-rpfilter is used as the main chain in the raw table,
called from the built-in PREROUTING chain. If the kernel
supports it and `cfg.checkReversePath` is set this chain will
perform a reverse path filter test.
- nixos-drop is used while reloading the firewall in order to drop
all traffic. Since reloading isn't implemented in an atomic way,
this prevents any traffic from leaking through while the firewall
is being reloaded. However, if the reload fails, the firewall-stop
script is called, which in turn effectively disables the entire
firewall (in the default configuration).
*/
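As an illustration of the chain layout described above, site-specific rules can be hooked in through networking.firewall.extraCommands, where the ip46tables helper defined below is in scope; the port and interface here are placeholders:

  networking.firewall.extraCommands = ''
    # log everything that ends up being accepted (inserted at the start of nixos-fw-accept)
    ip46tables -I nixos-fw-accept -j LOG --log-prefix "fw-accept: "
    # accept one extra TCP port, but only on a particular interface
    iptables -A nixos-fw -i eth1 -p tcp --dport 8080 -j nixos-fw-accept
  '';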
@ -26,6 +38,10 @@ let
cfg = config.networking.firewall;
kernelPackages = config.boot.kernelPackages;
kernelHasRPFilter = kernelPackages.kernel.features.netfilterRPFilter or false;
helpers =
''
# Helper command to manipulate both the IPv4 and IPv6 tables.
@ -49,7 +65,7 @@ let
# firewall would be atomic. Apparently that's possible
# with iptables-restore.
ip46tables -D INPUT -j nixos-fw 2> /dev/null || true
for chain in nixos-fw nixos-fw-accept nixos-fw-log-refuse nixos-fw-refuse FW_REFUSE; do
for chain in nixos-fw nixos-fw-accept nixos-fw-log-refuse nixos-fw-refuse; do
ip46tables -F "$chain" 2> /dev/null || true
ip46tables -X "$chain" 2> /dev/null || true
done
@ -172,13 +188,16 @@ let
}-j nixos-fw-accept
''}
# Accept all ICMPv6 messages except redirects and node
# information queries (type 139). See RFC 4890, section
# 4.4.
${optionalString config.networking.enableIPv6 ''
# Accept all ICMPv6 messages except redirects and node
# information queries (type 139). See RFC 4890, section
# 4.4.
ip6tables -A nixos-fw -p icmpv6 --icmpv6-type redirect -j DROP
ip6tables -A nixos-fw -p icmpv6 --icmpv6-type 139 -j DROP
ip6tables -A nixos-fw -p icmpv6 -j nixos-fw-accept
# Allow this host to act as a DHCPv6 client
ip6tables -A nixos-fw -d fe80::/64 -p udp --dport 546 -j nixos-fw-accept
''}
${cfg.extraCommands}
@ -228,11 +247,6 @@ let
fi
'';
kernelPackages = config.boot.kernelPackages;
kernelHasRPFilter = kernelPackages.kernel.features.netfilterRPFilter or false;
kernelCanDisableHelpers = kernelPackages.kernel.features.canDisableNetfilterConntrackHelpers or false;
in
{
@ -290,26 +304,30 @@ in
default = false;
description =
''
If set, forbidden packets are rejected rather than dropped
If set, refused packets are rejected rather than dropped
(ignored). This means that an ICMP "port unreachable" error
message is sent back to the client. Rejecting packets makes
message is sent back to the client (or a TCP RST packet in
case of an existing connection). Rejecting packets makes
port scanning somewhat easier.
'';
};
networking.firewall.trustedInterfaces = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "enp0s2" ];
description =
''
Traffic coming in from these interfaces will be accepted
unconditionally.
unconditionally. Traffic from the loopback (lo) interface
will always be accepted.
'';
};
networking.firewall.allowedTCPPorts = mkOption {
default = [];
example = [ 22 80 ];
type = types.listOf types.int;
default = [ ];
example = [ 22 80 ];
description =
''
List of TCP ports on which incoming connections are
@ -318,9 +336,9 @@ in
};
networking.firewall.allowedTCPPortRanges = mkOption {
default = [];
example = [ { from = 8999; to = 9003; } ];
type = types.listOf (types.attrsOf types.int);
default = [ ];
example = [ { from = 8999; to = 9003; } ];
description =
''
A range of TCP ports on which incoming connections are
@ -329,9 +347,9 @@ in
};
networking.firewall.allowedUDPPorts = mkOption {
default = [];
example = [ 53 ];
type = types.listOf types.int;
default = [ ];
example = [ 53 ];
description =
''
List of open UDP ports.
@ -339,9 +357,9 @@ in
};
networking.firewall.allowedUDPPortRanges = mkOption {
default = [];
example = [ { from = 60000; to = 61000; } ];
type = types.listOf (types.attrsOf types.int);
default = [ ];
example = [ { from = 60000; to = 61000; } ];
description =
''
Range of open UDP ports.
@ -349,8 +367,8 @@ in
};
networking.firewall.allowPing = mkOption {
default = true;
type = types.bool;
default = true;
description =
''
Whether to respond to incoming ICMPv4 echo requests
@ -361,36 +379,43 @@ in
};
networking.firewall.pingLimit = mkOption {
default = null;
type = types.nullOr (types.separatedString " ");
default = null;
example = "--limit 1/minute --limit-burst 5";
description =
''
If pings are allowed, this allows setting rate limits
on them. If non-null, this option should be in the form
of flags like "--limit 1/minute --limit-burst 5"
on them. If non-null, this option should be in the form of
flags like "--limit 1/minute --limit-burst 5"
'';
};
networking.firewall.checkReversePath = mkOption {
default = kernelHasRPFilter;
type = types.either types.bool (types.enum ["strict" "loose"]);
default = kernelHasRPFilter;
example = "loose";
description =
''
Performs a reverse path filter test on a packet.
If a reply to the packet would not be sent via the same interface
that the packet arrived on, it is refused.
Performs a reverse path filter test on a packet. If a reply
to the packet would not be sent via the same interface that
the packet arrived on, it is refused.
If using asymmetric routing or other complicated routing,
set this option to loose mode or disable it and setup your
own counter-measures.
If using asymmetric routing or other complicated routing, set
this option to loose mode or disable it and set up your own
counter-measures.
This option can be either true (or "strict"), "loose" (only
drop the packet if the source address is not reachable via any
interface) or false. Defaults to the value of
kernelHasRPFilter.
(needs kernel 3.3+)
'';
};
networking.firewall.logReversePathDrops = mkOption {
default = false;
type = types.bool;
default = false;
description =
''
Logs dropped packets failing the reverse path filter test if
@ -399,9 +424,9 @@ in
};
networking.firewall.connectionTrackingModules = mkOption {
default = [ "ftp" ];
example = [ "ftp" "irc" "sane" "sip" "tftp" "amanda" "h323" "netbios_sn" "pptp" "snmp" ];
type = types.listOf types.str;
default = [ ];
example = [ "ftp" "irc" "sane" "sip" "tftp" "amanda" "h323" "netbios_sn" "pptp" "snmp" ];
description =
''
List of connection-tracking helpers that are auto-loaded.
@ -409,17 +434,19 @@ in
As helpers can pose a security risk, it is advised to
set this to an empty list and disable the setting
networking.firewall.autoLoadConntrackHelpers
networking.firewall.autoLoadConntrackHelpers unless you
know what you are doing. Connection tracking is disabled
by default.
Loading of helpers is recommended to be done through the new
CT target. More info:
Loading of helpers is recommended to be done through the
CT target. More info:
https://home.regit.org/netfilter-en/secure-use-of-helpers/
'';
};
networking.firewall.autoLoadConntrackHelpers = mkOption {
default = true;
type = types.bool;
default = false;
description =
''
Whether to auto-load connection-tracking helpers.
@ -461,7 +488,8 @@ in
''
Additional shell commands executed as part of the firewall
shutdown script. These are executed just after the removal
of the nixos input rule, or if the service enters a failed state.
of the NixOS input rule, or if the service enters a failed
state.
'';
};
@ -478,15 +506,14 @@ in
environment.systemPackages = [ pkgs.iptables ] ++ cfg.extraPackages;
boot.kernelModules = map (x: "nf_conntrack_${x}") cfg.connectionTrackingModules;
boot.extraModprobeConfig = optionalString (!cfg.autoLoadConntrackHelpers) ''
options nf_conntrack nf_conntrack_helper=0
boot.kernelModules = (optional cfg.autoLoadConntrackHelpers "nf_conntrack")
++ map (x: "nf_conntrack_${x}") cfg.connectionTrackingModules;
boot.extraModprobeConfig = optionalString cfg.autoLoadConntrackHelpers ''
options nf_conntrack nf_conntrack_helper=1
'';
assertions = [ { assertion = (cfg.checkReversePath != false) || kernelHasRPFilter;
message = "This kernel does not support rpfilter"; }
{ assertion = cfg.autoLoadConntrackHelpers || kernelCanDisableHelpers;
message = "This kernel does not support disabling conntrack helpers"; }
];
systemd.services.firewall = {
@ -499,7 +526,7 @@ in
path = [ pkgs.iptables ] ++ cfg.extraPackages;
# FIXME: this module may also try to load kernel modules, but
# containers don't have CAP_SYS_MODULE. So the host system had
# containers don't have CAP_SYS_MODULE. So the host system had
# better have all necessary modules already loaded.
unitConfig.ConditionCapability = "CAP_NET_ADMIN";
unitConfig.DefaultDependencies = false;

View file

@ -149,6 +149,6 @@ in {
serviceConfig.ExecStart = "${cfg.package}/bin/flannel";
};
services.etcd.enable = mkDefault cfg.etcd.endpoints == ["http://127.0.0.1:2379"];
services.etcd.enable = mkDefault (cfg.etcd.endpoints == ["http://127.0.0.1:2379"]);
};
}

View file

@ -0,0 +1,119 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.kresd;
package = pkgs.knot-resolver;
configFile = pkgs.writeText "kresd.conf" cfg.extraConfig;
in
{
meta.maintainers = [ maintainers.vcunat /* upstream developer */ ];
###### interface
options.services.kresd = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the knot-resolver domain name server.
DNSSEC validation is turned on by default.
You can run <literal>sudo nc -U /run/kresd/control</literal>
and give commands interactively to kresd.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to be added verbatim to the generated configuration file.
'';
};
cacheDir = mkOption {
type = types.path;
default = "/var/cache/kresd";
description = ''
Directory for caches. They are intended to survive reboots.
'';
};
interfaces = mkOption {
type = with types; listOf str;
default = [ "::1" "127.0.0.1" ];
description = ''
What addresses the server should listen on.
'';
};
# TODO: perhaps options for more common stuff like cache size or forwarding
};
###### implementation
config = mkIf cfg.enable {
environment.etc."kresd.conf".source = configFile; # not required
users.extraUsers = singleton
{ name = "kresd";
uid = config.ids.uids.kresd;
group = "kresd";
description = "Knot-resolver daemon user";
};
users.extraGroups = singleton
{ name = "kresd";
gid = config.ids.gids.kresd;
};
systemd.sockets.kresd = rec {
wantedBy = [ "sockets.target" ];
before = wantedBy;
listenStreams = map
# Syntax depends on being IPv6 or IPv4.
(iface: if elem ":" (stringToCharacters iface) then "[${iface}]:53" else "${iface}:53")
cfg.interfaces;
socketConfig.ListenDatagram = listenStreams;
};
systemd.sockets.kresd-control = rec {
wantedBy = [ "sockets.target" ];
before = wantedBy;
partOf = [ "kresd.socket" ];
listenStreams = [ "/run/kresd/control" ];
socketConfig = {
FileDescriptorName = "control";
Service = "kresd.service";
SocketMode = "0660"; # only root user/group may connect
};
};
# Create the cacheDir; tmpfiles don't work on nixos-rebuild switch.
systemd.services.kresd-cachedir = {
serviceConfig.Type = "oneshot";
script = ''
if [ ! -d '${cfg.cacheDir}' ]; then
mkdir -p '${cfg.cacheDir}'
chown kresd:kresd '${cfg.cacheDir}'
fi
'';
};
systemd.services.kresd = {
description = "Knot-resolver daemon";
serviceConfig = {
User = "kresd";
Type = "notify";
WorkingDirectory = cfg.cacheDir;
};
script = ''
exec '${package}/bin/kresd' --config '${configFile}' \
-k '${cfg.cacheDir}/root.key'
'';
after = [ "kresd-cachedir.service" ];
requires = [ "kresd.socket" "kresd-cachedir.service" ];
wantedBy = [ "sockets.target" ];
};
};
}
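A usage sketch of the new kresd module; the extra listen address and the Lua cache setting are illustrative:

  services.kresd = {
    enable = true;
    interfaces = [ "::1" "127.0.0.1" "192.168.1.1" ];
    extraConfig = ''
      cache.size = 100 * MB
    '';
  };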

View file

@ -82,7 +82,6 @@ in
serviceConfig = {
Restart = "always";
RestartSec = "5s";
ExecStartPre = "${cfg.package}/bin/miredo-checkconf -f ${miredoConf}";
ExecStart = "${cfg.package}/bin/miredo -c ${miredoConf} -p ${pidFile} -f";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};

View file

@ -174,7 +174,7 @@ in {
assertions = [{
assertion = config.networking.wireless.enable == false;
message = "You can not use networking.networkmanager with services.networking.wireless";
message = "You can not use networking.networkmanager with networking.wireless";
}];
boot.kernelModules = [ "ppp_mppe" ]; # Needed for most (all?) PPTP VPN connections.
@ -239,7 +239,8 @@ in {
# Turn off NixOS' network management
networking = {
useDHCP = false;
wireless.enable = false;
# use mkDefault to trigger the assertion about the conflict above
wireless.enable = lib.mkDefault false;
};
powerManagement.resumeCommands = ''

View file

@ -0,0 +1,168 @@
{ config, lib, pkgs, ... }:
with lib;
let
dataDir = "/var/lib/pdns-recursor";
username = "pdns-recursor";
cfg = config.services.pdns-recursor;
zones = mapAttrsToList (zone: uri: "${zone}.=${uri}") cfg.forwardZones;
configFile = pkgs.writeText "recursor.conf" ''
local-address=${cfg.dns.address}
local-port=${toString cfg.dns.port}
allow-from=${concatStringsSep "," cfg.dns.allowFrom}
webserver-address=${cfg.api.address}
webserver-port=${toString cfg.api.port}
webserver-allow-from=${concatStringsSep "," cfg.api.allowFrom}
forward-zones=${concatStringsSep "," zones}
export-etc-hosts=${if cfg.exportHosts then "yes" else "no"}
dnssec=${cfg.dnssecValidation}
serve-rfc1918=${if cfg.serveRFC1918 then "yes" else "no"}
${cfg.extraConfig}
'';
in {
options.services.pdns-recursor = {
enable = mkEnableOption "PowerDNS Recursor, a recursive DNS server";
dns.address = mkOption {
type = types.str;
default = "0.0.0.0";
description = ''
IP address the Recursor DNS server will bind to.
'';
};
dns.port = mkOption {
type = types.int;
default = 53;
description = ''
Port number the Recursor DNS server will bind to.
'';
};
dns.allowFrom = mkOption {
type = types.listOf types.str;
default = [ "10.0.0.0/8" "172.16.0.0/12" "192.168.0.0/16" ];
example = [ "0.0.0.0/0" ];
description = ''
IP address ranges of clients allowed to make DNS queries.
'';
};
api.address = mkOption {
type = types.str;
default = "0.0.0.0";
description = ''
IP address the Recursor REST API server will bind to.
'';
};
api.port = mkOption {
type = types.int;
default = 8082;
description = ''
Port number the Recursor REST API server will bind to.
'';
};
api.allowFrom = mkOption {
type = types.listOf types.str;
default = [ "0.0.0.0/0" ];
description = ''
IP address ranges of clients allowed to make API requests.
'';
};
exportHosts = mkOption {
type = types.bool;
default = false;
description = ''
Whether to export names and IP addresses defined in /etc/hosts.
'';
};
forwardZones = mkOption {
type = types.attrs;
example = { eth = "127.0.0.1:5353"; };
default = {};
description = ''
DNS zones to be forwarded to other servers.
'';
};
dnssecValidation = mkOption {
type = types.enum ["off" "process-no-validate" "process" "log-fail" "validate"];
default = "validate";
description = ''
Controls the level of DNSSEC processing done by the PowerDNS Recursor.
See https://doc.powerdns.com/md/recursor/dnssec/ for a detailed explanation.
'';
};
serveRFC1918 = mkOption {
type = types.bool;
default = true;
description = ''
Whether to directly resolve the RFC1918 reverse-mapping domains:
<literal>10.in-addr.arpa</literal>,
<literal>168.192.in-addr.arpa</literal>,
<literal>16-31.172.in-addr.arpa</literal>
This saves load on the AS112 servers.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra options to be appended to the configuration file.
'';
};
};
config = mkIf cfg.enable {
users.extraUsers."${username}" = {
home = dataDir;
createHome = true;
uid = config.ids.uids.pdns-recursor;
description = "PowerDNS Recursor daemon user";
};
systemd.services.pdns-recursor = {
unitConfig.Documentation = "man:pdns_recursor(1) man:rec_control(1)";
description = "PowerDNS recursive server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
User = username;
Restart ="on-failure";
RestartSec = "5";
PrivateTmp = true;
PrivateDevices = true;
AmbientCapabilities = "cap_net_bind_service";
ExecStart = ''${pkgs.pdns-recursor}/bin/pdns_recursor \
--config-dir=${dataDir} \
--socket-dir=${dataDir} \
--disable-syslog
'';
};
preStart = ''
# Link configuration file into recursor home directory
configPath=${dataDir}/recursor.conf
if [ "$(realpath $configPath)" != "${configFile}" ]; then
rm -f $configPath
ln -s ${configFile} $configPath
fi
'';
};
};
}
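A usage sketch of the new pdns-recursor module; the client range and forwarded zone are placeholders:

  services.pdns-recursor = {
    enable = true;
    dns.allowFrom = [ "192.168.0.0/16" ];
    forwardZones = { "example.lan" = "192.168.1.2:53"; };
    exportHosts = true;
  };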

View file

@ -275,7 +275,14 @@ in
];
security.permissionsWrappers.setuid = [
{ program = "fping";
source = "${e.enlightenment.out}/bin/fping";
source = "${pkgs.fping}/bin/fping";
owner = "root";
group = "root";
setuid = true;
}
{ program = "fping";
source = "${pkgs.fping}/bin/fping6";
owner = "root";
group = "root";
setuid = true;

View file

@ -81,6 +81,7 @@ in
users.extraUsers = singleton {
name = clamavUser;
uid = config.ids.uids.clamav;
group = clamavGroup;
description = "ClamAV daemon user";
home = stateDir;
};

View file

@ -6,7 +6,7 @@ with lib;
let
# Upgrading? We have a test! nix-build ./nixos/tests/wordpress.nix
version = "4.6.1";
version = "4.7.1";
fullversion = "${version}";
# Our bare-bones wp-config.php file using the above settings
@ -75,7 +75,7 @@ let
owner = "WordPress";
repo = "WordPress";
rev = "${fullversion}";
sha256 = "0n82xgjg1ry2p73hhgpslnkdzrma5n6hxxq76s7qskkzj0qjfvpn";
sha256 = "1wb4f4zn55d23qi0whsfpbpcd4sjvzswgmni6f5rzrmlawq9ssgr";
};
installPhase = ''
mkdir -p $out

View file

@ -39,6 +39,13 @@ in
type = types.path;
description = "The data directory, for storing certificates.";
};
package = mkOption {
default = pkgs.caddy;
defaultText = "pkgs.caddy";
type = types.package;
description = "Caddy package to use.";
};
};
config = mkIf cfg.enable {
@ -47,7 +54,7 @@ in
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = ''${pkgs.caddy.bin}/bin/caddy -conf=${configFile} \
ExecStart = ''${cfg.package.bin}/bin/caddy -conf=${configFile} \
-ca=${cfg.ca} -email=${cfg.email} ${optionalString cfg.agree "-agree"}
'';
Type = "simple";

View file

@ -5,7 +5,11 @@ with lib;
let
cfg = config.services.nginx;
virtualHosts = mapAttrs (vhostName: vhostConfig:
vhostConfig // (optionalAttrs vhostConfig.enableACME {
vhostConfig // {
serverName = if vhostConfig.serverName != null
then vhostConfig.serverName
else vhostName;
} // (optionalAttrs vhostConfig.enableACME {
sslCertificate = "/var/lib/acme/${vhostName}/fullchain.pem";
sslCertificateKey = "/var/lib/acme/${vhostName}/key.pem";
})
@ -112,8 +116,9 @@ let
${cfg.appendConfig}
'';
vhosts = concatStringsSep "\n" (mapAttrsToList (serverName: vhost:
vhosts = concatStringsSep "\n" (mapAttrsToList (vhostName: vhost:
let
serverName = vhost.serverName;
ssl = vhost.enableSSL || vhost.forceSSL;
port = if vhost.port != null then vhost.port else (if ssl then 443 else 80);
listenString = toString port + optionalString ssl " ssl http2"
@ -161,7 +166,7 @@ let
ssl_certificate_key ${vhost.sslCertificateKey};
''}
${optionalString (vhost.basicAuth != {}) (mkBasicAuth serverName vhost.basicAuth)}
${optionalString (vhost.basicAuth != {}) (mkBasicAuth vhostName vhost.basicAuth)}
${mkLocations vhost.locations}
@ -178,8 +183,8 @@ let
${config.extraConfig}
}
'') locations);
mkBasicAuth = serverName: authDef: let
htpasswdFile = pkgs.writeText "${serverName}.htpasswd" (
mkBasicAuth = vhostName: authDef: let
htpasswdFile = pkgs.writeText "${vhostName}.htpasswd" (
concatStringsSep "\n" (mapAttrsToList (user: password: ''
${user}:{PLAIN}${password}
'') authDef)
@ -393,17 +398,20 @@ in
};
security.acme.certs = filterAttrs (n: v: v != {}) (
mapAttrs (vhostName: vhostConfig:
optionalAttrs vhostConfig.enableACME {
user = cfg.user;
group = cfg.group;
webroot = vhostConfig.acmeRoot;
extraDomains = genAttrs vhostConfig.serverAliases (alias: null);
postRun = ''
systemctl reload nginx
'';
}
) virtualHosts
let
vhostsConfigs = mapAttrsToList (vhostName: vhostConfig: vhostConfig) virtualHosts;
acmeEnabledVhosts = filter (vhostConfig: vhostConfig.enableACME) vhostsConfigs;
acmePairs = map (vhostConfig: { name = vhostConfig.serverName; value = {
user = cfg.user;
group = cfg.group;
webroot = vhostConfig.acmeRoot;
extraDomains = genAttrs vhostConfig.serverAliases (alias: null);
postRun = ''
systemctl reload nginx
'';
}; }) acmeEnabledVhosts;
in
listToAttrs acmePairs
);
users.extraUsers = optionalAttrs (cfg.user == "nginx") (singleton
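A sketch of what the new serverName decoupling allows; the domain is a placeholder:

  services.nginx.virtualHosts.myhost = {
    serverName = "www.example.org";   # served name is now independent of the attribute name
    serverAliases = [ "example.org" ];
    enableACME = true;
    forceSSL = true;
  };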

View file

@ -8,6 +8,15 @@
with lib;
{
options = {
serverName = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Name of this virtual host. Defaults to the attribute name in virtualHosts.
'';
example = "example.org";
};
serverAliases = mkOption {
type = types.listOf types.str;
default = [];

View file

@ -228,6 +228,8 @@ in
# Enable helpful DBus services.
services.udisks2.enable = true;
services.upower.enable = config.powerManagement.enable;
services.dbus.packages =
mkIf config.services.printing.enable [ pkgs.system-config-printer ];
# Extra UDEV rules used by Solid
services.udev.packages = [
@ -246,6 +248,11 @@ in
security.pam.services.kde = { allowNullPassword = true; };
# use kimpanel as the default IBus panel
i18n.inputMethod.ibus.panel =
lib.mkDefault
"${pkgs.kde5.plasma-desktop}/lib/libexec/kimpanel-ibus-panel";
})
];

View file

@ -20,6 +20,7 @@ let
${optionalString (cfg.defaultUser != null) ("default_user " + cfg.defaultUser)}
${optionalString (cfg.defaultUser != null) ("focus_password yes")}
${optionalString cfg.autoLogin "auto_login yes"}
${optionalString (cfg.consoleCmd != null) "console_cmd ${cfg.consoleCmd}"}
${cfg.extraConfig}
'';
@ -105,6 +106,18 @@ in
'';
};
consoleCmd = mkOption {
type = types.nullOr types.str;
default = ''
${pkgs.xterm}/bin/xterm -C -fg white -bg black +sb -T "Console login" -e ${pkgs.shadow}/bin/login
'';
defaultText = ''
''${pkgs.xterm}/bin/xterm -C -fg white -bg black +sb -T "Console login" -e ''${pkgs.shadow}/bin/login
'';
description = ''
The command to run when "console" is given as the username.
'';
};
};
};

View file

@ -41,7 +41,7 @@ with lib;
{ description = "Terminal Server";
path =
[ pkgs.xorgserver.out pkgs.gawk pkgs.which pkgs.openssl pkgs.xorg.xauth
[ pkgs.xorg.xorgserver.out pkgs.gawk pkgs.which pkgs.openssl pkgs.xorg.xauth
pkgs.nettools pkgs.shadow pkgs.procps pkgs.utillinux pkgs.bash
];

View file

@ -28,6 +28,8 @@ def write_loader_conf(generation):
if "@timeout@" != "":
f.write("timeout @timeout@\n")
f.write("default nixos-generation-%d\n" % generation)
if not @editor@:
f.write("editor 0");
os.rename("@efiSysMountPoint@/loader/loader.conf.tmp", "@efiSysMountPoint@/loader/loader.conf")
def copy_from_profile(generation, name, dry_run=False):

View file

@ -20,6 +20,8 @@ let
timeout = if config.boot.loader.timeout != null then config.boot.loader.timeout else "";
editor = if cfg.editor then "True" else "False";
inherit (efi) efiSysMountPoint canTouchEfiVariables;
};
in {
@ -36,6 +38,20 @@ in {
description = "Whether to enable the systemd-boot (formerly gummiboot) EFI boot manager";
};
editor = mkOption {
default = true;
type = types.bool;
description = ''
Whether to allow editing the kernel command-line before
boot. It is recommended to set this to false, as it allows
gaining root access by passing init=/bin/sh as a kernel
parameter. However, it is enabled by default for backwards
compatibility.
'';
};
};
config = mkIf cfg.enable {
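A sketch of the hardening the new option enables (assuming the module lives at boot.loader.systemd-boot, as the surrounding cfg suggests):

  boot.loader.systemd-boot = {
    enable = true;
    editor = false;   # disallow editing the kernel command line at the boot menu
  };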

View file

@ -135,51 +135,59 @@ let self = {
"16.03".us-west-2.pv-ebs = "ami-5e61a23e";
"16.03".us-west-2.pv-s3 = "ami-734c8f13";
# 16.09.666.3738950
"16.09".ap-northeast-1.hvm-ebs = "ami-35578954";
"16.09".ap-northeast-1.hvm-s3 = "ami-d6528cb7";
"16.09".ap-northeast-1.pv-ebs = "ami-07548a66";
"16.09".ap-northeast-1.pv-s3 = "ami-f1548a90";
"16.09".ap-northeast-2.hvm-ebs = "ami-d48753ba";
"16.09".ap-northeast-2.hvm-s3 = "ami-4c865222";
"16.09".ap-northeast-2.pv-ebs = "ami-ca8551a4";
"16.09".ap-northeast-2.pv-s3 = "ami-9c8551f2";
"16.09".ap-south-1.hvm-ebs = "ami-922450fd";
"16.09".ap-south-1.hvm-s3 = "ami-6d3a4e02";
"16.09".ap-south-1.pv-ebs = "ami-4d394d22";
"16.09".ap-south-1.pv-s3 = "ami-17384c78";
"16.09".ap-southeast-1.hvm-ebs = "ami-f824809b";
"16.09".ap-southeast-1.hvm-s3 = "ami-f924809a";
"16.09".ap-southeast-1.pv-ebs = "ami-af2480cc";
"16.09".ap-southeast-1.pv-s3 = "ami-5826823b";
"16.09".ap-southeast-2.hvm-ebs = "ami-40fecd23";
"16.09".ap-southeast-2.hvm-s3 = "ami-48fecd2b";
"16.09".ap-southeast-2.pv-ebs = "ami-dffecdbc";
"16.09".ap-southeast-2.pv-s3 = "ami-e0fccf83";
"16.09".eu-central-1.hvm-ebs = "ami-1d8b7472";
"16.09".eu-central-1.hvm-s3 = "ami-1c8b7473";
"16.09".eu-central-1.pv-ebs = "ami-8c8d72e3";
"16.09".eu-central-1.pv-s3 = "ami-3488775b";
"16.09".eu-west-1.hvm-ebs = "ami-15662766";
"16.09".eu-west-1.hvm-s3 = "ami-476b2a34";
"16.09".eu-west-1.pv-ebs = "ami-876928f4";
"16.09".eu-west-1.pv-s3 = "ami-70682903";
"16.09".sa-east-1.hvm-ebs = "ami-27bc2e4b";
"16.09".sa-east-1.hvm-s3 = "ami-e4b92b88";
"16.09".sa-east-1.pv-ebs = "ami-4dbe2c21";
"16.09".sa-east-1.pv-s3 = "ami-77fc6e1b";
"16.09".us-east-1.hvm-ebs = "ami-93347684";
"16.09".us-east-1.hvm-s3 = "ami-5e347649";
"16.09".us-east-1.pv-ebs = "ami-b0387aa7";
"16.09".us-east-1.pv-s3 = "ami-51357746";
"16.09".us-west-1.hvm-ebs = "ami-06337a66";
"16.09".us-west-1.hvm-s3 = "ami-76307916";
"16.09".us-west-1.pv-ebs = "ami-fd327b9d";
"16.09".us-west-1.pv-s3 = "ami-cc347dac";
"16.09".us-west-2.hvm-ebs = "ami-49fe2729";
"16.09".us-west-2.hvm-s3 = "ami-93fc25f3";
"16.09".us-west-2.pv-ebs = "ami-14fe2774";
"16.09".us-west-2.pv-s3 = "ami-74f12814";
# 16.09.1508.3909827
"16.09".ap-northeast-1.hvm-ebs = "ami-68453b0f";
"16.09".ap-northeast-1.hvm-s3 = "ami-f9bec09e";
"16.09".ap-northeast-1.pv-ebs = "ami-254a3442";
"16.09".ap-northeast-1.pv-s3 = "ami-ef473988";
"16.09".ap-northeast-2.hvm-ebs = "ami-18ae7f76";
"16.09".ap-northeast-2.hvm-s3 = "ami-9eac7df0";
"16.09".ap-northeast-2.pv-ebs = "ami-57aa7b39";
"16.09".ap-northeast-2.pv-s3 = "ami-5cae7f32";
"16.09".ap-south-1.hvm-ebs = "ami-b3f98fdc";
"16.09".ap-south-1.hvm-s3 = "ami-98e690f7";
"16.09".ap-south-1.pv-ebs = "ami-aef98fc1";
"16.09".ap-south-1.pv-s3 = "ami-caf88ea5";
"16.09".ap-southeast-1.hvm-ebs = "ami-80fb51e3";
"16.09".ap-southeast-1.hvm-s3 = "ami-2df3594e";
"16.09".ap-southeast-1.pv-ebs = "ami-37f05a54";
"16.09".ap-southeast-1.pv-s3 = "ami-27f35944";
"16.09".ap-southeast-2.hvm-ebs = "ami-57ece834";
"16.09".ap-southeast-2.hvm-s3 = "ami-87f4f0e4";
"16.09".ap-southeast-2.pv-ebs = "ami-d8ede9bb";
"16.09".ap-southeast-2.pv-s3 = "ami-a6ebefc5";
"16.09".eu-central-1.hvm-ebs = "ami-1b884774";
"16.09".eu-central-1.hvm-s3 = "ami-b08c43df";
"16.09".eu-central-1.pv-ebs = "ami-888946e7";
"16.09".eu-central-1.pv-s3 = "ami-06874869";
"16.09".eu-west-1.hvm-ebs = "ami-1ed3e76d";
"16.09".eu-west-1.hvm-s3 = "ami-73d1e500";
"16.09".eu-west-1.pv-ebs = "ami-44c0f437";
"16.09".eu-west-1.pv-s3 = "ami-f3d8ec80";
"16.09".eu-west-2.hvm-ebs = "ami-2c9c9648";
"16.09".eu-west-2.hvm-s3 = "ami-6b9e940f";
"16.09".eu-west-2.pv-ebs = "ami-f1999395";
"16.09".eu-west-2.pv-s3 = "ami-bb9f95df";
"16.09".sa-east-1.hvm-ebs = "ami-a11882cd";
"16.09".sa-east-1.hvm-s3 = "ami-7726bc1b";
"16.09".sa-east-1.pv-ebs = "ami-9725bffb";
"16.09".sa-east-1.pv-s3 = "ami-b027bddc";
"16.09".us-east-1.hvm-ebs = "ami-854ca593";
"16.09".us-east-1.hvm-s3 = "ami-2241a834";
"16.09".us-east-1.pv-ebs = "ami-a441a8b2";
"16.09".us-east-1.pv-s3 = "ami-e841a8fe";
"16.09".us-east-2.hvm-ebs = "ami-3f41645a";
"16.09".us-east-2.hvm-s3 = "ami-804065e5";
"16.09".us-east-2.pv-ebs = "ami-f1466394";
"16.09".us-east-2.pv-s3 = "ami-05426760";
"16.09".us-west-1.hvm-ebs = "ami-c2efbca2";
"16.09".us-west-1.hvm-s3 = "ami-d71042b7";
"16.09".us-west-1.pv-ebs = "ami-04e8bb64";
"16.09".us-west-1.pv-s3 = "ami-31e9ba51";
"16.09".us-west-2.hvm-ebs = "ami-6449f504";
"16.09".us-west-2.hvm-s3 = "ami-344af654";
"16.09".us-west-2.pv-ebs = "ami-6d4af60d";
"16.09".us-west-2.pv-s3 = "ami-de48f4be";
latest = self."16.09";
}; in self


@@ -273,6 +273,7 @@ in rec {
tests.mysql = callTest tests/mysql.nix {};
tests.mysqlReplication = callTest tests/mysql-replication.nix {};
tests.nat.firewall = callTest tests/nat.nix { withFirewall = true; };
tests.nat.firewall-conntrack = callTest tests/nat.nix { withFirewall = true; withConntrackHelpers = true; };
tests.nat.standalone = callTest tests/nat.nix { withFirewall = false; };
tests.networking.networkd = callSubTests tests/networking.nix { networkd = true; };
tests.networking.scripted = callSubTests tests/networking.nix { networkd = false; };


@@ -11,7 +11,7 @@ import ./make-test.nix ({ pkgs, ... }:
let
# Some random file to serve.
file = pkgs.nixUnstable.src;
file = pkgs.hello.src;
miniupnpdConf = nodes: pkgs.writeText "miniupnpd.conf"
''


@@ -115,8 +115,8 @@ let
# Did the swap device get activated?
# uncomment once https://bugs.freedesktop.org/show_bug.cgi?id=86930 is resolved
#$machine->waitForUnit("swap.target");
$machine->waitUntilSucceeds("cat /proc/swaps | grep -q /dev");
$machine->waitForUnit("swap.target");
$machine->succeed("cat /proc/swaps | grep -q /dev");
# Check whether the channel works.
$machine->succeed("nix-env -iA nixos.procps >&2");


@@ -59,6 +59,7 @@ in {
virtualisation.diskSize = 2048;
programs.bash.enableCompletion = true;
environment.systemPackages = with pkgs; [ netcat bind ];
services.kubernetes.roles = ["master" "node"];
virtualisation.docker.extraOptions = "--iptables=false --ip-masq=false -b cbr0";


@@ -1,32 +1,91 @@
import ./make-test.nix ({ pkgs, ...} : {
name = "simple";
import ./make-test.nix ({ pkgs, ...} : rec {
name = "mesos";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ offline ];
maintainers = [ offline kamilchm cstrahan ];
};
machine = { config, pkgs, ... }: {
services.zookeeper.enable = true;
virtualisation.docker.enable = true;
services.mesos = {
slave = {
enable = true;
master = "zk://localhost:2181/mesos";
attributes = {
tag1 = "foo";
tag2 = "bar";
};
};
master = {
enable = true;
zk = "zk://localhost:2181/mesos";
nodes = {
master = { config, pkgs, ... }: {
networking.firewall.enable = false;
services.zookeeper.enable = true;
services.mesos.master = {
enable = true;
zk = "zk://master:2181/mesos";
};
};
slave = { config, pkgs, ... }: {
networking.firewall.enable = false;
networking.nat.enable = true;
virtualisation.docker.enable = true;
services.mesos = {
slave = {
enable = true;
master = "master:5050";
dockerRegistry = registry;
executorEnvironmentVariables = {
PATH = "/run/current-system/sw/bin";
};
};
};
};
};
simpleDocker = pkgs.dockerTools.buildImage {
name = "echo";
contents = [ pkgs.stdenv.shellPackage pkgs.coreutils ];
config = {
Env = [
# When shell=true, mesos invokes "sh -c '<cmd>'", so make sure "sh" is
# on the PATH.
"PATH=${pkgs.stdenv.shellPackage}/bin:${pkgs.coreutils}/bin"
];
Entrypoint = [ "echo" ];
};
};
registry = pkgs.runCommand "registry" { } ''
mkdir -p $out
cp ${simpleDocker} $out/echo:latest.tar
'';
testFramework = pkgs.pythonPackages.buildPythonPackage {
name = "mesos-tests";
propagatedBuildInputs = [ pkgs.mesos ];
catchConflicts = false;
src = ./mesos_test.py;
phases = [ "installPhase" "fixupPhase" ];
installPhase = ''
mkdir $out
cp $src $out/mesos_test.py
chmod +x $out/mesos_test.py
echo "done" > test.result
tar czf $out/test.tar.gz test.result
'';
};
testScript =
''
startAll;
$machine->waitForUnit("mesos-master.service");
$machine->waitForUnit("mesos-slave.service");
$master->waitForUnit("mesos-master.service");
$slave->waitForUnit("mesos-slave.service");
$master->waitForOpenPort(5050);
$slave->waitForOpenPort(5051);
# is slave registered?
$master->waitUntilSucceeds("curl -s --fail http://master:5050/master/slaves".
" | grep -q \"\\\"hostname\\\":\\\"slave\\\"\"");
# try to run docker image
$master->succeed("${pkgs.mesos}/bin/mesos-execute --master=master:5050".
" --resources=\"cpus:0.1;mem:32\" --name=simple-docker".
" --containerizer=mesos --docker_image=echo:latest".
" --shell=true --command=\"echo done\" | grep -q TASK_FINISHED");
# simple command with .tar.gz uri
$master->succeed("${testFramework}/mesos_test.py master ".
"${testFramework}/test.tar.gz");
'';
})

72
nixos/tests/mesos_test.py Normal file

@@ -0,0 +1,72 @@
#!/usr/bin/env python
import uuid
import time
import subprocess
import os
import sys
from mesos.interface import Scheduler
from mesos.native import MesosSchedulerDriver
from mesos.interface import mesos_pb2
def log(msg):
process = subprocess.Popen("systemd-cat", stdin=subprocess.PIPE)
(out,err) = process.communicate(msg)
class NixosTestScheduler(Scheduler):
def __init__(self):
self.master_ip = sys.argv[1]
self.download_uri = sys.argv[2]
def resourceOffers(self, driver, offers):
log("XXX got resource offer")
offer = offers[0]
task = self.new_task(offer)
uri = task.command.uris.add()
uri.value = self.download_uri
task.command.value = "cat test.result"
driver.launchTasks(offer.id, [task])
def statusUpdate(self, driver, update):
log("XXX status update")
if update.state == mesos_pb2.TASK_FAILED:
log("XXX test task failed with message: " + update.message)
driver.stop()
sys.exit(1)
elif update.state == mesos_pb2.TASK_FINISHED:
driver.stop()
sys.exit(0)
def new_task(self, offer):
task = mesos_pb2.TaskInfo()
id = uuid.uuid4()
task.task_id.value = str(id)
task.slave_id.value = offer.slave_id.value
task.name = "task {}".format(str(id))
cpus = task.resources.add()
cpus.name = "cpus"
cpus.type = mesos_pb2.Value.SCALAR
cpus.scalar.value = 0.1
mem = task.resources.add()
mem.name = "mem"
mem.type = mesos_pb2.Value.SCALAR
mem.scalar.value = 32
return task
if __name__ == '__main__':
log("XXX framework started")
framework = mesos_pb2.FrameworkInfo()
framework.user = "root"
framework.name = "nixos-test-framework"
driver = MesosSchedulerDriver(
NixosTestScheduler(),
framework,
sys.argv[1] + ":5050"
)
driver.run()


@@ -3,34 +3,47 @@
# client on the inside network, a server on the outside network, and a
# router connected to both that performs Network Address Translation
# for the client.
import ./make-test.nix ({ pkgs, withFirewall, ... }:
import ./make-test.nix ({ pkgs, lib, withFirewall, withConntrackHelpers ? false, ... }:
let
unit = if withFirewall then "firewall" else "nat";
in
{
name = "nat${if withFirewall then "WithFirewall" else "Standalone"}";
meta = with pkgs.stdenv.lib.maintainers; {
name = "nat" + (if withFirewall then "WithFirewall" else "Standalone")
+ (lib.optionalString withConntrackHelpers "withConntrackHelpers");
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ eelco chaoflow rob wkennington ];
};
nodes =
{ client =
{ config, pkgs, nodes, ... }:
{ virtualisation.vlans = [ 1 ];
networking.firewall.allowPing = true;
networking.defaultGateway =
(pkgs.lib.head nodes.router.config.networking.interfaces.eth2.ip4).address;
};
lib.mkMerge [
{ virtualisation.vlans = [ 1 ];
networking.firewall.allowPing = true;
networking.defaultGateway =
(pkgs.lib.head nodes.router.config.networking.interfaces.eth2.ip4).address;
}
(lib.optionalAttrs withConntrackHelpers {
networking.firewall.connectionTrackingModules = [ "ftp" ];
networking.firewall.autoLoadConntrackHelpers = true;
})
];
router =
{ config, pkgs, ... }:
{ virtualisation.vlans = [ 2 1 ];
networking.firewall.enable = withFirewall;
networking.firewall.allowPing = true;
networking.nat.enable = true;
networking.nat.internalIPs = [ "192.168.1.0/24" ];
networking.nat.externalInterface = "eth1";
};
lib.mkMerge [
{ virtualisation.vlans = [ 2 1 ];
networking.firewall.enable = withFirewall;
networking.firewall.allowPing = true;
networking.nat.enable = true;
networking.nat.internalIPs = [ "192.168.1.0/24" ];
networking.nat.externalInterface = "eth1";
}
(lib.optionalAttrs withConntrackHelpers {
networking.firewall.connectionTrackingModules = [ "ftp" ];
networking.firewall.autoLoadConntrackHelpers = true;
})
];
server =
{ config, pkgs, ... }:
@@ -66,7 +79,8 @@ import ./make-test.nix ({ pkgs, withFirewall, ... }:
$client->succeed("curl -v ftp://server/foo.txt >&2");
# Test whether active FTP works.
$client->succeed("curl -v -P - ftp://server/foo.txt >&2");
$client->${if withConntrackHelpers then "succeed" else "fail"}(
"curl -v -P - ftp://server/foo.txt >&2");
# Test ICMP.
$client->succeed("ping -c 1 router >&2");


@@ -10,29 +10,61 @@ let
vlanIfs = range 1 (length config.virtualisation.vlans);
in {
virtualisation.vlans = [ 1 2 3 ];
boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = true;
networking = {
useDHCP = false;
useNetworkd = networkd;
firewall.allowPing = true;
firewall.checkReversePath = true;
firewall.allowedUDPPorts = [ 547 ];
interfaces = mkOverride 0 (listToAttrs (flip map vlanIfs (n:
nameValuePair "eth${toString n}" {
ipAddress = "192.168.${toString n}.1";
prefixLength = 24;
ipv6Address = "fd00:1234:5678:${toString n}::1";
ipv6PrefixLength = 64;
})));
};
services.dhcpd = {
services.dhcpd4 = {
enable = true;
interfaces = map (n: "eth${toString n}") vlanIfs;
extraConfig = ''
option subnet-mask 255.255.255.0;
authoritative;
'' + flip concatMapStrings vlanIfs (n: ''
subnet 192.168.${toString n}.0 netmask 255.255.255.0 {
option broadcast-address 192.168.${toString n}.255;
option routers 192.168.${toString n}.1;
# XXX: technically it's _not guaranteed_ that IP addresses will be
# issued from the first item in range onwards! We assume that in
# our tests however.
range 192.168.${toString n}.2 192.168.${toString n}.254;
}
'');
};
services.radvd = {
enable = true;
config = flip concatMapStrings vlanIfs (n: ''
interface eth${toString n} {
AdvSendAdvert on;
AdvManagedFlag on;
AdvOtherConfigFlag on;
prefix fd00:1234:5678:${toString n}::/64 {
AdvAutonomous off;
};
};
'');
};
services.dhcpd6 = {
enable = true;
interfaces = map (n: "eth${toString n}") vlanIfs;
extraConfig = ''
authoritative;
'' + flip concatMapStrings vlanIfs (n: ''
subnet6 fd00:1234:5678:${toString n}::/64 {
range6 fd00:1234:5678:${toString n}::2 fd00:1234:5678:${toString n}::2;
}
'');
};
};
testCases = {
@@ -108,8 +140,14 @@ let
useNetworkd = networkd;
firewall.allowPing = true;
useDHCP = true;
interfaces.eth1.ip4 = mkOverride 0 [ ];
interfaces.eth2.ip4 = mkOverride 0 [ ];
interfaces.eth1 = {
ip4 = mkOverride 0 [ ];
ip6 = mkOverride 0 [ ];
};
interfaces.eth2 = {
ip4 = mkOverride 0 [ ];
ip6 = mkOverride 0 [ ];
};
};
};
testScript = { nodes, ... }:
@@ -121,21 +159,31 @@ let
# Wait until we have an ip address on each interface
$client->waitUntilSucceeds("ip addr show dev eth1 | grep -q '192.168.1'");
$client->waitUntilSucceeds("ip addr show dev eth1 | grep -q 'fd00:1234:5678:1:'");
$client->waitUntilSucceeds("ip addr show dev eth2 | grep -q '192.168.2'");
$client->waitUntilSucceeds("ip addr show dev eth2 | grep -q 'fd00:1234:5678:2:'");
# Test vlan 1
$client->waitUntilSucceeds("ping -c 1 192.168.1.1");
$client->waitUntilSucceeds("ping -c 1 192.168.1.2");
$client->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:1::1");
$client->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:1::2");
$router->waitUntilSucceeds("ping -c 1 192.168.1.1");
$router->waitUntilSucceeds("ping -c 1 192.168.1.2");
$router->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:1::1");
$router->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:1::2");
# Test vlan 2
$client->waitUntilSucceeds("ping -c 1 192.168.2.1");
$client->waitUntilSucceeds("ping -c 1 192.168.2.2");
$client->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:2::1");
$client->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:2::2");
$router->waitUntilSucceeds("ping -c 1 192.168.2.1");
$router->waitUntilSucceeds("ping -c 1 192.168.2.2");
$router->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:2::1");
$router->waitUntilSucceeds("ping6 -c 1 fd00:1234:5678:2::2");
'';
};
dhcpOneIf = {


@@ -39,7 +39,7 @@ in stdenv.mkDerivation {
store historical records of the ledger and participate in consensus.
'';
homepage = https://www.stellar.org/;
platforms = platforms.linux;
platforms = [ "x86_64-linux" ];
maintainers = with maintainers; [ chris-martin ];
license = licenses.asl20;
};


@@ -1,94 +0,0 @@
{ stdenv, fetchgit, alsaLib, aubio, boost, cairomm, curl, doxygen, dbus, fftw
, fftwSinglePrec, flac, glibc, glibmm, graphviz, gtkmm2, libjack2
, libgnomecanvas, libgnomecanvasmm, liblo, libmad, libogg, librdf
, librdf_raptor, librdf_rasqal, libsamplerate, libsigcxx, libsndfile
, libusb, libuuid, libxml2, libxslt, lilv, lv2, makeWrapper, pango
, perl, pkgconfig, python2, rubberband, serd, sord, sratom, suil, taglib, vampSDK }:
let
# Ardour git repo uses a mix of annotated and lightweight tags. Annotated
# tags are used for MAJOR.MINOR versioning, and lightweight tags are used
# in-between; MAJOR.MINOR.REV where REV is the number of commits since the
# last annotated tag. A slightly different version string format is needed
# for the 'revision' info that is built into the binary; it is the format of
# "git describe" when _not_ on an annotated tag(!): MAJOR.MINOR-REV-HASH.
# Version to build.
#tag = "3.5.403";
# Version info that is built into the binary. Keep in sync with 'tag'. The
# last 8 digits is a (fake) commit id.
revision = "3.5-4539-g7024232";
# temporarily use a non tagged version, because 3.5.403 has a bug that
# causes loss of audio-files, and it was decided that there won't be a
# hotfix release, and we should use 4.0 when it comes out.
# more info: http://comments.gmane.org/gmane.comp.audio.ardour.user/13665
version = "2015-02-20";
in
stdenv.mkDerivation rec {
name = "ardour3-git-${version}";
src = fetchgit {
url = git://git.ardour.org/ardour/ardour.git;
rev = "7024232855d268633760674d34c096ce447b7240";
sha256 = "0pnnx22asizin5rvf352nfv6003zarw3jd64magp10310wrfiwbq";
};
buildInputs =
[ alsaLib aubio boost cairomm curl doxygen dbus fftw fftwSinglePrec flac glibc
glibmm graphviz gtkmm2 libjack2 libgnomecanvas libgnomecanvasmm liblo
libmad libogg librdf librdf_raptor librdf_rasqal libsamplerate
libsigcxx libsndfile libusb libuuid libxml2 libxslt lilv lv2
makeWrapper pango perl pkgconfig python2 rubberband serd sord sratom suil taglib vampSDK
];
patchPhase = ''
printf '#include "libs/ardour/ardour/revision.h"\nnamespace ARDOUR { const char* revision = \"${revision}\"; }\n' > libs/ardour/revision.cc
sed 's|/usr/include/libintl.h|${glibc.dev}/include/libintl.h|' -i wscript
patchShebangs ./tools/
'';
configurePhase = "${python2.interpreter} waf configure --optimize --docs --with-backends=jack,alsa --prefix=$out";
buildPhase = "${python2.interpreter} waf";
installPhase = ''
${python2.interpreter} waf install
# Install desktop file
mkdir -p "$out/share/applications"
cat > "$out/share/applications/ardour.desktop" << EOF
[Desktop Entry]
Name=Ardour 3
GenericName=Digital Audio Workstation
Comment=Multitrack harddisk recorder
Exec=$out/bin/ardour3
Icon=$out/share/ardour3/icons/ardour_icon_256px.png
Terminal=false
Type=Application
X-MultipleArgs=false
Categories=GTK;Audio;AudioVideoEditing;AudioVideo;Video;
EOF
'';
meta = with stdenv.lib; {
description = "Multi-track hard disk recording software";
longDescription = ''
Ardour is a digital audio workstation (DAW), You can use it to
record, edit and mix multi-track audio and midi. Produce your
own CDs. Mix video soundtracks. Experiment with new ideas about
music and sound.
Please consider supporting the ardour project financially:
https://community.ardour.org/node/8288
'';
homepage = http://ardour.org/;
license = licenses.gpl2;
platforms = platforms.linux;
maintainers = [ maintainers.goibhniu ];
};
}


@@ -1,86 +0,0 @@
{ stdenv, fetchFromGitHub, alsaLib, aubio, boost, cairomm, curl, doxygen, dbus, fftw
, fftwSinglePrec, flac, glibc, glibmm, graphviz, gtkmm2, libjack2
, libgnomecanvas, libgnomecanvasmm, liblo, libmad, libogg, librdf
, librdf_raptor, librdf_rasqal, libsamplerate, libsigcxx, libsndfile
, libusb, libuuid, libxml2, libxslt, lilv, lv2, makeWrapper, pango
, perl, pkgconfig, python2, rubberband, serd, sord, sratom, suil, taglib, vampSDK }:
let
# Ardour git repo uses a mix of annotated and lightweight tags. Annotated
# tags are used for MAJOR.MINOR versioning, and lightweight tags are used
# in-between; MAJOR.MINOR.REV where REV is the number of commits since the
# last annotated tag. A slightly different version string format is needed
# for the 'revision' info that is built into the binary; it is the format of
# "git describe" when _not_ on an annotated tag(!): MAJOR.MINOR-REV-HASH.
# Version to build.
tag = "4.7";
in
stdenv.mkDerivation rec {
name = "ardour-${tag}";
src = fetchFromGitHub {
owner = "Ardour";
repo = "ardour";
rev = "d84a8222f2b6dab5028b2586f798535a8766670e";
sha256 = "149gswphz77m3pkzsn2nqbm6yvcfa3fva560bcvjzlgb73f64q5l";
};
buildInputs =
[ alsaLib aubio boost cairomm curl doxygen dbus fftw fftwSinglePrec flac glibc
glibmm graphviz gtkmm2 libjack2 libgnomecanvas libgnomecanvasmm liblo
libmad libogg librdf librdf_raptor librdf_rasqal libsamplerate
libsigcxx libsndfile libusb libuuid libxml2 libxslt lilv lv2
makeWrapper pango perl pkgconfig python2 rubberband serd sord sratom suil taglib vampSDK
];
# ardour's wscript has a "tarball" target but that required the git revision
# be available. Since this is an unzipped tarball fetched from github we
# have to do that ourself.
patchPhase = ''
printf '#include "libs/ardour/ardour/revision.h"\nnamespace ARDOUR { const char* revision = \"${tag}-${builtins.substring 0 8 src.rev}\"; }\n' > libs/ardour/revision.cc
sed 's|/usr/include/libintl.h|${glibc.dev}/include/libintl.h|' -i wscript
patchShebangs ./tools/
'';
configurePhase = "${python2.interpreter} waf configure --optimize --docs --with-backends=jack,alsa --prefix=$out";
buildPhase = "${python2.interpreter} waf";
installPhase = ''
${python2.interpreter} waf install
# Install desktop file
mkdir -p "$out/share/applications"
cat > "$out/share/applications/ardour.desktop" << EOF
[Desktop Entry]
Name=Ardour 4
GenericName=Digital Audio Workstation
Comment=Multitrack harddisk recorder
Exec=$out/bin/ardour4
Icon=$out/share/ardour4/icons/ardour_icon_256px.png
Terminal=false
Type=Application
X-MultipleArgs=false
Categories=GTK;Audio;AudioVideoEditing;AudioVideo;Video;
EOF
'';
meta = with stdenv.lib; {
description = "Multi-track hard disk recording software";
longDescription = ''
Ardour is a digital audio workstation (DAW), You can use it to
record, edit and mix multi-track audio and midi. Produce your
own CDs. Mix video soundtracks. Experiment with new ideas about
music and sound.
Please consider supporting the ardour project financially:
https://community.ardour.org/node/8288
'';
homepage = http://ardour.org/;
license = licenses.gpl2;
platforms = platforms.linux;
maintainers = [ maintainers.goibhniu maintainers.fps ];
};
}


@@ -16,7 +16,7 @@ let
# "git describe" when _not_ on an annotated tag(!): MAJOR.MINOR-REV-HASH.
# Version to build.
tag = "5.4";
tag = "5.5";
in


@@ -2,7 +2,7 @@
utillinux, pythonPackages, libnotify }:
stdenv.mkDerivation {
name = "clerk-unstable-2016-10-14";
name = "clerk-2016-10-14";
src = fetchFromGitHub {
owner = "carnager";


@@ -1,24 +1,20 @@
{stdenv, fetchurl, SDL, SDL_gfx, SDL_image, tremor, flac, mpg123, libmikmod
, speex
, keymap ? "newdefault"
, speex, ncurses
, keymap ? "default"
, conf ? "unknown"
}:
stdenv.mkDerivation rec {
name = "gmu-0.7.2";
name = "gmu-0.10.1";
src = fetchurl {
url = http://wejp.k.vu/files/gmu-0.7.2.tar.gz;
sha256 = "0gvhwhhlj64lc425wqch4g6v59ldd5i3rxll3zdcrdgk2vkh8nys";
url = "http://wejp.k.vu/files/${name}.tar.gz";
sha256 = "03x0mc0xw2if0bpf0a15yprcyx1xccki039zvl2099dagwk6xskv";
};
buildInputs = [ SDL SDL_gfx SDL_image tremor flac mpg123 libmikmod speex ];
buildInputs = [ SDL SDL_gfx SDL_image tremor flac mpg123 libmikmod speex ncurses ];
NIX_LDFLAGS = "-lgcc_s";
preBuild = ''
makeFlags="$makeFlags PREFIX=$out"
'';
makeFlags = [ "PREFIX=$(out)" ];
postInstall = ''
cp ${keymap}.keymap $out/share/gmu/default.keymap


@@ -1,9 +1,9 @@
{ stdenv, fetchurl, pythonPackages, mygpoclient, intltool
{ stdenv, fetchurl, python2Packages, mygpoclient, intltool
, ipodSupport ? true, libgpod
, gnome3
}:
pythonPackages.buildPythonApplication rec {
python2Packages.buildPythonApplication rec {
name = "gpodder-${version}";
version = "3.9.1";
@@ -24,12 +24,12 @@ pythonPackages.buildPythonApplication rec {
'';
buildInputs = [
intltool pythonPackages.coverage pythonPackages.minimock
intltool python2Packages.coverage python2Packages.minimock
gnome3.gnome_themes_standard gnome3.defaultIconTheme
gnome3.gsettings_desktop_schemas
];
propagatedBuildInputs = with pythonPackages; [
propagatedBuildInputs = with python2Packages; [
feedparser dbus-python mygpoclient pygtk eyeD3
] ++ stdenv.lib.optional ipodSupport libgpod;


@@ -1,7 +1,7 @@
{ stdenv, fetchurl, makeWrapper, pkgconfig, MMA, libjack2, libsmf, pythonPackages }:
{ stdenv, fetchurl, makeWrapper, pkgconfig, MMA, libjack2, libsmf, python2Packages }:
let
inherit (pythonPackages) pyGtkGlade pygtksourceview python;
inherit (python2Packages) pyGtkGlade pygtksourceview python;
in stdenv.mkDerivation rec {
version = "12.02.1";
name = "linuxband-${version}";


@@ -12,7 +12,7 @@ let
inherit (python2Packages) buildPythonApplication python mutagen pygtk pygobject2 dbus-python;
in buildPythonApplication {
# call the package quodlibet and just quodlibet
name = "quodlibet${stdenv.lib.optionalString withGstPlugins "-with-gst-plugins"}-${version}";
name = "quodlibet${stdenv.lib.optionalString (!withGstPlugins) "-without-gst-plugins"}-${version}";
# XXX, tests fail
doCheck = false;


@@ -6,7 +6,7 @@ assert stdenv.system == "x86_64-linux";
let
# Please update the stable branch!
version = "1.0.45.186.g3b5036d6-95";
version = "1.0.47.13.gd8e05b1f-47";
deps = [
alsaLib
@@ -51,7 +51,7 @@ stdenv.mkDerivation {
src =
fetchurl {
url = "http://repository-origin.spotify.com/pool/non-free/s/spotify-client/spotify-client_${version}_amd64.deb";
sha256 = "0fpvz1mzyva1sypg4gjmrv0clckb0c3xwjfcxnb8gvkxx9vm56p1";
sha256 = "0079vq2nw07795jyqrjv68sc0vqjy6abjh6jjd5cg3hqlxdf4ckz";
};
buildInputs = [ dpkg makeWrapper ];


@@ -2,11 +2,11 @@
stdenv.mkDerivation rec {
name = "atom-${version}";
version = "1.12.9";
version = "1.13.0";
src = fetchurl {
url = "https://github.com/atom/atom/releases/download/v${version}/atom-amd64.deb";
sha256 = "1yp4wwv0vxsad7jqkn2rj4n7k2ccgqscs89p3j6z8vpm6as0i6sg";
sha256 = "17k4v5hibaq4zi86y1sjx09hqng4sm3lr024v2mjnhj65m2nhjb8";
name = "${name}.deb";
};


@@ -1,16 +1,17 @@
{ stdenv, fetchurl, intltool, pkgconfig , gtk, libxml2
, enchant, gucharmap, python
{ stdenv, fetchurl, intltool, wrapGAppsHook, pkgconfig , gtk, libxml2
, enchant, gucharmap, python, gnome3
}:
stdenv.mkDerivation rec {
name = "bluefish-2.2.7";
name = "bluefish-2.2.9";
src = fetchurl {
url = "mirror://sourceforge/bluefish/${name}.tar.bz2";
sha256 = "1psqx3ljz13ylqs4zkaxv9lv1hgzld6904kdp0alwx99p5rlnlr3";
sha256 = "1l7pg6h485yj84i34jr09y8qzc1yr4ih6w5jdhmnrg156db7nwav";
};
buildInputs = [ intltool pkgconfig gtk libxml2
nativeBuildInputs = [ intltool pkgconfig wrapGAppsHook ];
buildInputs = [ gnome3.defaultIconTheme gtk libxml2
enchant gucharmap python ];
meta = with stdenv.lib; {


@@ -1,7 +1,8 @@
{ fetchurl, stdenv }:
stdenv.mkDerivation rec {
name = "ed-1.13";
name = "ed-${version}";
version = "1.14.1";
src = fetchurl {
# gnu only provides *.lz tarball, which is unfriendly for stdenv bootstrapping
@@ -9,13 +10,13 @@ stdenv.mkDerivation rec {
# When updating, please make sure the sources pulled match those upstream by
# Unpacking both tarballs and running `find . -type f -exec sha256sum \{\} \; | sha256sum`
# in the resulting directory
urls = let file_md5 = "fb8ffc8d8072e13dd5799131e889bfa5"; # for fedora mirror
urls = let file_sha512 = "84396fe4e4f0bf0b591037277ff8679a08b2883207628aaa387644ad83ca5fbdaa74a581f33310e28222d2fea32a0b8ba37e579597cc7d6145df6eb956ea75db";
in [
("http://pkgs.fedoraproject.org/repo/extras/ed"
+ "/${name}.tar.bz2/${file_md5}/${name}.tar.bz2")
+ "/${name}.tar.bz2/sha512/${file_sha512}/${name}.tar.bz2")
"http://fossies.org/linux/privat/${name}.tar.bz2"
];
sha256 = "1iym2fsamxr886l3sz8lqzgf00bip5cr0aly8jp04f89kf5mvl0j";
sha256 = "1pk6qa4sr7qc6vgm34hjx44hsh8x2bwaxhdi78jhsacnn4zwi7bw";
};
/* FIXME: Tests currently fail on Darwin:


@@ -175,10 +175,10 @@
}) {};
auctex = callPackage ({ elpaBuild, fetchurl, lib }: elpaBuild {
pname = "auctex";
version = "11.89.8";
version = "11.90.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/auctex-11.89.8.tar";
sha256 = "0rilldzb7sm7k22vfifdsnxz1an94jnn1bn8gfmqkac4g9cskl46";
url = "https://elpa.gnu.org/packages/auctex-11.90.0.tar";
sha256 = "04nsndwcf0dimgc2p1yzzrymc36amzdnjg0158nxplmjkzdp28gy";
};
packageRequires = [];
meta = {
@@ -295,10 +295,10 @@
}) {};
cl-lib = callPackage ({ elpaBuild, fetchurl, lib }: elpaBuild {
pname = "cl-lib";
version = "0.5";
version = "0.6.1";
src = fetchurl {
url = "https://elpa.gnu.org/packages/cl-lib-0.5.el";
sha256 = "1z4ffcx7b95bxz52586lhvdrdm5vp473g3afky9h5my3jp5cd994";
url = "https://elpa.gnu.org/packages/cl-lib-0.6.1.el";
sha256 = "00w7bw6wkig13pngijh7ns45s1jn5kkbbjaqznsdh6jk5x089j9y";
};
packageRequires = [];
meta = {
@@ -306,6 +306,19 @@
license = lib.licenses.free;
};
}) {};
cobol-mode = callPackage ({ elpaBuild, fetchurl, lib }: elpaBuild {
pname = "cobol-mode";
version = "1.0.0";
src = fetchurl {
url = "https://elpa.gnu.org/packages/cobol-mode-1.0.0.el";
sha256 = "1zmcfpl7v787yacc7gxm8mkp53fmrznp5mnad628phf3vj4kwnxi";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/cobol-mode.html";
license = lib.licenses.free;
};
}) {};
coffee-mode = callPackage ({ elpaBuild, fetchurl, lib }: elpaBuild {
pname = "coffee-mode";
version = "0.4.1.1";
@@ -809,10 +822,10 @@
gnugo = callPackage ({ ascii-art-to-unicode, cl-lib ? null, elpaBuild, fetchurl, lib, xpm }:
elpaBuild {
pname = "gnugo";
version = "3.0.0";
version = "3.0.1";
src = fetchurl {
url = "https://elpa.gnu.org/packages/gnugo-3.0.0.tar";
sha256 = "0b94kbqxir023wkmqn9kpjjj2v0gcz856mqipz30gxjbjj42w27x";
url = "https://elpa.gnu.org/packages/gnugo-3.0.1.tar";
sha256 = "08z2hg9mvsxdznq027cmwhkb5i7n7s9r2kvd4jha9xskrcnzj3pp";
};
packageRequires = [ ascii-art-to-unicode cl-lib xpm ];
meta = {
@@ -956,10 +969,10 @@
js2-mode = callPackage ({ cl-lib ? null, elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "js2-mode";
version = "20160623";
version = "20170116";
src = fetchurl {
url = "https://elpa.gnu.org/packages/js2-mode-20160623.tar";
sha256 = "057djy6amda8kyprkb3v733d21nlmq5fgfazi65fywlfwyq1adxs";
url = "https://elpa.gnu.org/packages/js2-mode-20170116.tar";
sha256 = "1z4k7710yz1fbm2w8m17q81yyp8sxllld0zmgfnc336iqrc07hmk";
};
packageRequires = [ cl-lib emacs ];
meta = {
@@ -2103,10 +2116,10 @@
ztree = callPackage ({ cl-lib ? null, elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "ztree";
version = "1.0.4";
version = "1.0.5";
src = fetchurl {
url = "https://elpa.gnu.org/packages/ztree-1.0.4.tar";
sha256 = "0xiiaa660s8z7901siwvmqkqz30agfzsy3zcyry2r017m3ghqjph";
url = "https://elpa.gnu.org/packages/ztree-1.0.5.tar";
sha256 = "14pbbsyav1dzz8m8waqdcmcx9bhw5g8m2kh1ahpxc3i2lfhdan1x";
};
packageRequires = [ cl-lib ];
meta = {

File diff suppressed because it is too large.


@@ -136,12 +136,12 @@ in
{
clion = buildClion rec {
name = "clion-${version}";
version = "2016.3";
version = "2016.3.2";
description = "C/C++ IDE. New. Intelligent. Cross-platform";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/cpp/CLion-${version}.tar.gz";
sha256 = "16nszamr0bxg8aghyrg4wzxbp9158kjzhr957ljpbipz0rlixf31";
sha256 = "0ygnj3yszgd1si1qgx7m4n7smm583l5pww8xhx8n86mvz7ywdhbn";
};
wmClass = "jetbrains-clion";
};
@@ -172,12 +172,12 @@ in
idea-community = buildIdea rec {
name = "idea-community-${version}";
version = "2016.3.2";
version = "2016.3.3";
description = "Integrated Development Environment (IDE) by Jetbrains, community edition";
license = stdenv.lib.licenses.asl20;
src = fetchurl {
url = "https://download.jetbrains.com/idea/ideaIC-${version}.tar.gz";
sha256 = "0ngign34gq7i121ss2s9wfziy3vkv1jb79pw8nf1qp7rb15xn4vc";
sha256 = "1v9rzfj84fyz3m3b6bh45jns8wcil9n8f8mfha0x8m8534r6w368";
};
wmClass = "jetbrains-idea-ce";
};
@@ -208,24 +208,24 @@ in
idea-ultimate = buildIdea rec {
name = "idea-ultimate-${version}";
version = "2016.3.2";
version = "2016.3.3";
description = "Integrated Development Environment (IDE) by Jetbrains, requires paid license";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/idea/ideaIU-${version}.tar.gz";
sha256 = "13pd95zad29c3i9qpwhjii601ixb4dgcld0kxk3liq4zmnv6wqxa";
sha256 = "1bwy86rm0mifizmhkm9wxwc4nrrizk2zp4zl5ycxh6zdiad1r1wm";
};
wmClass = "jetbrains-idea";
};
ruby-mine = buildRubyMine rec {
name = "ruby-mine-${version}";
version = "2016.2.5";
version = "2016.3.1";
description = "The Most Intelligent Ruby and Rails IDE";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/ruby/RubyMine-${version}.tar.gz";
sha256 = "1rncnm5dvhpfb7l5p2k0hs4yqzp8n1c4rvz9vldlf5k7mvwggp7p";
sha256 = "10d1ba6qpizhz4d7fz0ya565pdvkgcmsdgs7b8dv98s9hxfjsldy";
};
wmClass = "jetbrains-rubymine";
};
@@ -256,36 +256,36 @@ in
pycharm-community = buildPycharm rec {
name = "pycharm-community-${version}";
version = "2016.3";
version = "2016.3.2";
description = "PyCharm Community Edition";
license = stdenv.lib.licenses.asl20;
src = fetchurl {
url = "https://download.jetbrains.com/python/${name}.tar.gz";
sha256 = "1pi822ihzy58jszdy7y2pyni6pki9ih8s9xdbwlbwg9vck1iqprs";
sha256 = "0fag5ng9n953mnf3gmxpac1icnb1qz6dybhqwjbr13qij8v2s2g1";
};
wmClass = "jetbrains-pycharm-ce";
};
pycharm-professional = buildPycharm rec {
name = "pycharm-professional-${version}";
version = "2016.3";
version = "2016.3.2";
description = "PyCharm Professional Edition";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/python/${name}.tar.gz";
sha256 = "1b4ib77wzg0y12si8zqrfwbhv4kvmy9nm5dsrdr3k7f89dqg3279";
sha256 = "1nylq0fyvix68l4dp9852dak58dbiamjphx2hin087cadaji6r63";
};
wmClass = "jetbrains-pycharm";
};
phpstorm = buildPhpStorm rec {
name = "phpstorm-${version}";
version = "2016.3";
version = "2016.3.2";
description = "Professional IDE for Web and PHP developers";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/webide/PhpStorm-${version}.tar.gz";
sha256 = "0hzjhwij2x3b5fqwyd69h24ld13bpc2bf9wdcd1jy758waf0d91y";
sha256 = "05ylhpn1mijjphcmv6ay3123xp72yypw19430dgr8101zpsnifa5";
};
wmClass = "jetbrains-phpstorm";
};
@@ -304,12 +304,12 @@ in
webstorm = buildWebStorm rec {
name = "webstorm-${version}";
version = "2016.3.1";
version = "2016.3.2";
description = "Professional IDE for Web and JavaScript development";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/webstorm/WebStorm-${version}.tar.gz";
sha256 = "10za4d6w9yns7kclbviizslq2y7zas9rkmvs3xwrfw1rdw2b69af";
sha256 = "1h3kjvd10j48n9ch2ldqjsizq5n8gkm0vrrvznayc1bz2kjvhavn";
};
wmClass = "jetbrains-webstorm";
};
@@ -340,12 +340,12 @@ in
datagrip = buildDataGrip rec {
name = "datagrip-${version}";
version = "2016.3";
version = "2016.3.2";
description = "Your Swiss Army Knife for Databases and SQL";
license = stdenv.lib.licenses.unfree;
src = fetchurl {
url = "https://download.jetbrains.com/datagrip/${name}.tar.gz";
sha256 = "10nah7v330qrrczzz5jldnr0k7w2xzljiny32gm9pqmjbl0i70il";
sha256 = "19njb6i7nl6szql7cy99jmig59b304c6im3988p1dd8dj2j6csv3";
};
wmClass = "jetbrains-datagrip";
};


@@ -1,4 +1,4 @@
{ stdenv, fetchurl
{ stdenv, fetchurl, fetchFromGitHub
, ncurses
, texinfo
, gettext ? null
@@ -10,7 +10,14 @@ assert enableNls -> (gettext != null);
with stdenv.lib;
stdenv.mkDerivation rec {
let
nixSyntaxHighlight = fetchFromGitHub {
owner = "seitz";
repo = "nanonix";
rev = "17e0de65e1cbba3d6baa82deaefa853b41f5c161";
sha256 = "1g51h65i31andfs2fbp1v3vih9405iknqn11fzywjxji00kjqv5s";
};
in stdenv.mkDerivation rec {
name = "nano-${version}";
version = "2.7.3";
src = fetchurl {
@@ -30,6 +37,10 @@ stdenv.mkDerivation rec {
substituteInPlace src/text.c --replace "__time_t" "time_t"
'';
postInstall = ''
cp ${nixSyntaxHighlight}/nix.nanorc $out/share/nano/
'';
meta = {
homepage = http://www.nano-editor.org/;
description = "A small, user-friendly console text editor";


@@ -19,7 +19,8 @@ stdenv.mkDerivation rec {
patchPhase = ''
sed -i build/configure \
-e s@vi_cv_path_preserve=no@vi_cv_path_preserve=/tmp/vi.recover@ \
-e s@/var/tmp@@
-e s@/var/tmp@@ \
-e s@-lcurses@-lncurses@
'';
configurePhase = ''


@@ -2,22 +2,23 @@
makeWrapper, libXScrnSaver }:
let
version = "1.8.0";
rev = "38746938a4ab94f2f57d9e1309c51fd6fb37553d";
version = "1.8.1";
rev = "ee428b0eead68bf0fb99ab5fdc4439be227b6281";
channel = "stable";
sha256 = if stdenv.system == "i686-linux" then "0p7r1i71v2ab4dzlwh43hqih958a31cqskf64ds4vgc35x2mfjcq"
else if stdenv.system == "x86_64-linux" then "1k15701jskk7w5kwzlzfri96vvw7fcinyfqqafls8nms8h5csv76"
else if stdenv.system == "x86_64-darwin" then "12fqz62gs2wcg2wwx1k6gv2gqil9c54yq254vk3rqdf82q9zyapk"
sha256 = if stdenv.system == "i686-linux" then "f48c2eb302de0742612f6c5e4ec4842fa474a85c1bcf421456526c9472d4641f"
else if stdenv.system == "x86_64-linux" then "99bd463707f3a21bc949eec3e857c80aafef8f66e06a295148c1c23875244760"
else if stdenv.system == "x86_64-darwin" then "9202c85669853b07d1cbac9e6bcb01e7c08e13fd2a2b759dd53994e0fa51e7a1"
else throw "Unsupported system: ${stdenv.system}";
urlBase = "https://az764295.vo.msecnd.net/stable/${rev}/";
urlBase = "https://az764295.vo.msecnd.net/${channel}/${rev}/";
urlStr = if stdenv.system == "i686-linux" then
urlBase + "code-stable-code_${version}-1481650382_i386.tar.gz"
urlBase + "code-${channel}-code_${version}-1482159060_i386.tar.gz"
else if stdenv.system == "x86_64-linux" then
urlBase + "code-stable-code_${version}-1481651903_amd64.tar.gz"
urlBase + "code-${channel}-code_${version}-1482158209_amd64.tar.gz"
else if stdenv.system == "x86_64-darwin" then
urlBase + "VSCode-darwin-stable.zip"
urlBase + "VSCode-darwin-${channel}.zip"
else throw "Unsupported system: ${stdenv.system}";
in
stdenv.mkDerivation rec {
@@ -33,10 +34,7 @@ in
name = "code";
exec = "code";
icon = "code";
comment = ''
Code editor redefined and optimized for building and debugging modern
web and cloud applications
'';
comment = "Code editor redefined and optimized for building and debugging modern web and cloud applications";
desktopName = "Visual Studio Code";
genericName = "Text Editor";
categories = "GNOME;GTK;Utility;TextEditor;Development;";


@@ -59,7 +59,7 @@ stdenv.mkDerivation {
postInstall = ''
wrapProgram $out/bin/grass70 \
--set PYTHONPATH $PYTHONPATH \
--set GRASS_PYTHON ${python2Packages.python}/bin/${python2Packages.python.executable}
--set GRASS_PYTHON ${python2Packages.python}/bin/${python2Packages.python.executable} \
--suffix LD_LIBRARY_PATH ':' '${gdal}/lib'
ln -s $out/grass-*/lib $out/lib
'';


@@ -5,7 +5,7 @@
}:
stdenv.mkDerivation rec {
name = "qgis-2.16.2";
name = "qgis-2.18.3";
buildInputs = [ gdal qt4 flex openssl bison proj geos xlibsWrapper sqlite gsl qwt qscintilla
fcgi libspatialindex libspatialite postgresql qjson qca2 txt2tags ] ++
@@ -14,8 +14,7 @@ stdenv.mkDerivation rec {
nativeBuildInputs = [ cmake makeWrapper ];
# fatal error: ui_qgsdelimitedtextsourceselectbase.h: No such file or directory
#enableParallelBuilding = true;
enableParallelBuilding = true;
# To handle the lack of 'local' RPATH; required, as they call one of
# their built binaries requiring their libs, in the build process.
@@ -25,7 +24,7 @@ stdenv.mkDerivation rec {
src = fetchurl {
url = "http://qgis.org/downloads/${name}.tar.bz2";
sha256 = "0dll8klz0qfba4c1y7mp9k4y4azlay0sypvryicggllk1hna4w0n";
sha256 = "155kz7fizhkmgc4lsmk1cph1zar03pdd8pjpmv81yyx1z0i4ygvl";
};
cmakeFlags = stdenv.lib.optional withGrass "-DGRASS_PREFIX7=${grass}/${grass.name}";


@@ -1,25 +1,35 @@
{ stdenv, fetchurl, libjpeg, mesa, freeglut, zlib, cmake, libX11, libxml2, libpng,
libXxf86vm }:
libXxf86vm, gcc6 }:
stdenv.mkDerivation {
name = "freepv-0.3.0_beta1";
name = "freepv-0.3.0";
src = fetchurl {
url = mirror://sourceforge/freepv/freepv-0.3.0_beta1.tar.gz;
sha256 = "084qqa361np73anvqrv78ngw8hjxglmdm3akkpszbwnzniw89qla";
url = mirror://sourceforge/freepv/freepv-0.3.0.tar.gz;
sha256 = "1w19abqjn64w47m35alg7bcdl1p97nf11zn64cp4p0dydihmhv56";
};
buildInputs = [ libjpeg mesa freeglut zlib cmake libX11 libxml2 libpng
libXxf86vm ];
libXxf86vm gcc6 ];
patchPhase = ''
postPatch = ''
sed -i -e '/GECKO/d' CMakeLists.txt
sed -i -e '/mozilla/d' src/CMakeLists.txt
sed -i -e '1i \
#include <cstdio>' src/libfreepv/OpenGLRenderer.cpp
sed -i -e '1i \
#include <cstring>' src/libfreepv/Image.cpp
substituteInPlace src/libfreepv/Action.h \
--replace NULL nullptr
substituteInPlace src/libfreepv/pngReader.cpp \
--replace png_set_gray_1_2_4_to_8 png_set_expand_gray_1_2_4_to_8
'';
NIX_CFLAGS_COMPILE = "-fpermissive -Wno-narrowing";
meta = {
description = "Open source panorama viewer using GL";
homepage = http://freepv.sourceforge.net/;
license = "LGPL";
license = [ stdenv.lib.licenses.lgpl21 ];
};
}

Some files were not shown because too many files have changed in this diff.