Linux distribution build system

Reproducible custom distribution build system for Linux

I have a big infrastructure consisting of several kinds of servers running Linux: for instance, database servers, load balancers, and application-specific servers. There are many instances of every kind of server, and all of them need to be reproducible. Every kind of server is basically a custom distribution. Customisations include changes to the upstream packages (a different upstream version, build options, patches, whatever) and, possibly, some extra custom packages. For example, I need a server running the latest OpenLDAP slapd compiled with specific options and some patches. And this is where things get complicated: updating to the latest slapd will also require updating the libraries it depends on, which means rebuilding all packages that depend on those libraries, too. That is, I basically need to rebuild a significant part of the distribution. I'm looking for a solution that helps automate this process.
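To make the scale of the problem concrete, here is a toy sketch in plain shell (with made-up package names, not a real dependency graph) of computing the reverse-dependency closure, i.e. everything that must be rebuilt when one package changes:

```shell
# Toy model: "dep dependent" pairs, one per line. Package names are
# illustrative only.
cat > rdeps.txt <<'EOF'
openssl openldap
openssl curl
openldap nss-ldap
EOF

# Print every package that (transitively) depends on $1,
# i.e. everything that must be rebuilt when $1 changes.
rebuild() {
  for p in $(awk -v d="$1" '$1 == d { print $2 }' rdeps.txt); do
    echo "$p"
    rebuild "$p"
  done
}

rebuild openssl | sort -u   # -> curl, nss-ldap, openldap
```

A real solution also has to rebuild these in the right order and detect when the closure explodes to "most of the distribution", which is exactly what makes doing this by hand impractical.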

Solution requirements

Kind of vague. I want to prepare everything necessary for building my custom distro, give it a name (e.g. ldap-server), and give that name to the automated build system any time I need to reproduce the build. I think this is something the Gentoo or LFS community should have. I've also seen projects like ALT Linux Hasher, Fedora Mock, and Debian pbuilder/sbuild, but never used any of them. Any ideas? Thanks in advance!

5 Answers

You might also make use of the Nix package manager and Hydra build system for your task.

  • Nix is a purely functional package manager.
  • Hydra is a Nix-based continuous build system. (AFAIU, it does the rebuilding of dependent packages when necessary.)

Nix can not only track package dependencies and their modification, but also your host configuration—to allow rollbacks to consistent previous states. (That’s the idea behind NixOS, a Nix-based Linux distribution.)
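To illustrate the model behind this (a sketch of the idea in plain shell, not real Nix code): Nix derives each build's output path from a hash of all of its inputs, so changing any input — source, build flags, or a dependency — yields a distinct store path instead of overwriting the old build, and rollback is just switching back to the previous path:

```shell
# Sketch of Nix's core idea, NOT actual Nix. Input names are made up.
# The output path is a function of ALL build inputs:
inputs() {
  printf '%s\n' \
    "src:openldap-2.6.6.tar.gz" \
    "flags:--enable-overlays --with-tls=openssl" \
    "dep:openssl-3.0.12"
}

# Same inputs always hash to the same path (reproducible); any change
# to source, flags, or deps produces a different path (no clobbering).
hash=$(inputs | sha256sum | cut -c1-32)
echo "/nix/store/${hash}-openldap-2.6.6"
```

This is why a Nix-based setup answers the question's requirement directly: "ldap-server" becomes a Nix expression, and re-evaluating it reproduces the same closure of builds.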

Ivan, thank you very much for your attention and elaborate answers! Nix looks very interesting, especially as it is well documented. I like the idea and I've got the impression the implementation should be decent as well (haven't looked at it yet, though). I'd try it for some small in-house project of my own, but I don't think it's currently possible to use it company-wide where CentOS is standardized.

My colleagues now use BuildTracker, written by Andrey Scopenco. I like the general approach, which is adopted from Sergey Skvortsov's builder, but not the implementation, which is quite slipshod. The docs are almost nonexistent, too.

One more from the set of ALT-related/based tools: korinf (a nicer spelling would be «Corinth» or «Korinth», from «Κόρινθος»). It uses hasher to build packages for other target OSes, as your situation requires. But I'm not sure whether, in its current state, it can also be integrated with girar-builder, which maintains consistent repositories (takes care of broken dependencies), and with mkimage, which creates complete custom distros.

Seems like Nix is the most fundamental solution. I can see some downsides: namely the steep learning curve (devs will have to use it everywhere instead of pip, npm, etc.), the lack of official support from commercial vendors like Oracle, Atlassian, etc., and experts may be hard to hire. But I really like all the benefits Nix gives even w/o using NixOS.

I won't ask why you chose to maintain a custom distro for your production servers... but I have had some experience with this kind of hackathon, and the massive headaches that go with it.

  1. To automate the build of the distro, I used an XML definition of the build order and dependencies, and scripted GNU Make to build independent branches in parallel and construct the binary packages. The resulting output from the XML + shell script + a bit of Python + Make/Autotools was a complete build of a special set of 'core' tools, and then extras.
  2. The second step was installing these binaries/raw build directories into a system. I used installwatch (I think), using inotify to keep an eye on where things were installed. I then output XML of this along with the dependencies of any binaries.
  3. After this I had a build manifest (XML) and, for each package, an XML file with the details of the installed packages. I then made a tool to convert the XML and the in-place binaries into various formats (RPM, etc.)
  4. Now (use your imagination) I have an install script to automate the build, tons of metadata on built packages and their dependencies, and a method of turning that metadata into deployable packages.
  5. Next, I made build scripts for various servers, from glibc up 🙂 and ran those builds. The system knew which packages/./configure invocations were common and shared those packages. This left me with:
    o A repo called /common
    o A repo for each build type and architecture
  6. A few scripts, rsync-over-ssh, and patch-management scripts, and you are away.
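The ordering part of step 1 can be sketched with tsort from GNU coreutils (package names here are made up, and this is a sketch of the technique, not the author's actual scripts): encode the dependencies as «dep dependent» pairs and let tsort emit an order in which every package is built after its dependencies:

```shell
# Hypothetical build-order dependencies, as one might extract from the
# XML definition described in step 1: "dep dependent" pairs.
cat > build-deps.txt <<'EOF'
glibc zlib
glibc openssl
zlib openldap
openssl openldap
EOF

# tsort prints a topological order: every package appears after
# everything it depends on.
tsort build-deps.txt
```

Independent siblings in the resulting order (zlib and openssl here) are exactly the branches that Make can build in parallel with -j, as the answer describes.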

Obviously this is a very rough overview of my approach to building multiple distros for a common environment. Some packages were meta-packages that affected the source tree (but were treated like normal packages at build time; one example was a meta-package that ran first and applied patches to the kernel).

Then there is the matter of tool-chain automation.

It all started off with LFS... but as you can see, things got a little adventurous.

Bottom line is, it was very fun but I just ditched it all for a BSD and Fedora.

Things like the SUSE Build Service might be of interest. Farming out the stable-source-combination finding and compilation will make things simpler! You don't even need to build anything to do with SUSE.

Thanks for your answer, Aiden! Your approach is interesting, especially as it looks very much like what my colleagues already have on FreeBSD: a piece of software that takes some XML and produces a set of ready-to-build ports. The guys then use Tinderbox to automatically build package sets from these. Now we need something like this on Linux. BTW, is your stuff open source by any chance? And why did you abandon it? It seems like you put much effort into the implementation. Also, why did you choose Fedora? Thanks

@Timur — The scripts were not well developed enough to be autonomous. They worked well but still required some watching. I don't know why I abandoned it... it was heading towards a Gentoo-like system, and I didn't like that much. Fedora is great for my personal systems, and FreeBSD's ports are great for production. Maybe look at a ports database derived specifically for your needs, and just import the kernel + basics and the FreeBSD install system?

ALT Linux girar-builder is the system (using hasher internally) that rebuilds packages and maintains a consistent repository of packages. Hasher is a tool to isolate build processes so that all the build requirements can be accurately «tracked», giving some guarantees as to the reproducibility of the build process.

Among other things, girar-builder does dependency checks when adding (updating, deleting) a newly built package to the repository: the new package won't be accepted if it breaks the dependencies of other packages, unless those dependent packages are also added to the same build task (= repo-changing transaction) and rebuilt after the new package. This situation can often be observed being discussed in the ALT Linux developers mailing list (and its English counterpart): the «NEW unmet dependencies detected» message, triggered for example by the disappearance of a symbol from a shared library, or by a package deletion. In order to proceed, the dependent packages must be added to that task by their maintainers.
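The unmet-dependency check can be sketched in miniature (illustrative sonames, not girar-builder's actual implementation): collect what the repository provides and what its packages require, and flag any requirement with no matching provider:

```shell
# Toy version of a repository consistency check. Sonames are made up.
# provides.txt: everything packages in the repo provide (sorted).
sort > provides.txt <<'EOF'
libcrypto.so.3
libldap-2.6.so.0
libssl.so.3
EOF
# requires.txt: everything packages in the repo require (sorted).
sort > requires.txt <<'EOF'
libldap-2.5.so.0
libssl.so.3
EOF

# comm -13 prints lines found only in requires.txt:
# the unmet dependencies that would block accepting the new package.
comm -13 provides.txt requires.txt   # -> libldap-2.5.so.0
```

Here the repo still requires the old libldap soname, so a package update that drops it would be rejected until the dependent packages are rebuilt in the same task.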

girar-builder also does an installation test for the new packages, to name another check done by git.alt (girar-builder).

To be sure that building the packages can be reproduced in the current state of the package repository, it is regularly checked that every package in the repository (called Sisyphus) is rebuildable at the current moment: there is a rebuild-test status report and, per package, the logs of the last rebuild test.

At ALT Linux, there are also tools to automate the creation of a custom distro from a distro «profile» and a package repo: altlinux.org/Mkimage .

Thanks again. Mkimage looks like almost exactly what I needed. I think it can be adapted to my workflow, where bare-metal machines are PXE-booted, the base OS is kickstarted, and after that the project meta-package is installed, which pulls in all necessary packages as its dependencies. Finally, Puppet is called to configure everything, and voilà! Still, it's currently impractical for me to use ALT Linux, but it's very interesting to see how others solve a similar task.

Just in case: yet another answer to a similar set of problems has become available since the original question was asked: mkimage-profiles, which is based on the ALT Linux distribution toolchain but extends it with an image configuration management tool that tries to keep the occurring forks minimal and concise. It's mostly formally documented in Russian for now (that was my decision, for several reasons), but the code itself is pretty well commented in English.

To get a feeling for the approach, see e.g. conf.d/server.mk:

distro/.server-base: distro/.installer use/syslinux/ui/menu use/memtest
	@$(call add,BASE_LISTS,server-base openssh)

distro/server-nano: distro/.server-base \
		use/cleanup/x11-alterator use/bootloader/lilo +power
	@$(call add,BASE_LISTS,$(call tags,server network))
	@$(call add,BASE_PACKAGES,dhcpcd cpio)

distro/server-mini: distro/.server-base use/server/mini use/cleanup/x11-alterator
	@$(call set,KFLAVOURS,el-smp)

There’s some support for OpenVZ template caches, VM images, ARM/PPC arches, git (as in committing the stages of the profile being generated with meaningful descriptions) and configuration tree graphing, among the rest.

PXE-boot support should be pretty straightforward to implement (and get upstream) within the framework, but it's not actually done yet; I know the bits but have to get around to them.

There’s a preliminary support for netinstall images starting from ~17MB in size (example).

I’d also be interested in your particular reasons to find ALT impractical — there are some known ones of course but yours might be new to me 🙂 PS: especially when being more or less ready to go as far as LFS.

PS2: you can try the thing out in live mode with live-builder.iso on a system with 4+ GB RAM and a DHCP-enabled, Internet-routed Ethernet connection: just log in as altlinux, cd /usr/share/mkimage-profiles and run make server-mini.iso
