LXC template chaining for customization

LXC, the original implementation of Linux containers, has gotten very good since its 1.0 release. Compared to Docker, the features are all there, except for the easy customization provided by Docker's filesystem overlay system. While clever, that system encourages downloading untrusted code and images. For these reasons, LXC remains my favorite, and I would recommend it for production over Docker.

But how can LXC be customized easily? The template system is great for OS-level installs. The lxc-create command forks the template as a process, supplying as arguments the information the template script needs to create the root filesystem. There is pretty much no documentation for this beyond the template scripts that ship with LXC. The process tree for running a template looks like this:

lxc-create -t download -n foo -- --dist gentoo --release current --arch amd64
 \_ lxc-usernsexec -m u:0:1000:65536 -m g:0:1000:65536 -- /usr/share/lxc/templates/lxc-download --path=/home/erik/.local/share/lxc/foo --name=foo --rootfs=/home/erik/.local/share/lxc/foo/rootfs --dist gentoo --release current --arch amd64 --mapped-uid 0 --mapped-gid 0
     \_ /bin/sh /usr/share/lxc/templates/lxc-download --path=/home/erik/.local/share/lxc/foo --name=foo --rootfs=/home/erik/.local/share/lxc/foo/rootfs --dist gentoo --release current --arch amd64 --mapped-uid 0 --mapped-gid 0

But what if we also want to install our own apps? It's a bad smell to fork a template script (like lxc-download) and tack our install steps onto the end of it.

Chaining is one solution. Let's wrap the lxc-download invocation with a custom template. We'll be limited to adding custom code immediately before or immediately after the wrapped script, but I believe that's good enough for most cases. Here's our goal:

lxc-create -t /home/erik/src/lxc-gentoo-custom/lxc-gentoo-custom -n foo -- --dist gentoo --release current --arch amd64
 \_ lxc-usernsexec -m u:0:1000:65536 -m g:0:1000:65536 -- /home/erik/src/lxc-gentoo-custom/lxc-gentoo-custom --path=/home/erik/.local/share/lxc/foo --name=foo --rootfs=/home/erik/.local/share/lxc/foo/rootfs --dist gentoo --release current --arch amd64 --mapped-uid 0 --mapped-gid 0
     \_ /bin/bash /home/erik/src/lxc-gentoo-custom/lxc-gentoo-custom --path=/home/erik/.local/share/lxc/foo --name=foo --rootfs=/home/erik/.local/share/lxc/foo/rootfs --dist gentoo --release current --arch amd64 --mapped-uid 0 --mapped-gid 0
         \_ /bin/sh /usr/share/lxc/templates/lxc-download --path=/home/erik/.local/share/lxc/foo --name=foo --rootfs=/home/erik/.local/share/lxc/foo/rootfs --dist gentoo --release current --arch amd64 --mapped-uid 0 --mapped-gid 0

To invoke this, I simply replace the -t argument to lxc-create with the full path of my wrapper:

lxc-create -t ~/src/lxc-gentoo-custom/lxc-gentoo-custom -n foo -- --dist gentoo --release current --arch amd64

You can imagine that further wrappers could be added: maybe the innermost installs the OS (and comes from upstream LXC), wrapped by a script that installs common business libraries, wrapped in turn by a script that installs the apps specific to that host.
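
Here's a rough sketch of what such a second-level wrapper might look like. The name lxc-myapp-custom and the /srv/myapp payload are hypothetical, and argument handling is reduced to just --rootfs for brevity:

#!/bin/bash
# lxc-myapp-custom (hypothetical): wraps lxc-gentoo-custom, then layers app files on top

INNER_TEMPLATE="/home/erik/src/lxc-gentoo-custom/lxc-gentoo-custom"

# Pull out --rootfs so we know where to install; everything else passes through untouched
rootfs=""
args=("$@")
while [ $# -gt 0 ]; do
    case "$1" in
        --rootfs)   rootfs=$2; shift 2 || break;;
        --rootfs=*) rootfs=${1#--rootfs=}; shift 1;;
        *)          shift 1;;
    esac
done

# Run the wrapped template first; bail out if it fails
"$INNER_TEMPLATE" "${args[@]}" || exit $?

# Post-work: drop host-specific app files into the new root filesystem
if [ -n "$rootfs" ]; then
    mkdir -p "$rootfs/opt/myapp"
    cp -r /srv/myapp/. "$rootfs/opt/myapp/"   # hypothetical app payload on the host
fi

exit 0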

Below is my implementation of lxc-gentoo-custom. It does nothing before the wrapped execution except parse out the arguments it will need later. Unfortunately, most of the code in the script exists just to mirror the argument parsing of the wrapped script.

After the inner script has run, we install a script at /sbin/provision in the root filesystem, then append an lxc.hook.start entry to the container's config file that causes /sbin/provision to be invoked at container start (albeit prior to init, so we won't have access to running daemons).

#!/bin/bash

UPSTREAM_TEMPLATE_DIR="/usr/share/lxc/templates"

# echo Do pre-work
orig_params=("$@")  # save as an array so arguments containing spaces survive intact
do_create=1

options=$(getopt -o d:r:a:hl -l dist:,release:,arch:,help,list,variant:,\
server:,keyid:,keyserver:,no-validate,flush-cache,force-cache,name:,path:,\
rootfs:,mapped-uid:,mapped-gid: -- "$@")

if [ $? -ne 0 ]; then
    exit 1
fi
eval set -- "$options"

while :; do
    case "$1" in
        -h|--help)          do_create=0; shift 1;;
        -l|--list)          do_create=0; shift 1;;
        -d|--dist)          shift 2;;
        -r|--release)       shift 2;;
        -a|--arch)          shift 2;;
        --variant)          shift 2;;
        --server)           shift 2;;
        --keyid)            shift 2;;
        --keyserver)        shift 2;;
        --no-validate)      shift 1;;
        --flush-cache)      shift 1;;
        --force-cache)      shift 1;;
        --name)             LXC_NAME=$2; shift 2;;
        --path)             LXC_PATH=$2; shift 2;;
        --rootfs)           LXC_ROOTFS=$2; shift 2;;
        --mapped-uid)       LXC_MAPPED_UID=$2; shift 2;;
        --mapped-gid)       LXC_MAPPED_GID=$2; shift 2;;
        *)                  break;;
    esac
done
echo LXC_ROOTFS is $LXC_ROOTFS

# echo orig_params are "${orig_params[@]}"

set -ak # Environment pass-through
"$UPSTREAM_TEMPLATE_DIR/lxc-download" "${orig_params[@]}"
nested_exit=$?
set +ak

# If the wrapped template failed, don't try to provision a broken rootfs
[ "$nested_exit" -eq 0 ] || exit "$nested_exit"

if [ "$do_create" = 1 ]; then
    echo Do post-work

    # Install the provision script into the container's root filesystem
    pushd "$LXC_ROOTFS/sbin"
    cat >provision <<EOF
#!/bin/bash
touch /tmp/provision
EOF
    chmod +x provision
    popd

    # Hook the provision script into container startup
    cat >>"$LXC_PATH/config" <<EOF

# On container start (before init runs), do some provisioning checks
lxc.hook.start=/sbin/provision
EOF
else
    echo Skipping provision work based on options
fi

exit 0
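
Once the container has been created, a quick way to check that the hook fired is to start the container and look for the marker file (a sketch; depending on your guest, init may remount /tmp as a fresh tmpfs, in which case the file created by the pre-init hook won't be visible afterwards):

lxc-start -n foo -d
lxc-attach -n foo -- ls -l /tmp/provision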

I hope this gives you an example of how LXC's template system can be extended with minimal effort and without forking upstream template scripts.

03/21/2015 by stasibear
containers