How it works
LXC is a lightweight “virtualization” method to run multiple virtual units (containers, similar to a “chroot”) simultaneously on a single control host. Containers are isolated with kernel control groups (cgroups) and kernel namespaces.
LXC provides operating system-level virtualization, where the host kernel controls the isolated containers. With full virtualization solutions such as Xen or KVM, by contrast, the hypervisor presents a complete virtual hardware environment to each virtual machine, which runs its own kernel.

Conceptually, LXC can be seen as an improved chroot technique. The difference is that a chroot environment separates only the file system, whereas LXC goes further and provides resource management and control via cgroups.
Benefits of LXC
- Isolates applications and entire operating systems in containers.
- Provides nearly native performance, as LXC manages resource allocation in real time.
- Controls network interfaces and applies resource limits inside containers through cgroups.
Limitations of LXC
- All LXC containers run on the host system's kernel; a container cannot boot a different kernel.
- Only Linux “guest” operating systems are supported.
- LXC is not a full virtualization stack like Xen or KVM.
- Security depends on the host system; container isolation is weaker than full virtualization. If you need a strongly isolated system, use KVM.
LXC is available in Ubuntu's repositories and can be installed with:
Code:
$ sudo apt-get install lxc
In order to create or operate Linux containers, you'll need to be root (unprivileged containers can also be created by non-root users, but for that to work well you need at least kernel 3.13 - the Ubuntu 15.04 image from HardKernel ships with 3.10). In addition to root access, you'll also need some specific kernel configuration options enabled (if you've built your own kernel). The default kernel from HardKernel contains all the configuration necessary to run LXC out of the box. To check that your system is ready for LXC, run the lxc-checkconfig command. If your kernel does not support LXC, see the Odroid Magazine January 2015 issue to find out what you need to enable in the kernel configuration.
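As a quick sanity check alongside lxc-checkconfig, you can compare your running kernel version against the 3.13 requirement for unprivileged containers. This is a minimal sketch; the version_ge helper is just an illustration, not part of the LXC tooling:

```shell
#!/bin/sh
# version_ge A B: succeeds if version A >= version B (illustrative helper
# using GNU sort's version comparison).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

kernel="$(uname -r | cut -d- -f1)"   # e.g. "3.10.80"
if version_ge "$kernel" "3.13"; then
    echo "kernel $kernel is recent enough for unprivileged containers"
else
    echo "kernel $kernel is too old for unprivileged containers (need >= 3.13)"
fi
```

On the stock HardKernel 3.10 kernel this reports "too old", matching the note above that unprivileged containers need a newer kernel.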
To create a new container, you first need to set up an initial configuration file and select a suitable template. The templates instruct LXC how to download the necessary packages for the distribution of your choice. In Ubuntu 15.04 you get these templates by default:
Code:
# ls /usr/share/lxc/templates/
lxc-alpine lxc-centos lxc-fedora lxc-oracle lxc-ubuntu-cloud
lxc-altlinux lxc-cirros lxc-gentoo lxc-plamo
lxc-archlinux lxc-debian lxc-openmandriva lxc-sshd
lxc-busybox lxc-download lxc-opensuse lxc-ubuntu
We will first create a new container for Fedora Linux. We want the network traffic to go through lxcbr0, which provides NAT, so we will set up the configuration like this:
Code:
# cat fedora.conf
lxc.utsname = fedoracontainer
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
Fedora 23 has since been moved to the archive mirrors, so the mirror URL in the template needs to be adjusted first:
Code:
# sed -i 's/mirrorurl="mirrors.kernel.org::fedora"/mirrorurl="mirrors.kernel.org::archive\/fedora-archive"/' /usr/share/lxc/templates/lxc-fedora
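To see exactly what that substitution does, you can run the same sed expression on a sample line instead of the real template file (a harmless demo):

```shell
# Apply the same substitution to a sample line; the escaped slash in the
# replacement becomes a literal "/" in the new mirror path:
echo 'mirrorurl="mirrors.kernel.org::fedora"' | \
    sed 's/mirrorurl="mirrors.kernel.org::fedora"/mirrorurl="mirrors.kernel.org::archive\/fedora-archive"/'
# → mirrorurl="mirrors.kernel.org::archive/fedora-archive"
```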
Now create the container, passing the template, the configuration file, the container name and the desired release:
Code:
# lxc-create -t /usr/share/lxc/templates/lxc-fedora -f fedora.conf -n fedoracontainer -- --release 23
After the container is created, you can safely delete the bootstrap and cache if you don't plan on installing other Fedora-based containers soon:
Code:
# rm -rf /var/cache/lxc/fedora/armhfp/bootstrap
# rm -rf /var/cache/lxc/fedora/armhfp/LiveOS
How about trying out Ubuntu 15.10 in a container? I've heard there are still problems if you run it as the main operating system on an Odroid, but let's have a look. First, prepare the initial config file. It's similar to Fedora's, but this time we want the container to be bridged to eth0, so we'll need to create a bridge interface connected to eth0, which we'll call brlan0.
Changing your wired network connection to a bridge interface can be a bit tricky if you are doing this remotely over the network. The best way to do it and have persistence across reboots is to add this configuration to /etc/network/interfaces and reboot your Odroid:
Code:
auto brlan0
iface brlan0 inet dhcp
    bridge_ports eth0
Once the bridge interface is up and running (e.g. you have rebooted your Odroid), use this configuration to prepare the container:
Code:
# cat ubuntu.conf
lxc.utsname = ubuntucontainer
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = brlan0
lxc.network.name = eth0
# lxc-create -t /usr/share/lxc/templates/lxc-ubuntu -f ubuntu.conf -n ubuntucontainer -- --release wily
If you want to bridge to your wireless adapter, the bad news is that you can't (http://serverfault.com/questions/152363 ... n0-to-eth0). This is roughly because the wireless driver can create multiple logical interfaces (such as wlan0), and you can't move a logical interface into a different namespace without moving the whole network card. However, LXC provides a mechanism to detach the whole network card from your host system and attach it to a running container:
Code:
# lxc-device -n container-name add wlan0

Once the wifi card has been attached to the container, it will no longer be visible in the host OS, so make sure you have an alternative way of connecting to the host.
Starting and stopping
Now that you have two containers, it's time to start them up. This can be done with the following command (the -d flag starts the container in the background):
Code:
# lxc-start -n fedoracontainer -d
To get a shell inside the running container, use lxc-attach:
Code:
# lxc-attach -n fedoracontainer
[root@fedoracontainer ~]#

To exit the container (without stopping it), you can simply type exit at the prompt. You can also access the container via ssh from the host via the internal network.
To turn off a container you can issue the lxc-stop command:
Code:
# lxc-stop -n fedoracontainer
To have a container start automatically at boot, add these lines to its configuration file (lxc.start.delay specifies how many seconds to wait before the next container is started):
Code:
lxc.start.auto = 1
lxc.start.delay = 10
You can list your containers, their state and their autostart settings with lxc-ls:
Code:
# lxc-ls --fancy
NAME             STATE    IPV4  IPV6  GROUPS  AUTOSTART
-------------------------------------------------------
fedoracontainer  STOPPED  -     -     -       YES
ubuntucontainer  STOPPED  -     -     -       NO
To get more details about a running container, use lxc-info:
Code:
# lxc-info -n fedoracontainer
Name: fedoracontainer
State: RUNNING
PID: 24396
IP: 10.0.3.186
CPU use: 41.87 seconds
Memory use: 15.09 MiB
Link: vethTSW172
TX bytes: 3.27 KiB
RX bytes: 28.89 KiB
Total bytes: 32.16 KiB

Advanced topics
The configuration shown so far will get you started with LXC without adding too much complexity to your setup. However, containers offer a lot of flexibility in controlling resource allocation, which we will briefly discuss now.
Disk space
The containers you've just created use a plain directory on the filesystem to store their rootfs. While this is simple to implement and understand, it provides only medium I/O performance. Other options include an LVM block device, a loopback file, a physical block device, or a btrfs or ZFS filesystem. These allow you to specify a maximum size, and btrfs and ZFS additionally offer snapshots, deduplication and fast cloning (copy-on-write). If needed, you can also limit the number of I/O operations the container is allowed to perform, so that it does not starve other containers or the host.
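As an illustration of the loopback option, a container's rootfs can live in a sparse image file of a fixed maximum size. The path and size below are just examples (lxc-create can also set up a loop-backed store itself via its -B option); the point is that a sparse file caps the rootfs size while consuming disk only as data is written:

```shell
# Create a sparse 512 MiB image file (example path) to back a rootfs;
# count=0 with seek=512 writes no data, only sets the apparent size:
dd if=/dev/zero of=/tmp/fedoracontainer.img bs=1M count=0 seek=512 2>/dev/null

# The apparent size is 512 MiB, which is the ceiling for the rootfs:
stat -c %s /tmp/fedoracontainer.img
# → 536870912
```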
Memory
To list the currently used memory of a running container, you can run this command:
Code:
# cat /sys/fs/cgroup/memory/lxc/ubuntucontainer/memory.usage_in_bytes
To impose a memory limit (e.g. 40 MiB) on a running container, use lxc-cgroup and verify the new value:
Code:
# lxc-cgroup -n ubuntucontainer memory.limit_in_bytes 40M
# cat /sys/fs/cgroup/memory/lxc/ubuntucontainer/memory.limit_in_bytes
41943040
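The value reported back is simply 40 MiB expressed in bytes:

```shell
# 40 MiB in bytes, matching the memory.limit_in_bytes value above:
echo $((40 * 1024 * 1024))
# → 41943040
```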
Inside the container, free now reports the limit as the total available memory:
Code:
root@ubuntucontainer:~# free -m
                    total  used  free  shared  buffers  cached
Mem:                   40    31     8      31        0      23
-/+ buffers/cache:            7    32
Swap:                   0     0     0
You can test the limit by trying to fill a tmpfs mount inside the container:
Code:
root@ubuntucontainer:~# mount -t tmpfs -o size=50m tmpfs /mnt/ramdisk3
root@ubuntucontainer:~# dd if=/dev/zero of=/mnt/ramdisk3/1
dd: writing to '/mnt/ramdisk3/1': Cannot allocate memory
To make the memory limit persistent, add it to the container's configuration file:
Code:
lxc.cgroup.memory.limit_in_bytes = 40M
CPU
You can pin specific CPU cores to a container, or allocate it a number of CPU shares to restrict its CPU usage (by default each container gets 1024 shares):
Code:
# cat /sys/fs/cgroup/cpu/lxc/ubuntucontainer/cpu.shares
1024
# echo 256 > /sys/fs/cgroup/cpu/lxc/ubuntucontainer/cpu.shares
# cat /sys/fs/cgroup/cpu/lxc/ubuntucontainer/cpu.shares
256
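CPU shares are relative weights, not absolute limits; they only matter under contention. A container with 256 shares competing against one with the default 1024 gets roughly a fifth of the CPU:

```shell
# Approximate CPU percentage for a 256-share container competing with
# a 1024-share container (integer arithmetic, so the result is rounded):
echo $((256 * 100 / (256 + 1024)))
# → 20
```

When the other container is idle, the 256-share container can still use all of the CPU.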
To make these settings persistent, add them to the container's configuration file:
Code:
lxc.cgroup.cpuset.cpus = 1,2
lxc.cgroup.cpu.shares = 256
Kernel modules
In order to use specific kernel modules inside an LXC container, you first need to load them on the host; for example, run modprobe ip_tables on the host before using iptables inside the container.
Special files
Similar to the special configuration needed to attach a wifi interface to a running container, you can bind special files from the host to be used exclusively by the container. For instance, to be able to use a USB-to-serial adapter in the container, you could run this command on the host:
Code:
# lxc-device add -n ubuntucontainer /dev/ttyUSB0 /dev/ttyS0
Use cases
Containers can be useful as test systems where you can experiment without the fear of breaking things. You can give root access to your friends and share multiple independent environments on top of the same hardware.
I learned how to use LXC and went ahead and bought a few Odroids in order to conduct network tests using multiple NICs from multiple locations. My employer was running multiple Smokeping slave instances over multiple providers, measuring website response time, YouTube video download and Speedtest.net results from two independent containers running on an Odroid. Using containers allowed us to use bridged networking to access remote resources via both links simultaneously (keeping separate routing tables). Because the application doesn't need much CPU or memory, the Odroids were perfect for the job! My plan for the future is to get Android running inside a container - apparently it's not impossible: https://www.stgraber.org/2013/12/23/lxc ... ner-usage/ - or even OpenWRT, to have a router on a stick.
References:
https://www.suse.com/documentation/sles ... start.html
https://www.flockport.com/lxc-advanced-guide/