There is a persistent misconception in the infrastructure world that containers are containers. Put something in an isolated box, expose a few ports, call it done. But that framing collapses a meaningful distinction that practitioners learn, sometimes painfully, only after deploying the wrong tool for the job. Linux Containers, known as LXC, sit at a specific and deliberate point in the virtualization landscape, one that is neither a virtual machine nor a packaging format for microservices, but something architecturally distinct from both.
Understanding LXC means understanding what problem it was actually designed to solve, how its kernel-level mechanisms work, and why the difference between a system container and an application container matters far more than most introductory guides acknowledge.
The Kernel Primitives That Make LXC Possible
LXC is not a virtualization layer in the conventional sense. It does not emulate hardware, does not load a separate kernel, and does not require a hypervisor. Instead, it exposes and composes a set of Linux kernel features that accumulated over many releases (cgroups merged in version 2.6.24, the release early LXC targeted; user namespaces were not completed until 3.8), assembling them into a coherent, manageable environment that behaves, from the inside, like a complete Linux installation.
The two foundational mechanisms are namespaces and control groups, known as cgroups. Namespaces answer the question of what a process can see. Six classic namespace types partition the kernel's resources (later kernels added cgroup and time namespaces as well): PID namespaces isolate process trees so that a container's init process appears as PID 1 within its own view; network namespaces give each container a private network stack with its own interfaces, routing tables, and firewall rules; mount namespaces define an independent filesystem hierarchy; UTS namespaces allow each container its own hostname and NIS domain name; IPC namespaces isolate interprocess communication channels; and user namespaces remap UID and GID ranges, making container root map to an unprivileged host user.
Cgroups answer the complementary question of how much a process can consume. CPU time, memory, block I/O, and network bandwidth are all governable through the cgroup hierarchy. A container running a memory-hungry application can be capped at a fixed limit without affecting any other workload on the host. The kernel enforces these constraints directly; there is no polling, no userspace daemon checking resource counters.
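Both mechanisms are directly observable from userspace, without any LXC tooling involved. A minimal, read-only sketch on a modern Linux host:

```shell
# Each namespace a process belongs to appears as a symlink under
# /proc/<pid>/ns; two processes in the same namespace share an inode.
ls -l /proc/self/ns

# The cgroup membership of the current shell; on a cgroup v2 host this
# is a single line of the form "0::/some/path".
cat /proc/self/cgroup
```

Comparing the namespace inodes of two processes is exactly how tools like lsns decide whether they share an isolation boundary.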
System Containers Run a Full OS Inside Isolation
LXC's primary design target is the system container: a container that runs a full Linux user space, complete with an init system such as systemd or OpenRC, background daemons, SSH, package management, cron, and multiple concurrently running services. From the perspective of someone who logs into it, a system container is indistinguishable from a minimal virtual private server.
This is where LXC differs most sharply from application-oriented container runtimes. A system container is not a packaging unit for a single binary. It is a persistent, stateful environment intended to live for months or years, to be administered interactively, and to host workloads that were designed for a traditional Linux server rather than for ephemeral, single-process deployment.
Creating a basic privileged system container on Ubuntu takes three commands:
# Install LXC
sudo apt-get install lxc
# Create an Ubuntu Focal container
sudo lxc-create --name myserver --template download -- --dist ubuntu --release focal --arch amd64
# Start the container as a background daemon
sudo lxc-start --name myserver --daemon
Once running, the container's status and assigned IP address are visible immediately:
sudo lxc-info --name myserver
# Name: myserver
# State: RUNNING
# PID: 4821
# IP: 10.0.3.112
# Link: vethA3kX9c
From there, lxc-attach drops an administrator into a shell inside the container, where they find a standard init process, a full package manager, and a network stack that is entirely private to that environment. It behaves like a server. The difference is that no kernel was loaded and no hardware was emulated: the container shares the host kernel but sees none of the host's process tree, filesystem, or network interfaces.
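The attach step itself is a single command. A sketch against the myserver container created above (it requires root and a running container, so treat it as an illustrative fragment rather than something to paste blindly):

```shell
# Interactive root shell inside the container
sudo lxc-attach --name myserver

# Or run a one-off command without opening a shell
sudo lxc-attach --name myserver -- ps -ef
```

The ps output from inside shows the container's own PID 1 init process, not the host's process tree.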
Autostart on host boot is configured through a single line appended to the container's configuration file:
echo "lxc.start.auto = 1" | sudo tee -a /var/lib/lxc/myserver/config
This persistence model is fundamental to what system containers are designed for: long-running infrastructure, not ephemeral task execution.
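On hosts running several containers, the same config file also accepts ordering and spacing keys for autostart (the values here are illustrative):

```shell
# Lower order values start first; wait 5 seconds before starting the next
echo "lxc.start.order = 10" | sudo tee -a /var/lib/lxc/myserver/config
echo "lxc.start.delay = 5" | sudo tee -a /var/lib/lxc/myserver/config
```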
Application Containers and Where LXC Fits into That Story
LXC can also be used in application container mode, though this is less common in modern practice. An application container in the LXC context runs a minimal environment focused on a single process or tightly related group of processes, rather than a full OS. The container's rootfs contains only what the application requires: its runtime, libraries, and configuration.
The conceptual difference is significant. A system container asks "what does a Linux server need to exist?" An application container asks "what does this specific process need to run?" The former is expansive by design; the latter is deliberately minimal.
LXC historically laid the groundwork for the application container model that became dominant elsewhere. Early versions of Docker used LXC as their container runtime before developing their own libcontainer library, which later became the core of runc. The namespace and cgroup primitives that underpin Docker, Podman, and the broader OCI ecosystem all trace their lineage directly to the mechanisms that LXC exposed to userspace first.
Privileged and Unprivileged Containers Define the Entire Security Model
The most consequential configuration decision in any LXC deployment is not which distribution image to use or how many CPU cores to allocate. It is whether to run privileged or unprivileged containers. This distinction defines the entire security model.
A privileged container maps UID 0 inside the container directly to UID 0 on the host. In other words, root inside the container is root on the host kernel. The isolation provided by namespaces and cgroups is real, but it is not a security boundary in any meaningful sense against a determined attacker. LXC's own developers are explicit about this: privileged containers are not considered root-safe, and several documented kernel exploits allow escape from them. AppArmor profiles, seccomp filters, and capability dropping reduce the attack surface, but do not eliminate it.
Unprivileged containers work differently. The user namespace remaps UID and GID ranges so that root inside the container appears as an unprivileged host user, for example UID 100000. Even if an attacker escapes the container's namespace isolation, they arrive on the host as a regular unprivileged user with no meaningful access to system resources.
Configuring the UID and GID mapping in /etc/subuid and /etc/subgid is the prerequisite:
# /etc/subuid
myuser:100000:65536
# /etc/subgid
myuser:100000:65536
The container configuration file then declares these mappings explicitly:
# /var/lib/lxc/myserver/config
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
With these mappings in place, a process running as UID 0 inside the container is UID 100000 on the host. The kernel enforces this translation transparently. The practical result is that unprivileged containers are safe by design, not by policy, which is a fundamentally stronger guarantee.
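The remapping is easy to verify from the host side: a process that is root inside the container is owned by the shifted UID in the host's process table. A quick way to look for such processes, assuming the 100000 base offset configured above:

```shell
# List host processes owned by UIDs in the shifted range; a
# container-side "root" process shows up here as UID 100000.
ps -eo uid,pid,cmd | awk '$1 >= 100000'
```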
Cgroup Resource Limits Applied Directly in Container Configuration
One of the most operationally useful aspects of LXC is granular resource governance through cgroups. On systems using cgroup v2, the relevant settings are expressed directly in the container configuration file, which lives at /var/lib/lxc/<name>/config.
Capping a container at 2 GB of RAM and pinning it to two CPU cores looks like this:
# Memory limit: 2 GB
lxc.cgroup2.memory.max = 2147483648
# CPU weight (relative scheduling priority)
lxc.cgroup2.cpu.weight = 100
# Restrict to specific CPU cores (cores 0 and 1)
lxc.cgroup2.cpuset.cpus = 0-1
After editing the configuration, the container must be restarted for changes to take effect. These limits are enforced by the kernel itself, not by userspace tooling, which makes them reliable under load conditions where a misbehaving process would otherwise consume unbounded resources.
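Pinning with cpuset.cpus dedicates specific cores. When the goal is a hard cap on total CPU time without dictating which cores are used, cgroup v2's cpu.max takes a quota and a period instead; the following sketch grants the equivalent of two full cores (200 ms of CPU time per 100 ms period):

```
# Quota and period in microseconds: 200000 / 100000 = two CPUs' worth
lxc.cgroup2.cpu.max = 200000 100000
```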
Block I/O throttling is equally configurable. If a container is running a database and the goal is to prevent it from saturating the storage device at the expense of other containers, the following setting caps write bandwidth to 100 MB/s:
lxc.cgroup2.io.max = 8:0 wbps=104857600
The 8:0 prefix identifies the device by its major and minor numbers, which appear in place of the file size in the output of ls -l /dev/sda.
How LXC Networking Works Across Bridges, NAT, and Macvlan
Networking in LXC is handled through virtual Ethernet pairs, known as veth interfaces, bridged to either a NAT bridge or a physical bridge on the host. The default LXC installation creates a private NAT bridge called lxcbr0, which assigns containers addresses in a private subnet and masquerades their traffic through the host's external interface.
The network configuration inside a container's config file looks like this (LXC replaces each x in the hwaddr template with a random hexadecimal digit):
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
For a system container that should appear as a first-class host on the LAN with its own IP address, the macvlan type is appropriate instead. Macvlan attaches the container's virtual interface directly to a physical NIC, making the container visible on the network as an independent machine. This is particularly useful for services like DHCP servers, monitoring agents, or any workload that expects to receive traffic directed at a specific MAC address.
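A minimal macvlan configuration, assuming the host's physical NIC is named eth0 (substitute the real interface name):

```
lxc.net.0.type = macvlan
lxc.net.0.macvlan.mode = bridge
lxc.net.0.link = eth0
lxc.net.0.flags = up
```

One caveat worth planning around: with macvlan, the host itself generally cannot reach the container through that shared interface, so management access should go through another route.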
Where System Containers Excel and Where They Fall Short
System containers shine brightest in a specific set of scenarios, and that specificity is worth being honest about. They are excellent for consolidating legacy workloads that were designed for bare-metal Linux servers, for providing isolated development environments that resemble production without requiring full VM overhead, and for running infrastructure services like databases, monitoring stacks, and build systems where the container is expected to be a stable, long-lived entity.
Proxmox Virtual Environment is a prominent production platform built around this model. Its container technology is LXC underneath, and administrators manage system containers the same way they would manage virtual machines: persistent, individually named, configured, and maintained.
On the other hand, system containers are a poor fit for the ephemeral, stateless deployment patterns that define modern application delivery. Spinning up a system container to run a single Go binary is like renting an entire apartment to store one jacket. The overhead is not prohibitive, but it mismatches the tool to the use case. For that pattern, application-level runtimes offer a more coherent model: immutable images, declarative configuration, and built-in registry distribution.
The choice, properly understood, is not which technology is better. It is which isolation primitive matches the workload's lifecycle. A container that will be SSH'd into, updated, monitored, and running for eighteen months belongs in the system container model. A container that will be pulled from a registry, started, serve a thousand requests, and discarded belongs somewhere else.
What LXC Reveals About the True Nature of Linux Isolation
LXC has occupied an unusual position in the container ecosystem for over a decade: technically foundational, yet frequently overlooked in conversations dominated by application runtime tooling. That invisibility is somewhat ironic, given that LXC was the first to put the namespace and cgroup primitives into the hands of userspace administrators, and those are the same primitives every other container technology builds on today.
What LXC makes legible, more clearly than any higher-level abstraction does, is the actual shape of Linux isolation. There is no magic in a container. There is a PID namespace, a network namespace, a mount namespace, a cgroup hierarchy, and a set of capabilities either granted or revoked. LXC puts those controls directly in the administrator's hands without hiding them behind a daemon or a declarative image format. That directness is both its greatest strength and the source of the learning curve that sends some practitioners toward more opinionated tools.
For teams running mixed workloads on shared physical hardware, LXC's system container model offers something that neither full virtualization nor application containers deliver cleanly: the density and startup speed of a container combined with the operational familiarity of a traditional Linux server. A system container boots in under a second, consumes no memory for a guest kernel, and presents a complete, administrable environment to whoever logs into it. That combination remains genuinely difficult to replicate with any other single technology.
The distinction between system containers and application containers is, in the end, a distinction about assumptions. System containers assume the workload was designed for a server and should feel like one. Application containers assume the workload is a single process and should be treated like one. LXC makes both possible, but it was built with the first assumption in mind, and that origin shapes everything about how it behaves, how it is configured, and where it performs best.