A freshly installed Linux system has one network stack. Every process on the machine shares it: the same interfaces, the same routing table, the same firewall rules, the same port number space. Two processes cannot both listen on TCP port 80 at the same address. A misconfigured routing rule affects every application simultaneously. For a single-purpose server running one or two services, this global shared state is perfectly acceptable. For a machine running dozens of containers, each of which expects to own its own network environment, it is a fundamental obstacle.

Network namespaces remove that obstacle. Each namespace is a complete, independent copy of the Linux network stack: its own interfaces, its own routing table, its own conntrack table, its own iptables and nftables rulesets, and its own port number space. Two processes in different namespaces can both bind to port 80 without conflict, because they are operating in entirely separate network universes. The kernel enforces this separation at the level of individual file descriptors and socket operations, with no userspace cooperation required.
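The conflict that namespaces eliminate is easy to reproduce inside a single namespace. The following sketch uses a python3 heredoc and an ephemeral port rather than port 80, so it runs unprivileged; the mechanism is identical:

```shell
# Two sockets in one namespace cannot bind the same address and port.
python3 - <<'EOF'
import errno
import socket

a = socket.socket()
a.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
a.listen()
port = a.getsockname()[1]

b = socket.socket()
try:
    b.bind(("127.0.0.1", port))  # same namespace, same port: conflict
except OSError as e:
    print("second bind failed:", errno.errorcode[e.errno])
EOF
# second bind failed: EADDRINUSE
```

Run the same second bind from a different network namespace and it succeeds, because the port number space being checked belongs to that namespace alone.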

Understanding how that isolation is constructed, and how namespaces are wired together to form the networking model that container platforms depend on, requires following the mechanism from its simplest form all the way to the complete multi-namespace topology that underlies every Docker or Kubernetes deployment.

What a Network Namespace Contains and Why Isolation Starts with a Loopback Interface

When a new network namespace is created, it does not inherit anything from its parent. It receives one interface and only one: a loopback device named lo, which is down by default. There are no routing entries, no firewall rules, no IPv4 or IPv6 addresses, and no visible connection to any physical or virtual hardware. The namespace is, at the moment of creation, an island with no bridges to anywhere.

This emptiness is not a limitation to work around. It is the precise property that makes namespaces useful as isolation primitives. The administrator, or more commonly the container runtime, builds exactly the network environment the workload needs by adding virtual interfaces, assigning addresses, and writing routing rules. Nothing arrives uninvited.

The kernel represents each network namespace internally as a struct net, a large data structure that contains pointers to the namespace's routing tables, interface list, protocol-specific state for IPv4, IPv6, and other protocol families, and the netfilter hook registrations that govern packet filtering within that namespace. A process belongs to exactly one network namespace at any given time. That association is stored in the process's task_struct as a pointer to the appropriate struct net. When the process opens a socket, looks up a route, or sends a packet, every operation is dispatched through the namespace it belongs to, and no other namespace's state is consulted or visible.
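That per-process association is visible from userspace: each process's network namespace appears as a symlink under /proc/<pid>/ns/, and the link target encodes the namespace's inode number. Two processes share a struct net exactly when the targets match:

```shell
# The namespace a process belongs to, identified by inode number
readlink /proc/self/ns/net
# net:[4026531840]   (the inode number varies between systems)

# A child process inherits its parent's namespace, so the targets match
sh -c 'readlink /proc/self/ns/net'
```

Comparing these targets is the reliable way to check whether two processes really share a network stack, since interface names alone can coincide across namespaces.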

Creating and Inspecting Network Namespaces with ip netns

The standard tool for managing network namespaces from userspace is the ip command from the iproute2 package. Its netns subcommand covers the full lifecycle of named namespaces, which are represented as bind-mounted files under /var/run/netns/:

# Create two named network namespaces
ip netns add ns1
ip netns add ns2

# List all named namespaces on the host
ip netns list

# Verify the namespace files exist in the filesystem
ls /var/run/netns/
# ns1  ns2

# Execute a command inside a namespace
ip netns exec ns1 ip link list
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
#     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

# Bring the loopback interface up inside ns1
ip netns exec ns1 ip link set lo up

# Open an interactive shell inside a namespace
ip netns exec ns1 bash
# All subsequent commands in this shell run inside ns1

The ip netns exec prefix is the workhorse of namespace administration. Every command that follows it runs in the context of the named namespace, seeing only that namespace's interfaces, routes, and firewall rules. Running ip route inside the namespace shows an empty table. Running ss -tlnp shows no listening sockets. The isolation is complete and immediate.
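Assuming the ns1 namespace created above (root required), the emptiness is easy to confirm directly:

```shell
# The routing table of a fresh namespace is empty
ip netns exec ns1 ip route
# (no output)

# No sockets are listening; only the ss column header is printed
ip netns exec ns1 ss -tln
```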

Named namespaces, managed through /var/run/netns/, are convenient for experimentation and for tools like Open vSwitch that need to reference namespaces by name. Container runtimes typically create anonymous namespaces instead, referenced only through file descriptors, and store their bind-mount paths under /var/run/docker/netns/ or similar runtime-specific directories.

Virtual Ethernet Pairs as the Cable Between Isolated Network Stacks

A namespace in complete isolation is useful for some scenarios, but most workloads need to communicate. The primary mechanism for connecting two namespaces is the virtual Ethernet pair, universally referred to as a veth pair: two virtual network interfaces that behave like the ends of a patch cable, where a frame transmitted on one interface is delivered immediately to the other. The two interfaces can live in different network namespaces, which makes the veth pair the natural primitive for wiring namespaces together.

Creating a veth pair and placing one end into a namespace takes two commands:

# Create a veth pair: veth0 stays in the host namespace, veth1 goes to ns1
ip link add veth0 type veth peer name veth1
ip link set veth1 netns ns1

# Confirm: veth1 has disappeared from the host namespace
ip link list
# veth0 is visible; veth1 is not

# Confirm: veth1 is visible inside ns1
ip netns exec ns1 ip link list
# 1: lo: <LOOPBACK,UP,LOWER_UP> ...
# 2: veth1@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN ...

# Assign addresses and bring both ends up
ip addr add 10.10.0.1/24 dev veth0
ip link set veth0 up

ip netns exec ns1 ip addr add 10.10.0.2/24 dev veth1
ip netns exec ns1 ip link set veth1 up

# Verify connectivity from the host to ns1
ping -c 2 10.10.0.2

The moment ip link set veth1 netns ns1 executes, veth1 vanishes from the host's interface list. It exists now in a different network namespace and is completely invisible to any process in the host namespace unless those processes explicitly enter ns1. The kernel enforces this at the interface lookup level: the interface list that a process sees when it calls getifaddrs() or reads /proc/net/if_inet6 is filtered to contain only the interfaces belonging to its namespace.

One shortcut worth knowing is that the veth pair can be created with the peer already assigned to a namespace in a single command:

ip link add veth0 type veth peer name veth1 netns ns1

This eliminates the intermediate state where both ends briefly exist in the host namespace, which matters in scripted environments where another process might claim the interface before the second command runs.

Linux Bridges and How Multiple Namespaces Share a Single Switching Layer

Connecting each namespace directly to the host with individual veth pairs works for two namespaces but does not scale. When ten or twenty containers need to communicate with each other and with the outside world, the wiring becomes a topology problem. The solution is the Linux bridge, a software layer-2 switch that forwards Ethernet frames between all interfaces attached to it, learning MAC addresses from traffic in exactly the same way a physical switch does.

The standard multi-namespace topology for a container host uses one bridge in the host namespace and one veth pair per namespace, with one end of each pair attached to the bridge and the other end inside the container's namespace:

# Create the bridge and assign it an IP address on the container subnet
ip link add br0 type bridge
ip addr add 172.20.0.1/24 dev br0
ip link set br0 up

# Create ns1 and connect it to the bridge
ip netns add ns1
ip link add veth-host1 type veth peer name veth-ns1
ip link set veth-host1 master br0
ip link set veth-host1 up
ip link set veth-ns1 netns ns1

ip netns exec ns1 ip link set lo up
ip netns exec ns1 ip link set veth-ns1 up
ip netns exec ns1 ip addr add 172.20.0.2/24 dev veth-ns1

# Create ns2 and connect it to the same bridge
ip netns add ns2
ip link add veth-host2 type veth peer name veth-ns2
ip link set veth-host2 master br0
ip link set veth-host2 up
ip link set veth-ns2 netns ns2

ip netns exec ns2 ip link set lo up
ip netns exec ns2 ip link set veth-ns2 up
ip netns exec ns2 ip addr add 172.20.0.3/24 dev veth-ns2

# Verify: ns1 can reach ns2 through the bridge
ip netns exec ns1 ping -c 2 172.20.0.3

The bridge acts as an invisible switch between all attached veth ends. Frames from ns1 addressed to ns2 traverse the veth pair into the host namespace, where the bridge forwards them out through the second veth pair and into ns2. Neither namespace needs a route for the other; the layer-2 forwarding handles it. The bridge's own IP address, 172.20.0.1, serves as the gateway for both namespaces once default routes are added.

This is precisely the topology that Docker creates with its docker0 bridge. Every container receives a veth pair, one end inside the container's network namespace and the other attached to docker0. The bridge's address is the gateway through which container traffic reaches the host and, from there, the outside network.
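Assuming the br0 topology built above, the bridge's attached ports and learned MAC addresses can be inspected from the host namespace; the same commands work against docker0 on a Docker host:

```shell
# List the interfaces attached to br0 (the bridge's "ports")
ip link show master br0

# The same attachment information from the bridge(8) tool
bridge link show

# MAC addresses the bridge has learned, and the port each sits behind
bridge fdb show br br0
```

The fdb output is the software equivalent of a physical switch's MAC address table, and watching it fill in as namespaces exchange traffic is a direct way to observe the layer-2 learning described above.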

Routing, NAT, and Granting a Namespace Access to the Outside World

A namespace connected to a bridge can reach other namespaces on the same bridge, but it has no path to the wider network until two conditions are met: a default route pointing to the bridge gateway, and an IP masquerade rule that rewrites the container's private source address before the packet leaves the host's physical interface.

Adding the default route inside each namespace is straightforward:

ip netns exec ns1 ip route add default via 172.20.0.1
ip netns exec ns2 ip route add default via 172.20.0.1

Without the masquerade rule, packets from 172.20.0.2 would leave through the physical interface with their private source address intact; even if they reached their destination, the replies would be addressed to 172.20.0.0/24, a subnet for which the upstream network has no return route, and the connection would silently fail. The masquerade rule in the host's netfilter POSTROUTING chain rewrites the source address to the host's public interface address for any packet originating from the container subnet:

# Enable IP forwarding on the host (required for routing between interfaces)
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade outgoing traffic from the container subnet
iptables -t nat -A POSTROUTING -s 172.20.0.0/24 -o eth0 -j MASQUERADE

# Allow forwarded traffic between the bridge and the external interface
iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Verify outbound connectivity from ns1
ip netns exec ns1 ping -c 2 8.8.8.8

The masquerade target relies on the connection tracking subsystem to record the original source address and restore it in the reply direction automatically, without any additional rules for return traffic. From the perspective of the remote host, the packet appears to come from the host's public IP address. The namespace's private addressing is never visible outside the machine.

Port forwarding from the host into a namespace follows the reverse path: a DNAT rule in the PREROUTING chain rewrites the destination address of incoming packets before the routing decision:

# Forward TCP traffic arriving on host port 8080 into ns1 on port 80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
    -j DNAT --to-destination 172.20.0.2:80

This is the mechanism behind docker run -p 8080:80: a DNAT rule is installed in the host's netfilter tables, and connection tracking handles the reverse translation for replies automatically.
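The rules a runtime installs can be read back with iptables -S, which prints them in the same syntax used to create them. On a Docker host the per-container DNAT entries typically live in a dedicated DOCKER chain referenced from PREROUTING (chain name as observed on standard Docker installs):

```shell
# List the NAT rules currently installed on the host
iptables -t nat -S PREROUTING
iptables -t nat -S POSTROUTING

# On a Docker host, per-container DNAT rules live in the DOCKER chain
iptables -t nat -S DOCKER
```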

How Container Runtimes Use Network Namespaces Under the Hood

When a container runtime starts a container, the sequence of operations it performs with network namespaces is methodical and precise. First, the runtime creates a new network namespace by calling clone(2) with the CLONE_NEWNET flag or by calling unshare(2) in a child process. Second, it creates a veth pair in the host namespace, moves one end into the new namespace, and attaches the other end to the appropriate bridge. Third, it assigns an IP address from its managed subnet to the in-namespace interface, adds a default route pointing to the bridge, and brings both interfaces up. The container process is then started inside the new namespace with the network environment fully configured.
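The first step of that sequence can be reproduced from a shell with unshare(1), which wraps the unshare(2) system call; this requires root, since creating a network namespace needs CAP_NET_ADMIN:

```shell
# Run a single command inside a brand-new network namespace.
# Only the loopback interface exists, and it is down.
unshare --net ip -o link show
# 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN ...
```

Because no bind mount or long-lived process holds a reference, this namespace is destroyed the moment the command exits, which is exactly the behavior the runtime's bind-mounting step exists to prevent.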

Container runtimes keep a namespace alive past the exit of its last process by bind-mounting its /proc/<pid>/ns/net file to a stable path. The bind mount holds a reference, so the kernel does not destroy the namespace as long as the mount exists, even if no process is currently inside it. This is what allows a container to be paused and resumed, and what allows network configuration to persist across container restarts.

Inspecting the network namespace of a running container requires finding the process ID of the container's main process and entering its namespace:

# Find the PID of a running container
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' my_container)

# Enter the container's network namespace and list its interfaces
nsenter --net=/proc/${CONTAINER_PID}/ns/net ip addr
nsenter --net=/proc/${CONTAINER_PID}/ns/net ip route
nsenter --net=/proc/${CONTAINER_PID}/ns/net ss -tlnp

# Capture traffic on the container's interface from the host side
# First, find the host-side peer index embedded in the container's eth0 name
PEER_IDX=$(nsenter --net=/proc/${CONTAINER_PID}/ns/net ip link \
    | grep -oP '(?<=eth0@if)\d+')
# Map that index to the veth interface name in the host namespace
ip link | grep "^${PEER_IDX}:"
# Then capture on that interface (the name looks like veth3a2b1c)
tcpdump -i veth3a2b1c -nn -s 0

This approach is indispensable for debugging container networking without modifying the container image to include diagnostic tools.

Persistent Namespace Configuration and the /etc/netns Convention

Named namespaces created with ip netns add are lost when the system reboots unless the configuration is scripted into a startup service. More subtly, some programs look for configuration files like /etc/resolv.conf and /etc/hosts and find the host's versions, which may not be appropriate for the namespace's isolated network environment.

The ip netns exec command addresses this through a file resolution convention: before executing a command inside a named namespace, it checks whether /etc/netns/<name>/ exists and, if so, bind-mounts the files found there over the corresponding paths in /etc/. A namespace named isolated that needs its own DNS configuration gets it through a file at /etc/netns/isolated/resolv.conf:

# Create the namespace-specific directory
mkdir -p /etc/netns/isolated

# Write a custom resolv.conf for the namespace
cat > /etc/netns/isolated/resolv.conf << 'EOF'
nameserver 127.0.0.1
search internal.example.com
EOF

# Any command run in the namespace now sees this resolv.conf
ip netns exec isolated cat /etc/resolv.conf
# nameserver 127.0.0.1
# search internal.example.com

For systems that need namespace configurations to survive reboots, a systemd service unit that runs the ip netns and ip link setup commands at boot is the standard approach. systemd-networkd can also declare veth pairs, bridge devices, and address assignments in .netdev and .network files that are applied automatically at startup, although the namespaces themselves still need to be created by a unit or script.
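A minimal sketch of such a unit, assuming the br0/ns1 topology built earlier; the file path, unit name, and binary path are illustrative and vary by distribution:

```ini
# /etc/systemd/system/netns-ns1.service -- illustrative, not a standard file
[Unit]
Description=Create the ns1 network namespace and wire it to br0
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip netns add ns1
ExecStart=/usr/sbin/ip link add veth-host1 type veth peer name veth-ns1 netns ns1
ExecStart=/usr/sbin/ip link set veth-host1 master br0 up
ExecStart=/usr/sbin/ip netns exec ns1 ip link set lo up
ExecStart=/usr/sbin/ip netns exec ns1 ip link set veth-ns1 up
ExecStart=/usr/sbin/ip netns exec ns1 ip addr add 172.20.0.2/24 dev veth-ns1
ExecStart=/usr/sbin/ip netns exec ns1 ip route add default via 172.20.0.1
ExecStop=/usr/sbin/ip netns del ns1

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes runs the ExecStart lines in order at boot and treats the unit as active afterward, so the matching ExecStop tears the namespace down cleanly on shutdown. The unit assumes br0 already exists, for example created by an earlier unit or by systemd-networkd.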

Network namespaces are, at their core, a straightforward kernel primitive: an independent copy of the network stack, assigned to a process through a pointer in its task structure. What makes them genuinely powerful is not that primitive itself but the composability that follows from it. A bridge here, a veth pair there, a masquerade rule at the boundary, and an otherwise unremarkable process finds itself operating inside a private network environment that no other process on the machine can see, modify, or interfere with. That composability is precisely why every major container platform, from the simplest single-host Docker setup to the most complex multi-tenant Kubernetes cluster, builds its networking model on top of these namespaces rather than on any higher-level abstraction.