Isolating Traffic & Ensuring NFS Redundancy with Bonding (Link Aggregation)
Objective
The goal of this project is to isolate traffic between Machine1 & Machine2 while ensuring both can access the NFS server. Additionally, redundancy is implemented for the server's network interfaces to enhance fault tolerance.
Constraints
- Layer 2 switches (Cumulus Linux) for Switch 1 & Switch 2.
- Only Switch 3 can use Layer 3 (routing if needed).
- No additional VMware LAN segments for isolation.
Network Design Without Redundancy
We divided the implementation into two parts:
- Configuring devices to allow client-to-server communication while isolating clients from each other.
- Enabling high availability for the NFS server.
Infrastructure Choices
- Machine 1 & Machine 2: Separated into VLANs (Machine1 in VLAN 10, Machine2 in VLAN 20) to prevent direct communication.
- NFS Server: Interface in trunk mode to receive traffic from both VLANs.
- Switch 1: one access port for Machine1, two trunk ports (to the NFS server & Switch3).
- Switch 2: two access ports in VLAN 20 (Machine2 & the link to Switch3).
- Switch 3: Acts as an intermediary switch.
Switch Port Roles
- Switch1:
  - swp1: VLAN 10 (access mode)
  - swp2: Trunk to NFS Server
  - swp3: Trunk to Switch3
- Switch2:
  - swp1, swp2: VLAN 20 (access mode)
- Switch3:
  - swp1: Trunk to Switch1
  - swp2: VLAN 20 (access mode)
Why VLANs?
We used VLANs to isolate Machine1 from Machine2 while keeping Switch3 a Layer 2 switch. Since both machines remain in the same LAN, VLANs provide the separation without adding inter-VLAN routing.
Switch Configuration
Switch 1
net add interface swp1 bridge access 10
net add interface swp2,swp3 bridge vids 10,20
net add bridge bridge ports swp1,swp2,swp3
net add bridge bridge vids 10,20
net commit
Switch 2
net add interface swp1,swp2 bridge access 20
net add bridge bridge ports swp1,swp2
net add bridge bridge vids 20
net commit
Switch 3
net add interface swp1 bridge vids 10,20
net add interface swp2 bridge access 20
net add bridge bridge ports swp1,swp2
net add bridge bridge vids 20
net commit
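After committing, the VLAN membership of each port can be verified on any of the switches with the standard Linux bridge tool (available on Cumulus Linux) and NCLU:
bridge vlan show
net show interface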
NFS Server & Client Setup
NFS Server Configuration
Network Setup
Install VLAN support on the server & clients:
apt install vlan
Configure the NFS server with dual IPs (one VLAN sub-interface per VLAN) to support VLAN-based communication.
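A minimal sketch of this dual-IP setup with iproute2, assuming the server's trunk-facing NIC is named ens34 and /24 addressing in each VLAN:
# load 802.1Q support and create one sub-interface per VLAN (NIC name & masks are assumptions)
sudo modprobe 8021q
sudo ip link set ens34 up
sudo ip link add link ens34 name ens34.10 type vlan id 10
sudo ip link add link ens34 name ens34.20 type vlan id 20
sudo ip addr add 10.0.0.1/24 dev ens34.10
sudo ip addr add 20.0.0.1/24 dev ens34.20
sudo ip link set ens34.10 up
sudo ip link set ens34.20 up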
NFS Installation
Install the required packages & set up directory sharing:
sudo apt install nfs-kernel-server
mkdir -p /machine1 /machine2
# export each share to its client VLAN (subnet masks assumed to be /24)
echo "/machine1 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
echo "/machine2 20.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -a
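Before moving on to the clients, the exports can be sanity-checked on the server, for example:
exportfs -v
showmount -e localhost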
Client Machines Setup
Mounting NFS Shares on Clients
Machine1:
mkdir -p /mnt/machine1
mount -t nfs 10.0.0.1:/machine1 /mnt/machine1
Machine2:
mkdir -p /mnt/machine2
mount -t nfs 20.0.0.1:/machine2 /mnt/machine2
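To make the mounts persistent across reboots, an /etc/fstab entry can be added on each client (one line per machine):
# on Machine1:
10.0.0.1:/machine1  /mnt/machine1  nfs  defaults  0  0
# on Machine2:
20.0.0.1:/machine2  /mnt/machine2  nfs  defaults  0  0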
Testing
Machine 1 to NFS Server
Ping Test
ping 10.0.0.1
Write Test
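For example, a small file can be written from Machine1 into its share and read back (assuming the exported directory is writable by the client user; the file name is arbitrary):
echo "hello from machine1" > /mnt/machine1/test.txt
cat /mnt/machine1/test.txt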
Capture of TCP exchanges:
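A capture of the NFS TCP traffic can be taken on the server, for example on the VLAN 10 sub-interface (name assumed as above), and then inspected in Wireshark:
sudo tcpdump -i ens34.10 -w nfs-machine1.pcap port 2049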
Adding Fault Tolerance
To improve reliability, we connected the NFS server through two network interfaces (ens34 & ens35) aggregated under bond0 using LACP (Link Aggregation Control Protocol). This allows VLANs 10 & 20 to pass through while ensuring redundancy.
Switch 1 Configuration for LACP
net del bridge bridge ports swp2
net del interface swp2 bridge vids 10,20
net add bond bond0 bond slaves swp2,swp4
net add bond bond0 bridge vids 10,20
net add bridge bridge ports bond0
net commit
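Once committed, the state of the aggregate and its members can be checked on the switch:
net show interface bond0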
Server Configuration for LACP
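A minimal sketch for a Debian-based server using ifupdown (ifenslave & vlan packages installed): ens34 & ens35 are enslaved to bond0 in 802.3ad (LACP) mode, and the VLAN sub-interfaces now sit on top of the bond. Interface names come from the description above; the /24 netmasks are an assumption.
# /etc/network/interfaces (sketch -- names from above, netmasks assumed)
auto bond0
iface bond0 inet manual
    bond-slaves ens34 ens35
    bond-mode 802.3ad        # LACP
    bond-miimon 100
    bond-lacp-rate 1

# VLAN sub-interfaces ride on bond0 instead of a single NIC
auto bond0.10
iface bond0.10 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    vlan-raw-device bond0

auto bond0.20
iface bond0.20 inet static
    address 20.0.0.1
    netmask 255.255.255.0
    vlan-raw-device bond0
After editing the file, the configuration can be applied by bringing the interfaces up with ifup or restarting the networking service.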
Performance Testing
Run iperf tests to measure bandwidth with & without link aggregation:
iperf -c 10.0.0.1 -t 10
iperf -c 10.0.0.1 -P 4 -t 10
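These client-side runs assume an iperf server was started on the NFS server beforehand:
iperf -s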
Testing LACP Failover
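One way to exercise the failover (interface names as assumed above) is to take one bond member down on the server while traffic is running, then bring it back:
cat /proc/net/bonding/bond0   # check LACP state and active slaves
sudo ip link set ens34 down   # simulate a link failure
ping 10.0.0.1                 # from Machine1: replies should keep flowing via ens35
sudo ip link set ens34 up     # restore the link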
With this setup, network failures are handled smoothly without disrupting client access to the NFS server.