Link aggregation performance
xinform3n, 10th January 2013

Hello,

I'm a little disappointed with OpenBSD's network performance, and I'm looking for other points of view.

background
I need to generate traffic load in a network lab. Naturally, I thought of my favorite operating system, OpenBSD.

So I installed OpenBSD 5.2 i386 on two HP ML370 G3 servers.
Each server is equipped with two Intel dual-port gigabit NICs (plus one embedded gigabit NIC), two 3.2 GHz Xeon CPUs with Hyper-Threading, and 12 GB of RAM.

I have installed two tools: iPerf and NetPerf.
I ran the following experiments with both tools; the results are similar.
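
For reference, a single one-way test of the kind reported below was run roughly like this with iperf (the addresses are placeholders, not the real lab addressing):

  # on server B (receiver), placeholder address 192.0.2.222
  iperf -s -w 64k

  # on server A (sender): one-way test with a 64 KB TCP window
  iperf -c 192.0.2.222 -w 64k -t 30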

On the network side, the topology is as follows:
a core network with two big chassis switches and two high-performance access stacks (switches).
Each access stack is connected to the core with two aggregated 10 Gbps fiber links.

experiment 1
Server A - NIC em0 - Subnet A - @.111
Server B - NIC em0 - Subnet B - @.222

One-way test (64 KB window): ~870 Mbps

experiment 2
Server A - NIC trunk0 - Subnet A - @.111
Server A - NIC em0 - trunkport, loadbalance
Server A - NIC em1 - trunkport, loadbalance

Server B - NIC trunk0 - Subnet B - @.222
Server B - NIC em0 - trunkport, loadbalance
Server B - NIC em1 - trunkport, loadbalance

The switch side is correctly configured for aggregation, with a hash algorithm based on src.mac + dst.mac + src.ip + dst.ip + src.port + dst.port.

One-way test (64 KB window): ~870 Mbps
Two parallel one-way tests (64 KB window): the total does not exceed ~850 Mbps
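
As a sketch, the OpenBSD side of the loadbalance trunk in experiments 2 and 3 was set up along these lines on server A (the addresses here are placeholders, not the lab's real subnets):

  ifconfig em0 up
  ifconfig em1 up
  ifconfig trunk0 create
  ifconfig trunk0 trunkproto loadbalance trunkport em0 trunkport em1
  ifconfig trunk0 inet 192.0.2.111 netmask 255.255.255.0 up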

experiment 3
Server A - NIC trunk0 - Subnet A - @.111 and @.112
Server A - NIC em0 - trunkport, loadbalance
Server A - NIC em1 - trunkport, loadbalance

Server B - NIC trunk0 - Subnet B - @.222 and @.223
Server B - NIC em0 - trunkport, loadbalance
Server B - NIC em1 - trunkport, loadbalance

The switch side is correctly configured for aggregation, with a hash algorithm based on src.mac + dst.mac + src.ip + dst.ip + src.port + dst.port.

One-way test (64 KB window): ~870 Mbps
Two parallel one-way tests (64 KB window, to different IP addresses): the total does not exceed ~850 Mbps
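
The second address in experiment 3 is just an alias on trunk0, something like (placeholder address again):

  ifconfig trunk0 inet alias 192.0.2.112 netmask 255.255.255.255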

experiment 4
Server A - NIC trunk0 - Subnet A - @.111 and @.112
Server A - NIC em0 - trunkport, lacp
Server A - NIC em1 - trunkport, lacp

Server B - NIC trunk0 - Subnet B - @.222 and @.223
Server B - NIC em0 - trunkport, lacp
Server B - NIC em1 - trunkport, lacp

The switch side is correctly configured for aggregation using standard LACP.

One-way test (64 KB window): ~870 Mbps
Two parallel one-way tests (64 KB window, to different IP addresses): the total does not exceed ~850 Mbps
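
The only OpenBSD-side change from the loadbalance setup is the trunk protocol, roughly (placeholder address):

  ifconfig trunk0 trunkproto lacp trunkport em0 trunkport em1
  ifconfig trunk0 inet 192.0.2.111 netmask 255.255.255.0 up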

experiment 5
Server A - NIC em0 - Subnet A - @.111
Server A - NIC em1 - Subnet B - @.111

Server B - NIC em0 - Subnet A - @.222
Server B - NIC em1 - Subnet B - @.222

Two parallel one-way tests (64 KB window, to different IP addresses): the total does not exceed ~850 Mbps
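
In experiment 5 the two parallel streams are forced onto the two physical NICs by binding each iperf client to one of server A's addresses, roughly (placeholder addresses, one per subnet):

  # stream 1: subnet A (em0 to em0)
  iperf -c 192.0.2.222 -B 192.0.2.111 -w 64k -t 30 &
  # stream 2: subnet B (em1 to em1)
  iperf -c 198.51.100.222 -B 198.51.100.111 -w 64k -t 30 &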

question

Why am I stuck at a ~1 Gbps limit?
  • The trunk(4) driver?
  • Kernel performance?
  • A PCI-X bus limitation?
  • A CPU or FSB limitation?
  • ... ?
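
To help narrow this down, these are the standard OpenBSD tools I can run on both servers while a test is in progress:

  top               # overall CPU usage, including interrupt time
  systat ifstat     # live per-interface throughput
  vmstat -i         # per-device interrupt counters
  netstat -i        # per-interface packet and error counters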

Thanks for your help.