Link aggregation performance
Hello,
I'm a little disappointed with OpenBSD's network performance and I'm looking for other points of view.

Background

I need to put traffic load on a network lab. Naturally, I thought of my favorite distribution, OpenBSD. So I've installed two HP ML370 G3 servers with OpenBSD 5.2 i386. Each server is equipped with two dual-port Intel gigabit NICs (plus one embedded gigabit NIC), two 3.2GHz Xeon CPUs with Hyper-Threading, and 12GB of RAM. I have installed two tools, iperf and netperf, and ran the following experiments with both; the results are similar.

On the network side, the topology is as follows: a core network with two big chassis and two high-performance access stacks (switches). Each access stack is connected with two 10Gbps fiber links in aggregation.

Experiment 1

Server A - NIC em0 - Subnet A - @.111
Server B - NIC em0 - Subnet B - @.222

One-way test (64KB window): ~870Mbps

Experiment 2

Server A - NIC trunk0 - Subnet A - @.111
Server A - NIC em0 - trunkport, loadbalance
Server A - NIC em1 - trunkport, loadbalance
Server B - NIC trunk0 - Subnet B - @.222
Server B - NIC em0 - trunkport, loadbalance
Server B - NIC em1 - trunkport, loadbalance

The switch side is configured for aggregation with a hash algorithm based on src.mac+dst.mac+src.ip+dst.ip+src.port+dst.port.

One-way test (64KB window): ~870Mbps
Two parallel one-way tests (64KB window): the total doesn't exceed ~850Mbps

Experiment 3

Server A - NIC trunk0 - Subnet A - @.111 and @.112
Server A - NIC em0 - trunkport, loadbalance
Server A - NIC em1 - trunkport, loadbalance
Server B - NIC trunk0 - Subnet B - @.222 and @.223
Server B - NIC em0 - trunkport, loadbalance
Server B - NIC em1 - trunkport, loadbalance

The switch side is configured for aggregation with a hash algorithm based on src.mac+dst.mac+src.ip+dst.ip+src.port+dst.port.

One-way test (64KB window): ~870Mbps
Two parallel one-way tests (64KB window, on different IP addresses): the total doesn't exceed ~850Mbps

Experiment 4

Server A - NIC trunk0 - Subnet A - @.111 and @.112
Server A - NIC em0 - trunkport, lacp
Server A - NIC em1 - trunkport, lacp
Server B - NIC trunk0 - Subnet B - @.222 and @.223
Server B - NIC em0 - trunkport, lacp
Server B - NIC em1 - trunkport, lacp

The switch side is configured for standard LACP aggregation.

One-way test (64KB window): ~870Mbps
Two parallel one-way tests (64KB window, on different IP addresses): the total doesn't exceed ~850Mbps

Experiment 5

Server A - NIC em0 - Subnet A - @.111
Server A - NIC em1 - Subnet B - @.111
Server B - NIC em0 - Subnet A - @.222
Server B - NIC em1 - Subnet B - @.222

Two parallel one-way tests (64KB window, on different IP addresses): the total doesn't exceed ~850Mbps

Question

Why am I stuck at the ~1Gbps limit?
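For reference, the trunk setup in experiments 2 to 4 is the standard trunk(4) configuration. A minimal sketch, with placeholder 192.0.2.x addresses instead of my real ones, looks roughly like this:

# /etc/hostname.em0 and /etc/hostname.em1 each contain only:
up

# /etc/hostname.trunk0 (loadbalance; experiment 4 uses "trunkproto lacp" instead)
trunkproto loadbalance trunkport em0 trunkport em1
inet 192.0.2.111 255.255.255.0

The tests themselves were plain iperf runs roughly of this form (standard iperf 2 flags; the address is again a placeholder):

# on server B (receiver)
iperf -s -w 64K

# on server A (sender): one stream, then two parallel streams
iperf -c 192.0.2.222 -w 64K -t 30
iperf -c 192.0.2.222 -w 64K -t 30 -P 2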
Thanks for your help.
Here's a recent thread on misc@ about 10gbit performance tests...
http://marc.info/?l=openbsd-misc&m=134313556730356&w=2
The strangest part, and the most disturbing for me, is the last experiment: two independent NICs can't break ~1Gbps. So I think that the root cause isn't the trunk driver. I forgot to mention that PF was disabled; in this case I'm only interested in raw performance.

Thank you jggimi, the thread is very interesting. In summary, I have also tested the Calomel.org tuning without any measurable performance improvement.
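(For anyone who hasn't read that guide: the tuning is applied largely through sysctl knobs along these lines. This is only a sketch with illustrative values, not the guide's exact recommendations, and none of it changed my numbers.)

# /etc/sysctl.conf -- illustrative values only
net.inet.tcp.recvspace=262144   # larger TCP receive buffer
net.inet.tcp.sendspace=262144   # larger TCP send buffer
kern.maxclusters=32768          # more mbuf clusters
net.inet.ip.ifq.maxlen=512      # deeper IP input queue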
The size of the community on this site is quite small, & the number of people who actually attempt to answer questions is smaller still. Given the specificity of your questions, my recommendation would be to post to the misc@ mailing list, as that venue is where the project's developers reside; however, as was seen in the earlier thread cited, issues do remain when it comes to performance.
Hi Ocicat,
That's a realistic approach. I'll do that and post an update on how this new quest goes. Thank you to both of you for the brain processing time.
Hi,
The openbsd-misc mailing list wasn't helpful: http://marc.info/?t=135886701500009&r=1&w=2 The discussion on this forum was much more useful! So, if somebody has a good idea...
Given that the only mention of the version used is OpenBSD 5.2, I suspect that testing was done with 5.2-release. This particular version was tagged in CVS ~August 2012 which means that there has been five months of development since. The developers are more interested in the behaviour of the code found at the head of CVS, so I would suggest testing with -current (which is now 5.3-beta...) and compare & contrast to the results of your earlier testing. If you are unfamiliar with the differences between -release & -current, see Section 5.1 of the FAQ.
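Once a snapshot is installed, confirming what you are actually running is trivial; both of these are standard commands, and kern.version includes the build date:

sysctl kern.version
uname -sr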
Secondly, I will caution you about confrontational wording, & suggest you consider the politics involved. Wording such as "Throughput is horrible!" may cause developers to ignore your thread as opposed to "I'm seeing a threshold at 870Mbps; does anyone have experience with tuning for a faster rate?"

Thirdly, you might consider posting to tech@ as not all developers read misc@ on a regular basis, but be forewarned that posts on tech@ require concrete evidence substantiating the issue(s) described.

Again, developers do not regularly frequent this site. Few members here are networking professionals, & far fewer have experience or knowledge with the implementation details associated with OpenBSD's networking stack.
I have no further advice. The only simulated network load tests I've performed were in large-scale commercial IT laboratories with commercial network simulation tools. And we were not stress testing routers... we were stress testing application server farms.
I don't know iperf or netperf at all, and with that ignorance I still wonder whether the tests you conducted were insufficient or incomplete compared with real workloads.
As a final thought on the matter, & I recognize that this may be an undesirable suggestion, you might consider contacting developers privately. You might also consider offering hardware on a loan basis to see if this might be an enticement for someone to address the issue; if you were able to donate equipment, you might see a faster response. Given that few comments were made to your earlier thread on misc@, accessibility to hardware may be an issue for some.
In addition, other information sources:
http://marc.info/?t=112192662800002&r=1&w=2
http://marc.info/?t=129532561900001&r=1&w=2

I'll take the time to redo these tests and post the results on this forum. Thanks to both of you.