You might find a discussion on 10Gb firewall performance from the misc@ mailing list helpful. It took place in January and February of last year, and begins here:
http://marc.info/?l=openbsd-misc&m=126419505806549&w=2

You don't mention the architecture you run with your Sunfire X4100 (AMD Opteron). For 4.8, the 10Gb NIC support list is the same for both i386 and amd64 (from the Project website), though capabilities differ -- the difference is in the footnotes. Review the hardware support documents for the details.

For amd64:

Code:
10Gb Ethernet Adapters
  * Intel 82597 PRO/10GbE based PCI adapters (ixgb)
  * Intel 82598 PRO/10GbE based PCI adapters (ix)
  * Neterion Xframe/Xframe-II based PCI adapters (xge)
  * Tehuti Networks 10Gb based PCI adapters (tht) (G)

For i386:

Code:
10Gb Ethernet Adapters
  * Intel 82597 PRO/10GbE based PCI adapters (ixgb) (A) (B) (C)
  * Intel 82598 PRO/10GbE based PCI adapters (ix) (A) (B) (C)
  * Neterion Xframe/Xframe-II based PCI adapters (xge) (A) (B) (C)
  * Tehuti Networks 10Gb based PCI adapters (tht) (A) (B) (C)

Last edited by jggimi; 20th January 2011 at 08:16 PM. Reason: clarification
There are a few other drivers that show up in an apropos search as well. I'm not sure whether they're well supported yet, or whether OpenBSD's network stack is well optimized for 10Gb speeds:

che(4) - Chelsio Communications 10Gb Ethernet device
myx(4) - Myricom Myri-10G PCI Express 10Gb Ethernet device

You should see if you can replicate your setup and artificially generate the kind of traffic you'll be seeing, see how it fares, and then gauge whether a dedicated intelligent switch/bridge might perform better.
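For real load testing you'd reach for tcpbench(1) or iperf, but just to illustrate the idea of generating synthetic traffic and checking how much of it survives, here is a minimal, hypothetical Python sketch. The address, port, and sizes are placeholders -- on loopback it only exercises the host itself, so point the sender at a machine on the far side of the bridge for a real test:

```python
import socket
import threading
import time

HOST = "127.0.0.1"                 # placeholder: use a host behind the bridge for a real test
PAYLOAD = b"x" * 1400              # stay under a typical 1500-byte MTU
COUNT = 5000

# Receiver socket; let the kernel pick a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind((HOST, 0))
recv_sock.settimeout(1.0)
port = recv_sock.getsockname()[1]

received = []

def receiver():
    """Count datagrams until the socket goes quiet for a second."""
    got = 0
    try:
        while True:
            recv_sock.recvfrom(2048)
            got += 1
    except socket.timeout:
        pass
    received.append(got)

t = threading.Thread(target=receiver)
t.start()

# Blast UDP datagrams as fast as the stack will take them.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.time()
for _ in range(COUNT):
    send_sock.sendto(PAYLOAD, (HOST, port))
elapsed = time.time() - start
t.join()

# Offered rate vs. what actually arrived; the gap is your drop count.
mbits = COUNT * len(PAYLOAD) * 8 / elapsed / 1e6
print(f"offered ~{mbits:.0f} Mb/s; receiver saw {received[0]} of {COUNT} datagrams")
```

UDP gives no delivery guarantee, which is exactly why the received count matters: the shortfall between sent and received is the drop rate the bridge (or the host's own buffers) imposed.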
Thank you for the thread link and the prototyping suggestion.

Is there any way to tell if a system running PF is maxed out? Would the load average spike, or would pf or my NIC drop packets quietly? Is there a PF metric I can look at to see if my bridge is performing OK? |
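For what it's worth, the usual places to look are `pfctl -si` (pf's own counters -- `memory`, `congestion`, `state-limit`, and `state-mismatch` climbing under load suggest pf itself is shedding packets), `netstat -i` for NIC-level input errors, and systat(1) for a live view. As a rough sketch of reading those counters programmatically, here is a small parser run against an embedded sample of `pfctl -si` output (the sample values are invented, and the exact output format can vary between releases):

```python
import re

# Abridged sample of `pfctl -si` output; values are invented for illustration.
SAMPLE = """\
Status: Enabled for 0 days 04:11:01              Debug: err

Counters
  match                              81517            5.4/s
  bad-offset                             0            0.0/s
  fragment                               9            0.0/s
  memory                               120            0.0/s
  congestion                             3            0.0/s
  state-mismatch                         0            0.0/s
  state-limit                            0            0.0/s
"""

def pf_counters(text):
    """Return {name: value} for the Counters section of `pfctl -si` output."""
    counters, in_section = {}, False
    for line in text.splitlines():
        if line.startswith("Counters"):
            in_section = True
            continue
        m = re.match(r"\s+([\w-]+)\s+(\d+)", line)
        if in_section and m:
            counters[m.group(1)] = int(m.group(2))
        elif in_section and not m:
            break               # left the Counters section
    return counters

# Counters that climb when pf is dropping packets rather than the wire.
WORRY = ("memory", "congestion", "state-limit", "state-mismatch")

c = pf_counters(SAMPLE)
for name in WORRY:
    if c.get(name, 0):
        print(f"pf may be dropping packets: {name} = {c[name]}")
```

In practice you'd feed it the live output of `pfctl -si` and watch whether the worry counters grow between samples, since absolute values accumulate since boot.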
If you stay PCI-X ... seriously crack into your system's technical specifications. PCI-X is a linear bus; that is, cards on the same bus COMPETE with each other. If they share a bus in a bridge configuration, each 10Gb card will CRUSH the other.

In all but the first-generation servers with PCI-X, the PCI-X chipset supports four (4) independent PCI-X buses. One bus (bus[0]) necessarily connects to the north/south bridge for CPU and memory access. The remaining three buses are typically spread across (a) on-board chips (bus[1], e.g. an embedded SCSI controller, if one exists), and (b) the physical slots in the system. If your PCI-X system includes legacy PCI-only slots, then one of these three buses is typically dedicated to those legacy PCI-only slots (bus[2]). Consequently, your PCI-X slots are typically spread over the remaining bus. You need to crack your mobo's technical spec/documentation and find two independent, non-contended slots for your 10Gb cards. If you can't isolate them to separate buses, then don't even try.

IMO, though, if you need something approaching wire- (or glass-) speed, then go, go, go PCI-e. The speed through all the rest of the system will be faster too. You can get a Supermicro mobo with the needed x8 (or x4) PCI-e slots for sub-CAD$200, or a genuine Intel mobo for just over CAD$200. Drop in a low-cost E3-1200 series Xeon ($200) and you're smoking.

/S
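The contention argument above is easy to check with nominal numbers. A quick sketch (peak figures from bus clock times 64-bit width; real throughput is lower once protocol overhead is counted):

```python
# Nominal peak bandwidth of a PCI-X bus vs. two 10GbE ports sharing it.

def pci_x_gbps(mhz, width_bits=64):
    """Peak PCI-X bandwidth in Gb/s for a given bus clock (nominal, pre-overhead)."""
    return mhz * 1e6 * width_bits / 1e9

bus_133 = pci_x_gbps(133)      # ~8.5 Gb/s, shared by every card on that bus
bus_100 = pci_x_gbps(100)      # ~6.4 Gb/s

# A bridge forwarding one 10 Gb/s stream crosses the bus twice
# (RX on one NIC, TX on the other) when both NICs share a bus.
needed_same_bus = 2 * 10.0

print(f"PCI-X @133MHz peak: {bus_133:.1f} Gb/s")
print(f"Bridge with both NICs on one bus needs: {needed_same_bus:.0f} Gb/s")
```

Even before overhead, a single 133MHz bus cannot carry one full-rate 10Gb stream in and out, which is why the two cards must land on independent buses.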
Update: 7/29/2011

I have secured a Sun Fire X4170 PCI-e based machine for my next firewall, and have installed a dual-port 10GbE Intel X520-DA2 adapter, which OpenBSD 4.9 is detecting fine in the amd64 build. I was thinking about trying the SolarFlare card, but it doesn't look like they have a stable driver yet; there's really only a beta FreeBSD driver so far. So I'll stick with the Intel card for now.

I've plugged it into the switch and am seeing traffic with tcpdump on the 10GbE port. I haven't set up the bridge yet, but am planning to put this baby between my Extreme Summit X650 24-port 10GbE switch and one of my file servers as a test.

Thanks for the responses here; there is much deep tech knowledge on this board and I am grateful for the help. I would still enjoy hearing anyone else's stories about setting up 10GbE bridges with OpenBSD. Will share more as I progress on this project.

Matt |
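For anyone following along, a minimal transparent-bridge setup on a 4.9/5.0-era system is just a few hostname.if(5) files (since 4.7 the old brconfig(8) syntax moved into ifconfig(8)). This is a sketch under the assumption that the two X520 ports attach as ix0 and ix1; check your dmesg for the actual names:

```
# /etc/hostname.ix0 -- first X520 port: no IP address, just bring it up
up

# /etc/hostname.ix1 -- second X520 port: likewise
up

# /etc/hostname.bridge0 -- member ports, then bring the bridge up
add ix0
add ix1
up
```

With that in place, `sh /etc/netstart` (or a reboot) brings the bridge up, and pf filters on the member interfaces as usual.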
UPDATE: 1/2012

I am up and running in production with a bridging firewall on OpenBSD 5.0, on a Sun Fire X4170 PCI-e based machine, using the dual-port 10GbE Intel X520-DA2 adapter. I think it is worth mentioning in this post a note I got from the folks at Calomel:

---begin Calomel comment
"Using ALTQ - packet queuing apparently can't work with 10Gb yet; there isn't enough bandwidth, or there is a bug that doesn't let you set the max bandwidth high enough. If you want to support 10G you cannot use ALTQ. The reason is that ALTQ's bandwidth value is limited to a 32-bit value, meaning you can only go up to 4294Mb/sec. Here is a link to the post we made on the OpenBSD and FreeBSD mailing lists about this issue:

pf ALTQ bandwidth limited to a 32bit value (4294Mb)
http://lists.freebsd.org/pipermail/f...ly/006203.html

No solutions were proposed by the group. The only idea the pf guys had was to wait until "prio" queueing is done in pf. That will take up to a year to finish, though. Secondly, ALTQ is a huge performance hit. When pushing anything more than 4Gbit/sec we notice heavy CPU usage."
---end Calomel comment
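The 4294Mb figure in that comment falls straight out of the 32-bit limit; a quick sanity check:

```python
# ALTQ (as of this era) stores the queue bandwidth in a 32-bit field,
# so the largest expressible rate is 2**32 - 1 bits per second.
max_bps = 2**32 - 1
max_mbps = max_bps // 10**6    # decimal megabits, as network rates are quoted

print(f"ALTQ bandwidth ceiling: ~{max_mbps} Mb/s")
```

That ceiling sits just under half of one 10GbE port, so ALTQ simply cannot describe a 10Gb pipe regardless of tuning.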