OpenBSD, PF, bridging and 10gE

#1 - 20th January 2011
mbw (Port Guard, Seattle, WA)

Hi,

I'm currently using an older Sun Fire X4100, with its integrated 1G network ports in a bridge, and PF to filter traffic to the 50 or so machines in my server room. The 1G uplink to the Internet is connected directly to the public side of the PF firewall.

I have used this setup for years and it works well.

But now we are considering upgrading the server room uplink from 1G over Cat5 to 10GbE over multimode fiber. If I do this and keep the same OpenBSD firewall, I am thinking I will need to put a dual-port PCI-X 10GbE network card in it in order to bridge the 10GbE from the public Internet uplink into the protected server room.

My question is this: will the PCI-X backplane be a bottleneck for achieving line-rate 10GbE? My local network folks have suggested that my firewall may be able to handle bridging 1G traffic but might not be able to handle 10GbE traffic. I'm not sure how to gauge this.

I suppose I could, in my ignorance, throw a newer 1U system with PCI-e 2.0 and the newest dual-port 10GbE card I can find at it, but it would be nice to understand what the constraints are.

Any pointers appreciated

thanks,

Matt
#2 - 20th January 2011
jggimi (More noise than signal, USA)

You might find a discussion on 10Gb firewall performance from the misc@ mailing list helpful. It took place in January and February of last year, and begins here:

http://marc.info/?l=openbsd-misc&m=126419505806549&w=2

You don't mention which architecture you run on your Sun Fire X4100 (AMD Opteron). For 4.8, the 10Gb NIC support list is the same for i386 and amd64 (from the Project website), though capabilities differ; the difference is in the footnotes. Review the hardware support documents for the details.

For amd64:
Code:
10Gb Ethernet Adapters

    * Intel 82597 PRO/10GbE based PCI adapters (ixgb)
    * Intel 82598 PRO/10GbE based PCI adapters (ix)
    * Neterion Xframe/Xframe-II based PCI adapters (xge)
    * Tehuti Networks 10Gb based PCI adapters (tht) (G)
For i386:
Code:
10Gb Ethernet Adapters

    * Intel 82597 PRO/10GbE based PCI adapters (ixgb) (A) (B) (C)
    * Intel 82598 PRO/10GbE based PCI adapters (ix) (A) (B) (C)
    * Neterion Xframe/Xframe-II based PCI adapters (xge) (A) (B) (C)
    * Tehuti Networks 10Gb based PCI adapters (tht) (A) (B) (C)
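
A quick way to check both things on the running box (standard base-system commands; the egrep pattern just matches attach lines for the drivers listed above):
Code:
machine                              # prints the running architecture: i386 or amd64
dmesg | egrep 'ixgb|ix0|xge0|tht0'   # did a supported 10Gb NIC attach at boot?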

#3 - 20th January 2011
BSDfan666 (Ontario, Canada)

There are a few other drivers that show up in an apropos search as well; I'm not sure whether they're well supported yet, or whether OpenBSD's network stack is well optimized for 10Gb speeds.

che(4) - Chelsio Communications 10Gb Ethernet device
myx(4) - Myricom Myri-10G PCI Express 10Gb Ethernet device

You should see if you can replicate your setup, artificially generate the kind of traffic you'll be seeing, and see how it fares; then gauge whether a dedicated intelligent switch/bridge might perform better. One way to generate the load is sketched below.
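
For example, tcpbench(1) from the base system can push synthetic TCP streams through the bridge (the address here is made up; run the first command on a host behind the bridge and the second on a host in front of it):
Code:
tcpbench -s             # receiver: listen and report throughput each interval
tcpbench 192.0.2.10     # sender: stream TCP data through the bridge to the receiver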
#4 - 20th January 2011
mbw (Port Guard, Seattle, WA)

Thank you for the thread link and the prototyping suggestion.

Is there any way to tell whether a system running PF is maxed out? Would the load average spike, or would PF or my NIC drop packets quietly?

Is there a PF metric I can look at to see whether my bridge is performing OK?
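
For what it's worth, these are the base-system counters I'd know to watch so far (assuming they are the right ones for catching quiet drops):
Code:
pfctl -si         # pf status: state-table stats, plus the "congestion" counter
netstat -i        # per-interface Ierrs/Oerrs
systat ifstat     # live per-interface throughput
vmstat -i         # interrupt rates; a pegged NIC interrupt line is a bad sign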
#5 - 22nd June 2011
s2scott (Package Pilot, Toronto, Ontario, Canada)

Quote:
Originally Posted by mbw
I am thinking that I will need to put a dual-port PCI-X 10GbE network card in there in order to bridge the 10GbE from the public Internet uplink into the protected server room.

My question is this: will the PCI-X backplane be a bottleneck for achieving line-rate 10GbE?
On a PCI-X based mobo, the only way you'll get close to 10Gb wire-speed throughput is to have two slots on two DIFFERENT, non-contended buses: 10Gb PCI-X NIC(1) on bus(1) and 10Gb PCI-X NIC(2) on bus(2).

If you stay PCI-X, seriously crack into your system's technical specifications. PCI-X is a shared parallel bus; cards on the same bus COMPETE with each other. If they sit on the same bus, in bridge mode each 10Gb card will CRUSH the other. The arithmetic below shows why.
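
Do the simple math (assuming the fastest common flavour, 64-bit PCI-X at 133 MHz; the 66/100 MHz variants are worse):
Code:
64 bits x 133 MHz = 8512 Mbit/s, roughly 8.5 Gbit/s theoretical peak,
SHARED by every card on the bus -- already below 10GbE line rate
before subtracting bus-protocol overhead.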

In all but the first-gen servers with PCI-X, the PCI-X chipset supports four (4) independent PCI-X buses. One bus (bus[0]) necessarily connects to the north/south bridge for CPU and memory access. The remaining three buses are typically spread across (a) on-board chips on bus[1] (e.g. an embedded SCSI controller, if present), and (b) the physical slots in the system.

If your PCI-X system includes legacy PCI-only slots, then one of these three buses is typically dedicated to those legacy slots (bus[2]). Consequently, your PCI-X slots are typically spread over the remaining bus(es).

You need to crack your mobo's technical spec/documentation and find those two independent, non-contended slots for your 10Gb cards. If you can't isolate them to separate buses, then don't even try.

IMO, though, if you need something approaching wire- (or glass-) speed, then go, go, go PCI-e.

The speed through all the rest of the system will be faster too. You can get a Supermicro mobo with the needed x8 (or x4) PCI-e slots for under CAD$200, or a genuine Intel mobo for just over CAD$200. Drop in a low-cost E3-1200 series Xeon (about $200) and you're smoking.

/S
#6 - 29th July 2011
mbw (Port Guard, Seattle, WA)

Update, 29th July 2011:

I have secured a Sun Fire X4170, a PCI-e based machine, for my next firewall, and have installed a dual-port Intel X520-DA2 10GbE adapter, which the amd64 build of OpenBSD 4.9 is detecting fine.

I was thinking about trying the SolarFlare card, but it doesn't look like they have a stable driver yet; there's really only a beta FreeBSD driver so far. So I'll stick with the Intel card for now.

I've plugged it into the switch and am seeing traffic with tcpdump on the 10GbE port. I haven't set up the bridge yet, but I'm planning to put this baby between my Extreme Summit X650 24-port 10GbE switch and one of my file servers as a test; the config sketch below is what I have in mind.
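
Roughly this (interface names assumed to be ix0/ix1 for the two X520 ports; since 4.7 the bridge is configured through ifconfig and hostname.if files rather than brconfig):
Code:
# /etc/hostname.ix0
up

# /etc/hostname.ix1
up

# /etc/hostname.bridge0
add ix0
add ix1
up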

Thanks for the responses here.... there is much deep tech knowledge on this board and I am grateful for the help. Would still enjoy hearing anyone elses' stories about setting up 10gigE bridges with openbsd. Will share more as I progress on this project.

Matt
#7 - 5th January 2012
mbw (Port Guard, Seattle, WA)

Update, January 2012:

I am up and running in production with a bridging firewall: OpenBSD 5.0 on a PCI-e based Sun Fire X4170, using the dual-port Intel X520-DA2 10GbE adapter.

I think it is worth including in this post a note I got from the folks at Calomel:

---begin Calomel comment

"Using ALTQ packet queuing apparently can't work with 10Gb yet; there isn't enough bandwidth, or there is a bug that doesn't let you set the max bandwidth high enough.

If you want to support 10Gb you cannot use ALTQ. The reason is that ALTQ's bandwidth value is limited to a 32-bit value, meaning you can only go up to 4294Mb/sec.

Here is a link to the post we made on the OpenBSD and FreeBSD mailing lists about this issue:

pf ALTQ bandwidth limited to a 32bit value (4294Mb)
http://lists.freebsd.org/pipermail/f...ly/006203.html

No solutions were proposed by the group. The only idea the pf guys had was to wait until "prio" queueing is done in pf, though that will take up to a year to finish.

Secondly, ALTQ is a huge performance hit. When doing anything more than 4Gbit/sec we notice heavy CPU usage."

---end Calomel comment
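
To make the cap concrete, this is roughly what an ALTQ definition bumping against the limit looked like (historical pf.conf syntax; the interface name and queue layout here are made up). 2^32 b/s = 4,294,967,296 b/s, hence the ~4294Mb ceiling:
Code:
# ALTQ's bandwidth field is 32 bits wide, so ~4294Mb is the most it can express
altq on ix0 cbq bandwidth 4294Mb queue { std }
queue std bandwidth 100% cbq(default)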