Well, I have a problem with it: my Google-Fu is failing me and I can't seem to find anything relevant.
The load balancing is done by the OS, using equal-cost multipath routing. I do NAT using match rules:
Code:
match out log on $ext1_if from <nated> nat-to $ext1_nat
match out log on $ext2_if from <nated> nat-to $ext2_nat
$ext1_if is re0 and $ext2_if is re1; these are the external interfaces, and there's no restriction on outgoing traffic on egress.
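For completeness, the multipath routing is set up the standard way; a sketch (the gateway addresses here are just placeholders, not my real ones):
Code:
# enable equal-cost multipath routing
sysctl net.inet.ip.multipath=1
# one default route per uplink, same cost
route add -mpath default 192.0.2.1
route add -mpath default 198.51.100.1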
The problem is that packets NATed to $ext1_nat sometimes get routed out $ext2_if. For example, I have a web server on the internal network; port 80 on $ext1_if gets redirected to it:
Code:
pass in on $ext1_if proto tcp to $www_out port $www_tcp_ports rdr-to $www_in
This works fine, but once the connection is established I'm seeing (in tcpdump) incoming packets to the server on re0 ($ext1_if) and the outgoing responses on re1 ($ext2_if), NATed to ext1's IP address. So the server gets queried on one interface and responds on the other. The packets do reach their destination, but performance is horrible when this happens; OpenVPN, also on an internal server, is pretty much unusable.
I've been searching for what exactly happens inside the kernel (at what point routing and NATing are done), but can't find anything. I suppose it happens because the connection has an established state, so pf doesn't evaluate the ruleset for those packets anymore, and the OS's multipath routing doesn't know anything about what NAT is going to do. I'm not sure how to fix it, short of maybe changing everything and doing the load balancing in pf, which I'm kinda reluctant to do.
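One thing I've seen mentioned for asymmetric multi-WAN setups, but haven't tried yet, is pinning replies back out the interface the connection arrived on with reply-to. A sketch of what I mean, assuming $ext1_gw and $ext2_gw are the upstream gateways on each uplink (those macros don't exist in my config yet):
Code:
# force replies for states created on each external interface
# back out through that interface's gateway
pass in on $ext1_if reply-to ($ext1_if $ext1_gw) proto tcp \
    to $www_out port $www_tcp_ports rdr-to $www_in
pass in on $ext2_if reply-to ($ext2_if $ext2_gw) proto tcp \
    to $www_out port $www_tcp_ports rdr-to $www_in
No idea if that's the right approach here, or whether it fights with the ECMP routes.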
Can someone point me in the right direction?