The link-local (169.254.X.Y) addresses are used (in AWS's config) for BGP. With the setup we have, a
route-based IPsec engine could do something like this (it would require gif interfaces):
Code:
ike esp from 169.254.249.62 to 169.254.249.61 \
local 98.xx.xx.xx peer 72.xx.xx.xx \
main auth hmac-sha1 enc aes group modp1024 lifetime 28800 \
quick auth hmac-sha1 enc aes group modp1024 lifetime 3600 \
srcid 98.xx.xx.xx \
psk "***" \
tag amazon-vpc
ike esp from 169.254.249.54 to 169.254.249.53 \
local 98.xx.xx.xx peer 72.xx.xx.xx \
main auth hmac-sha1 enc aes group modp1024 lifetime 28800 \
quick auth hmac-sha1 enc aes group modp1024 lifetime 3600 \
srcid 98.xx.xx.xx \
psk "***" \
tag amazon-vpc
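For reference, the gif interfaces carrying the inner link-local addressing for the first tunnel would be configured roughly like this (a sketch; the unit number and the /30 mask are my assumptions, the addresses come from the config above):
Code:
ifconfig gif0 create
# outer addresses: your public IP and the AWS VPN endpoint
ifconfig gif0 tunnel 98.xx.xx.xx 72.xx.xx.xx
# inner addresses: the link-local pair AWS assigned for this tunnel
ifconfig gif0 inet 169.254.249.62 169.254.249.61 netmask 255.255.255.252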
This would configure two tunnels between whatever link-local addresses your AWS VPN config gave you. You could then run OpenBGPD to receive route advertisements from the AWS VPN endpoints (typically the VPC CIDR) and to advertise your own routes (your local LAN) back to AWS, and that traffic would be routed via the link-local tunnels (i.e., if you ran a traceroute, you would see the link-local IPs along the path).
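A bgpd.conf for those two neighbors might look something like this (a sketch only: the local AS, the LAN network, and Amazon's AS are placeholders you'd take from your AWS VPN configuration; 7224 is the ASN AWS has typically used on its side):
Code:
# your side
AS 65000
network 192.168.1.0/24          # your local LAN, advertised to AWS

neighbor 169.254.249.61 {
        remote-as 7224          # Amazon's end of tunnel 1
        descr "aws-tunnel-1"
}
neighbor 169.254.249.53 {
        remote-as 7224          # Amazon's end of tunnel 2
        descr "aws-tunnel-2"
}
With both sessions up, AWS would prefer one tunnel and fail over to the other if its BGP session dropped.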
Unfortunately, OpenBSD doesn't have a route-based IPsec engine and provides no way to configure one, so this doesn't work. A static configuration like the one you outlined is your best bet. You *could* set up the link-local tunnels, but they wouldn't be very useful: OpenBSD's policy would only route traffic to the other link-local IP of the tunnel and no further, so you'd need another tunnel in addition to the link-local ones, and OpenBGPD wouldn't be able to fail over to a different tunnel if one went down.
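For comparison, a static (policy-based) entry for one tunnel is the same shape as the link-local config above, just with real subnets in the flow. The subnets here are placeholders for your LAN and the VPC CIDR:
Code:
ike esp from 192.168.1.0/24 to 10.0.0.0/16 \
        local 98.xx.xx.xx peer 72.xx.xx.xx \
        main auth hmac-sha1 enc aes group modp1024 lifetime 28800 \
        quick auth hmac-sha1 enc aes group modp1024 lifetime 3600 \
        srcid 98.xx.xx.xx \
        psk "***" \
        tag amazon-vpc
The flow itself determines what traffic enters the tunnel, which is why no BGP (and no failover between the two AWS tunnels) is possible this way.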