Home Lab DMVPN Lessons Learned

My last post was about the home-to-home DMVPN we’ve been working on. The design intent was to build a phase 1 DMVPN, so spoke-to-spoke traffic would use the hub as a transit node. It wasn’t until we tried to forward traffic from spoke to spoke that we realized we had issues: spoke-to-spoke forwarding did not work.
Spoke 1 is advertising a summary route of 172.29.0.0/16 and spoke 2 is advertising a summary of 172.24.0.0/16. The hub tunnel IP is 172.16.0.1, spoke 1 is 172.16.0.2, and spoke 2 is 172.16.0.4. All spokes are behind NAT.
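For context, here’s a rough sketch of what spoke 1’s BGP configuration could look like for that summary. This isn’t the exact config from the lab; the spoke AS number (65001), the aggregate-address approach, and the contributing network statement are just placeholders to show the idea:

router bgp 65001
! hypothetical spoke 1 AS - the hub (172.16.0.1, AS 65000) is the only eBGP neighbor over the tunnel
neighbor 172.16.0.1 remote-as 65000
! generate and advertise only the 172.29.0.0/16 summary toward the hub
aggregate-address 172.29.0.0 255.255.0.0 summary-only
! at least one contributing route must exist in the BGP table for the
! aggregate to be generated; this network statement is illustrative
network 172.29.0.0 mask 255.255.255.0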

Issue 1: eBGP Third-Party Next-Hop

When spoke-to-spoke forwarding failed, the first thing we checked was the routing table, and we quickly saw a potential issue. Spoke 1’s route toward 172.24.0.0/16 (spoke 2’s LAN) does not use the hub (172.16.0.1) as the next hop. Instead, the next hop is the tunnel address of spoke 2, 172.16.0.4. We’re using eBGP, so why would the hub not change the next hop? I traced this behavior down to an exception in BGP called “third-party next-hop”. My interpretation of the third-party next-hop logic is: “if a route is received from an eBGP speaker and it is readvertised to another eBGP speaker, AND both of these eBGP peers are in a common subnet, the next hop will not be changed”. I found a note about this feature in this Cisco article and further reading in RFC 2283.

Spoke 1 BGP routes:
RoutingLoop_R1#show ip route bgp | begin Gateway
Gateway of last resort is not set

B 10.10.69.0 [20/0] via 172.16.0.1, 00:16:47
B 10.55.10.0 [20/0] via 172.16.0.1, 00:16:47
B 10.55.20.0 [20/0] via 172.16.0.1, 00:16:47
B 10.55.30.0 [20/0] via 172.16.0.1, 00:16:47
B 172.24.0.0/16 [20/0] via 172.16.0.4, 00:16:47  <-- next hop is spoke 2’s tunnel address
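Seeing both spokes as eBGP neighbors on the hub’s own tunnel subnet is exactly the condition the third-party next-hop rule describes. A rough sketch of the hub’s neighbor statements, assuming one private AS per spoke (65004 shows up in an AS path later in this post; spoke 1’s AS number is my assumption):

router bgp 65000
! both eBGP neighbors sit on the hub's own Tunnel subnet, 172.16.0.0/24,
! so spoke routes relayed between them keep the originating spoke's
! tunnel address as the next hop (third-party next-hop)
! spoke AS numbers below are assumptions
neighbor 172.16.0.2 remote-as 65001
neighbor 172.16.0.4 remote-as 65004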

Before making any changes, I tried to ping from spoke 1 to spoke 2 with an NHRP debug enabled. The lines that stood out to me the most were the NHRP resolution request messages. I edited out the NBMA addresses since these are people’s home internet addresses. Spoke 1 sent an NHRP resolution request to the next-hop server (hub) to resolve spoke 2’s NBMA address. This wasn’t surprising given the preserved IP next hops. With these unexpected IP next hops and the spokes configured for multipoint GRE, I think we accidentally created a broken DMVPN phase 2 network. I think this accidental phase 2 configuration would work if the spokes were not behind NAT.

RoutingLoop_R1# ping 172.24.127.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.24.127.1, timeout is 2 seconds:

Mar 18 20:38:24.551: NHRP: NHRP could not map 172.16.0.4 to NBMA, cache entry not found
Mar 18 20:38:24.551: NHRP: MACADDR: if_in null netid-in 0 if_out Tunnel10 netid-out 1
Mar 18 20:38:24.551: NHRP: Sending packet to NHS 172.16.0.1 on Tunnel10
Mar 18 20:38:24.551: NHRP: Checking for delayed event NULL/172.16.0.4 on list (Tunnel10 vrf: global(0x0))
Mar 18 20:38:24.551: NHRP: No delayed event node found.
Mar 18 20:38:24.551: NHRP: No cache for forwarding(0)
Mar 18 20:38:24.551: NHRP: NHRP req (0x3101D408) for dest: 172.16.0.4 vrf: global(0x0)
Mar 18 20:38:24.551: NHRP: Enqueued NHRP Resolution Request for destination: 172.16.0.4 vrf: global(0x0)
Mar 18 20:38:24.555: NHRP: Adding Tunnel Endpoints (VPN: 172.16.0.4, NBMA: $Hub NBMA)
Mar 18 20:38:24.555: NHRP: Successfully attached NHRP subblock for Tunnel Endpoints (VPN: 172.16.0.4, NBMA: $Hub NBMA)
Mar 18 20:38:24.555: NHRP: No peer data updated in NHRP subblock for Tunnel Endpoints (VPN: 172.16.0.4, NBMA: $Hub NBMA)
Mar 18 20:38:24.563: NHRP: Checking for delayed event NULL/172.16.0.4 on list (Tunnel10 vrf: global(0x0))
Mar 18 20:38:24.563: NHRP: No delayed event node found.
Mar 18 20:38:24.563: NHRP: There is no VPE Extension to construct for the request
Mar 18 20:38:24.563: NHRP: Sending NHRP Resolution Request for dest: 172.16.0.4 to nexthop: 172.16.0.4 using our src: 172.16.0.2 vrf:global(0x0)
Mar 18 20:38:24.563: NHRP: Attempting to send packet through interface Tunnel10 via DEST dst 172.16.0.4
Mar 18 20:38:24.563: NHRP: Send Resolution Request via Tunnel10 vrf global(0x0), packet size: 105
Mar 18 20:38:24.563: src: 172.16.0.2, dst: 172.16.0.4
Mar 18 20:38:24.563: NHRP: Encapsulation succeeded. Sending NHRP Control Packet NBMA Address: $Hub NBMA
Mar 18 20:38:24.563: NHRP: 133 bytes out Tunnel10
Mar 18 20:38:24.635: NHRP: Receive Resolution Request via Tunnel10 vrf global(0x0), packet size: 125
Mar 18 20:38:24.635: NHRP: Route lookup for destination 172.16.0.2 in vrf global(0x0) yielded interface Tunnel10, prefixlen 24
Mar 18 20:38:24.635: NHRP: Request was to us. Process the NHRP Resolution Request.
Mar 18 20:38:24.635: NHRP: Resolution Request was NAT-ted, post-NAT NBMA: $Spoke 2 NBMA
Mar 18 20:38:24.635: NHRP: nhrp_rtlookup for 172.16.0.2 in vrf global(0x0) yielded interface Tunnel10, prefixlen 24
Mar 18 20:38:24.635: NHRP: Request was to us, responding with our address
Mar 18 20:38:24.635: NHRP: Checking for delayed event 172.16.0.4/172.16.0.2 on list (Tunnel10 vrf: global(0x0))
Mar 18 20:38:24.635: NHRP: No delayed event node found.
Mar 18 20:38:24.635: NHRP: Enqueued Delaying resolution request nbma src:172.29.0.2 nbma dst:$Spoke 2 NBMA reason:IPSEC-IFC: need to wait for IPsec SAs.
Mar 18 20:38:24.635: NHRP: Can’t overwrite non-NF entry, returning
Mar 18 20:38:24.635: NHRP: Can’t overwrite non-NF entry, returning
Mar 18 20:38:26.107: NHRP: Checking for delayed event NULL/172.16.0.4 on list (Tunnel10 vrf: global(0x0))
Mar 18 20:38:26.107: NHRP: No delayed event node found.
Mar 18 20:38:26.107: NHRP: There is no VPE Extension to construct for the request

Resolution to the BGP Issue:

To override this behavior, next-hop-self was configured on the hub for each eBGP peer. After next-hop-self was configured and the routes were refreshed, the spokes started seeing the hub as the IP next hop to reach the other spokes.
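A minimal sketch of that change on the hub, reusing the neighbor statements from the sketch above (the spoke AS numbers are still my assumptions). next-hop-self on an eBGP neighbor overrides the third-party next-hop behavior, so routes the hub relays between spokes now carry 172.16.0.1:

router bgp 65000
! spoke AS numbers are assumed; next-hop-self forces the hub to rewrite
! the next hop on routes it readvertises to each spoke
neighbor 172.16.0.2 remote-as 65001
neighbor 172.16.0.2 next-hop-self
neighbor 172.16.0.4 remote-as 65004
neighbor 172.16.0.4 next-hop-self

An outbound soft clear on the hub (clear ip bgp * soft out) is one way to push the updated next hops to the spokes without tearing the sessions down.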

I think the third-party next-hop behavior could be considered an optimization in situations where multiple eBGP speakers share a multi-access broadcast segment, or perhaps on an Internet Exchange fabric with a route server. I suppose the issue we’re experiencing in our DMVPN could also occur in other NBMA networks, like hub-and-spoke Frame Relay or ATM.

After next-hop-self configuration:

RoutingLoop_R1#show ip bgp | begin Network
     Network          Next Hop            Metric LocPrf Weight Path
 *>   10.10.69.0/24    172.16.0.1               0             0 65000 i
 *>   10.55.10.0/24    172.16.0.1               0             0 65000 i
 *>   10.55.20.0/24    172.16.0.1               0             0 65000 i
 *>   10.55.30.0/24    172.16.0.1               0             0 65000 i
 *>   172.24.0.0       172.16.0.1                             0 65000 65004 i
 *>   172.29.0.0       0.0.0.0                  0         32768 i

Issue 2: Spokes Were Configured for Multipoint GRE

After fixing the IP next-hop issue, I realized my original spoke design was not consistent with phase 1 design guidelines. The spokes were configured for multipoint GRE instead of the default point-to-point. We updated the spoke configuration to point-to-point GRE so we’re consistent with the design guides. In the future I would like to revisit this and see if I can discern any operational differences between spokes running mGRE and point-to-point GRE.

Spoke Multipoint Configuration:
interface Tunnel10
description IPv4 DMVPN
bandwidth 20000
ip address 172.16.0.2 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp authentication $key
ip nhrp map 172.16.0.1 $Hub NBMA
ip nhrp map multicast $Hub NBMA
ip nhrp network-id 1
ip nhrp holdtime 400
ip nhrp nhs 172.16.0.1
zone-member security TUNNEL
ip tcp adjust-mss 1360
delay 1000
tunnel source GigabitEthernet0/0
tunnel mode gre multipoint
tunnel key 13579
tunnel protection ipsec profile DMVPN_IPSEC_PROFILE

Spoke Point-To-Point Configuration:
interface Tunnel10
description IPv4 DMVPN
bandwidth 20000
ip address 172.16.0.2 255.255.255.0
ip mtu 1400
ip nhrp authentication $key
ip nhrp map 172.16.0.1 $Hub NBMA
ip nhrp map multicast $Hub NBMA
ip nhrp network-id 1
ip nhrp holdtime 400
ip nhrp nhs 172.16.0.1
zone-member security TUNNEL
ip tcp adjust-mss 1360
delay 1000
tunnel source GigabitEthernet0/0
tunnel destination $Hub NBMA
tunnel key 13579
tunnel protection ipsec profile DMVPN_IPSEC_PROFILE
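When I circle back to compare mGRE and point-to-point spokes, these are the commands I expect to lean on (output omitted here, and exact wording varies by platform and IOS release). The Tunnel protocol/transport line in show interfaces should read multi-GRE/IP for the multipoint config and GRE/IP for point-to-point:

RoutingLoop_R1#show interfaces Tunnel10 | include Tunnel protocol
RoutingLoop_R1#show ip nhrp
RoutingLoop_R1#show crypto session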

Stay Tuned!

Stay tuned for updates on our DMVPN lab. We currently have one spoke connecting to a second hub using an IPv4 underlay with IPv6 in the overlay. We are considering a dual-hub, dual-stack setup with routing between the hubs. Dual hubs will give us the opportunity to explore the routing design between the hubs and how to influence routing decisions between hubs and spokes over multiple paths.
