
3 DMVPN and its USPs

rbannist edited this page Nov 13, 2017 · 13 revisions





What is DMVPN and what are its USPs?


Take 3 sites, each with a pair of Active/Active HA routers. A small network. Building a full mesh of encrypted tunnels between them with normal point-to-point IPsec site-to-site VPNs requires:

  • 24 tunnel interfaces (each of the 6 routers peers with the 4 routers at the other two sites).
  • 12 subnets (one per point-to-point tunnel).
  • 24 IP addresses (two per tunnel).
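As a sketch of the per-peer burden, a single point-to-point GRE-over-IPsec tunnel on one of these routers might look like the following (Cisco IOS-style syntax; all addresses, keys and names are illustrative assumptions):

```
! One of the 4 tunnel interfaces needed on EACH of the 6 routers.
! Addresses, key and peer are illustrative only.
crypto isakmp key SITE2-SITE3-KEY address 198.51.100.3
!
crypto ipsec transform-set TS esp-aes 256 esp-sha-hmac
!
crypto ipsec profile P2P-PROFILE
 set transform-set TS
!
interface Tunnel23
 description Site2-R1 to Site3-R1
 ip address 10.255.23.1 255.255.255.252   ! one /30 subnet per tunnel
 tunnel source GigabitEthernet0/0
 tunnel destination 198.51.100.3          ! static peer - repeated for every peer
 tunnel protection ipsec profile P2P-PROFILE
```

Repeating a block like this for every peer pair is what produces the 24 interfaces, 12 subnets and 24 addresses counted above.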

Full mesh (blue lines denote 'overlay' tunnels over the green 'underlay'):


Full mesh - Site 2 to Site 3 traffic flow (red line):


If a partial mesh is built instead - resulting in fewer tunnel interfaces and therefore fewer subnets and IP addresses - then hair-pinning must be accepted: when two sites exchange IP flows but have no direct site-to-site VPN between them, their traffic must transit a central site.

Partial mesh:


Partial mesh - Site 2 to Site 3 traffic flow:


A partial mesh results in:

  1. Extra bandwidth utilisation at the central, 'hub', site, or a need to provision extra bandwidth there.
    • Higher costs if usage-based charges apply.
  2. Additional latency between endpoints whenever such traffic flows occur.


DMVPN combines mGRE tunneling and IPsec encryption with the Next Hop Resolution Protocol (NHRP) in a manner that resolves the dilemma described above. DMVPN makes building an IPsec VPN overlay network simple and scalable.


DMVPN Components

| Component | What it's for |
| --- | --- |
| mGRE tunnel interface | Allows a single GRE tunnel interface on each router to support multiple IPsec tunnels, which reduces the size and complexity of each router's configuration. Classic GRE tunnels are point-to-point; mGRE allows one tunnel interface to have multiple destinations. |
| Dynamic discovery of IPsec tunnel endpoints and crypto profiles | Eliminates the need to configure static crypto maps defining every pair of IPsec peers. |
| NHRP | A client-server protocol: 'hub' site routers are servers and 'spoke' site routers are clients. NHRP allows spoke routers to be deployed with dynamically assigned public IP addresses (these can be both RFC1918 and non-RFC1918 addresses). Hub routers maintain an NHRP database of the public interface address of each spoke router. Each spoke router registers its real address when it boots; when it needs to build a direct tunnel to another spoke, it queries the NHRP database for that spoke's real address. NHRP provides address resolution and routing optimisation inside an NBMA network, which is essentially what the underlying hub-and-spoke mGRE tunnel is. |
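A minimal sketch of how these components fit together on a hub and a spoke (Cisco IOS-style syntax; all addresses, keys and names are illustrative assumptions, not taken from any particular deployment):

```
! --- Hub (NHRP server) ---
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0       ! one subnet for the whole overlay
 ip nhrp network-id 1                    ! NHRP domain for this DMVPN cloud
 ip nhrp map multicast dynamic           ! replicate multicast to registered spokes
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint              ! mGRE: no static tunnel destination
 tunnel protection ipsec profile DMVPN-PROFILE

! --- Spoke (NHRP client) ---
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1                    ! the hub is the next-hop server
 ip nhrp map 10.0.0.1 203.0.113.1        ! static mapping for the hub only
 ip nhrp map multicast 203.0.113.1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROFILE
```

The spoke registers its real (underlay) address with the hub via NHRP when it boots; spoke-to-spoke tunnels and their IPsec sessions are then built on demand, with no per-peer configuration on any router.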






DMVPN USPs

| Attribute | Detail |
| --- | --- |
| On-demand full mesh connectivity with simple hub-and-spoke configuration | Massively scalable. Lower admin costs: a dramatic simplification of the router configurations and a massive reduction in the number of VPN configuration lines. Taking the example above, DMVPN needs only 6 tunnel interfaces in total for one overlay across 3 sites with HA routers in each site. All spokes sit in the same tunnel subnet (4 spokes plus 2 hubs = 6 IP addresses). Encryption on a tunnel is optional (mGRE-only or IPsec-on-mGRE), so encrypted and non-encrypted overlays can be mixed using different topologies. VRF-aware, with routing-based failover. |
| Zero-touch deployment when adding remote sites | Adding new spokes to the VPN requires no changes at the hub. A centralised configuration controls split-tunneling behaviour at a spoke. |
| Reduced latency and bandwidth savings | IPsec is triggered automatically to build dynamic spoke-to-spoke tunnels. Support for IP Multicast over the tunnels. |
| Extensive QoS support | Traffic shaping (per-spoke or per-spoke-group). Policing (per-spoke). Hub-to-spoke and spoke-to-spoke policies. QoS templates attached dynamically/automatically to tunnels as they come up. |
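The per-spoke-group shaping can be sketched as follows: each spoke advertises an NHRP group name when it registers, and the hub maps that group to a QoS policy that is attached to the spoke's tunnel as it comes up (Cisco IOS-style syntax; group and policy names are illustrative assumptions):

```
! --- Spoke: advertise membership of a QoS group ---
interface Tunnel0
 ip nhrp group SPOKES-10MB               ! sent to the hub in the NHRP registration

! --- Hub: attach a shaper to every tunnel from spokes in that group ---
policy-map SHAPE-10MB
 class class-default
  shape average 10000000                 ! shape each such spoke to 10 Mb/s
!
interface Tunnel0
 ip nhrp map group SPOKES-10MB service-policy output SHAPE-10MB
```

Because the policy is keyed on the group name rather than on individual spoke addresses, new spokes inherit the right QoS treatment with no hub-side changes.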