VPC Network Virtualization
This page explains how VPC network virtualization connects instances within a VPC and between instances and the internet. It ties together VNI pools, IP pools, BGP configuration, routing profiles, and DPU behavior into a single end-to-end picture.
Refer to this page when diagnosing connectivity failures, planning a new site deployment, or explaining the system to a network team.
Related pages
- VPC Routing Profiles — profile configuration reference and troubleshooting
- VNI Resource Pools — VNI pool configuration
- IP Resource Pools — IP pool configuration
- Networking Requirements — underlay and EVPN prerequisites
- DPU Configuration — declarative DPU config flow
The Full Picture
The following diagram shows how the principal components relate. Each DPU maintains a separate VRF for every VPC hosted on its managed host. Routes flow between VRFs and the fabric via BGP EVPN.
Component Roles
Pool definitions (ranges, prefix sizes) are configured in the API server pools section; see VNI Resource Pools and IP Resource Pools.
Why datacenter_asn and asn are separate
datacenter_asn is the ASN component embedded in route-targets (<datacenter_asn>:<vni>). It
identifies the datacenter to the network fabric and must match the route-target values programmed
on network devices.
asn is a site-wide fallback ASN inherited from the earlier
Ethernet Virtualizer model. Each DPU is now allocated a unique ASN from the fnn-asn pool for
VRF and BGP session use. The asn field is still present and used for DPU-to-host DHCP behavior.
Do not conflate the two.
Both values must agree with the network team’s deployment. A mismatch causes route-targets that NICo programs to differ from those the network devices import, resulting in a black hole.
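As an illustration of the distinction, the two fields might sit side by side in the site configuration. The YAML layout and the enclosing fnn section are assumptions for readability; the ASN values are hypothetical.

```yaml
fnn:
  datacenter_asn: 65000   # embedded in route-targets (<datacenter_asn>:<vni>);
                          # must match what the network devices program
  asn: 65101              # legacy site-wide fallback ASN; still used for
                          # DPU-to-host DHCP behavior, not for route-targets
```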
How a DPU Gets Its Configuration
The DPU agent polls NICo for its network configuration every 30 seconds. The response is assembled from the configured pools, routing profiles, and site settings, and contains everything the DPU needs to configure its VRFs, BGP sessions, and VTEPs.
The handler proceeds as follows:
- Load the managed host snapshot from the database. If the DPU is unknown, a NotFound error is returned and the DPU places itself into isolated mode.
- Determine use_admin_network. The DPU is placed on the admin network when no instance is allocated, when the instance has no interfaces configured for this DPU, or when the host is in specific transient lifecycle states.
- Resolve the routing profile. When an instance is allocated, the VPC’s routing profile is looked up in fnn.routing_profiles. If the profile name is not defined there, the DPU configuration call fails.
- Allocate or retrieve the per-VPC loopback IP from the vpc-dpu-lo pool.
- Build tenant interface configs, including VLAN IDs, VNI, gateway, prefixes, and the FQDN derived from the instance hostname or its IP address.
- Assemble the response, including:
  - BGP ASN (per-DPU, from the fnn-asn pool)
  - DPU loopback IP (from the lo-ip pool)
  - DHCP server addresses and route server addresses
  - deny_prefixes and site_fabric_prefixes ACL data
  - datacenter_asn
  - The resolved routing_profile (imports, exports, leak flags)
  - additional_route_target_imports
  - VPC VNI
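The shape of the assembled response can be sketched as follows. This is schematic only: the field names paraphrase the list above rather than the literal API keys, and every value is hypothetical.

```yaml
bgp_asn: 4210000042                       # per-DPU, from the fnn-asn pool
loopback_ip: 10.255.0.42                  # from the lo-ip pool
dhcp_servers: [10.0.1.10, 10.0.1.11]
route_servers: [10.0.1.20]
deny_prefixes: [192.0.2.0/24]
site_fabric_prefixes: [10.0.0.0/8]
datacenter_asn: 65000
routing_profile:
  name: EXTERNAL
  route_target_imports: ["65000:50500"]
  route_targets_on_exports: ["65000:50200"]
  leak_default_route_from_underlay: false
additional_route_target_imports: ["65000:50100"]
vpc_vni: 10100
```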
The DPU applies the received configuration to HBN via NVUE and reports back to NICo. NICo marks the instance as ready only after the DPU confirms the configuration has been applied. See DPU Configuration for the full lifecycle.
Intra-VPC Connectivity
Instances in the same VPC communicate via the VPC VRF on each DPU. No operator configuration beyond VNI pool provisioning and routing profile assignment is required to enable this.
The flow:
- A tenant instance boots on a managed host. The DPU receives a configuration that places the host interface into the VPC VRF, identified by the VPC VNI.
- The host’s instance IP is advertised from the host into the DPU via a BGP session between the host OS and the DPU.
- The DPU re-advertises that host route into the fabric as a BGP EVPN type-5 prefix, tagged with the VPC’s native route-target: <datacenter_asn>:<vpc_vni>. For example, with datacenter_asn 65000 and VPC VNI 10100 (hypothetical values), the route-target is 65000:10100.
- Every other DPU that has an instance in the same VPC imports that same route-target (it is the per-VPC default import) and installs the host route in its local copy of the VPC VRF.
- Return traffic uses the per-VPC loopback IP (from the vpc-dpu-lo pool) as the EVPN next-hop.
Site-wide additional route-target imports
fnn.additional_route_target_imports injects extra route-targets into every VPC VRF across the
entire site, regardless of the VPC’s routing
profile. This is appropriate for routes that all VPCs must be able to reach unconditionally,
such as control-plane service VIPs. The standard import for site-controller VIPs uses route-target
:50100 (see Networking Requirements).
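A minimal sketch of the setting, assuming YAML configuration and a hypothetical datacenter_asn of 65000:

```yaml
fnn:
  additional_route_target_imports:
    - "65000:50100"   # site-controller VIPs, imported by every VPC VRF site-wide
```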
Internet Connectivity
A default route must be present in the VPC VRF for instances to reach destinations outside the overlay. There are three mechanisms by which this can happen. The correct choice depends on the site’s network deployment model and must be agreed between the NICo operator and the network team.
Mechanism 1: Explicit route-target import in the routing profile
The routing profile specifies route_target_imports. The network advertises a default route
tagged with one of those route-targets. The DPU imports it into the VPC VRF.
Configure this in fnn.routing_profiles.<name>.route_target_imports, as sketched below.
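A minimal sketch, assuming a profile named EXTERNAL and a gateway VRF that exports the default route with route-target 65000:50500 (both hypothetical):

```yaml
fnn:
  routing_profiles:
    EXTERNAL:
      route_target_imports:
        - "65000:50500"   # default route advertised by the gateway VRF
```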
This is the most explicit mechanism and is preferred when the network team operates a dedicated gateway VRF that exports a default route with a known route-target.
Mechanism 2: Default route injection via the native VPC route-target
The network advertises a default route tagged with <datacenter_asn>:<vpc_vni>. Because the VPC
automatically imports its own native route-target, the default route lands in the VRF without any
route_target_imports entry in the profile.
This mechanism relies on datacenter_asn and the VPC VNI matching what the network device
programs. It is operationally simpler for the NICo side but requires the network team to track
per-VPC VNIs.
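A sketch of a profile that relies on native injection; the profile name is hypothetical, and the empty import list is shown only to emphasize that no entry is required (the VPC always imports <datacenter_asn>:<vpc_vni> automatically):

```yaml
fnn:
  routing_profiles:
    EXTERNAL-NATIVE:
      route_target_imports: []   # default route arrives via the native route-target
```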
Mechanism 3: leak_default_route_from_underlay
leak_default_route_from_underlay instructs the DPU to leak the default route from the underlay
(default VRF) directly into the tenant VRF, bypassing BGP route-target mechanics entirely.
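A sketch of the flag in a routing profile; the profile name is hypothetical and the surrounding layout assumed:

```yaml
fnn:
  routing_profiles:
    DEV-INTERNAL:
      leak_default_route_from_underlay: true   # copy the underlay default route into the tenant VRF
```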
When to use this:
- The underlay already has a default route and the site lacks a BGP gateway VRF to advertise one into the EVPN fabric.
- The deployment is small or development-grade and full EVPN route-target coordination is not warranted.
When not to use this:
- Multi-tenant deployments where internet-bound traffic must be filtered or metered per VPC.
- Sites where tenant VRF isolation from the underlay is a security requirement.
- Deployments where the underlay default route changes frequently and the tenant VRFs should not follow it automatically.
Controlled Route Leaking
The following fields in the routing profile configuration provide fine-grained control over what crosses the boundary between the tenant VRF and the underlay.
leak_tenant_host_routes_to_underlay
When true, the DPU leaks per-host routes from the tenant VRF into the underlay (default VRF).
This is required when the network must route return traffic directly to instances rather than
relying on an aggregate. Enable this when the network team cannot install a covering aggregate
for instance IPs.
tenant_leak_communities_accepted
When true, the DPU honors BGP community tags set by the host OS on routes it advertises to the
DPU. Routes carrying the accepted communities are leaked from the tenant VRF into the underlay.
This allows a sophisticated tenant application to selectively announce specific prefixes to the
fabric (for example, when running a gateway workload or BYOIP scenarios).
accepted_leaks_from_underlay
An explicit prefix list that controls which prefixes may be leaked from the underlay (default VRF)
into the tenant VRF. This is more granular than leak_default_route_from_underlay: the operator
lists specific prefixes (for example, management subnets or service VIPs) that the tenant VRF
needs, without opening the full underlay default route.
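A combined sketch of the three controls in a single profile; the profile name, prefixes, and layout are hypothetical:

```yaml
fnn:
  routing_profiles:
    GATEWAY:
      leak_tenant_host_routes_to_underlay: true   # per-host routes visible to the underlay
      tenant_leak_communities_accepted: true      # honor host-set BGP communities for selective leaks
      accepted_leaks_from_underlay:
        - 10.20.0.0/24    # management subnet the tenant VRF needs
        - 10.20.1.5/32    # service VIP
```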
Export Route-Targets and Return-Path Reachability
When the DPU advertises routes from a VPC into the fabric, it tags them with:
- The VPC’s native route-target: <datacenter_asn>:<vpc_vni>
- Any route-targets listed in route_targets_on_exports for the VPC’s routing profile
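A worked example with hypothetical values: under a profile that exports 65000:50200, routes from a VPC with VNI 10100 carry both 65000:10100 (the native route-target) and 65000:50200.

```yaml
fnn:
  routing_profiles:
    EXTERNAL:
      route_targets_on_exports:
        - "65000:50200"   # network VRFs must import this (or the native RT) to reach instances
```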
Network device VRFs must import the route-targets carried on VPC routes in order to learn those routes and route return traffic to instances. If they do not, the overlay routing table appears correct from the VPC side, but external hosts cannot reach instances, which presents the same symptom as no internet access.
This is the most common cause of one-way connectivity: the instance can send packets out, but responses are dropped because the network cannot route back.
Diagnostic question to ask the network team: Is the network VRF configured to import the
route-targets listed in route_targets_on_exports for this VPC’s routing profile?
See VPC Routing Profiles — Troubleshooting: No Internet Access for the full troubleshooting procedure.
VNI Pool Selection: Internal vs. External VPCs
When a VPC is created, the VNI pool is selected automatically based on the routing profile’s
internal flag:
- internal = true → allocates from the vpc-vni pool
- internal = false → allocates from the external-vpc-vni pool
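A sketch of the flag in two profiles (names hypothetical):

```yaml
fnn:
  routing_profiles:
    INTERNAL:
      internal: true    # VPCs draw VNIs from the vpc-vni pool
    EXTERNAL:
      internal: false   # VPCs draw VNIs from the external-vpc-vni pool
```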
The separation of pools allows operators to reserve distinct VNI ranges for internal and external VPCs, which can simplify route-target policy on network devices (for example, applying a blanket import for all external-VPC route-targets from a known VNI range).
See VNI Resource Pools for pool sizing and configuration.
Configuration Checklist for a New Site
Use this checklist when configuring VPC network virtualization for a new site. Complete each step in order and confirm with the network team before proceeding to the next.
1. Agree on BGP parameters with the network team
- Determine datacenter_asn, the ASN used in route-target construction.
- Determine VNI ranges for internal VPCs (vpc-vni pool) and external VPCs (external-vpc-vni pool).
- Determine the route-target the network will advertise for the default route.
2. Configure IP pools
- Define the lo-ip pool for DPU loopback IPs. One IP per DPU.
- Define the vpc-dpu-lo pool for per-VPC DPU loopback IPs. One IP per DPU per VPC.
- Define the fnn-asn pool for per-DPU BGP ASNs. One ASN per DPU.
See IP Resource Pools for sizing guidance.
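A hedged sketch of the step 2 definitions; the pools schema shown here (type and range keys) is assumed for illustration, and the ranges are hypothetical:

```yaml
pools:
  lo-ip:      { type: ipv4, range: "10.255.0.0/24" }           # one IP per DPU
  vpc-dpu-lo: { type: ipv4, range: "10.255.4.0/22" }           # one IP per DPU per VPC
  fnn-asn:    { type: asn,  range: "4210000000-4210000999" }   # one ASN per DPU
```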
3. Configure VNI pools
- Define the vpc-vni pool for internal VPCs.
- Define the external-vpc-vni pool for external VPCs (if the site serves external tenants).
See VNI Resource Pools for pool sizing guidance.
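A matching sketch for the VNI pools, under the same assumed schema and with hypothetical ranges:

```yaml
pools:
  vpc-vni:          { type: vni, range: "10000-19999" }   # internal VPCs
  external-vpc-vni: { type: vni, range: "20000-29999" }   # external VPCs
```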
4. Configure API server fields
In the API server configuration file, set the site-wide BGP and routing-profile fields agreed in step 1.
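A consolidated sketch of the fnn fields, assuming YAML and using the conventional route-target numbers from this page; the ASN values and profile name are hypothetical:

```yaml
fnn:
  datacenter_asn: 65000             # step 1: agreed with the network team
  asn: 65101                        # legacy site-wide ASN
  additional_route_target_imports:
    - "65000:50100"                 # site-controller VIPs
  routing_profiles:
    EXTERNAL:
      internal: false
      route_target_imports:
        - "65000:50500"             # default route from the gateway VRF
      route_targets_on_exports:
        - "65000:50200"
```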
Replace placeholder values with site-specific values agreed with the network team. The route-target
numbers :50100, :50200, and :50500 are conventional; see
Networking Requirements for the full standard route-target table.
5. Confirm network device configuration
Have the network team confirm:
- Network device VRFs import the route-targets that external VPCs export (route_targets_on_exports).
- A default route is available to each VPC that needs internet access, via one of:
  - a default route exported by the network with the route-target listed in route_target_imports for that VPC’s profile, or
  - a default route tagged with <datacenter_asn>:<vpc_vni> for each external VPC (native route-target injection), or
  - a default route reachable in the underlay (if using leak_default_route_from_underlay).
- Network device VRFs import the site-controller VIP route-target (:50100 or equivalent).
- DPU loopback prefixes from the lo-ip pool are reachable from all other DPUs (full mesh or aggregate).
6. Verify end-to-end
After site bring-up:
- Create a tenant with an EXTERNAL routing profile.
- Create a VPC for that tenant.
- Allocate an instance.
- From within the instance, confirm:
  - Intra-VPC connectivity to another instance in the same VPC.
  - Internet reachability (for example, curl to an external address).
- Confirm that the DPU agent health check shows all BGP sessions established.
Troubleshooting
For routing-profile-specific troubleshooting, see VPC Routing Profiles, which covers:
- RoutingProfile not found errors during VPC creation
- No internet access in a VPC
- Return-path reachability failures
Common configuration mismatches to check first:
- datacenter_asn does not match the route-targets programmed on network devices (black-holed traffic).
- Network device VRFs do not import the route-targets listed in route_targets_on_exports (one-way connectivity).
- The VPC’s routing profile name is not defined in fnn.routing_profiles (DPU configuration call fails).