Types of VNet Peering in Azure and Their Implementation with Config and Code
A comprehensive guide to Azure VNet peering types, implementation patterns, and real-world architectures with Terraform and CLI examples for production deployments.
When you’re building cloud infrastructure at scale, network connectivity becomes one of those foundational decisions that either sets you up for success or creates technical debt that haunts you for years. I’ve spent considerable time working with Azure Virtual Network (VNet) peering across various enterprise deployments, and I’ve learned that understanding the nuances between different peering types can make or break your architecture.
Let me walk you through the different types of VNet peering in Azure, their implementation patterns, and the real-world considerations that don’t always make it into the documentation.
Understanding Azure VNet Peering
VNet peering is Azure’s native mechanism for connecting virtual networks, enabling resources in different VNets to communicate with each other as if they were on the same network. Unlike VPN gateways or ExpressRoute, peering provides low-latency, high-bandwidth connectivity using Microsoft’s private backbone network.
The key advantage? Traffic never traverses the public internet, and you get consistent network performance without the overhead of gateway devices.
Types of VNet Peering
1. Regional VNet Peering
Regional peering connects VNets within the same Azure region. This is your bread-and-butter peering type for most segmentation scenarios.
Use cases I’ve seen work well:
- Separating production and non-production environments
- Isolating different application tiers (web, app, data)
- Creating dedicated VNets for shared services (DNS, monitoring, bastion hosts)
Key characteristics:
- Lowest latency (sub-millisecond within region)
- Lowest cost compared to other options
- No gateway required
- Supports VNet-to-VNet communication at Azure backbone speeds
Here’s a visual representation of regional VNet peering:
graph LR
subgraph ProdVNet[" "]
direction TB
ProdTitle["Production VNet 10.1.0.0/16"]
ProdWeb["Web Tier 10.1.1.0/24"]
ProdApp["App Tier 10.1.2.0/24"]
ProdTitle ~~~ ProdWeb
ProdWeb ~~~ ProdApp
end
subgraph SharedVNet[" "]
direction TB
SharedTitle["Shared Services VNet 10.2.0.0/16"]
DNS["DNS Servers 10.2.1.0/24"]
Monitor["Monitoring 10.2.2.0/24"]
SharedTitle ~~~ DNS
DNS ~~~ Monitor
end
ProdVNet <==>|VNet Peering| SharedVNet
style ProdVNet fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
style SharedVNet fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
style ProdTitle fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style SharedTitle fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style ProdWeb fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style ProdApp fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style DNS fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style Monitor fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
Here’s how to set up regional peering using Terraform:
# Regional VNet Peering - Production to Shared Services
resource "azurerm_virtual_network_peering" "prod_to_shared" {
name = "peer-prod-to-shared"
resource_group_name = azurerm_resource_group.networking.name
virtual_network_name = azurerm_virtual_network.prod.name
remote_virtual_network_id = azurerm_virtual_network.shared_services.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
use_remote_gateways = false
}
# Peering must be bidirectional
resource "azurerm_virtual_network_peering" "shared_to_prod" {
name = "peer-shared-to-prod"
resource_group_name = azurerm_resource_group.networking.name
virtual_network_name = azurerm_virtual_network.shared_services.name
remote_virtual_network_id = azurerm_virtual_network.prod.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
use_remote_gateways = false
}
2. Global VNet Peering
Global peering extends connectivity across Azure regions. This is where things get interesting for multi-region architectures.
When you actually need this:
- Disaster recovery and business continuity scenarios
- Multi-region active-active deployments
- Global data replication between regions
- Cross-region microservices communication
Important considerations:
- Higher latency (based on geographic distance)
- Higher data transfer costs (inter-region egress charges apply)
- Same security and reliability as regional peering
- No bandwidth bottlenecks from gateway devices
I’ve implemented global peering for a financial services client who needed real-time data synchronization between East US and West Europe. The latency was acceptable (~80-100ms), and we avoided the complexity of site-to-site VPNs.
Here’s a visual representation of global VNet peering across regions:
graph LR
subgraph EastUS["East US Region"]
direction TB
PrimaryDB["SQL Primary
10.1.1.0/24"]
PrimaryApp["App Tier
10.1.2.0/24"]
end
subgraph WestEU["West Europe Region"]
direction TB
SecondaryDB["SQL Secondary
10.2.1.0/24"]
SecondaryApp["App Tier
10.2.2.0/24"]
end
EastUS <==>|Global Peering 80-100ms| WestEU
PrimaryDB -.->|Replication| SecondaryDB
style EastUS fill:#ffa94d,color:#fff,stroke:#fd7e14,stroke-width:3px
style WestEU fill:#74c0fc,color:#fff,stroke:#1971c2,stroke-width:3px
style PrimaryDB fill:#ffe8cc,color:#000,stroke:#fd7e14,stroke-width:2px
style PrimaryApp fill:#ffe8cc,color:#000,stroke:#fd7e14,stroke-width:2px
style SecondaryDB fill:#d0ebff,color:#000,stroke:#1971c2,stroke-width:2px
style SecondaryApp fill:#d0ebff,color:#000,stroke:#1971c2,stroke-width:2px
Implementation example with Terraform:
# Global VNet Peering - East US to West Europe
resource "azurerm_virtual_network_peering" "eastus_to_westeu" {
name = "peer-eastus-to-westeu"
resource_group_name = azurerm_resource_group.networking_eastus.name
virtual_network_name = azurerm_virtual_network.primary_eastus.name
remote_virtual_network_id = azurerm_virtual_network.secondary_westeu.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
}
resource "azurerm_virtual_network_peering" "westeu_to_eastus" {
name = "peer-westeu-to-eastus"
resource_group_name = azurerm_resource_group.networking_westeu.name
virtual_network_name = azurerm_virtual_network.secondary_westeu.name
remote_virtual_network_id = azurerm_virtual_network.primary_eastus.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
}
3. Hub-and-Spoke Model
This is the architecture pattern I recommend most often for enterprise deployments. It’s not technically a “type” of peering but rather an architectural pattern using regional or global peering.
The hub-and-spoke model provides:
- Centralized governance and security controls
- Shared services consolidation (firewalls, DNS, VPN gateways)
- Cost optimization through resource sharing
- Simplified network management
Typical hub services:
- Azure Firewall or third-party NVAs
- VPN/ExpressRoute gateways for on-premises connectivity
- Bastion hosts for secure RDP/SSH
- Centralized DNS and monitoring
Here’s a Mermaid diagram showing a hub-and-spoke architecture:
graph TB
subgraph HubVNet[" "]
direction TB
HubTitle["Hub VNet 10.0.0.0/16"]
FW["Azure Firewall 10.0.1.0/24"]
VPN["VPN Gateway 10.0.2.0/24"]
DNS["Private DNS 10.0.3.0/24"]
Bastion["Bastion Host 10.0.4.0/24"]
HubTitle ~~~ FW
FW ~~~ VPN
VPN ~~~ DNS
DNS ~~~ Bastion
end
subgraph Spoke1[" "]
direction TB
Spoke1Title["Spoke 1: Production
10.1.0.0/16"]
Prod["AKS Cluster 10.1.0.0/20"]
Spoke1Title ~~~ Prod
end
subgraph Spoke2[" "]
direction TB
Spoke2Title["Spoke 2: Staging
10.2.0.0/16"]
Stage["App Services 10.2.0.0/20"]
Spoke2Title ~~~ Stage
end
subgraph Spoke3[" "]
direction TB
Spoke3Title["Spoke 3: Shared Services
10.3.0.0/16"]
Shared["Monitoring & ACR 10.3.0.0/20"]
Spoke3Title ~~~ Shared
end
subgraph OnPremises[" "]
direction TB
OnPremTitle["On-Premises Network"]
OnPrem["Corporate Network 192.168.0.0/16"]
OnPremTitle ~~~ OnPrem
end
Prod -.->|VNet Peering| FW
Stage -.->|VNet Peering| FW
Shared -.->|VNet Peering| FW
VPN <-->|Site-to-Site| OnPrem
style HubVNet fill:#d4f5ed,color:#000,stroke:#00b294,stroke-width:4px
style HubTitle fill:#00b294,color:#fff,stroke:#007a6e,stroke-width:3px
style FW fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:3px
style VPN fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style DNS fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style Bastion fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style Spoke1 fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:3px
style Spoke2 fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:3px
style Spoke3 fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:3px
style Spoke1Title fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style Spoke2Title fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style Spoke3Title fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style Prod fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style Stage fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style Shared fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style OnPremises fill:#fff4e6,color:#000,stroke:#7fba00,stroke-width:3px
style OnPremTitle fill:#7fba00,color:#fff,stroke:#5c8700,stroke-width:2px
style OnPrem fill:#ffe8cc,color:#000,stroke:#7fba00,stroke-width:2px
Implementing hub-and-spoke with Terraform:
# Hub VNet
resource "azurerm_virtual_network" "hub" {
name = "vnet-hub-prod-eastus"
location = "eastus"
resource_group_name = azurerm_resource_group.networking.name
address_space = ["10.0.0.0/16"]
}
# Spoke VNets
resource "azurerm_virtual_network" "spoke_prod" {
name = "vnet-spoke-prod-eastus"
location = "eastus"
resource_group_name = azurerm_resource_group.networking.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_virtual_network" "spoke_staging" {
name = "vnet-spoke-staging-eastus"
location = "eastus"
resource_group_name = azurerm_resource_group.networking.name
address_space = ["10.2.0.0/16"]
}
# Hub-to-Spoke Peering with Gateway Transit
resource "azurerm_virtual_network_peering" "hub_to_spoke_prod" {
name = "peer-hub-to-prod"
resource_group_name = azurerm_resource_group.networking.name
virtual_network_name = azurerm_virtual_network.hub.name
remote_virtual_network_id = azurerm_virtual_network.spoke_prod.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = true # Hub provides gateway
use_remote_gateways = false
}
resource "azurerm_virtual_network_peering" "spoke_prod_to_hub" {
name = "peer-prod-to-hub"
resource_group_name = azurerm_resource_group.networking.name
virtual_network_name = azurerm_virtual_network.spoke_prod.name
remote_virtual_network_id = azurerm_virtual_network.hub.id
allow_virtual_network_access = true
allow_forwarded_traffic = true
allow_gateway_transit = false
use_remote_gateways = true # Spoke uses hub gateway
depends_on = [azurerm_virtual_network_gateway.hub_vpn]
}
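The depends_on above points at a VPN gateway in the hub that the earlier snippets never define. For completeness, here's a minimal sketch of that gateway, assuming a dedicated hub_gateway subnet (which Azure requires to be named GatewaySubnet) and an illustrative VpnGw1 SKU:
# Minimal hub VPN gateway sketch; the public IP name, subnet reference,
# and SKU are illustrative assumptions, not values from the examples above.
resource "azurerm_public_ip" "hub_vpn" {
  name                = "pip-hub-vpn"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.networking.name
  allocation_method   = "Static"
  sku                 = "Standard"
}
resource "azurerm_virtual_network_gateway" "hub_vpn" {
  name                = "vgw-hub-eastus"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.networking.name
  type                = "Vpn"
  vpn_type            = "RouteBased"
  sku                 = "VpnGw1"
  ip_configuration {
    name                          = "vgw-ipconfig"
    public_ip_address_id          = azurerm_public_ip.hub_vpn.id
    private_ip_address_allocation = "Dynamic"
    subnet_id                     = azurerm_subnet.hub_gateway.id # must be the "GatewaySubnet"
  }
}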
4. Transitive Peering (and Why It Doesn’t Work)
Here’s something that trips up even experienced Azure architects: VNet peering is non-transitive.
If VNet A is peered with VNet B, and VNet B is peered with VNet C, VNet A cannot communicate with VNet C directly. This is by design for security and isolation purposes.
The Problem:
graph LR
VNetA["VNet A Spoke 1
10.1.0.0/16
App Server"]
VNetB["VNet B Hub
10.0.0.0/16
Resources"]
VNetC["VNet C Spoke 2
10.2.0.0/16
Database"]
VNetA <-->|Peering ✓| VNetB
VNetB <-->|Peering ✓| VNetC
VNetA -.->|No Route ✗| VNetC
style VNetA fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
style VNetB fill:#50e3c2,color:#000,stroke:#00b294,stroke-width:3px
style VNetC fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
linkStyle 2 stroke:#ff6b6b,stroke-width:4px,stroke-dasharray: 5 5
The Solution: You need to implement routing through a Network Virtual Appliance (NVA) or Azure Firewall in the hub to enable spoke-to-spoke communication.
graph TB
subgraph Spoke1[" "]
direction TB
Spoke1Title["Spoke 1 VNet 10.1.0.0/16"]
AppA["App Server 10.1.1.0/24"]
Spoke1Title ~~~ AppA
end
subgraph HubVNet[" "]
direction TB
HubTitle["Hub VNet 10.0.0.0/16"]
Routes["Route Table UDR"]
Firewall["Azure Firewall 10.0.1.4"]
HubTitle ~~~ Routes
Routes ~~~ Firewall
end
subgraph Spoke2[" "]
direction TB
Spoke2Title["Spoke 2 VNet 10.2.0.0/16"]
DBServer["Database 10.2.1.0/24"]
Spoke2Title ~~~ DBServer
end
AppA -->|1. Send to 10.2.x.x| Routes
Routes -->|2. Next hop| Firewall
Firewall -->|3. Forward| DBServer
DBServer -.->|4. Return| Firewall
Firewall -.->|5. Return| AppA
Spoke1 <-.->|Peering| HubVNet
Spoke2 <-.->|Peering| HubVNet
style Spoke1 fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
style HubVNet fill:#50e3c2,color:#fff,stroke:#00b294,stroke-width:3px
style Spoke2 fill:#00a4ef,color:#fff,stroke:#0078d4,stroke-width:3px
style Spoke1Title fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style HubTitle fill:#00b294,color:#fff,stroke:#007a6e,stroke-width:2px
style Spoke2Title fill:#0078d4,color:#fff,stroke:#005a9e,stroke-width:2px
style Firewall fill:#ff6b6b,color:#fff,stroke:#d63031,stroke-width:3px
style Routes fill:#ffeaa7,color:#000,stroke:#fdcb6e,stroke-width:3px
style AppA fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
style DBServer fill:#e3f2fd,color:#000,stroke:#0078d4,stroke-width:2px
# User-Defined Route for spoke-to-spoke via hub firewall
resource "azurerm_route_table" "spoke_routes" {
name = "rt-spoke-to-hub"
location = "eastus"
resource_group_name = azurerm_resource_group.networking.name
route {
name = "to-other-spokes"
address_prefix = "10.0.0.0/8" # Covers the 10.x.x.x private range used by hub and spokes
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.hub.ip_configuration[0].private_ip_address
}
route {
name = "to-internet"
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.hub.ip_configuration[0].private_ip_address
}
}
resource "azurerm_subnet_route_table_association" "spoke_prod" {
subnet_id = azurerm_subnet.spoke_prod_workload.id
route_table_id = azurerm_route_table.spoke_routes.id
}
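The route table above references azurerm_firewall.hub, which none of the earlier snippets define. A minimal sketch of that firewall follows; the subnet and public IP references are assumptions (Azure requires the firewall subnet to be named AzureFirewallSubnet):
# Minimal hub firewall sketch; the subnet and public IP resources are assumed to exist.
resource "azurerm_firewall" "hub" {
  name                = "afw-hub-eastus"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.networking.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Standard"
  ip_configuration {
    name                 = "afw-ipconfig"
    subnet_id            = azurerm_subnet.hub_firewall.id # must be named "AzureFirewallSubnet"
    public_ip_address_id = azurerm_public_ip.hub_firewall.id
  }
}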
Quick Setup with Azure CLI
For testing or quick deployments, Azure CLI provides a faster path:
# Create regional peering
az network vnet peering create \
--name peer-vnet1-to-vnet2 \
--resource-group rg-networking \
--vnet-name vnet-prod \
--remote-vnet /subscriptions/{sub-id}/resourceGroups/rg-networking/providers/Microsoft.Network/virtualNetworks/vnet-shared \
--allow-vnet-access \
--allow-forwarded-traffic
# Create reverse peering
az network vnet peering create \
--name peer-vnet2-to-vnet1 \
--resource-group rg-networking \
--vnet-name vnet-shared \
--remote-vnet /subscriptions/{sub-id}/resourceGroups/rg-networking/providers/Microsoft.Network/virtualNetworks/vnet-prod \
--allow-vnet-access \
--allow-forwarded-traffic
# Check peering status
az network vnet peering show \
--name peer-vnet1-to-vnet2 \
--resource-group rg-networking \
--vnet-name vnet-prod \
--query peeringState
Cost and Performance Comparison
Here’s a practical comparison table based on my experience with production workloads:
| Peering Type | Latency | Bandwidth | Cost (per GB) | Best For |
|---|---|---|---|---|
| Regional | Sub-1ms | Up to 100 Gbps | $0.01 ingress/egress | Same-region segmentation, high-throughput apps |
| Global (same continent) | 20-50ms | Up to 100 Gbps | $0.035 ingress, $0.035 egress | Multi-region DR, compliance requirements |
| Global (cross-continent) | 100-200ms | Up to 100 Gbps | $0.05-0.08 ingress/egress | Global distribution, geo-redundancy |
| Hub-and-Spoke (regional) | Sub-1ms + NVA overhead | Depends on NVA SKU | $0.01 + NVA costs | Enterprise governance, centralized security |
Cost optimization tips:
- Use regional peering wherever possible
- Consolidate cross-region traffic patterns
- Monitor data transfer with Azure Cost Management (a budget sketch follows this list)
- Consider ExpressRoute for predictable high-volume scenarios
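To put the monitoring tip into practice, one option is a resource-group budget with an alert threshold. A minimal sketch, where the amount, start date, and contact email are placeholder assumptions:
# Budget alert on the networking resource group; all values are illustrative.
resource "azurerm_consumption_budget_resource_group" "networking" {
  name              = "budget-networking"
  resource_group_id = azurerm_resource_group.networking.id
  amount            = 500 # per month, placeholder
  time_grain        = "Monthly"
  time_period {
    start_date = "2025-01-01T00:00:00Z" # must be the first day of a month
  }
  notification {
    enabled        = true
    threshold      = 80 # alert at 80% of budget
    operator       = "GreaterThan"
    contact_emails = ["netops@example.com"] # placeholder
  }
}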
Security and Governance Considerations
Network Security Groups (NSGs)
NSGs still apply even with peering. I always recommend:
resource "azurerm_network_security_group" "spoke_prod" {
name = "nsg-spoke-prod"
location = "eastus"
resource_group_name = azurerm_resource_group.networking.name
# Allow traffic from hub firewall
security_rule {
name = "allow-from-hub"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "10.0.1.0/24" # Hub firewall subnet
destination_address_prefix = "*"
}
# Deny all other VNet traffic
security_rule {
name = "deny-vnet-inbound"
priority = 4000
direction = "Inbound"
access = "Deny"
protocol = "*"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "VirtualNetwork"
destination_address_prefix = "*"
}
}
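One detail the snippet above leaves implicit: an NSG does nothing until it's associated with a subnet (or NIC). Assuming the spoke workload subnet from the routing example:
# Attach the NSG to the spoke workload subnet (subnet reference assumed from earlier examples)
resource "azurerm_subnet_network_security_group_association" "spoke_prod" {
  subnet_id                 = azurerm_subnet.spoke_prod_workload.id
  network_security_group_id = azurerm_network_security_group.spoke_prod.id
}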
Service Endpoints and Private Endpoints
When using peering with Azure PaaS services:
- Service Endpoints: Enabled per subnet; they don't extend across peering, so each VNet's subnets need their own endpoints
- Private Endpoints: Recommended for cross-region scenarios and better isolation
# Private Endpoint for Azure SQL in peered VNet
resource "azurerm_private_endpoint" "sql" {
name = "pe-sql-prod"
location = "eastus"
resource_group_name = azurerm_resource_group.data.name
subnet_id = azurerm_subnet.spoke_prod_data.id
private_service_connection {
name = "psc-sql-prod"
private_connection_resource_id = azurerm_mssql_server.prod.id
subresource_names = ["sqlServer"]
is_manual_connection = false
}
private_dns_zone_group {
name = "pdnszg-sql"
private_dns_zone_ids = [azurerm_private_dns_zone.sql.id]
}
}
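The private_dns_zone_group above references a zone these snippets haven't defined, and resolution only works from VNets linked to that zone. A minimal sketch (resource group placement is an assumption; the zone name is the standard one for Azure SQL):
resource "azurerm_private_dns_zone" "sql" {
  name                = "privatelink.database.windows.net"
  resource_group_name = azurerm_resource_group.networking.name
}
# Link the zone to each VNet whose workloads must resolve the private endpoint
resource "azurerm_private_dns_zone_virtual_network_link" "sql_spoke_prod" {
  name                  = "pdnslink-sql-spoke-prod"
  resource_group_name   = azurerm_resource_group.networking.name
  private_dns_zone_name = azurerm_private_dns_zone.sql.name
  virtual_network_id    = azurerm_virtual_network.spoke_prod.id
}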
Route Propagation
If you’re using VPN or ExpressRoute with peering:
- Disable BGP route propagation on spoke subnets (see the sketch after this list)
- Force all traffic through the hub firewall
- Use Azure Route Server for dynamic routing scenarios
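For the first point, route propagation is a route table setting. A minimal sketch using the azurerm 3.x attribute name; note that azurerm 4.x renames it to bgp_route_propagation_enabled:
# Suppress routes learned over VPN/ExpressRoute so spoke traffic
# can't bypass the hub firewall via propagated BGP routes.
resource "azurerm_route_table" "spoke_no_bgp" {
  name                          = "rt-spoke-no-bgp"
  location                      = "eastus"
  resource_group_name           = azurerm_resource_group.networking.name
  disable_bgp_route_propagation = true # azurerm 4.x: bgp_route_propagation_enabled = false
}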
Best Practices and Lessons Learned
After implementing VNet peering across dozens of enterprise environments, here are my key recommendations:
1. Plan your IP address space upfront
- Use non-overlapping CIDR blocks across all VNets
- Reserve address space for future growth
- Document your IP allocation strategy (I use Terraform locals for this)
2. Implement hub-and-spoke by default for enterprise workloads
- Centralized security and governance
- Easier to manage at scale
- Better cost optimization through shared services
3. Always peer bidirectionally
- Peering is not automatic in both directions
- Use Terraform `depends_on` to manage dependencies
- Test connectivity from both sides
4. Monitor and alert on peering status
- Use Azure Monitor to track peering state
- Alert on `peeringState` changes (see the alert sketch after this list)
- Log all peering modifications
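Here's a minimal sketch of an activity log alert that fires on any peering write; the netops action group is assumed to exist already:
# Alert whenever a VNet peering is created or modified in this resource group
resource "azurerm_monitor_activity_log_alert" "peering_changes" {
  name                = "alert-peering-modifications"
  resource_group_name = azurerm_resource_group.networking.name
  location            = "global" # required by recent azurerm versions
  scopes              = [azurerm_resource_group.networking.id]
  criteria {
    category       = "Administrative"
    operation_name = "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write"
  }
  action {
    action_group_id = azurerm_monitor_action_group.netops.id # assumed existing action group
  }
}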
5. Use Azure Policy for governance
{
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings"
},
{
"field": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/allowForwardedTraffic",
"notEquals": "true"
}
]
},
"then": {
"effect": "deny"
}
}
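If policy is managed through Terraform as well, the rule above can be wrapped in a custom definition; the JSON file path is a placeholder assumption:
resource "azurerm_policy_definition" "peering_forwarded_traffic" {
  name         = "deny-peering-without-forwarded-traffic"
  policy_type  = "Custom"
  mode         = "All"
  display_name = "Deny VNet peerings that do not allow forwarded traffic"
  # policy_rule takes the "if"/"then" document shown above as a JSON string
  policy_rule = file("${path.module}/policies/peering-forwarded-traffic.json")
}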
6. Test failover scenarios
- Validate that your peering configuration supports your DR strategy
- Document expected behavior during regional outages
- Practice breaking and re-establishing peering
7. Consider bandwidth and latency requirements
- Measure actual performance with iPerf or similar tools
- Don’t assume peering will solve all performance issues
- Global peering has physics-based latency constraints
8. Secure spoke-to-spoke communication properly
- Never rely on peering alone for security
- Implement NSGs, Azure Firewall rules, and NVAs
- Log and audit all inter-VNet traffic
Key Takeaways
VNet peering is powerful but requires thoughtful design:
- Regional peering is your default for same-region connectivity – low latency, low cost, high performance
- Global peering enables multi-region architectures but comes with cost and latency tradeoffs
- Hub-and-spoke is the enterprise-standard pattern for governance, security, and operational efficiency
- Transitive routing doesn’t work – you need NVAs or Azure Firewall for spoke-to-spoke communication
- Security layers still apply – NSGs, firewalls, and private endpoints are essential
- Cost management matters – monitor cross-region data transfer and optimize traffic patterns
The key to successful VNet peering implementations is understanding your workload requirements, planning IP addressing carefully, and implementing proper security controls from day one. I’ve seen organizations struggle with peering retrofits because they didn’t consider growth or security boundaries upfront.
Start with a well-designed hub-and-spoke architecture, use Infrastructure as Code for consistency, and you’ll have a network foundation that scales with your cloud adoption journey.