Networking in VCF 9: Centralized vs Distributed Connectivity

With the release of VMware Cloud Foundation 9 (VCF 9.0), VMware has redefined the private cloud networking model by introducing both centralized and distributed connectivity. This dual approach provides flexibility for organizations to choose between traditional edge-based routing and modern, host-level distributed networking.


Why a New Networking Model in VCF 9?

Prior to VCF 9, NSX networking was largely edge-centric, requiring dedicated Edge clusters to handle north-south traffic, which added scaling and operational overhead. VCF 9 introduces a cloud-like networking abstraction with:

  • Native VPCs (Virtual Private Clouds) for tenant isolation.

  • Transit Gateways (TGW) to interconnect VPCs and external networks.

  • Simplified bootstrapping – NSX VIBs are embedded in ESXi for easier enablement.

  • Improved lifecycle and visibility via integrated VCF Operations.

These enhancements align with VMware’s goal of offering public cloud simplicity with private cloud control.


Core Building Blocks

Virtual Private Cloud (VPC)

Each VPC provides isolated networking for workloads. Users can create subnets, security groups, and routing policies.
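
To make this concrete, here is a minimal Python sketch of creating a VPC and a private subnet through the NSX Policy REST API. The manager FQDN, project and object names, endpoint paths, and payload fields are illustrative assumptions and may differ from your release's API reference.

# Minimal sketch only: paths, payload fields, and names (demo-project, app-vpc,
# web-subnet) are assumptions, not an exact reference for a specific release.
import requests

NSX = "https://nsx.example.local"     # assumed NSX Manager FQDN
session = requests.Session()
session.auth = ("admin", "********")  # basic auth for the sketch
session.verify = False                # lab only; use a trusted CA in production

# Create (or update) a VPC inside a project
vpc_url = f"{NSX}/policy/api/v1/orgs/default/projects/demo-project/vpcs/app-vpc"
session.put(vpc_url, json={"display_name": "app-vpc"}).raise_for_status()

# Add a private subnet to the VPC
subnet_url = f"{vpc_url}/subnets/web-subnet"
session.put(subnet_url, json={
    "display_name": "web-subnet",
    "ip_addresses": ["172.16.10.1/24"],  # gateway CIDR (field name assumed)
    "access_mode": "Private",            # keep the subnet internal to the VPC
}).raise_for_status()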

Transit Gateway (TGW)

TGWs act as the backbone of connectivity, linking multiple VPCs and connecting to external or on-prem networks.
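
A companion sketch of defining a TGW and attaching a VPC to it could look like the calls below; the transit-gateway and attachment endpoints and fields are again assumptions for illustration only.

# Illustrative sketch: transit-gateway and attachment resources are assumptions.
import requests

NSX = "https://nsx.example.local"
session = requests.Session()
session.auth = ("admin", "********")
session.verify = False  # lab only

# Define a Transit Gateway at project scope
tgw_url = f"{NSX}/policy/api/v1/orgs/default/projects/demo-project/transit-gateways/tgw-01"
session.put(tgw_url, json={"display_name": "tgw-01"}).raise_for_status()

# Attach an existing VPC ("app-vpc") to the TGW (attachment resource assumed)
attach_url = (f"{NSX}/policy/api/v1/orgs/default/projects/demo-project"
              "/vpcs/app-vpc/attachments/tgw-attachment")
session.put(attach_url, json={
    "transit_gateway_path": "/orgs/default/projects/demo-project/transit-gateways/tgw-01",
}).raise_for_status()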

External Connectivity Models

VCF 9 offers two approaches for north-south traffic:

  • Centralized (CTGW) – traffic flows through NSX Edge clusters.

  • Distributed (DTGW / Edgeless) – routing happens at the ESXi host level.




Centralized (CTGW) Connectivity

In the centralized model, external traffic exits via NSX Edge VMs hosting Tier-0 Gateways.

Workflow:

  1. VPC traffic routes to TGW.

  2. TGW connects to Tier-0 gateway on Edge Cluster.

  3. Tier-0 peers with physical routers over BGP or static routes (see the sketch below).
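
The three steps above might translate into something like the following sketch, which peers an existing Tier-0 gateway on the Edge cluster with a physical top-of-rack router over BGP. The gateway name, neighbor address, ASN, and exact API paths are assumptions.

# Illustrative sketch: BGP peering for the Tier-0 behind a centralized TGW.
# Gateway name, neighbor details, and paths are assumptions.
import requests

NSX = "https://nsx.example.local"
session = requests.Session()
session.auth = ("admin", "********")
session.verify = False  # lab only

# Tier-0 "t0-external" hosted on the Edge cluster is assumed to exist already.
neighbor_url = (f"{NSX}/policy/api/v1/infra/tier-0s/t0-external"
                "/locale-services/default/bgp/neighbors/tor-a")
session.put(neighbor_url, json={
    "neighbor_address": "192.168.100.1",  # physical ToR router
    "remote_as_num": "65001",             # upstream ASN
}).raise_for_status()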

Advantages:

  • Centralized control and policy enforcement.

  • Ideal for complex routing and multi-WAN setups.

  • Easier integration with existing NSX Edge-based services (NAT, LB).

Limitations:

  • Higher latency (additional Edge hop).

  • Resource overhead of Edge clusters.

  • Possible bottlenecks under heavy traffic.


Distributed (DTGW / Edgeless) Connectivity

The distributed model removes the dependency on Edge VMs. Each ESXi host becomes capable of directly routing external traffic.

Workflow:

  1. TGW maps to a VLAN connected to the physical network.

  2. Each host handles north-south routing directly.

  3. Network services (NAT, firewall) are applied at the host level in a distributed manner (see the sketch below).
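
Under the same assumptions, the distributed path could be wired up roughly as below: a VLAN-backed segment provides the uplink, and the TGW is pointed at it as its external connection. The transport zone path and the external-connection resource are illustrative, not documented endpoints.

# Illustrative sketch: VLAN uplink for distributed (edgeless) north-south traffic.
# Transport zone path and the external-connection endpoint are assumptions.
import requests

NSX = "https://nsx.example.local"
session = requests.Session()
session.auth = ("admin", "********")
session.verify = False  # lab only

# 1) VLAN-backed segment on the uplink VLAN
seg_url = f"{NSX}/policy/api/v1/infra/segments/tgw-uplink-vlan100"
session.put(seg_url, json={
    "display_name": "tgw-uplink-vlan100",
    "vlan_ids": ["100"],
    "transport_zone_path": ("/infra/sites/default/enforcement-points/default"
                            "/transport-zones/vlan-tz"),
}).raise_for_status()

# 2) Point the distributed TGW at that VLAN as its external connection
ext_url = (f"{NSX}/policy/api/v1/orgs/default/projects/demo-project"
           "/transit-gateways/tgw-01/external-connections/uplink")
session.put(ext_url, json={
    "segment_path": "/infra/segments/tgw-uplink-vlan100",
}).raise_for_status()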

Advantages:

  • Lower latency and improved performance.

  • No need for dedicated Edge nodes.

  • Simpler and more scalable for lightweight or edge deployments.

Limitations:

  • Less centralized visibility.

  • Some advanced services may still need Edge nodes.



Comparison Table

  • Traffic path – Centralized: north-south traffic exits via NSX Edge VMs (Tier-0); Distributed: each ESXi host routes external traffic directly.

  • Latency – Centralized: higher due to the extra Edge hop; Distributed: lower.

  • Infrastructure – Centralized: dedicated Edge clusters required; Distributed: no Edge nodes needed.

  • Services – Centralized: full Edge-based services (NAT, LB); Distributed: host-level NAT and firewall, with some advanced services still needing Edge nodes.

  • Best fit – Centralized: complex routing, multi-WAN, policy-rich environments; Distributed: lightweight, latency-sensitive, or edge deployments.

End-to-End Flow Example

  1. Project Admin defines a Transit Gateway (TGW).

  2. VPC Admin creates VPCs and subnets.

  3. VPC traffic flows to TGW.

  4. TGW connects to either:

    • Tier-0 Gateway (centralized)

    • VLAN uplink (distributed)

  5. External IPs and BGP peering complete the setup (see the verification sketch below).
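
Once the pieces are in place, a quick sanity check is to confirm the routing state; for the centralized path this could be a BGP neighbor status query like the sketch below. The status endpoint and response field names are assumptions, so verify against your release's API reference.

# Illustrative sketch: verify that the Tier-0 BGP session to the ToR came up.
# The status endpoint and response field names are assumptions.
import requests

NSX = "https://nsx.example.local"
session = requests.Session()
session.auth = ("admin", "********")
session.verify = False  # lab only

status_url = (f"{NSX}/policy/api/v1/infra/tier-0s/t0-external"
              "/locale-services/default/bgp/neighbors/tor-a/status")
resp = session.get(status_url)
resp.raise_for_status()
print(resp.json().get("connection_state"))  # expect "ESTABLISHED" when peering is up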

Visual Flow: VCF 9 Networking Overview (diagram)


When to Choose Each Model

Choose the centralized model when you need rich Edge-based services, complex routing, or multi-WAN connectivity; choose the distributed model when low latency and a smaller infrastructure footprint matter most. Hybrid adoption is also supported – some domains can run centralized Edges while others use distributed exits.


Key takeaways:

  • VCF 9 unifies NSX networking into a modern, VPC-driven model.

  • Distributed networking simplifies infrastructure for smaller or edge environments.

  • Centralized networking remains relevant for policy-rich or complex routing needs.

  • Together, they deliver flexibility, scalability, and cloud-like networking agility.




Thanks for reading. Happy learning 😊
