
VMware Identity Broker (VIDB) in VCF 9.x: Architecture, Design, and Real-World Behavior

Introduction

With the evolution of VMware Cloud Foundation (VCF) 9.x, Broadcom introduced several foundational platform changes aimed at improving security, scalability, and lifecycle consistency across private cloud environments.

One of the most critical yet frequently misunderstood components is VMware Identity Broker (VIDB).

This article provides an end-to-end, practical understanding of VIDB, covering:

  • Why VIDB exists and the problem it solves

  • How VIDB works internally

  • Where VIDB is deployed in VCF

  • High availability and security design

  • Multi-site architecture (Site 1 / Site 2)

  • Embedded or HA-cluster deployment?

  • Operational behavior and lifecycle management

  • Common misconceptions and pitfalls

  • Frequently asked questions

This guide is written for architects, consultants, and advanced VCF practitioners who want clarity—not marketing.


What Is VMware Identity Broker (VIDB)?

VMware Identity Broker (VIDB) is a centralized identity federation and trust-broker service introduced with VCF 9.x.

In simple terms:

VIDB acts as a secure intermediary between your enterprise identity provider (AD / SAML / OIDC) and VCF platform services, ensuring those services never directly integrate with enterprise identity systems.

VIDB is not optional, not an add-on, and not configurable outside supported workflows. It is a core VCF platform service.

Figure 1

Why VIDB Was Introduced (The Real Problem It Solves)

Before VIDB (Legacy Model)

  • Each VCF component integrated directly with Active Directory

  • Tight coupling between identity and platform services

  • Complex upgrades and rollback risks

  • Inconsistent authentication behavior

  • Expanded security attack surface

After VIDB (VCF 9.x Model)

  • Single, centralized identity entry point per VCF instance

  • Decoupled identity architecture

  • Platform-managed lifecycle

  • Consistent authentication and RBAC behavior

  • Stronger security boundaries

VIDB is not a feature—it is a foundational platform service.

 
Figure 2

Where Is VIDB Deployed?

VIDB is deployed:

  • Only in the Management Domain

  • Automatically during VCF bring-up or convergence

  • Managed exclusively by VCF Lifecycle Manager

  • Never deployed in workload domains

Treat VIDB exactly like SDDC Manager—a protected, platform-level service.

Figure:3

Each VIDB deployment consists of:

  • Three stateless VIDB nodes

  • One Virtual IP (VIP)

  • Three node IP addresses (one per node)


Why Four IP Addresses?

  • VIP: Stable endpoint for all VCF services

  • Node 1: Active VIDB instance

  • Node 2: Redundant instance

  • Node 3: Redundant instance

Figure 4

This design enables:

  • No single point of failure

  • Transparent failover

  • Rolling upgrades

  • Zero-downtime maintenance
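
To make the VIP model concrete, here is a minimal client-side sketch. The FQDNs are hypothetical and nothing below is a VCF API; it is plain DNS resolution, showing that the VIP is the one stable address clients use while the three node IPs exist behind it for redundancy.

```python
# Minimal sketch (hypothetical FQDNs): clients always target the VIP;
# the three node IPs behind it provide the redundancy.
import socket

VIP_FQDN = "vidb.vcf.example.com"                       # stable endpoint
NODE_FQDNS = [f"vidb-0{i}.vcf.example.com" for i in (1, 2, 3)]

def resolve(fqdn: str) -> str:
    return socket.gethostbyname(fqdn)

if __name__ == "__main__":
    print("VIP  :", resolve(VIP_FQDN))
    for node in NODE_FQDNS:
        # Node IPs are distinct from the VIP; clients never target them directly.
        print("node :", resolve(node))
```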


Should I Deploy VIDB as Embedded or as an HA Cluster?

Whether to deploy VIDB as embedded or as an HA cluster depends on where and why you are deploying it. Let's discuss:

Embedded: A good fit for PoC, lab, and very small environments that can tolerate a simpler availability model.

HA cluster: The right choice when identity is mission-critical and you need higher resiliency and maintenance flexibility.


Authentication & Token Flow (Step-by-Step)


Figure 5

  1. User accesses a VCF UI or API

  2. Request is redirected to the enterprise IdP

  3. IdP authenticates user (password + MFA)

  4. Assertion is returned to VIDB

  5. VIDB validates trust and policies

  6. VIDB issues a short-lived token

  7. VCF service consumes the token

  8. RBAC is enforced locally

Tokens are short-lived, site-local, and instance-scoped, as the sketch below illustrates.
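
The claim checks a consuming service performs can be sketched in a few lines. This is illustrative only, not VIDB's actual implementation: the claim names and issuer URL are assumptions, and a real consumer also verifies the token signature against the broker's published signing keys.

```python
# Illustrative sketch of consuming a broker-issued token (not VIDB's code).
# Claim names and the issuer URL are assumptions; signature verification
# against the broker's published keys is deliberately omitted here.
import base64, json, time

def b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def check_claims(token: str, issuer: str, audience: str) -> dict:
    payload = json.loads(b64url_decode(token.split(".")[1]))
    assert payload["iss"] == issuer,   "site-local: minted by another broker"
    assert payload["aud"] == audience, "instance-scoped: wrong service"
    assert payload["exp"] > time.time(), "short-lived token has expired"
    return payload

# Self-contained demo with a fabricated token:
claims = {"iss": "https://vidb.site1.example.com",   # hypothetical VIP FQDN
          "aud": "sddc-manager",
          "exp": int(time.time()) + 300}             # ~5-minute lifetime
parts = [base64.urlsafe_b64encode(json.dumps(p).encode()).rstrip(b"=").decode()
         for p in ({"alg": "RS256"}, claims)]
demo_token = ".".join(parts + ["signature"])
print(check_claims(demo_token, "https://vidb.site1.example.com", "sddc-manager"))
```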


Multi-Site Design (Site 1 / Site 2 Best Practice)

Figure 6

Scenario

  • Site 1 → VCF Instance A

  • Site 2 → VCF Instance B

  • Shared enterprise identity provider

Correct Architecture:

  • One VIDB per VCF instance

  • No cross-site VIDB usage

  • No stretched identity services

Why This Matters:

  • Failure isolation

  • Independent upgrades

  • Low-latency authentication

  • Fully supported design

VIDB is never stretched across sites; the short sketch below shows what that means for tokens in practice.
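
One way to picture failure isolation: each VCF instance trusts only its own broker's issuer, so a token minted in Site 1 is useless in Site 2. The issuer URLs below are hypothetical.

```python
# Illustrative only: each VCF instance trusts exactly one issuer, its own VIDB.
SITE_ISSUERS = {
    "site1": "https://vidb.site1.example.com",   # hypothetical VIP FQDNs
    "site2": "https://vidb.site2.example.com",
}

def accepts_token(local_site: str, token_issuer: str) -> bool:
    # A Site 1 token presented to Site 2 fails this check, which is the
    # practical meaning of "no cross-site VIDB usage".
    return SITE_ISSUERS[local_site] == token_issuer

assert accepts_token("site1", "https://vidb.site1.example.com")
assert not accepts_token("site2", "https://vidb.site1.example.com")
```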


Security Benefits of VIDB

  • No credentials stored in VCF services

  • Certificate-based trust

  • TLS-secured communication

  • Short-lived tokens

  • Reduced attack surface

This design aligns VCF identity with zero-trust principles.
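
As a small illustration of the certificate-based, TLS-secured points above, a client can refuse any connection whose certificate chain does not verify. The FQDN is hypothetical and this is not a VCF utility, just standard TLS hygiene:

```python
# Sketch of certificate-based trust (hypothetical FQDN, not a VCF tool):
# the default SSL context verifies the chain and hostname, so an untrusted
# or mismatched certificate raises before any request is sent.
import socket, ssl

def verify_broker_tls(vip_fqdn: str = "vidb.vcf.example.com") -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((vip_fqdn, 443), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=vip_fqdn) as tls:
            print("negotiated", tls.version(), "with", vip_fqdn)

if __name__ == "__main__":
    verify_broker_tls()
```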


Lifecycle Management

VIDB lifecycle is:

  • Fully automated

  • Rolling and non-disruptive

  • Managed entirely by VCF LCM

  • Not manually patchable or configurable

Each VCF instance upgrades its VIDB independently.
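
Nothing in the upgrade is operator-driven, so the sketch below is purely conceptual: it shows why a three-node rolling upgrade is non-disruptive, since at any moment two healthy nodes keep serving behind the VIP.

```python
# Conceptual sketch of a rolling upgrade (handled by VCF LCM, never manually):
# nodes leave the pool one at a time, so the VIP always has healthy backends.
nodes = {"vidb-01": "v1", "vidb-02": "v1", "vidb-03": "v1"}

def rolling_upgrade(target: str = "v2") -> None:
    for name in list(nodes):
        in_service = [n for n in nodes if n != name]
        print(f"upgrading {name}; still serving via VIP: {in_service}")
        nodes[name] = target          # node rejoins at the new version

rolling_upgrade()
print(nodes)                          # all nodes now at v2, no downtime window
```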


Frequently Asked Questions (FAQ)

Where is VIDB Deployed?

VIDB is deployed in the Management Domain and managed as a platform service by VCF. It is not supported in workload domains.

Why does VIDB require three nodes?

Three stateless nodes provide high availability, prevent split-brain scenarios, and allow rolling upgrades without downtime.

Why four IP's?

Three IPs for the nodes and one VIP as a stable access endpoint for all VCF services.


Where is MFA enforced?

At the enterprise identity provider. VIDB only brokers trust and tokens.


What happens if a VIDB node fails?

No impact. Traffic continues through the VIP to remaining healthy nodes.

 
