
A Quick View of NSX Multi-Tenancy

NSX-T brings an evolution to the SDN space, whether it's networking, security, or even monitoring of the environment.

During its long journey, from VMware's acquisition of the product from Nicira Networks to date, we have seen several enhancements added to it.

From NSX-V to NSX-T, and now rebranded simply as NSX starting with version 4.x, the product is set to meet customer expectations, whether for a startup or a multi-billion-dollar Fortune 500 organization.

In this article, we will discuss one of the new offerings in NSX 4.1: NSX Projects, or multi-tenancy.

Before diving in, let's draft a hypothetical scenario...

In an organization called Virtualvmx, there are three tenants:

  • Alpha
  • Beta
  • Gamma
All three tenants have compliance guidelines stating that one tenant should not expose its NSX networking components to the other tenants; this covers Layer 2 networking (segments), security policies, T1 routers, and so on.

Before NSX 4.1.x we had no such capability, as all tenants' networking components, such as segments, T1 gateways, DFW policies, and segment profiles, were exposed to each other.

Starting from NSX 4.1.x we can meet this requirement with the NSX Projects feature.

Using NSX Projects, one tenant's networking and security components can be isolated from another's within a single NSX deployment.

In NSX 4.1.x we can create a project for each of the three tenants to achieve per-tenant isolation of networking and security.

Under multi-tenancy, each tenant can isolate its L2 networking from the other tenants. L3 networking, however, which includes the T0 routers (on the Edge nodes), can either be shared with other tenants or be dedicated to an individual tenant, as per requirement.

Once you start creating projects inside NSX for individual tenants, there will be two views in NSX:

  • Default view.
  • Project view.


Default view:


This is the section governed by the NSX Enterprise Administrator or other security roles, which are generally not assigned to an individual tenant.

In this view, the Enterprise Administrator can modify T0 routers, Edge nodes, transport zones, and so on. In a nutshell, the default space is the space that is not assigned to any project.

The picture below shows the default view.
In this view, the Enterprise Administrator has full privileges to add, remove, or modify any L2 or L3 component in the NSX deployment.


Default View in NSX Manager

                                                                    
Now, from the default view you can create multiple projects, as shown below:


To create a new project, go to Manage Projects.
You also have to assign RBAC policies to the users associated with the project.

Here you can associate a shared T0 gateway/Edge cluster that is also used by other projects, or you can dedicate a T0 to an individual project.
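
For those who prefer automation over the UI, project creation can also be driven through the NSX Policy API. The following is a minimal sketch, assuming the multi-tenancy endpoint /policy/api/v1/orgs/default/projects/<project-id>; the manager address, credentials, T0 path, and Edge cluster path are placeholders, so verify the exact payload against the NSX 4.1 API reference for your build.

# Minimal sketch: create an NSX Project ("Alpha") through the Policy API.
# Assumptions: NSX 4.1 endpoint /policy/api/v1/orgs/default/projects/<id>,
# placeholder manager FQDN, credentials, shared T0 and Edge cluster paths.
import requests

NSX_MANAGER = "https://nsx-manager.virtualvmx.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")                    # placeholder credentials

project_body = {
    "display_name": "Alpha",
    # Shared T0 and Edge cluster that this project is allowed to consume.
    "tier_0s": ["/infra/tier-0s/shared-t0"],            # placeholder T0 path
    "site_infos": [
        {"edge_cluster_paths": ["/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"]}
    ],
}

resp = requests.put(
    f"{NSX_MANAGER}/policy/api/v1/orgs/default/projects/project-alpha",
    json=project_body,
    auth=AUTH,
    verify=False,   # lab only; use valid certificates in production
)
resp.raise_for_status()
print(resp.json())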

In this scenario, we have created two projects, Alpha and Beta. Each project is assigned to an individual user through an RBAC policy with the Project Admin role.
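
As a rough illustration of that RBAC assignment, the Policy API also exposes role bindings that can scope a role to a project path. The sketch below assumes the /policy/api/v1/aaa/role-bindings endpoint and the project_admin role name; the user name and project path are placeholders, so confirm the field names against the API reference before relying on it.

# Minimal sketch: bind the local user "beta" to the Project Admin role on project Beta.
# Assumptions: role-bindings endpoint, role name "project_admin", placeholder user and project path.
import requests

NSX_MANAGER = "https://nsx-manager.virtualvmx.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")                    # placeholder Enterprise Admin credentials

role_binding = {
    "name": "beta",                                     # placeholder user already created on NSX
    "type": "local_user",
    "roles_for_paths": [
        {
            "path": "/orgs/default/projects/project-beta",   # scope the role to the Beta project
            "roles": [{"role": "project_admin"}],
        }
    ],
}

resp = requests.post(
    f"{NSX_MANAGER}/policy/api/v1/aaa/role-bindings",
    json=role_binding,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print(resp.json())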


Project View

Now we will try to log in with the newly created user "Beta", to whom we assigned the Project Admin role on the Beta project.

When we log in to NSX using the Beta user's credentials, only the project-specific view is displayed, as shown in the screenshot below. Observe that the "System" tab is not visible to the Project Admin, as managing the entities under "System" is a privilege reserved for the Enterprise Admin.
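
The same isolation is visible in the API: project-scoped objects live under the project's own path rather than the global /infra tree. Below is a minimal sketch, assuming the project-scoped segment path used by NSX multi-tenancy; the project ID, segment name, T1 path, and subnet are placeholders.

# Minimal sketch: a Project Admin creating a segment that exists only inside project "Beta".
# Assumptions: project-scoped path /policy/api/v1/orgs/default/projects/<project>/infra/segments/<id>,
# placeholder project-level T1 gateway and subnet.
import requests

NSX_MANAGER = "https://nsx-manager.virtualvmx.local"    # placeholder FQDN
AUTH = ("beta", "VMware1!VMware1!")                      # placeholder Project Admin credentials

segment_body = {
    "display_name": "beta-app-segment",
    "connectivity_path": "/orgs/default/projects/project-beta/infra/tier-1s/beta-t1",  # placeholder T1
    "subnets": [{"gateway_address": "10.20.30.1/24"}],
}

resp = requests.put(
    f"{NSX_MANAGER}/policy/api/v1/orgs/default/projects/project-beta/infra/segments/beta-app-segment",
    json=segment_body,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print("Segment created in project Beta:", resp.json().get("path"))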


At present, NSX Projects only support an Edge cluster configured on the default overlay transport zone; custom transport zones are not yet supported.

Having said that, the compute and Edge transport nodes need to be configured with the transport zone named "nsx-overlay-transportzone", which is the default in NSX.


The above is just a 30,000-foot view of NSX Projects. In a nutshell, this feature suits those who share a single NSX deployment across multiple tenants and want to isolate networking and security elements from one tenant to another.





























