
Posts

IDS/IPS (Intrusion Detection System & Intrusion Prevention System)

IDS (Intrusion Detection System): As its name suggests, it is designed to detect malicious or suspicious activity in the network by scanning data packets and monitoring network traffic. It inspects forwarded packets to determine whether each one is good or bad, where a bad packet indicates a malicious threat or some other kind of risk, and it generates logs to identify suspicious activity. An IDS cannot prevent malicious threats or attacks from inside or outside the environment; the aim behind its design is to warn the system, security, or network administrators about suspicious or malicious activity or threats. It continuously monitors and analyzes incidents, violations, and threats that may be breaking network security. (Image credit: pngio.com) IPS (Intrusion Prevention System): It is designed to prevent the malicious or suspicious threats and activities that it detects in the network. It is designed to block suspicious and malicious activities and threats before they develop a

NSX-T Manager Node Recovery

In an NSX-T environment, there are scenarios where it is required to take a manager node instance out of the cluster for abnormal reasons, for example when issues occur during the upgrade of a manager node instance, or in other abnormal circumstances where the node is unable to recover from the NSX-T Manager UI. To recover/replace a node in the manager cluster, a manual process is required. Let's discuss the manual path to recover/replace a manager node in the cluster. 1) Log in to the NSX-T Manager using the CLI. 2) Use the command 'get cluster status'. This command will list all the NSX-T manager/controller nodes in the cluster. Find the UUID of the cluster and of the existing node to identify the node that requires recovery/replacement. 3) Now that we have identified the manager node ID from the above command, it is time to detach the node from the cluster. Using the detach node "node id" command will remove the node from the clus
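A minimal CLI sketch of these steps (the prompt and UUID below are placeholders, not taken from the post); run the detach from a healthy manager node:

nsxmgr> get cluster status
(lists the cluster ID plus each manager node's UUID and status)
nsxmgr> detach node 25ee8074-0000-0000-0000-0f54c0e8a0f2
(removes the failed/stale node from the cluster)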

NSX-T Datacenter Firewall

In NSX-T we have two types of firewall, which we will discuss in this post: 1) Distributed firewall 2) Gateway firewall. Let's talk about them one by one. 1) Distributed firewall: A distributed firewall is hosted at the host (hypervisor) level and is a kernel-embedded, stateful firewall. This kind of firewall is mostly used between transport nodes, in other words within the east-west network. Basically, the distributed firewall helps protect virtual machines at the VM level from hacking attacks. Many people have a question: if we have a perimeter firewall at the physical layer to protect the network, then why do we require a firewall (the distributed firewall) at the VM level? To answer this question: yes, many of you are correct that the perimeter firewall is there to protect the network at the top level. However, there are some attacks that strike directly at the VM level, such as attacks from USB drives, phishing emails, and malicious advertisements. To p

Docker.. Basic commandlets

In this article we will go through some of the basic commands used in Docker. So let's get started.
1) docker ps
This command is used to list all the running containers, i.e.:
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                        NAMES
4ba5baace270        couchbase           "/entrypoint.sh couc…"   8 seconds ago       Up 5 seconds        8091-8096/tcp, 11207/tcp, 11210-11211/tcp, 18091-18096/tcp   naughty_hopper
6c1773f25479        nginx               "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes        80/tcp                                                       compassionate_dijkstra
2) docker ps -a
This command lists all the containers in Docker, whether they are running, stopped, or exited.
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS
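A couple of closely related variations, shown as a quick sketch (only the flags matter here; nothing below comes from the truncated post):

$ docker ps -q                             # print only the IDs of running containers
$ docker ps -a --filter "status=exited"    # list only containers that have exited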

Removing NSX-T manager extension from vCenter

In NSX-T, starting from version 2.4, the NSX-T appliance got decoupled from vCenter, so it is no longer mandatory to run NSX-T on the vCenter platform only. Now NSX-T can be used with standalone ESXi hosts, KVM, or a container platform. In version 2.4 there is still an option available to connect vCenter to NSX-T using a Compute Manager. In this blog we will learn how we can unregister and re-register the NSX-T extension in vCenter in case of any sync or vCenter connectivity issue with NSX-T. Let's get started. 1) Log in to the NSX-T UI and go to System -> Compute Manager. Here, vCenter shows a Down status and its registration status shows as "Not Registered". 2) When we click on the "Not Registered" option, it states the error below. 3) When we try to click on the Resolve option, it states the message below. At this stage, if the Resolve option doesn't work, then it is required to remove the NSX-T extension from vCenter. To remove the NSX-T e
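As a rough sketch of that removal via the vCenter Managed Object Browser (the MOB path and the extension key below are the commonly used ones and are assumptions here, since the excerpt is cut off before that step):
1) Browse to https://<vcenter-fqdn>/mob/?moid=ExtensionManager and log in with an administrator account.
2) In the extensionList property, locate the NSX-T extension key (typically com.vmware.nsx.management.nsxt).
3) Invoke the UnregisterExtension method with that key, then re-register the Compute Manager from the NSX-T UI.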

Secret in Kubernetes

Secret in Kubernetes: Secrets in Kubernetes hold sensitive information like SSH keys, tokens, credentials, etc. In general, such secret objects must be stored in an encrypted way rather than in plaintext to reduce the risk of exposing this information to unauthorised parties. A Secret is not encrypted by default, only base64-encoded; it is required to create an EncryptionConfiguration with a key and a proper identity in order to encrypt Secrets at rest. All Secret data and configuration are stored in etcd, which is accessible via the API server. Secret data on nodes is stored on tmpfs volumes. An individual Secret is limited to 1MB in size; larger Secrets are discouraged as they may exhaust apiserver and kubelet memory. To use a Secret, a Pod needs to reference it. A Secret can be used with a Pod in two ways: as files in a volume mounted on one or more of its containers, or by the kubelet when pulling images for the Pod. There are two steps involved in setting up a secret
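A minimal sketch of creating and inspecting a Secret from the CLI (the secret name and literal values are illustrative only, not from the post):

$ kubectl create secret generic db-creds --from-literal=username=admin --from-literal=password='S3cr3t!'
secret/db-creds created
$ kubectl get secret db-creds -o yaml     # the data keys appear base64-encoded, not encrypted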

Project Pacific VMware

Project Pacific Project Pacific is a re-architecture of vSphere with Kubernetes as its control plane. To a developer, Project Pacific looks like a Kubernetes cluster where they can use Kubernetes declarative syntax to manage cloud resources like virtual machines, disks and networks. To the IT admin, Project Pacific looks like vSphere – but with the new ability to manage a whole application instead of always dealing with the individual VMs that make it up. Project Pacific will enable enterprises to accelerate development and operation of modern apps on  VMware vSphere  while continuing to take advantage of existing investments in technology, tools and skillsets. By leveraging Kubernetes as the control plane of vSphere, Project Pacific will enable developers and IT operators to build and manage apps comprised of containers and/or virtual machines. This approach will allow enterprises to leverage a single platform to operate existing and modern apps side-by-side. The

vMotion

vMotion: VMware vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. It is transparent to users. vMotion advantages: Automatically optimize and allocate entire pools of resources for maximum hardware utilization and availability. Perform hardware maintenance without any scheduled downtime. Proactively migrate virtual machines away from failing or underperforming servers. A virtual machine and its host must meet resource and configuration requirements for the virtual machine files and disks to be migrated with vMotion in the absence of shared storage. vMotion in an environment without shared storage is subject to the following requirements and limitations: The hosts must be licensed for vMotion. The hosts must be running ESXi 5.1 or later. The hosts must meet the networking requirements for vMotion. See vSphere vMotion Net
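One way to sanity-check the networking requirement is to test the vMotion VMkernel path between hosts from the ESXi shell; a quick sketch (the vmk interface name and destination IP are placeholders, not from the post):

$ esxcli network ip interface list            # identify the VMkernel interface used for vMotion
$ vmkping -I vmk1 192.168.10.22               # ping the destination host's vMotion IP over that interface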

Enhanced vMotion

Enhanced vMotion (EVC): vSphere Enhanced vMotion is a feature through which a workload can be live migrated from one ESXi host to another ESXi host running a different CPU generation of the same CPU vendor. EVC in vSphere was introduced in vSphere 5.1 using vMotion and Storage vMotion terminology. EVC can be enabled at the vSphere ESXi cluster level and on VMs. Figure 1: VMware EVC mode works by masking unsupported processor features across different generations of the same vendor and presenting a homogeneous processor to all the VMs in a cluster. The benefit of EVC is that you can add ESXi hosts with the latest processors to an existing cluster without incurring any downtime. The VMware Compatibility Guide is the best way to determine which EVC modes are compatible with the processors used in your cluster. Figure 1 below demonstrates how to determine which EVC mode to use given three types of Intel processors. https://www.vmware.com/resources/compatibility/s