At several EUC projects we have a testing Workspace ONE environment where desktop images and application packaging take place, and a production Workspace ONE environment where only tested and approved items from testing are released. The environments are separated into their own vCenter/NSX Managers. The production side is even further separated, into a management part and two or more desktop pods, each with its own NSX Manager and its own management and rules. There is a need for a way to synchronize the approved NSX DFW rule set from testing to production, and between the production pods, without too much effort or human interference. We couldn’t find a cmdlet that does all this, so I wrote the following script to synchronize the NSX configuration between pods: the PowerNSX DFW Synchronization Script. We have the same need at other projects, and I think it will benefit the next iteration of the NSXHorizonJumpstart I was working on earlier. You can grab the first version of the PowerNSX DFW synchronization script at https://github.com/Paikke/NsxSynchronization. In the remainder of this blog post, I will explain this script in some more detail.
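The core of the script boils down to exporting the DFW rule set from the source NSX Manager and recreating the missing pieces on the target. Below is a minimal, hypothetical sketch of that flow with PowerNSX; hostnames and credentials are placeholders, and the real script on GitHub does considerably more (rule recreation, services, error handling):

```powershell
# Sketch only: assumes PowerNSX is loaded and both NSX Managers are reachable.
# Pull the full DFW configuration from the test (source) NSX Manager.
Connect-NsxServer -Server nsxmgr-test.lab -Username admin -Password 'VMware1!'
$sourceConfig = Invoke-NsxRestMethod -Method GET -Uri "/api/4.0/firewall/globalroot-0/config"
Disconnect-NsxServer

# Recreate approved sections on the production NSX Manager.
Connect-NsxServer -Server nsxmgr-prod.lab -Username admin -Password 'VMware1!'
foreach ($section in $sourceConfig.firewallConfiguration.layer3Sections.section) {
    # Only create sections that do not exist yet on the target
    if (-not (Get-NsxFirewallSection -Name $section.name -ErrorAction SilentlyContinue)) {
        New-NsxFirewallSection -Name $section.name | Out-Null
        # The rules inside $section.rule would be recreated here
    }
}
Disconnect-NsxServer
```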
Almost a month from now, but I’m very proud and honored to be on the agenda at this year’s NLVMUG UserCon on March 20th with a Horizon and NSX Secure Desktop session. Go and say hi to me in Dexter 17 – 18 at 10:20.
While you are at the NLVMUG, also take a peek at the awesome speaker’s list or the agenda for the other speakers and their awesome sessions. Tough choices ;). The keynotes will be done by none other than Pat Gelsinger, CEO of VMware, for the opening keynote, while Joshua McKenty, Vice President, Global Ecosystem Engineering at Pivotal, will do the afternoon keynote.
This year there is also a pre-con day on March 19th. Details for the pre-con can be found over here. The pre-con is pretty packed as well, with a Hackathon (I can only say I had a very good experience at the VMworld Hackathon), a VCDX Workshop, and multiple R&D tracks.
In this blog post, I would like to write up the procedure for setting up an NSX Edge Load Balancer for a VMware Identity Manager cluster. Like discussed in this post and this post, NSX Edge Load Balancers will be all over the place in a Workspace ONE platform environment. And this is one place…
I am also working on testing the available persistence configurations of Workspace ONE NSX Edge load balancers (a heads-up for an upcoming blog post), and on adding more Workspace ONE firewall sections and load balancing configuration to the NSXHorizonJumpstart script. If only my new year’s resolution was to grow four extra brains and hands, this would be published a little faster….
Identity Manager, Hmmmm?
For the Workspace ONE user access or identity management service, VMware Identity Manager (IDM) is needed. And not just user access, but also the application catalog. It is the layer your users’ sessions will hit first (well, after enrolling their devices). With that presumably come some availability requirements, which call for a cluster of IDM nodes, i.e. a highly available IDM. An IDM cluster is a minimum of 3 nodes, and this needs a load balancer. But how? After the first node is deployed, you configure IDM to use an external database; for active-active this means an MSSQL Always On setup. When that is running, an identity source should be configured, for example connected to an Active Directory, and the load balancer should be set up with its FQDN filled in. Then, after a correct configuration, shut down the node and clone it to form the Identity Manager cluster.
Need some more information on the steps than the above TL;DR? Read on
On Tuesday 21st of November ITQ will host its Digital Transformation Event: Transform! This jam-packed event will have several ITQ, VMware, Pivotal and IBM speakers covering a range of topics such as End User Computing (EUC), (Cloud Native) Development, Software-Defined Data Center (SDDC), Hybrid Cloud and IT Transformation Services (ITTS). For more details on the awesome sessions, take a peek at the Transform! agenda.
There is still room so register here. Hope to see you there!
And now for a shameless plug of my own session.
The team that brought you PowerNSX just released the ‘Minimum Viable Product’ (MVP) of project Magpie. The ‘Multi Access General Purpose Infrastructure Explorer’ is specifically designed to be modular, thereby allowing new tools to be rapidly integrated into Magpie. I instantly have a small flashback to the Webcommander days, but that might just be me. In short, Project Magpie is an appliance containing various tools and utilities to support NSX. This appliance will serve as a framework for various tools to support operation and management of VMware (NSX) deployments. The v0.1 Magpie release contains the following features:
- Multi-user Support – Create accounts for all users that require access to PowerNSX.
- PowerCLI and PowerNSX modules included and ready to go.
- Web-based User Interface – Access PowerNSX and PowerCLI via just a web browser.
- Hosted documentation – You can access searchable PowerNSX documentation that is updated from the Internet (GitHub). The documentation requires a working Internet connection: no Internet, no documentation.
- PowerNSX SSH access – Access the PowerNSX/PowerCLI environment via an SSH terminal to the appliance.
- Photon OS – The appliance is built upon VMware’s lightweight Photon OS.
Pretty awesome, as who working with NSX does not use PowerNSX? And this is just the initial release. Let us take her out for a spin and play!
In this blog post, I want to describe the manual steps for deploying and configuring an NSX load balancer for the Platform Services Controllers (PSC). Hey wait, weren’t you doing PowerNSX automation stuff before? Yes, and I still mean to do so. But with automation comes checking whether the procedure actually works before attempting to automate it. Garbage in is a lot of garbage out with automation….
Implementing NSX for desktop, whether for micro-segmentation or load balancing, takes time and effort to design and implement; that’s why I started the HorizonJumpstart, to help with a starting point and hopefully some guidance. This post is about the load balancing part and the start of some additions to NSXHorizonJumpstart to include NSX Edge Gateway load balancers.
A few blog posts ago (https://www.pascalswereld.nl/2017/08/24/nsx-for-desktop-jumpstart-microsegmentation-with-horizon-service-installer-fling/) I wrote about using the Horizon Service Installer fling for adding Horizon services to NSX for Desktop. Since that blog post, I have been continuing to evolve the services file with services, sections, and rules that will normally appear in an EUC solution with VMware products. I have tried to maintain the services yml file so it keeps working with the fling. Currently it still does, but I don’t know for how long.
And this is because of another part I am working on: using PowerNSX to add the services file to the NSX environment, and in turn replace the need for the fling. You can read about the start of this in the post PowerCLI Collection: PowerNSX Desktop Jumpstart and process YAML (yml) config file. This blog post explains the first version to reach feature parity with the Horizon Service Installer fling. The NSXHorizonJumpstart script now reads the complete yml file and adds the NSX services, service groups, and security groups, and creates the firewall sections with their rules.
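Conceptually, the script loops over the parsed yml structure and calls the matching PowerNSX cmdlets for each object type. A simplified, hypothetical sketch of that flow ($config stands in for the parsed yml; the real script adds logging and more thorough existence checks):

```powershell
# Sketch only: assumes PowerNSX is loaded, an NSX server is connected,
# and $config holds the parsed services yml file.
foreach ($svc in $config.services) {
    # Create each service definition (protocol/port) if it does not exist yet
    if (-not (Get-NsxService -Name $svc.name -ErrorAction SilentlyContinue)) {
        New-NsxService -Name $svc.name -Protocol $svc.protocol -Port $svc.port | Out-Null
    }
}
foreach ($sg in $config.securitygroups) {
    # Security groups that the firewall rules will reference
    if (-not (Get-NsxSecurityGroup -Name $sg.name -ErrorAction SilentlyContinue)) {
        New-NsxSecurityGroup -Name $sg.name | Out-Null
    }
}
foreach ($section in $config.sections) {
    # Firewall sections that hold the rules defined in the yml
    if (-not (Get-NsxFirewallSection -Name $section.name -ErrorAction SilentlyContinue)) {
        New-NsxFirewallSection -Name $section.name | Out-Null
    }
}
```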
You can find both the services file and the current version of the script on the master branch at: https://github.com/Paikke/NSXHorizonJumpstart.
In my last blog post (https://www.pascalswereld.nl/2017/08/24/nsx-for-desktop-jumpstart-microsegmentation-with-horizon-service-installer-fling/) I wrote about using the Horizon Service Installer fling for adding Horizon services to NSX for Desktop. From that blog post I have been evolving the services file with services and rules that will normally appear in an EUC solution with VMware products. Not just sticking with Horizon 7, but also getting App Volumes, UEM, UAG, and infrastructure components in the picture. And I will be continuing to evolve the services.
Another part I am working on is using PowerShell/PowerNSX for adding the services file to the NSX environment, and in turn replacing the need for the fling. This blog post explains the current structure: reading the yml file and using that information to check and add to NSX. For now, the services yml file will be maintained to keep working with the fling.
We fortunately see a lot more NSX with EUC deployments. Used for microsegmentation of the virtual desktop infrastructure, virtual desktop security protection and load balancing of the workspace components (see my previous post here: https://www.pascalswereld.nl/2017/06/09/euc-layers-horizon-connectivity-from-nsx-load-balancers-with-love/).
I want to focus a bit on the microsegmentation, and mainly on the NSX service profiles, groups, and standard set of rules for EUC with VMware Horizon. Currently neither NSX for Desktop nor Horizon ships with a prepared set to use. Well, the Horizon suite does not ship with NSX in any form, which is still a miss in my humble opinion. It can be a little difficult, I know.
This blog post focuses on the components expected to be part of your desktop environment, the Horizon components, and their NSX rules: static Horizon services, static infrastructure services, and dynamic applications based on group membership. And on using a fling to get them in your environment. I have also added more services and rules to the fling configuration file, and put up a GitHub project to manage these changes. You can download an updated yml file from there; details a little later on, so do read or scroll ahead ;). This is a work in progress, as I am also just working on it in my current project.
Another layer that will hit your end users is the connectivity from the client device to the EUC solution. No intermittent errors are allowed in this communication. Users very rarely like ‘connection server not reachable’ pop-ups. Getting your users securely and reliably connected to your organization’s data, desktops, and applications, while guaranteeing connection quality and performance, is key for any EUC solution. For a secure workspace, protecting against and reacting to threats as they happen makes software-defined networking even more important for EUC; dynamic software is required. And all that for an any-place, any-device, anytime solution. And if something breaks, well….
One of the first things we talk about is the need to reliably load balance several components as they scale out. To avoid getting into all the networking bits in one blog post, I am sticking with load balancing for this part.
As Horizon does not come as a one-package deal with networking or load balancing, you have to look at an add-on to the Horizon offering or outside the VMware product suite. Options are:
- interacting with physical components,
- depending on other infrastructure components such as DNS RR (that is a poor man’s load balancing) preferably with something extra like Infoblox DNS RR with service checks,
- using virtual appliances like Kemp or NetScaler VPX. VPX Express is a great free load balancer and more.
- Specific Software-Defined Networking for desktops, using NSX for Desktop as an add-on. Now the question instantly pops up: why isn’t NSX included in, for example, Horizon Enterprise like vSAN? I have no idea, but it probably has something to do with money (and cue Pink Floyd for the earworm).
And some people will also hear about the option of doing nothing. Nothing isn’t an option if you have two components. At a minimum, you will have to redirect your users to the second component manually or via a script when the first hits its load mark, needs maintenance, or fails. I doubt that you or your environment will stay well-loved for long when trying this manually…..
The best fit all depends on what you are trying to achieve with the networking as a larger picture, or with load balancing specifically. Are you load balancing the user connections to two connection servers for availability, doing tunneled desktop sessions, or doing a Cloud Pod Architecture over multiple sites and thus globally? That all has to be taken into account.
In this blog post, I want to show you using NSX for load balancing connection server resources.
Horizon Architecture and load balancers
Where in the Horizon architecture do we need load balancers? Well, the parts that our user sessions connect to and that are scaled out for resources or availability. We need them in our local pods, and global load balancers when we have several sites.
- Unified Access Gateway (formerly known as Access Point)
- Security Server (if you happen to have that one lying around)
- Workspace ONE/vIDM.
- Connection Servers within a Pod, with or without CPA. However, with CPA we need more than just local traffic.
- AppVolumes Managers.
And maybe you have other components to load balance, such as multiple vROps analytics nodes so that the user interface load does not hit one node. As long as the node the vROps for Horizon adapter connects to is not load balanced.
To improve the availability of all these kind of components, a load balancer is used to publish a single virtual service that internal or external clients connect to. For example, for the connection server load balanced configuration, the load balancer serves as a central point for authentication traffic flow between clients and the Horizon infrastructure, sending clients to the best performing and most available connection server instance. I will keep the lab a bit simple by just load balancing two connection server resources.
Want to read up some more about load balancing CPA? EUC Junkie / Bearded VDI Junkie vHojan (https://twitter.com/vhojan) has an excellent blog post about CPA and the impact of certain load balancing decisions. Read it here: https://vhojan.nl/deploy-cpa-without-f5-gtm-nsx/.
For this one here, on to the Bat-Lab….
Bat-Labbing NSX Edge Load Balancing
Let’s make the theory stick and get it up and running in a Horizon lab I have added to Ravello, cloned from an application blueprint I use for almost all my Horizon labs and ready for adding NSX for Desktop as a load balancing option. The scenario is load balancing the connection servers. In this particular example, we are going one-armed; this means the load balancer node will live on the same network segment as the connection servers. Start your engines!
Deploying NSX Manager
How do you get NSX in Ravello? Well, either deploy it on a nested ESXi or use the import method to deploy NSX directly on Ravello Cloud (AWS or GC). I’m doing the latter. As you did not set a password yet, you can log in to the manager with user admin and password ‘default’.
That is the same password you can use to go to enable mode: type enable. And, if you wish, config t for configuration mode. Flashback to my Cisco days :))…. In configuration mode, you can set the hostname, IP and such via the CLI.
But the easiest way is to type setup in basic/enable mode. Afterwards, you should be able to log in via the HTTPS interface. Use that default password and we are in.
Add a vCenter registration to allow NSX components to be deployed. On to the vSphere Web Client. At this point you must register an NSX license, or you will fail to deploy the NSX Edge Services Gateway appliance.
Next, prepare the cluster network fabric to receive the Edges. Go to Installation and click the Host Preparation tab. Prepare the hosts in the cluster you want to deploy to (and that are licensed for VDI components, otherwise NSX for Desktop is not an option). Click Actions – Install when you are all set.
For this Edge load balancer services deployment, you don’t need VXLAN or NSX Controllers. So for this blog part, I will skip them.
Next up: deploying an NSX Edge. Go to NSX Edges and click the green cross to add one. Fill in the details and configure a minimum of one interface (depending on the deployment type); as I am using a one-arm setup, select the pools and networks and fill in the details. In production, you would also want some sort of cluster for your load balancers, but I have deployed only one for now. Link the network to a logical switch, distributed vSwitch, or standard vSwitch. I have only one, so the same network standard vSwitch it is. Put in the IP addresses, put in a gateway, and decide on your firewall settings. And let it deploy the OVA.
If you forgot to allow nested virtualization in /etc/vmware/config, you will get the “You are running VMware ESX through an incompatible hypervisor” error. Add vmx.allowNested = “TRUE” to that file on the ESXi host nested on Ravello and run /sbin/auto-backup.sh after that. If you retry the deployment, it will normally work.
We have two connection servers in vTestLab
Go back to the vSphere web client and double-click the just created NSX edge. Go to Manage and tab Load Balancer. Enable the Load Balancer.
Create an Application Profile. For this configuration, I used SSL pass-through for the HTTPS protocol with SSL Session persistence in the below example. The single-threaded NSX Edge is not really suitable for SSL offloading here. But I should have read the documentation a bit better, as Source IP persistence is what is documented. Testing shows that Source IP persistence works better: probably SSL sessions are reinitiated somewhere along the line, and SSL Session ID gives you a new desktop more often than Source IP does.
For this setup, you can leave the default HTTPS service monitor. Normally you would also want service checks on, for example, the Blast gateway (8443) or PCoIP (4172) if components use these.
Next, set up your pool to include the pool members (the connection servers) and the service monitor, port, and connections to take into account.
Next up create the virtual server with the load balancing VIP and match that one to the just created pool.
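The UI steps above can also be scripted with PowerNSX once the Edge is deployed. A hedged sketch of the same configuration; the Edge name, server names, and IP addresses are from my lab and purely illustrative:

```powershell
# Sketch only: assumes PowerNSX is connected to the NSX Manager
# and an Edge named 'lb-horizon' already exists.
$edge = Get-NsxEdge -Name lb-horizon
$lb   = $edge | Get-NsxLoadBalancer | Set-NsxLoadBalancer -Enabled

# Application profile: HTTPS pass-through with Source IP persistence
$appProfile = $lb | New-NsxLoadBalancerApplicationProfile -Name CS-Profile `
                -Type HTTPS -SslPassthrough -PersistenceMethod sourceip

# Pool with the two connection servers and the default HTTPS monitor
$monitor = $lb | Get-NsxLoadBalancerMonitor -Name default_https_monitor
$pool = $lb | New-NsxLoadBalancerPool -Name CS-Pool -Monitor $monitor
$pool = $pool | Add-NsxLoadBalancerPoolMember -Name cs01 -IpAddress 10.0.0.10 -Port 443
$pool = $pool | Add-NsxLoadBalancerPoolMember -Name cs02 -IpAddress 10.0.0.11 -Port 443

# Virtual server (VIP) tying the profile and pool together
Get-NsxEdge -Name lb-horizon | Get-NsxLoadBalancer |
    Add-NsxLoadBalancerVip -Name CS-VIP -IpAddress 10.0.0.12 -Protocol HTTPS `
        -Port 443 -ApplicationProfile $appProfile -DefaultPool $pool
```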
After this, look at the status and select the pool.
Both are up.
You can now test if an HTTPS to 10.0.0.12 will show you the connection server login page.
Connected. Using HTML Access will fail with an error connecting to the connection server (Horizon 7.1), as I did not change the origin checking. You can disable this protection by adding the following entry to the file locked.properties (C:\Program Files\VMware\VMware View\Server\sslgateway\conf) on each connection server:
balancedHost=URL via loadbalancer such as vdi.euc.nl
Restart the VMware Horizon View Connection Server service.
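On more than a couple of connection servers this quickly becomes repetitive, so here is a hedged PowerShell sketch to apply the change remotely. The server names and FQDN are illustrative, and the wsbroker service name is an assumption on my part; verify it in your environment before running anything like this:

```powershell
# Sketch only: assumes WinRM access to the connection servers.
$servers = 'cs01.vtest.lab', 'cs02.vtest.lab'   # illustrative names
Invoke-Command -ComputerName $servers -ScriptBlock {
    $file = 'C:\Program Files\VMware\VMware View\Server\sslgateway\conf\locked.properties'
    # Append the origin-check exception for the load balanced FQDN
    Add-Content -Path $file -Value 'balancedHost=vdi.vtest.lab'
    # Restart the VMware Horizon View Connection Server service
    Restart-Service -Name wsbroker
}
```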
And of course, you would add a DNS record pointing to 10.0.0.12 to let your users connect to the connection servers with a friendly name, like vdi.vtest.lab. And use an SSL certificate with that name.
Now for the last check: is the load balancing working correctly? I kill off one of the connection servers.
And let’s see what the URL is doing now:
Perfect: the load balancer connects to the remaining connection server. This time showing the admin page.
This concludes this small demonstration of using NSX for Load Balancing Horizon components.
– Happy load balancing the EUC world!