Locate UX with Liquidware Stratusphere – Part Two: let's see how this thing works

In the previous post, we discussed the why of getting User Experience insights, and where Stratusphere UX fits in to fulfill this.

In this part, I am going more into the what and the how by taking Stratusphere UX for a spin. We will start with the what, with a little architecture as a starting point.

What is needed?

The architecture of Stratusphere (FIT/UX included) has four components: Stratusphere Hub Appliance, Stratusphere Database Appliance, Stratusphere Collector Appliance, and Connector ID (CID) Key. In short, the components do the following:

    • Stratusphere Hub (required): a pre-configured virtual appliance that provides the central policy management, policy distribution, data collection, reporting, and alerting system for Stratusphere, as well as the user interface for the Stratusphere user.
    • Stratusphere Database (required): co-installed on the Hub or a separate pre-configured virtual appliance. This component provides the central data storage for the Stratusphere product line. The separate appliance is needed for large desktop deployments (500+ desktops; see the sizing guide for details).
    • Stratusphere Collector appliance: again co-installed on the Hub for small environments, or a separate pre-configured virtual appliance that provides the ability to collect CID Key data and network monitoring data (in small environments, both forms of data) for Stratusphere.
    • Desktop agent, or the Connector ID Key, aka CID: a Windows, macOS, and Linux in-guest agent for getting those in-guest insights about the system, applications, and user activities. The CIDs are managed by the Hub and can be installed manually, via software distribution mechanisms, or integrated into the master image for virtual desktops.

Depending on the environment to be investigated, the Hub, database, and collector can be shared in one appliance or separated. Use the calculator to determine what sizing and infrastructure architecture is appropriate for your use case. The calculator can be found at http://www.liquidware.com/products/stratusphere-sizing-guide.

The following architecture diagram shows the components interacting (as taken from http://www.liquidware.com/content/pdf/documents/support/Liquidware-Stratusphere-Architecture-Overview.pdf)

Stratusphere Architecture

Set the VDI profile for your assessment, aka your environment

Stratusphere will self-learn and adapt the values to the assessed environment over time. However, the VDI profile is highly customizable to your needs. You can set everything according to what you (or your customer) find important for a successful UX. Start thinking about these figures when you start defining the use cases to be investigated. Speak to key users and application owners to find out what is important for the user community and what values/metrics the applications depend on. Try to find some common ground: if you put the weight of CPU higher than the weight of memory while the applications are memory-intensive, you will have misleading information in your report. From there you can work on one or more profiles that you wish to report on.
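To make the weighting point concrete, here is a tiny sketch of how a weighted composite score behaves. The metric names, weights, and rating bands are made up for illustration; Stratusphere's actual UX score formula is its own.

```python
# Illustrative weighted UX score; all names and numbers are hypothetical.
def ux_score(metrics, weights):
    """Weighted average of per-metric scores (0-100, higher is better)."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

def label(score):
    # Simple banding into Good/Fair/Poor style ratings.
    return "Good" if score >= 80 else "Fair" if score >= 60 else "Poor"

# A memory-hungry workload: memory scores poorly, CPU is fine.
metrics = {"cpu": 95, "memory": 40, "disk_io": 85}

# CPU weighted highest: the report says "Good" even though memory hurts.
print(label(ux_score(metrics, {"cpu": 3, "memory": 1, "disk_io": 1})))  # Good
# Memory weighted highest: the same desktop now rates only "Fair".
print(label(ux_score(metrics, {"cpu": 1, "memory": 3, "disk_io": 1})))  # Fair
```

The same measurements, two different verdicts, which is exactly why the weights should follow what your applications actually depend on.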

If this information is not clear beforehand (and this happens a lot), start with the defaults and take some more time to analyze the data.

When all is running, go to the Stratusphere UX part, Diagnostics, VDI UX Profile. You will see a screen similar to this one.

VDI UX Profile

So, now we have some information about the what. Let's get to the how part and take it for a spin.

On to the Bat..erm..Demo Lab

We have a Horizon Enterprise 7.5 demo lab environment, with instant clone and full clone Windows 10 desktop pools, plus UEM, NSX, and others. I will be deploying the Stratusphere virtual appliances here. Beforehand, use the Liquidware Stratusphere sizing calculator (http://www.liquidware.com/products/stratusphere-sizing-guide) to size the appliances for the environment. This is a demo lab, so I won't require a lot.

Use the following table for the OVF deployment links:

Stratusphere Hub Appliance:

http://download.liquidwarelabs.com/6.0.1/stratusphere_hub.ovf

Stratusphere Database Appliance*:

http://download.liquidwarelabs.com/6.0.1/stratusphere_database.ovf

Stratusphere Collector Appliance*:

http://download.liquidwarelabs.com/6.0.1/stratusphere_collector.ovf

After the OVF deployment, check the configuration and start the appliance; it will boot up and do all the initial configuration. When the console shows, press the ALT-F2 combination.

Log in with ssconsole and sspassword. From the menu, set the network configuration accordingly, set the FQDN, the callback address (FQDN preferably), and so on. Write the configuration to finish.

Stratusphere Console

And with this, the appliance is good to go. Point your browser to the address shown (or the one you set, as I was a bit fast with the capture of the image above), log in with the default ssadmin and sspassword combination, and you are ready to rock. I have connected to vCenter and Active Directory for the virtual machine and user side of the setup.

Add clients to the environment

For Stratusphere to collect some data, we should add some clients and components to the environment. Next to the valuable client information, we can also add directory and virtual infrastructure data. Do this as well if you are investigating virtual desktop UX. Add clients by downloading and installing the CID, and let Stratusphere collect some data. For our full clone desktops, I have used the standard group policy method, with the installation package saved to a location that can be accessed via the network (a share). Use the ADMX template that can be downloaded there as well.

Create a computer policy with this template. Set the Hub Address to enabled and fill in the FQDN/IP (do this for 32-bit and 64-bit if you have both), and set the software settings installation package part to point to the share.

For our instant clone desktop pools, I just installed the Windows version manually in the master image. When sealing the template, run install-location\Liquidware Labs\Connector ID\admin scripts\VMwareView_MasterImagePrep.bat. This stops the services and sets them to manual. After sealing, create a snapshot and push this image to the pool. In the post-synchronization customization, run the post worker script, or something else that sets the Liquidware services back to automatic and starts them (in order: connector, UI, and update). This way each clone gets a unique computer ID.
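The post-synchronization step can be sketched as a small helper like the one below. The service names here are illustrative placeholders, not the real ones; check the agent's admin scripts folder for the actual names your version uses.

```python
# Hypothetical post-sync helper: re-enable and start the Connector ID
# services in order (connector, UI, update). Service names are made up.
import subprocess

SERVICES = [
    "Connector ID service",   # placeholder: the connector itself
    "Connector ID UI",        # placeholder: the user interface part
    "Connector ID updater",   # placeholder: the update service
]

def enable_and_start(run=subprocess.run):
    """Set each service back to automatic start, then start it, in order."""
    for svc in SERVICES:
        run(["sc", "config", svc, "start=", "auto"], check=True)
        run(["sc", "start", svc], check=True)

# Dry run for illustration: record the commands instead of touching services.
calls = []
enable_and_start(run=lambda cmd, check: calls.append(" ".join(cmd)))
print(len(calls))  # two commands per service
```

On an actual clone you would call `enable_and_start()` without arguments so the `sc` commands really run; the dry run just shows the order.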

As you see in the above list, our demo environment currently contains one full clone, the base image, and three instant clone desktops.

Collect information and assess those insights

Allow some time for these products to collect and interpret the data. For Stratusphere, and all similar products, I advise a minimum of one business cycle. On average a business cycle is four weeks, or a month. The bare minimum ranges from a few hours to two weeks; however, those results come with a little "it depends". After a few hours, the first data will be there for operators to take a look at. For example, you can use this dataset to check whether the expected desktops are checking in. For assessing and analyzing the data, take more time: the more measurements, the more reliable the data. Preferably plan for a period in which mission- or business-critical processes are happening.

If you have a clear understanding, you can use the data to report on. And in a later phase, you can also integrate with systems management products or IT help desk systems via SNMP and RSS feeds.

You can see your UX profile reflected in, for example, the UX score. The UX score is a composite metric of resource consumption, with the ability to identify resource constraint limits and sort machines using a weighted rating system of Good, Fair, Poor (or A+, A, A-, B+, B-, C as labeled), depending on the UX score/profile. When your UX profile is a good match, your VDIs will get good labels. Easy to spot.

What’s Next?

Keep using Stratusphere UX to fully enable your organization in their digital workspace management with performance monitoring. Use Stratusphere when doing platform (re-)design, lifecycle management, and optimization. Check what impact security changes have on the UX of your users' desktops before you implement those changes. Integrate with your helpdesk system and other operational management solutions. And so on, in many other areas.

Is there any UX insight out there? Gain Visibility with Liquidware Stratusphere UX – Part One

This is the first blog post of a series on the Liquidware products, after joining the Liquidware Tech Insiders program and having Liquidware as a blog partner. A little disclaimer though: the opinions expressed on this blog are mine.

I want to take you on a walkthrough of one of the critical components of a digital workspace: User Experience. In this series I will discuss the whys and whats (this part), and how Liquidware Stratusphere UX can be of great help, by showing you how the product works, how to look for influences on UX versus the configuration and security of the digital workspace, and how to measure this (the following parts). Now let's start with a basic definition.

Why and what is that User eXperience (UX)?

UX, or better, User eXperience, is the most important success factor of a digital workspace environment. And to make things more complicated, the biggest part is managing human perception and expectations. If there is an unexpected penalty that worsens the UX, it's pick-axes and torches and you have a revolt on your hands. Farewell productivity. End users expect the same user experience as working directly on a high-end device (like the ones they buy for personal use): the one-to-one mapping, or fast interaction and performance. Digital workspace UX from the user perspective is very simple: the end users need their digital workspace and applications to look super duper awesome and respond in warp-speed fashion while offering the highest hyper-full-very-high-K definition quality for graphics and audio… period. For businesses that brings some complications. Mostly the offset between the investments needed for that superduper digital workspace and the available budget. And something to make this measurable, so a) we know the KPIs the business can steer upon, b) we know where bottlenecks occur or will occur in the near future, and maybe c) we can prove that there is nothing wrong and everything is running within parameters. But that last one is again a lot of managing human perception.

Continue reading Is there any UX insight out there? Gain Visibility with Liquidware Stratusphere UX – Part One

vROPS for Horizon: Getting more with NVIDIA vGPU insights

With multimedia requirements becoming more of a commodity for virtual desktop use cases (where previously just a few users needed multimedia), hardware graphics acceleration is now used for the complete virtual desktop estate. With that, we also need more insight into how the (virtual) GPUs are behaving. Out of the box, vROPS for Horizon does not give you insights into GPU performance. The only GPU-related performance metrics are, for example, in-guest (perfmon, GPU-Z, or GPU sizer), GPU hardware stats with nvidia-smi, or other desktop metrics (display protocol session details or compute resource usage). But with the release of the NVIDIA Virtual GPU Management Pack, we also have the option of getting these insights in vROPS for Horizon. This management pack brings a new level of visibility into the health, performance, and efficiency of your virtual desktop estate with NVIDIA virtual GPU, right down to the application level.

The NVIDIA Virtual GPU Management Pack supports all of NVIDIA’s virtual GPU products, including the Quadro Virtual Data Center Workstation (Quadro vDWS), the NVIDIA GRID Virtual PC (GRID vPC) and the NVIDIA GRID Virtual Apps (GRID vApps) products.

Let us see how we can make the vROPS for Horizon NVIDIA vGPU Management Pack work.

Continue reading vROPS for Horizon: Getting more with NVIDIA vGPU insights

Migrating Horizon Databases

We have several components in a Horizon environment that utilize databases, and there are quite a few situations where those use external databases. With external databases, organizations often use Microsoft SQL Server. And with external databases, like any others by the way, requirements might change, or lifecycle management of MSSQL or the underlying Windows requires the databases to be migrated. And with that… what better way than to write it all down in a post.

Before starting your migration, be sure to do an interoperability check with your target solution. Horizon versions, or other VMware products for that matter, don't always have the newest support from other vendors right away; this takes some testing and certification and might take a while. But after all is checked, including other components that might consume these databases, we can start the migration.

Continue reading Migrating Horizon Databases

vROPS: Upgrading vROPS for Horizon 6.5 and vROPS 6.6

As announced at https://blogs.vmware.com/euc/2017/09/vrealize-operations-for-horizon-published-apps-6-5.html, vROPS for Horizon 6.5 was released on 21 September. Next to some expected improvements, there are two bonuses to this upgrade:
– one, you can upgrade to vROPS 6.6, which was not supported with vROPS for Horizon 6.4.
– two, you can use the NVIDIA Virtual GPU Management Pack to get some long-wished-for insights into GPUs in Horizon environments. This one I will describe in a later blog post.
– and maybe three, support for the current App Volumes and Unified Access Gateway versions. They were working in vROPS for Horizon 6.4, but not with supported versions.

The starting point for vROPS for Horizon 6.5 is either green-fielding straight to version 6.5, in which case you don't need this blog post, or starting from a current vROPS for Horizon 6.4 installation and upgrading. Upgrading to vROPS for Horizon 6.5 is step one; upgrading to vROPS 6.6 is optional but highly recommended. Both are described in this blog post.

Continue reading vROPS: Upgrading vROPS for Horizon 6.5 and vROPS 6.6

Horizon 7.2: With a little helpdesk from my friends

On June 20th the latest version of Horizon was released, namely Horizon 7.2. The highlights of this release include the added Horizon Help Desk tool and the general availability of the Skype for Business enhancements, which enable Horizon users to use Skype in a production environment. You can, for example, find the VMware Virtualization Pack for Skype for Business in the Horizon Agent installer.

Both features are things organizations often asked about, so it is good that they are included in this release. Other somewhat important items are the usual upgrade release updates, and scale and product interoperability improvements. As expected and delivered; nothing fancy here.

Helpdesk Login

Continue reading Horizon 7.2: With a little helpdesk from my friends

vRealize Log Insight broadening the Horizon: Active Directory integration deploy VMware Identity Manager

At a customer I am working on the design of vRealize Log Insight. For the authentication objective we can choose from the sources local, Active Directory, or VMware Identity Manager. In the latest release (4.5) it is clearly stated that configuring Active Directory authentication directly from Log Insight is deprecated.

Deprecated vRLI

Edit: Unlike some earlier information going around, Active Directory directly from Log Insight is still supported. Quote from the updated VMware Knowledge Base article: "Although direct connectivity from VMware vRealize Log Insight to Active Directory is still supported in Log Insight 4.5, it may be removed in a future version."

But I think it will still be very beneficial to move to vIDM sooner rather than later.

Continue reading vRealize Log Insight broadening the Horizon: Active Directory integration deploy VMware Identity Manager

VCAP-DTM Deploy Prep: Horizon Lab on Ravello Cloud importing OVA

In my last post I wrote about creating a lab for your VCAP-DTM prep. Read it here: VCAP-DTM Deploy Prep: La La Land Lab and Horizon software versions. In that post I mentioned the cloud lab option with Ravello Cloud that I'm using myself. With appliances there are some "oh, did you look at this" moments while deploying them on Ravello Cloud. There are two or three appliances to take care of, depending on your chosen architecture: vROPS, vIDM, and VCSA. Two of those you can also do on a VM: vCenter on Windows and vROPS on Windows or Linux. For vROPS, 6.4 is the last version with a Windows installer.

I personally went with one vCenter on Windows combined with composer (Windows only), so I will skip that one. For vIDM you will have to use the OVA.

Okay, options for OVAs and getting them deployed: 1) directly on Ravello, 2) use a nested hypervisor to deploy to, or 3) use a frog-leap with a deployment on vSphere and upload from there to Ravello. The first is what we are going to do, as the second creates a dependency on a nested hypervisor (wasting resources on that layer, getting the data there, the traffic data flow), and for this lab I don't want the hypervisor to be used for anything other than the Composer actions required in the objectives. The third, well, wasn't there a point to putting labs in Ravello Cloud?

Now how do I get my OVA deployed on Ravello?

For this we have the Ravello import tool, where we can upload several VMs, disks, and installers to the environment. We first need the install bits for Identity Manager and vROPS, downloaded from my.vmware.com.

In Ravello Cloud go to Library – VM – +Import VM. This will either prompt you to install the Ravello Import Tool (available for Windows and Mac) or start the import tool.
In the Ravello import tool click Upload (or Upload a new item). This opens the upload wizard. Select the Upload a VM from an OVF, OVA or Ravello Export File source, and click start to select your OVA location.

Grab Ravello Import Wizard - VM from OVA

Select the vIDM OVA and upload.

Grab - Ravello Upload There she goes

But are we done?
No, grab vROPS as well.

Grab - Ravello Upload vROPS as well

When the upload is finished we need to verify the VM. As part of the VM import process, the Ravello Import Tool automatically takes the settings from the OVF extracted out of the OVA. Verify that the settings for this imported VM match its original configuration, or the one you want to use. You can verify at Library – VM. You will see your imported VMs with a configuration icon. Click your VM, select the configuration, and go through the tabs to check. Finish.

It normally imports the values from the OVF, but it will sometimes screw some up. When there are multiple deployment options, as with vROPS, you will have to choose the default size: the vROPS import will be set either to extra small (2 vCPU, 8 GB) or to very large. Or use the size you like yourself. The same goes for the External Services; I won't put them in (yet). Checking the settings from the OVA yourself is covered in the next paragraph.

Now how do I get the information to verify against?

You can take it from the sizing calculations done while designing the solution ;). But another way is to look in the OVA itself. An OVA is just an archive format for the OVF and VMDKs that make up the appliance.

We need something to extract the OVAs. Use tar on any Linux/Mac or 7-Zip on Windows. I am using tar for this example on my Mac. First up: getting vIDM running in my test lab.

Open a terminal and go to the download location. Extract the OVA with tar xvf: xvf stands for extract, verbose, file, followed by the filename. Well, "verbosely extract file" is not quite in that order, but that's the way I learned to type it ;).

That gives us this:

Capture - tar - ova
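Since an OVA is plain tar, you can also poke at it programmatically. Python's tarfile module (handy on a Windows box without 7-Zip) gives the same listing; a minimal sketch using a dummy archive built in memory:

```python
import io
import tarfile

# Build a dummy "OVA" in memory: just a tar archive holding an OVF
# descriptor, a manifest, and some VMDK disks (contents are placeholders).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ["appliance.ovf", "appliance.mf", "system.vmdk", "db.vmdk"]:
        data = b"placeholder"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Reading it back gives the same listing 'tar tvf' shows on the command line.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = tar.getnames()
print(names)
```

For a real appliance you would open the downloaded .ova file by path instead of the in-memory buffer.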

Here we see the appliance has four disks: the system, db, tomcat, and var VMDKs.

If we look in the OVF file (use vi), at the DiskSection we see that the system disk needs to be in front and bootable, followed by db, tomcat, and last var.

Still in the OVF file, next up: note the resource requirements for the vIDM VM. We need those figures later on to configure the VM with the right resources. In the VirtualHardwareSection you will find the Number of virtual CPUs and Memory Size items. We need 2 vCPUs and 6 GB of vRAM (6144 MB), and one network interface, so reserve one IP from your lab IP scheme. Okay, ready and set, prepping done.
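Those figures can also be pulled out of the descriptor without scrolling through it by eye. A sketch against a cut-down stand-in OVF: real descriptors are much larger and wrap these elements in RASD namespaces, which are omitted here for readability.

```python
import xml.etree.ElementTree as ET

# Cut-down stand-in for an OVF VirtualHardwareSection; element names are
# simplified (a real OVF uses namespaced rasd: elements).
OVF_SNIPPET = """
<VirtualHardwareSection>
  <Item><Description>Number of virtual CPUs</Description><VirtualQuantity>2</VirtualQuantity></Item>
  <Item><Description>Memory Size</Description><VirtualQuantity>6144</VirtualQuantity></Item>
</VirtualHardwareSection>
"""

root = ET.fromstring(OVF_SNIPPET)
specs = {item.findtext("Description"): item.findtext("VirtualQuantity")
         for item in root.findall("Item")}
print(specs)  # the 2 vCPU / 6144 MB figures to configure the VM with
```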

Deploying a VM from the Library

Go to the application you want to add the VM to. Click the plus sign and select the imported VM from the list. In the right pane customize the name, network, external settings and all the things you like to have set.

Grab - Ravello Add imported VM to App

Save and update the Application.

Wait for all the background processes to finish, and the VM is deployed and starts. Open a console to check if the start-up goes accordingly. And it will not 😉 When you have opened a console you will notice a press-any-key message: the appliance fails to detect VMware's hypervisor, and you are not supposed to run the product on this system. When you continue, the application will run in an unsupported state. But we are running a lab, not production.

IF YOU ARE READING THIS BLOG AND (MERELY) THINK ABOUT RUNNING PRODUCTION ON RAVELLO OR RUNNING PRODUCTION WITH THE IMPORTED VIDM LATER ON, GO QUIT YOUR JOB AND GO WALK THE WALK OF SHAME FOREVER.

Grab - Ravello Press Key

Press any key, if you can find the any key on your keyboard. And yes, you will have to do this every time you start up. Or use the procedure highlighted in this blog post https://www.ravellosystems.com/blog/install-vcenter-server-on-cloud/ to change /etc/init.d/boot.compliance (scroll to step 4, action 2 in the post, or to MSG in the file). Do it after you have configured the VM and the required passwords. But sssst, you didn't hear that from me…

Back to the deployment: configure the VM with hostname, DNS, and IPv4. Save and restart the network. After this, the deployment will continue with the startup.

And now you have a started appliance. Next we need the install wizard for vIDM. Go to the vIDM URL that is shown on the blue screen in the console, for example https://hostname.example.com. If this is the first time, it will start the install wizard. Put in the passwords you want, select your database, and finish.

After that you are redirected to the login screen. Log on with your login details and voila vIDM is deployed.

Grab - Ravello vIDM

Bloody Dutch in the interface: everything on my client is English except for the region settings. Have the "wrong" language order in Chrome and boom, vIDM is in Dutch. For the preparation, and the simple fact that I cannot find anything in the user interface when it's in Dutch, I want to change this. Change the order in chrome://settings – Advanced settings – Languages – Language and input settings – and drag English in front of Dutch. Refresh or click a different tab and voilà, vIDM talks the language required for the VCAP-DTM (and for finding stuff…).

Grab - Ravello vIDM English

Aaand the same goes for vROPS?

You can do the same with the vROPS deployment. Ravello doesn't support the OVF properties normally used for setting the vROPS appliance configuration, so you miss that nifty IP address setting for the vROPS appliance. At the same time, vROPS doesn't like changes too much; it breaks easily. But follow more or less the same procedure as for vIDM. For vROPS, set the Ravello network to DHCP and put in a reservation, so the IP is not shared within your lab and is shown in the remote console. The IP reservation is used in the appliance itself. It is very important that the IP is set correctly on first boot, or it will break 11 out of 10 times. I have also noticed that a static IP set in Ravello is not copied to the appliance; using DHCP for vROPS works more often.

And now for vROPS:

  • Press any key to continue the boot sequence.
  • The initial screen needs you to press ALT+F1 to go to the prompt.
  • The vROPS console password for root is blank the first time you log on to the console. You will have to set the password immediately, and it's a little strict compared to, for example, the vIDM appliance.
  • The appliance (hopefully) starts with DHCP configured, and you can open a session to the hostname.
  • [Optional, if you don't trust the DHCP reservation] Within the vROPS appliance, change the IP to static so it stays fixed within vROPS and will not break when IPs change. Use the IP it received from DHCP; do not change it, or you will have to follow the change-IP procedure for the master node (see a how-to blog post here: http://imallvirtual.com/change-vrops-master-node-ip-address/):

Changing vROPS from DHCP to static:
Run /opt/vmware/share/vami/vami_config_net. Choose option 6 and put in your IP values, choose option 4 and put yours in, change the hostname, and so on.

Next, reboot the appliance and verify that the boot-up and IP address are correct. If you get to the initial cluster configuration, you're ready and set.

Other issues failing the deployment are resolved by redeploying the VM, sometimes by first re-downloading and re-importing the OVA into Ravello.

Grab - vROPS First Start

Do choose New installation and get it up for the VCAP-DTM objectives.

If you have enough patience and your application is not set to stop during the initial configuration, you will have a vROPS appliance to use in your Horizon preparations.

So appliances are no issue for Ravello?

Well, I do not know for all appliances, but for Horizon, the appliance-only components needed for a VCAP-DTM lab can be deployed on Ravello.

 

-Happy Labbing in Ravello Cloud!

 

Sources: ravellosystems.com, vmware.com

EUC Toolbox: Helpful tool Desktop Info

As somebody who works with all different kinds of systems, preferably from one client device, all those connected desktops look a bit the same at first glance. I want a) to see on which specific template I am doing the magic, b) to directly see what that system is doing, and c) to avoid breaking the wrong component. And trust me, the latter will happen sooner rather than later to us all.

dammit-jim

Don’t like to have to open even more windows or search for metrics in some monitoring application as it does not make sense at this time? Want to see some background information on what the system you are using is doing, right next to the look and feel of the desktop itself? Or keep an eye on the workload of your synthetic load testing? See what for example the CPU of your Windows 7 VDI does at the time an assigned AppStack is direct attached? And want to keep test and production to be easily kept apart in all those clients you are running from your device?

Desktop Info can help you there.

Desktop Info you say?

Desktop Info displays system information on your desktop in a similar way to, for example, BGInfo. But unlike BGInfo, the application stays resident in memory and continually updates the display in real time with the information that interests you. It looks like a wallpaper and has a very small footprint of its own. It fits perfectly for quick identification of test desktop templates with some real-time information. Or for keeping production infrastructure servers apart, or…

And remember it’s for information. Desktop Info does not replace your monitoring toolset, it gives the user information on the desktop. So it’s not just a clever name……..

How does it work?

Easy: just download, extract, and configure how you want Desktop Info to show you the… well… info. For example, put it in your desktop template for a test with the latest application release.

It can be downloaded at http://www.glenn.delahoy.com/software/files/DesktopInfo151.zip. There is no configuration program for Desktop Info. Options are set by editing the ini file in a text editor such as Notepad, or whatever you have lying around. The ini file included in the downloaded zip shows all the available options you can set. Think about the layout, top/bottom placement, colors, items to monitor, and WMI counters for the specific stuff. Using NVIDIA WMI counters here to see what the GPU is doing would be an excellent option. Just don't overdo it.

In the readme.txt, also included in the zip, there is some more explanation and examples. Keep that one close by.

capture-basicinformation

Test and save your configuration. Put Desktop Info in a place or tool so that it starts with the user session that needs this information, for example in a startup folder, via a shortcut, or as a response to an action.

Capturing data

You have the option to use Desktop Info with data logging for reference. Adding csv:filename to items will output the data to a CSV-formatted file. Just keep in mind that the output data is the display-formatted data.
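Because the values land as display-formatted strings, strip the units before doing any math on them. A small sketch (the column names and values are illustrative; the actual columns depend on which items you log):

```python
import csv
import io

# Fake log with display-formatted values, the way a CSV data log would
# contain them: percentages and sizes as strings, not numbers.
log = io.StringIO("time,cpu,mem\n10:00,25%,1.2GB\n10:01,80%,1.3GB\n")
rows = list(csv.DictReader(log))

# Strip the '%' before treating CPU as a number.
cpu = [int(row["cpu"].rstrip("%")) for row in rows]
print(max(cpu))
```

With a real log file you would pass `open("desktopinfo.csv")` to `csv.DictReader` instead of the in-memory sample.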

– Enjoy!

vROPS – survive the Kraken – Endpoint Operations Example

Guess who’s back, back again…

Next to doing End User Computing engagements, where user experience, performance, and capacity management are an integral part, I am occasionally involved in separate operations management engagements. And with VMware, vRealize Operations Manager, or vROPS, often shows its face. As some will have noticed, I have mentioned vROPS in articles on this blog before. This time I am going to dive a little into growing some more tentacles and getting some more insights besides our old vSphere friend.

So how does this getting more insights work again?

Great, you asked! First get your vROPS up and running, configured, customized, and showing the insight information from vSphere you wanted, in the right places. Not there yet? Well, stop reading here and go directly to jail. Do not pass Go. Stop it and slowly turn away from the keyboard. As mentioned, it can be very helpful to have more insights before you create something that jumps back at you and eats you…

Still reading? Okay, I guess you're ready, just curious, or thinking ahead. Like the vSphere adapter that is included in vROPS as standard, you can add solutions (or adapters) from management packs, or are they called extensions (still following?), to collect information from other sources. Most of the time "the other" data sources are the management components for those specific components or layers. For example, to get EUC information from Horizon into vROPS, use vROPS for Horizon and connect to a broker agent on the Connection Server (management layer) and an agent in the desktop or published application. And what that name does not show at first glance: vROPS for Horizon can also bring in insights from XenApp and XenDesktop.

Anyhow, why would I need this, isn't the vSphere adapter showing everything from my virtual infrastructure, you ask? Well no, not everything. The vSphere adapter creates visibility for the vSphere layer, that is, the hypervisor and management. It shows information from the hypervisor and management about storage, networking, and virtual machines, BUT only from the view of vSphere. Storage? Yes, datastores, but not how your storage infrastructure or vSAN is behaving. Networking? Yes, vSwitches, but not how your network devices or NSX are behaving. VMs? Yes, virtual machines, but not what is happening in-guest. And so on. You can get all that, but you need solutions for it. And size accordingly. And customized dashboards or reports that actually show something of interest. And oh yes, the correct vROPS edition license.

Getting in guest insight via Endpoint Operations Management

In the old days, before vROPS 6.1, when you wanted in-guest metrics for applications, middleware, and databases, you could get the Hyperic beast out. With the 6.1 release of vROPS, VMware merged some of the Hyperic solution into vROPS. This makes it a lot easier to get a view through the vROPS management interface all the way up (or down) to services, processes, and the application layer. However, you still have to do a lot of customizing to show something interesting.

servicesdashboard

Fortunately, the Solution Exchange shows more and more application services being integrated with vROPS via the Endpoint agent, for example:

  • Active Directory
  • Exchange
  • MSSQL Server
  • IIS
  • Apache Tomcat
  • PostgreSQL
  • vCenter

Visit the VMware Solution Exchange for the latest versions. Note that the vCenter Endpoint Operations solution shows up as a standard management pack, but vROPS needs an Advanced edition license for the Endpoint integration to be shown; the documentation is not quite open about that.

Yeah yeah, enough. Show an example please, and get me an in-guest metrics recipe

What ingredients do we need?

1 tablespoon of vROPS evaluation, or a minimum of Advanced edition
1 teaspoon of the Endpoint Operations Management solution
2 drops of the Endpoint agent deployed on a virtual machine
1 gram of a user with permission to register agents, configured on vROPS
100 ml of a Solution Exchange application-layer something specific (or your own home-built something specific)

Stir and let it rest for a while.

vROPS you probably have running in a test setup, or you can deploy it as an OVA in a PoC. Just a little warning upfront if you are not in a test or PoC setup: solutions/management packs are added to vROPS easily; removing them is not an easy task.

You will need a minimum of one node; a remote collector as a data node is preferable. The Endpoint Operations Management solution is installed with vROPS and needs no specific configuration of the solution itself. The agents are downloaded from my.vmware.com. There are Linux and Windows platform versions, with or without JRE, as installation packages or just data bundles. Use what you like or what fits your application provisioning. I go for the JRE bundles.

And yes, I hear you: another agent?!? Yes, unfortunately you currently still need the Endpoint agent. A big-ass agent/VMware Tools integration is not there yet; we need a little patience for that.

For the user, create an Endpoint Management role with permissions under Administration – Manage Agents and Environment – Inventory Trees. Add this role to the user you are planning to use. This user is added to every agent's configuration.

If you have a firewall or other ACLs in between your endpoint agents and the vROPS remote collector or data node(s), open up HTTPS (443) from the endpoint group range to the remote collector or data node(s).
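As a sketch, assuming the agents live in a hypothetical 10.0.20.0/24 subnet and a Linux host firewall in front of the collector, the rule would look something like this in iptables-save format (subnet and placement are illustrative, not from the product docs):

```
# Allow endpoint agents (example subnet) to reach the collector on HTTPS
-A INPUT -p tcp -s 10.0.20.0/24 --dport 443 -m state --state NEW -j ACCEPT
```

On a dedicated firewall the equivalent is a single allow rule from the agent range to the remote collector or data node VIP on TCP 443.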

Manually Installing vRealize End Point Operations Agent

Manually installing and updating the vRealize End Point Operations agent is only needed for VMs that are not deployed via automation, where there is no application provisioning like SCCM, or that have an issue where a reinstall is needed. Yes, you can also use the MSI or RPM, but with the plain files you get a little insight (you see what I'm doing?) into how the agent works.

Note: preferably the agent is not installed in a template. When the need arises for the EP Ops agent to be installed in a system that is cloned, do not start EP Ops, or remove the EP Ops token and the data/ directory prior to cloning. If it is started, a client token is created and all clones will show up as the same object in vROPS.

Windows 64-bit Agent

You will need an installation user with permissions to copy files, change owner/permissions on the server, install a service, and start the service.

Copy the following files from a central file repository:

  • Copy and extract softwarepackages/vRealize-Endpoint-Operations-Management-Agent-x86-64-win-.zip. Place the files in, for example, D:\Program Files\epopsagent
  • Edit the agent.properties file in conf/ and put in the following as a minimum:
    • setup.serverIP=the data node or LB VIP to connect to
    • setup.serverLogin=the user with the role to register agents on vROPS
    • setup.serverPword=the password of that user
    • setup.serverCertificateThumbprint=the SSL certificate thumbprint of the node to connect to (the one you entered above)
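Putting those four settings together, a minimal agent.properties could look like this (hostname, user name, and placeholder values are made up for illustration):

```
setup.serverIP=vrops-collector.example.local
setup.serverLogin=epops-register
setup.serverPword=<password of that user>
setup.serverCertificateThumbprint=<SHA-1 thumbprint of the node above>
```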

Note on the password: it can be added in plaintext. When the agent is installed and started for the first time, the password is encrypted. The key is stored in the agent.scu file in the conf/ directory. You can distribute the agent.properties and the .scu file from a central location and copy these into the conf/ directory. (Linux uses a different .scu file, but the agent.properties can be the same.)
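If you don't have the certificate thumbprint handy, one way to read it, assuming openssl is available and you swap in your own node's hostname (the one below is a placeholder), is:

```shell
# Print the SHA-1 fingerprint of the certificate presented on port 443.
# vrops-node.example.local stands in for your data node / remote collector.
HOST=vrops-node.example.local
echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout
```

Paste the hex fingerprint it prints into setup.serverCertificateThumbprint.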

  • Open a Command Prompt
  • Go to bin\
  • Run epops-agent.bat install
  • Run epops-agent.bat start

[Image: epops-agent.bat output]

Linux Agent

For the Linux agent use the same flow as the Windows agent, with just a few differences:

  • Copy and extract the tarball to the install location, for example /opt/vmware/epops-agent
  • Copy the configuration files (agent.properties and the .scu file) to conf/
  • Go to bin/
  • Run ep-agent.sh start (no need for an install step)

Monitoring specific Windows Service or Linux Process

The current configuration of the agent does not include autodiscovery of Windows services or Linux processes.

The reason is that monitoring all services is simply not an option from a monitoring standpoint. It is more useful to monitor specific groups of Windows services or processes that actually contribute to, or have a direct relation with, the hosted service that needs to be monitored.

[Image: monitor Windows service]

Follow these steps to monitor a specific Windows service or Linux multiprocess:

  • Go to Environment – Operating Systems – Operating System World – Windows / Linux
  • Select the VM hostname
  • Actions – Monitor OS Object – Monitor Windows Service
  • Fill in the details; the service_name must match the Windows service name.

[Image: service details]

Note: for autodiscovered services (when the agent.properties value autodiscovery is true), services are discovered by their Windows service name. As the service name does not contain a server name, identical services on different machines all get the same name; each will have a different node in the inventory hierarchy, but in the services view all services are shown without the node hierarchy. For example, monitoring the Windows Time Service from three hosts will create three entries named Windows Time Service in this view. You can change the service display name before services are discovered so that the server name is included. Please see the Microsoft documentation on changing service names.

Adding a monitoring Solution

Installing a solution that monitors a service via the Endpoint agent will show you some nice metric additions, or at least some additional pointers on how to get, and in some cases display, application insights you can use for your own purposes.

All these packs can be downloaded from the Solution Exchange as *.pak files. They are installed via vROPS Administration – Solutions – Add Solution; follow the details there.

Be sure you download the Endpoint Operations versions of the packs; there are also Hyperic versions still around. The latter you don't need.

– Happy fishing

Sources: pubs.vmware.com, blogs.vmware.com, solutionexchange.vmware.com