Product Evaluation: Inuvika Open Virtual Desktop (OVD)

Occasionally I get a request, or some urge bubbles up in me, to look at vendor X with its product Y. And there is nothing wrong with that, as I like to keep a broader view on things and not just bet on one horse.

And so a request from Inuvika found me, asking to look at their evolution of the Open Virtual Desktop (OVD) solution. Okay, using the virtual desktop and application delivery trigger words will get my attention for sure. Kudos for that. On top of that, the name Inuvika got my curiosity running in a somewhat higher gear. No problem, I will take a peek and see if I can brew up a blog article at the same time. That "same time" was almost a year ago….. but I still wanted to take that peek. You will probably figure out that letting you read about OVD is a little bit overdue. Sorry for the delay….

A little notice up front: this blog post is my view only and was not paid for, pre-published or otherwise influenced by the vendor. Their opinion might differ. Or not.

Wait what… Inuvika you say?

Yes, Inuvika (ĭ-noo′vĭk-ă). If you open up your browser you can learn that the company name is based on the Canadian town Inuvik, where it can be very cold. And where, for 30 days of the year, the sun doesn't rise above the Horizon (*wink* *wink*). In such a place you need a strong community and a collaborative approach to be able to live in so harsh an environment. Their product strategy is the same: offering an open source solution and collaborating with the community out there (however, the separate community version and site is dead).
The Inuvika mothership is based in Toronto, so hopefully that doesn't lose a bit of the magic just introduced ;). But wherever they are based, it does not change the approach of Inuvika.

Main thing: the guys and gals from Inuvika are where you can get the Open Virtual Desktop from. Go to https://inuvika.com/downloads to download your version. Or take a peek around the site.

Open Virtual Desktop sounds interesting enough, show me

Glad you asked. Let's find out. We have the option to use a trial version for evaluation purposes, an enterprise license or the cloud version. I like it when we can find out a little about the bits and bytes ourselves, so I will be downloading OVD. But first up some architecture, to know what screws and bolts we need, or can opt out from.

Architecture

The following diagram has been taken from the architecture and system requirements document and shows the components and the network flow for the system.

OVD-Architecture Overview

The OVD Roles:

  • The OVD Session Manager is the first required component. The OSM will be installed prior to the other components. As the master of puppets it's the session broker, the administration console and the centralized management of the other OVD components.
  • The OVD Application Server is one of the slave servers that communicate with the OSM. The OAS is the component that serves the application and desktop pools to the users, accessed from either the web portal or the OVD Enterprise client. OAS is available in a Linux or Windows flavor. OAS servers can be pooled together and load balanced from the OSM. However, you will need Enterprise for that, as Foundation is limited to one application server (seriously, just one?).
  • The OVD Web Access. OWA is responsible for managing and brokering web sessions. Now where did we see that abbreviation before… Either using Java (going away in a next release) or HTML5, SSL tunneled if required. If using OVD clients only, this component is not needed. OWA also offers a JavaScript API to integrate OVD with other web-based applications.
  • The OVD File Server. The OFS component offers a centralized network file system to the users of the OAS servers, keeping access to the same data independent of the OAS the user is on. Data can be user profiles, application data or other company data. The data is only accessible from the OAS sessions and is not published in another way, like a content locker or Dropbox.
  • ESG (hey, wait, no O-something-something). The Enterprise Secure Gateway is used as a unified access layer for external, but optionally also internal, connections. ESG tunnels all the OVD connections between the client and itself over an HTTPS session. So from any location, users that have access to HTTPS (443) will also be able to start an OVD session. If not using ESG tunnels, the OVD client will need to have HTTPS and RDP open to the OAS. Requires the Enterprise license.
  • Further, 2.3.0 brings a tech preview of OWAC, the Web Application Connector, offering SSO integration as an identity appliance.

All components run on a Linux distribution, supporting the flavors RHEL, CentOS and Ubuntu LTS. The only component where Windows will be used is when OAS is offering Windows desktops or Windows-based applications on RDS services. Supported RDS OS versions are Windows 2K8R2, W2012 and W2012R2. Isn't it time for Windows 2016 by now?

In the OVD architecture we see the sort of familiar components that we see in similar virtual desktop solutions, only with a bit of different naming. At first overview the OVD architecture looks like what we are used to; no barriers to cross here.

In a production environment the Inuvika OVD installation will use several servers, each doing its specific role. Some roles you will always see in an OVD deployment. Others are optional or can be configured to run together with other roles. And external dependencies enter the mix, with load balancers in front of OWA for example. Small shops will have some roles combined while running a smaller number of OAS servers.

It all depends on the environment size and requirements you have for availability, scalability, resilience, security and so on.

Into the Bat-lab

Come on Robin, to the Bat Cave! I mean the test lab. Time to see OVD in action and take it for a spin. Lab action that is; however, Inuvika also offers access to a hosted demo platform if you don't have a lab or test environment lying around. From the download page https://inuvika.com/downloads you can download the Demo Appliance or register for the OVD full installation. I will use the demo appliance for this blog post, as I would probably be installing multiple roles on the same virtual machine anyway. The Demo Appliance is a virtual machine with the following OVD roles installed:

  • OVD Session Manager (OSM)
  • OVD Web Access (OWA)
  • OVD Application Server for Linux (OAS)
  • OVD File Server (OFS)

I will be using my Ravello Cloud vTestlab to host the OVD. So first I have to upload the OVA into the Ravello library. Once available in Ravello I can create a lab environment. I could just import the OVD, but I also want to see some client and AD integration if possible. So I added my vTestlab domain controller and Windows 10 clients into the mix.

Invuvika Demo Lab

Let's see if I can use them both, or whether I am wasting CPU cycles in Ravello. Good thing April is only halfway through and I still have 720 CPU hours remaining this month, so not much of a problem in my book.

When starting the OVD demo appliance it will boot into the Inuvika Configuration Tools. Choose your keyboard settings (US). And presto, the appliance starts up with the IP I configured while deploying the application.

OVD - Demo Console after start

Here you can also capture the login details for the appliance: inuvika/inuvika. The default user for the administration console is admin/admin. Open up a browser and point it to the FQDN or IP for web access: http://<your appliance>/. Here we are greeted by a page where we can start a user session, open the administration console, the documentation, and the installer bits for the Windows AS and the clients.
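If you want a quick sanity check that web access is answering before firing up a browser, a curl probe from any machine in the lab works too (the hostname below is just my lab example):

    # quick check that OVD web access answers; ovd.vtestlab.local is a lab example
    curl -I http://ovd.vtestlab.local/
    # expect a 200 OK, or a redirect to the web access page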

The user sessions offered in the demo appliance are based on the internal users and the internal Ubuntu desktop and applications. The client can be set to desktop mode, which is a virtual desktop with the applications published to the user. Or to portal mode, where the user is presented with a portal (so it's not just a clever name) with all their application entitlements. The client starts with Java to allow for redirecting drives; using HTML5 will not allow a drive to be redirected. The demo appliance is populated with demo users where the password is the same as the user name. Just enter cholland with password cholland in the client screen and you will be presented with a user session.

OVD Web login.png

And see the portal with the user's entitlements and the file browser for data exchange between sessions.

OVD Demo - Client Portal

Start up a Firefox browser session and open my blog. Yup, all works.

OVD - Client Firefox Blog

For using the Enterprise client the demo appliance needs to be switched to Enterprise. And you need a license for that! Via the admin console you set the system in maintenance mode. Via the appliance console, after logging in, you get the menu where you can choose option 3, Install OVD Enterprise. After this you can set the system back to production, are greeted by a subscription error, and via Configuration – Subscription Keys you can upload the license file. When a valid license is installed you can run the Enterprise client for your evaluation. The client options are somewhat similar to the web client, besides entering the site name in the client instead of a browser URL.

OVD Ent Client Login

We also have the administration console. While this has a bit more options, and I am not trying to rewrite the documentation, I will show some of the parts. Basically, try out the options yourself to see what the differences are.

We are greeted with an index page with an environment overview and user/applications publications. These will be the main actions when using the product. Of course we also have some menu options for reporting and configuration.

OVD - Admin Index

Let's see if we can get some AD users in and entitle them to the demo. It seems a lot of organizations have their identity source already in place, and Microsoft is commonly what is used there. The Configuration option seems like a logical place to start. And here we have the domain integration settings. Currently it is set to the internal database. Let's get some information into the Microsoft option to see if we can get the AD part in.

OVD - Configuration

I am using the internal users to keep it simple and leave in the support for Linux. This is a demo, not production.

When the information is added, push the test button to see if the LDAP connect and bind work. Save when all green. Problems here? Go to Status – Logs to see wtf is happening. Main issues can be DNS, time offset, or the service account not having the correct information or UPN in the domain. The OVD Linux bind command tries [email protected] hardcoded.
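To rule out the usual suspects from a shell (if the tools are available on your appliance), something along these lines helps. The DC name, service account and base DN below are purely illustrative for my vTestlab:

    # DNS: can the appliance resolve the domain controller?
    nslookup dc01.vtestlab.local
    # time offset: AD binds only tolerate a small clock skew
    ntpdate -q dc01.vtestlab.local
    # LDAP bind using the UPN format; -W prompts for the password
    ldapsearch -H ldap://dc01.vtestlab.local -D "svc-ovd@vtestlab.local" -W \
      -b "dc=vtestlab,dc=local" "(sAMAccountName=svc-ovd)" dn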

And voila, Administrator from the vTestlab domain has a session connected:

OVD - Administrator Session

My opinion about OVD

It works out of the box with any HTML5 browser. Or you can of course use the Enterprise client, but this will require an Enterprise license and RDP or i-RDP to the client desktops (or ESG to be SSL tunneled).

[Edit] I must correct my previous version, where I said that Inuvika is using RDP as an enterprise display protocol. That is not entirely true. OVD uses RemoteFX with the Enterprise Desktop Client and Windows Application Servers. RemoteFX is a set of technologies on top of RDP that enhances the visual experience significantly in comparison with the older (non-RemoteFX) RDP. Indeed better for the user experience; how much better we will leave up to the users. For Linux Application Servers there is no RemoteFX support yet, this is forthcoming.
[Close Edit]

For HTML browser user connections, or when using the Enterprise client in combination with the ESG, OVD utilizes HTTPS (tcp/443) and thus is road-warrior friendly. With road-warrior friendly I mean a service that is firewall friendly and makes hotel, Starbucks cafe or airport WiFi a place to use the environment without blockages, changing ports, VPN tunnels, or not being able to use the service remotely from that location.

For IT operations everything is in a single administration console. No scattered consoles or admin tools all over the place. And no dependencies, like the disliked Flash plugin for some other solution out there ;). Further, the expected components are there in a logical location.

Cross-publishing apps between distributions is a very nice feature. Windows in Linux or Linux with Windows apps, great. Or add web applications to the mix. Furthermore, Inuvika is not bound to a stack choice or hypervisor. VMware vSphere yes, Nutanix (Nutanix Ready AHV) yes, KVM, etc., yes.

The use cases, applications and desktops still have to be assessed and designed accordingly. And these will be the most important bits for the users; this is what makes or breaks an EUC environment. I don't see a lot of users who are now on Windows-based desktops and applications going to Linux desktops and apps without more or less resistance and opposition. That Windows will be in there for now. But this is the same for the other vendors, not much difference here.

I personally don’t know what the user experience is when doing your day-to-day working throughout the business cycle. I haven’t come across Inuvika OVD in the wild.

One of the strong points of going open source is that the product will be improved by the contributions of the community (if there still is a community version….). That will mitigate some of the above. But it also requires the OVD community to have a footprint of some sort for the required input and change. If the community is too small it will not be able to help Inuvika and the OVD user base.

I think cost-wise it will be interesting for some shops out there looking to replace their EUC solutions and in the meantime looking for ways to cut costs. These shops probably already had some issues and bad experiences with their current solution along the way. I do not think organizations happy with VMware Horizon or Citrix will be lining up to replace their EUC with Inuvika. Yet… that is.
This is a fast world, and it is interesting to see that there are vendors thinking outside of the paved roads. It makes their solution, but also other solutions, a better place for the users. It's the community and open source that is really interesting here. So just give it a go and see for yourself. Don't forget to share your experience with the community.

– Happy using your OVD from Inuvika!

Sources: inuvika.com.

EUC Layers: Dude, where’s my settings?

With this blog post I am continuing my EUC Layers series. As I didn't know that I had started one, there is no real order to follow, other than that it is somewhat from the user perspective, as that seems a big part of End User Computing. But I cannot guarantee that will be the right order at the end of things.

If you would like to read back the other parts you can find them here:

For this part I would like to ramble on and sing my song about an important part of the user experience: User Environment Management.

User Environment

Organisations will grant their users access to certain workspaces: an application, a desktop and/or parts of data required for, or supporting, the user's role within the business processes. With that, these users are granted access to one or more operating systems below that workspace or application. The organization would also like to apply some kind of corporate policy to ensure the user works with the appropriate level(s) of access for doing their job, keeping the organization's data secure. Or in some cases to comply with rules and regulations, thus making the user's job a bit difficult at the same time.

On the other side of the force, each user will have a preferred way of using the workspace and will tend to make all sorts of changes that enable them to work as efficiently as humanly possible. Examples of these changes are look-and-feel options and e-mail signatures.

The combination of the organization policy and the user preferences is the User Environment layer, also called persona or user personality.

Whether a user is accessing a virtual desktop or a published application, the requirement for a consistent experience for users across all resources is one of the essential objectives and requirements for End User Computing solutions. If you don’t have a way of managing the UE, you will have disgruntled users and not much of a productive solution.

Dude

Managing the User Environment

Managing the User Environment is complicated, as there are a lot of factors and variables in the end user environment. Further complexity is added by what needs to be managed from the organization perspective and what your users expect.

Next to this yet another layer is added to this complexity: the workspaces are often not just one dominating technology, but a combination of several pooled technologies. Physical desktop pools, virtual desktop pools, 3D engineering pools, application pools and so on.

That means that a user does not always log on to the same virtual desktop each time, or logs on to a published application on another device, still wanting to have the same settings in the application as in the application on the virtual desktop. A common factor is that the operating system layer is a Windows-based OS. The downside is: several versions and a lot of application options. We should make sure that user profiles are portable in one way or another from one session to the next.

When using different versions of pooled workspaces, it is absolutely necessary that the method of deploying applications and settings to users is fast, robust and automated. Both from the user context and for operational management.

Sync Personality

User Environment Managers

And cue the software solutions that abstract the user data and the corporate policies from the delivered operating system and applications. And manage it all centrally.

There are a lot of solutions that provide a part of the puzzle with profile management and such. And some provide a more complete UEM solution, like:

  • RES ONE Workspace (previously known as RES Workspace Manager)
  • Ivanti Environment Manager (previously known as AppSense Environment Manager)
  • Liquidware Labs ProfileUnity
  • VMware User Environment Manager (previously known as Immidio)

And probably some more…

Which one works best is up to your requirements and the fit with the rest of the used solution components. Use the one that fits the bill for your organisation now and in a future iteration. And look for some guidance and experience from the field via the community or the Intarweb.

User Profile Strategy

All the UEM solutions offer an abstraction of the Windows user profile. The data and settings normally in the Windows user profile are captured and saved to a central location. When the user session is started on the desktop, the context changes, an application starts or stops, or the session is stopped, interaction between (parts of) the central location and the Windows profile is done to maintain a consistent user experience across any desktop. Settings arrive just in time when they are needed, not bulk loaded at startup.

The Windows profile itself comes in the following flavours:

  • Roaming. Settings and data are saved to a network location. By default the complete profile is copied at log in and log out to any computer where the user starts a session. The bits that will or will not be copied can be tweaked with policies.
  • Local. Settings and data are saved locally on the desktop, and remain on that desktop. When roaming, settings and data are not copied, and a new profile is created with each new session.
  • Mandatory. All user sessions use a prepared user profile. All user changes done to the profile are deleted when the user session is logged off.
  • Temporary. Something fubarred. This profile only comes into play when an error condition prevents the user's profile from loading. Temporary profiles are deleted at the end of each session, and changes made by the user to desktop settings and files are lost when the user logs off. Not used with UEM.

The choice of Windows profile used with(in) the UEM solution often depends on the to-be architecture and the phase you are in: the starting point and where to go. For example, starting with the bloated and error-prone roaming profiles, running UEM side-by-side for capturing the current settings, and then moving to clean mandatory profiles. Folder redirection in the mix for centralized user data, and presto.

Use mandatory profiles as the de facto standard wherever possible; they are a great fit for virtual desktops, published applications and host/terminal servers in combination with a UEM solution.

The user profile strategy should also include something to mitigate the Windows profile versions. OS versions come with different profile versions. Without some UEM solution you cannot roam settings between a V2 and a V3 profile. So migrating or moving between different versions is not possible without tooling. The following overview is created with the information from TechNet about user profiles.

Windows OS and the corresponding user profile version:

  • Windows XP and Windows Server 2003: no suffix (first version)
  • Windows Vista and Windows Server 2008: .V2
  • Windows 7 and Windows Server 2008 R2: .V2
  • Windows 8 and Windows Server 2012: .V3 after the software update and registry key are applied (.V2 before)
  • Windows 8.1 and Windows Server 2012 R2: .V4 after the software update and registry key are applied (.V2 before)
  • Windows 10: .V5
  • Windows 10, 1607 and 1703: .V6
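A quick way to see which versions are in play is simply listing the central profile store; a hypothetical example (the path and user names are made up):

    rem list the profile store and check the folder suffixes
    dir /b \\fileserver\profiles$
    rem output like jdoe (XP era), jdoe.V2 (Windows 7) and jdoe.V6 (Windows 10 1607+)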

Next to that, UEM offers to move settings for the user context out of Group Policies and login/logoff scripts, again lowering the number of policies and scripts at login and logoff. And improving the user experience by lowering those waiting times: actually having what you need just in time when you need it.

And decide what your organization's user environment strategy is: what do you want to manage and control, what to capture for users and applications, and what not.

VMware User Environment Manager

With VMware Horizon often VMware UEM will be used. And what do we need for VMware UEM?

In short, VMware UEM is a Windows-based application, which consists of the following main components:

  • Active Directory Group Policy for configuration of the VMware User Environment Manager.
  • UEM configuration share on a file repository.
  • UEM User Profile Archives share on a file repository.
  • The UEM agent or FlexEngine in the Windows Guest OS where the settings are to be applied or captured.
  • The UEM SyncTool, for using UEM in offline conditions and synchronizing when the device connects to the network again.
  • UEM Management Console for centralized management of settings, policies, profiles and config files.
  • The Self-Support or Helpdesk Support Tool, for resetting to a previous settings state or troubleshooting by level 1 support.
  • The Application Profiler for creating application profile templates. Just run your application with Application Profiler and it automatically analyzes where the application stores its file and registry configuration. The analysis results in an optimized Flex config file, which can then be edited in the Application Profiler or used as-is in the UEM environment.

UEM will work with just the UEM shares and the engine components available to the environment. With the latest release Active Directory isn't even a required dependency, thanks to the alternative NoAD mode. The last three components are for management purposes.
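As a minimal sketch of those two file shares (paths, share names and groups are examples; check the UEM install guide for the exact share and NTFS permissions):

    rem UEM configuration share: admins write, users read
    mkdir D:\UEM\Config
    net share UEMConfig$=D:\UEM\Config /grant:VTESTLAB\UEM-Admins,FULL /grant:"Authenticated Users",READ
    rem UEM profile archives share: users read/write their own archives
    mkdir D:\UEM\Profiles
    net share UEMProfiles$=D:\UEM\Profiles /grant:"Authenticated Users",CHANGE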

All coming together in the following architecture diagram:

UEM Architecture

That's it, no need for additional application managers and database requirements. In fact UEM utilizes components that organizations already have in place. Pretty awesomesauce.

I am not going to cover installation and configuration of UEM; there are already a lot of resources available on the big bad web. Two excellent resources are http://www.carlstalhood.com/vmware-user-environment-manager/ and https://chrisdhalstead.net/2015/04/23/vmware-user-environment-manager-uem-part-1-overview-installation/. And of course the VMware blogs and documentation center.

Important for the correct usage of UEM is to keep in mind that the solution works in the user context. Pre-Windows-session settings or computer settings will not be in UEM. And it will not solve application architecture misbehaviour. It can help with some duct tape, but it won't solve application architecture changes from version 1 to version 4.

VMware UEM continually evolves, with ever tighter integration with EUC using VMware Horizon Smart Policies, application provisioning integrations, application authorizations, new templates and so on.

Happy Managing the User Environment!

Sources: vmware.com, microsoft.com, res.com, ivanti.com, liquidwarelabs.com

EUC Toolbox: O Sweet data of mine… Mining data with Lakeside Software’s SysTrack

I have already covered the importance of insights for EUC environments in some of my blog posts. The tl;dr of those: if you don't have some kind of insight, you're screwed. As I find this a very important part of EUC and EUC projects, and see that insights are often lacking when I enter the ring…. I would like to repeat myself: try to focus on the insights a bit more, pretty please.

Main message: to successfully move to and design an EUC solution, assessment is key, as designing, building and running isn't possible without visibility.

Assessment Phase

The assessment phase is made up of gathering information from the business, such as objectives, strategy, business non-functional and functional requirements, security requirements, issues and so on. And mostly getting questions from the business as well. This gathering part consists of getting the information in workshops with all kinds of business and user roles, sending out questionnaires, and getting your hands on documentation regarding the strategy and objectives, plus some current state architecture and operational procedures explanation and documentation. Getting and creating a documentation kit.

The other fun part is getting some insights from the current infrastructure: getting data about the devices, images, application usage, logon details, profiles, faults etc. etc. etc.

Important when getting this data is the correlation of user actions with these subjects. It is good to know, when the strategy is to move to cloud-only workspaces, that there will be several thousand steps between how a user is currently using his tool set to support the business process and that business objective. An intermediate step of introducing an any-device desktop and hosted application solution is likely to have a higher success rate. Or, when wanting to use user environment management, knowing that the current roaming profiles are bloated to 300GB. But I will try not to get ahead of myself with theories; let's get some assessment data.

Data mining dwarfs

Mining data takes time

Okay, out with this one. Mining, or gathering, data takes time and therefore a chunk of project time and budget. Unfortunately with a lot of organisations it is either unclear what this assessment will bring, there are costs involved, or a permanent solution is not in place. Yes, there are sometimes point-in-time software and application reports that can be pulled from a centralized provisioning solution, but these often miss the correlation of that data to the systems and to what the user is actually doing. And then there is the fact that shadow IT is around.

Secondly, you need to know what to look for in the mined data to answer the business questions.

But we can help, and time and costs are more of a planning issue, for example when the effort required is not clear up front.

The process: day 1 is installation. After a week a health check is done to see if data is flowing into the system. At day 1+14 initial reports and modeling can be started. At day 30+ a business cycle has been mined. This means enough data points have been captured, desktops that are not often connected have had their connection (and thus agents), and there is enough variation for good analytics. Month start and month closing procedures have been captured. What about half-year procedures? No, they're not in a 30-day assessment when that period doesn't include that specific procedure. Check with the business whether those are critical.

Assess the assessment

First, what is your information need, and are there any specific objectives from the business they would like to see? If you don't know what you are looking for, the amount of data will be overwhelming and it will be hard to get some reports out. Also try to focus on what and how an assessment tool can do for you. Grouping objects in reports that don't exist in the current infrastructure or the organization structure will need some additional technical skills, or needs to be placed in the can't-solve-the-organization-with-one-tool category.

Secondly, check the architecture of the chosen tool and how it fits in the current infrastructure. You probably need to deploy a server, have a place where its data is stored, and need some client components identified and deployed. Check whether the users are informed; if not, do it. Are there desktops that do not always connect to the network, and how are these captured? Agents connect to the Master once per day to have their data mined.

Thirdly, check whether data needs to remain within the organization boundaries, or whether it can be saved or exported to a secure container outside the organization. For analyzing and reporting it will be beneficial for the timelines if you can work with the data offsite; it saves a lot of traveling time throughout the project.

Fourthly, what kind of assessment is needed? Do we need a desktop assessment, server assessment, physical-to-virtual assessment or something else? What kind of options do we have in gathering data: do we need agents, something in the network flow, etc. etc.? This kinda defines the toolbox to use. Check if the vendor and/or community is involved in the product; this can prove to be very valuable for getting the right data and the right interpretation of data in reports. Fortunately for me, the tool for this blog post, SysTrack, can be used for all kinds of assessments. But for this EUC toolbox I will focus on the desktop assessment part.

SysTrack via Cloud

VMware teamed up with Lakeside Software to provide a desktop assessment tool, free for 90 days, called the SysTrack Desktop Assessment. It will collect data for 60 days and keep that data in the cloud for an additional 30 days. After 90 days access to the data will be gone. The free part is that you pay with your data. VMware does the vCloud Air hosting and adds the reports, Lakeside adds the software to the mix, and voila, magic happens. The assessment can be found at: https://assessment.vmware.com/. Sign up with an account and you're good to go. If you work together with a partner, be sure to link your registration with that partner so they have access to your information.

When registration is finished your bits will be prepared. The agent software will be linked to your assessment. Use your deployment method of choice to deploy agents to the client devices, physical or virtual, as long as it's a Windows OS. Agents need to connect to the public cloud service to upload the data to the SysTrack system. Don't like all your agents connecting to the cloud? You can use a proxy that your clients connect to, with the proxy connecting to the cloud service. Check the collection state after deploying and a week from deploying. After that, data will show up in the different visualizers and overviews.

SDA - Dashboard

If you have greyed-out options, be patient, there is nothing wrong (well, yet). These won't become active until a few days of data have been collected, to make sure representative information is in there before most of the Analyze, Investigate and Report options are shown.

Once a business cycle is in, you can use the reports for your design phase. The Horizon sizing tool is an XML export that you can use in the Digital Workspace Designer (formerly known as the Horizon Sizing Estimator); find it at https://code.vmware.com/group/dp/dwdesigner. Use the XML as a custom workload.

SDA- User Visualizer

SysTrack On site

Okay, now for the on-site part. You got a customer that doesn't like its data on somebody else's computer, needs more time, or needs customizations to reports, dashboards or further drill-down options -> tick on-site deployment. It needs more preparation and planning between you and the customer. If the cloud data isn't a problem, let your customer start the SDA to have some information before the on-site deployment is running; it mostly takes calls and operational procedures before a system is ready for installation.

Architecture

Okay, so what do we need? First get a license from Lakeside or your partner for the number of desktops you want to manage. You will get the install bits, or the consultant doing the install will bring them.

Next, the SysTrack Master server. Virtual or physical. 2 vCPU and 8GB (with Express use 12GB) to start with; this grows when having more endpoints. Use the calculator (Requirements Generator) available on the Lakeside portal. Windows Server, minimum 2008 R2 SP1. IIS web roles, .NET Framework (all), AppFabric and Silverlight (brr). If you did not set up the prerequisites, these will be installed by the installer (but it will take time). That is… except .NET Framework 3.5, as this is a feature on servers for which you need an additional location of source files. Add this feature to the system prior to installation. And while you are at it, install the rest.
For a small environment, or one without non-persistent desktops, a SQL Express (2014) can be included in the deployment. Else use an external database server with SQL Server Reporting Services (SSRS) set up. With Express, SSRS is set up for you by the way.
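Adding that .NET Framework 3.5 feature up front can be done with DISM, pointing at the Windows install media for the source files (the drive letter is an example):

    rem enable .NET Framework 3.5 using the sources on the mounted Windows ISO
    dism /online /enable-feature /featurename:NetFx3 /all /source:D:\sources\sxs /limitaccess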

Lakeside Launch

You need a SQL user (or the local system) with DBO on the newly created SysTrack database, and a domain user with admin rights to the reporting services and local admin on the Windows server. If you are not using an application provisioning mechanism or desktop pool template, you can push or pull the agents from the SysTrack Master. For this you need an AD user with local admin rights on the desktops (to install the packages) and File and Print Services, Remote Management and Remote Registry enabled. If SCCM or an MSI installation in the template is used, you won't require local admin rights, remote registry and such.

If there is a firewall between the clients (or agents, or children) and the master server, be sure to open the port you used in the installation; by default you need 57632 TCP/UDP. And if there is something between the Master Server and the Internet during registration, you will need to activate by phone. The Internet is only used for license activation though.
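On a Windows firewall in between, that boils down to something like this (assuming the default port):

    rem open the default SysTrack agent-to-master port
    netsh advfirewall firewall add rule name="SysTrack TCP 57632" dir=in action=allow protocol=TCP localport=57632
    netsh advfirewall firewall add rule name="SysTrack UDP 57632" dir=in action=allow protocol=UDP localport=57632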

And get a thermos of coffee, it can take some time.

To visualise the SysTrack architecture we can use the diagram from the documentation (without the coffee that is).

SysTrack Architecture

Installation is done in four parts: first the SysTrack Master Server (with or without SQL), secondly the SysTrack Web Services, thirdly the SysTrack Administrative Tools, and when 1-3 are installed and SysTrack is configured, you can deploy the agents.

  • SysTrack Master Server is the master for the application intelligence, storing the data from the children (or connecting to the data repository), configuration, roles and so on.
  • SysTrack Web Services is the front end for visualizers and reporting (SSRS on the SQL server).
  • SysTrack Administrative Tools, for example the deployment tool for configuration.

You gotta catch them all.

SysTrack Install menu

And click on Start install.

The installers are straightforward. Typical choices are the deployment type, full or passive. Add the reporting services user that was prepared (you can do this later as well). Database type: pre-existing (a new window will open for connection details) for an external database, or the Express version. Every component will need its restart. After restarting, the Master Setup will start the Web Services installer. After this restart, the Administrative Tools don't start automatically; just open the Setup, tick the third option and start the install.

Open the deployment tool. Connect to the master server. Add your license details if this is a new installation. Create a new configuration (Configuration – Alarming and Configuration). Selecting Base Roles\Windows Desktop and VMP will work as a good start for desktop assessments. Set your newly created configuration as the default, or change it manually in the tree when clients have been added. And push the play button when ready to start or receive clients. Else nothing will come in.

vTestlab Master Deployment Tool

Now deploy the agent via MSI. The installation files are on the Master Server in the installation location: SysTrack\InstallationPackages. You have the SysTrack agent (System Management Agent 32-bit) and the prerequisite Visual C++ 2010 Redistributable.
With MSI deployments you add the master server and port to the installer options. If the Master allows clients to auto-add themselves to the tree, which is the default with version 8.2, they will show up.
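A hypothetical msiexec one-liner for such a deployment (the MSI filename and the property names for master and port are illustrative; check the SysTrack deployment guide for the real ones):

    rem silent agent install; MASTERSYSTEM and PORT are illustrative property names
    msiexec /i "SysTrack System Management Agent.msi" MASTERSYSTEM=systrack01.vtestlab.local PORT=57632 /qn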

"Normally" the clients won't notice the SysTrack agent being deployed. A restart is not required for the agent installation.
In strict environments you can get a pop-up in Internet Explorer about the LSI Hook browser snap-in. You can suppress this by adding the CLSID of LSI Hook to the add-on list with a value of 1. Or you can edit your configuration and change Web browser plugins to false. This in turn means that web data from any browser is not collected by SysTrack.
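The add-on list suppression comes down to one registry value under the IE add-on management policy key; the GUID below is a placeholder, look up the real CLSID of LSI Hook in the Lakeside documentation:

    rem pre-approve the LSI Hook add-on; replace the placeholder with the real CLSID
    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Ext\CLSID" /v "{CLSID-OF-LSI-HOOK}" /t REG_SZ /d 1 /f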

Configuration Web Browser Plugin.png

In any case, be sure to test the behaviour in your environment before rolling out to a large group of clients.

Conclusion

While the cloud version is deployed within a snap, data is easily accessed within the provided tools, and the reports fit the why of the assessment, there is a big but… Namely that a big chunk of organisations don't like this kind of data going into the cloud, even when the user names are anonymized.

A pro of the on-site version is that it gives you more customization and reporting possibilities. The downside is that SysTrack on-site is Windows based, and the architecture will require Windows licenses next to the Lakeside license. All the visualizers and tools can be clicked and drilled down from the interface, but it feels a little like several tools have been duct-taped together. You can customize whatever you want: dashboards, reports and grouping. But you would need a pretty broad skill set, including how to build SQL queries, SSRS reports and the SysTrack products themselves. And what about the requirement for Microsoft Silverlight, a deprecated framework? Tsck tsck. Come on, this is 2017 calling….

But in the end it does not matter whether SysTrack from Lakeside Software or, for example, Stratusphere FIT from Liquidware Labs is used; that is your tool set. The most important part is to know what information is needed from what places, and to know thy ways to present it. Assess the assessment, plan some time and get mining for diamonds in your environment.

– Happy Mining!

Sources: vmware.com, lakesidesoftware.com

VCAP-DTM Deploy Prep: Horizon Lab on Ravello Cloud importing OVA

In my last post I was writing about creating a lab for your VCAP-DTM prep. Read it here: VCAP-DTM Deploy Prep: La La Land Lab and Horizon software versions. In that post I mentioned the cloud lab option with Ravello Cloud that I'm using myself. With appliances there are some "oh, did you look at this" moments while deploying them on Ravello Cloud. There are two or three appliances to take care of, depending on your chosen architecture: vROPS, vIDM and VCSA. Two of those you can also do on a VM: vCenter on Windows, and vROPS on Windows or Linux. For vROPS, 6.4 is the last version with a Windows installer.

I personally went with one vCenter on Windows combined with Composer (Windows only), so I will skip that one. For vIDM you will have to use the OVA.

Okay, options for OVAs and getting them deployed: 1) directly on Ravello, 2) use a nested hypervisor to deploy to, or 3) use a frog-leap with a deployment on vSphere and upload those to Ravello. The first is what we are going to do, as the second creates a dependency on a nested hypervisor: wasting resources on that layer, getting the data there, the traffic data flow, and for this lab I don't want the hypervisor to be used other than for the Composer actions required in the objectives. The third, well, wasn't there a point to putting labs in Ravello Cloud?

Now how do I get my OVA deployed on Ravello?

For this we have the Ravello import tool, where we can upload several VMs, disks and installers to the environment. We first need to have the install bits for Identity Manager and vROPS downloaded from my.vmware.com.

In Ravello Cloud go to Library – VM – +Import VM. This will either prompt you to install the Ravello Import Tool (available for Windows and Mac) or start the import tool.
In the Ravello Import Tool click on Upload (or Upload a new item). This will open the upload wizard. Select the Upload a VM from an OVF, OVA or Ravello Export File source. And click start to select your OVA location.

Grab Ravello Import Wizard - VM from OVA

Select the vIDM OVA and upload.

Grab - Ravello Upload There she goes

But are we done?
No, grab vROPS as well.

Grab - Ravello Upload vROPS as well.png

When the upload is finished we will need to verify the VM. As part of the VM import process, the Ravello Import Tool automatically gets the settings from the OVF extracted out of the OVA. Verify that the settings for this imported VM match its original configuration, or the one you want to use. You can verify at Library – VM. You will see your imported VMs with a configuration icon. Click your VM and select the configuration, go through the tabs to check. Finish.

It normally imports the values from the OVF, but it will sometimes screw up some values. When you have multiple deployment options, like vROPS has, you will have to choose the default size. The vROPS import will be set either to the extra small deployment (2 vCPU, 8GB) or to very large. Or use the one you like yourself. The same goes for the External Services; I won't put them in (yet). Checking the settings from the OVA yourself is up in the next paragraph.

Now how do I get the information to verify to?

You can get them from the sizing calculations done in designing the solution ;). But another way is to look in the OVA. An OVA is just an archive format for the OVF and VMDKs that make up the appliance.

We need something to extract the OVAs. Use tar on any Linux/Mac, or 7-Zip on Windows. I am using tar for this example on my Mac. First up: getting vIDM running in my test lab.

Open a terminal and go to the download location. Extract the OVA with tar xvf. xvf stands for verbosely extract file, followed by the filename. Well, not in that order, but that's the way I learned to type it ;).
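For example (the OVA filename is simply whatever your download is called):

    # extract the OVA (just a tar archive) next to the download
    cd ~/Downloads
    tar xvf identity-manager.ova
    # the OVF descriptor, manifest and VMDKs now sit next to the OVA
    ls -lh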

That gives us this:

Capture - tar - ova

Here we see the appliance has four disks: the system, db, tomcat and var VMDKs.

If we look in the OVF file (use vi), at the DiskSection we see that the system disk needs to be in front and bootable, followed by db, tomcat and last var.

Still in the OVF file, next up: note the resource requirements for the vIDM VM. We need those figures later on to configure the VM with the right resources. In the VirtualHardwareSection you will find the Number of virtual CPUs and Memory Size sections. We need 2 vCPUs and 6 GB of vRAM (6144). And one network interface, so reserve one IP from your lab IP scheme. Okay, ready and set, prepping done.
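You can also pull those numbers straight from the OVF without scrolling through all the XML; a quick grep sketch (the filename is an example):

    # show the hardware items in the VirtualHardwareSection: CPU count, memory, disks, NIC
    grep -E 'ElementName|VirtualQuantity' identity-manager.ovf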

Deploying a VM from the Library

Go to the application you want to add the VM to. Click the plus sign and select the imported VM from the list. In the right pane customize the name, network, external settings and all the things you would like to have set.

GRab - Ravello Add imported VM to App

Save and update the Application.

Wait for all the background processes to finish, and the VM is deployed and starts. Open a console to check if the start-up goes accordingly. And it will not 😉 When you have opened a console you will notice a press-any-key message: the appliance fails to detect VMware's hypervisor and you are not supposed to run the product on this system. When you continue, the appliance will run in an unsupported state. But we are running in a lab, not production.

IF YOU ARE READING THIS BLOG AND (MERELY) THINK ABOUT RUNNING PRODUCTION ON RAVELLO OR RUNNING PRODUCTION WITH THE IMPORTED VIDM LATER ON, GO QUIT YOUR JOB AND GO WALK THE WALK OF SHAME FOREVER.

Grab - Ravello Press Key

Press any key, if you can find the any key on your keyboard. And yes, you will have to do this every time you start up. Or use the procedure highlighted in this blog post https://www.ravellosystems.com/blog/install-vcenter-server-on-cloud/ to change /etc/init.d/boot.compliance (scroll to 4 action 2 in the post, or to MSG in the file). Do it after you have configured the VM and the required passwords. But sssst, you didn't hear that from me…..

Back to the deployment: configure the VM with hostname, DNS and IPv4. Save and restart the network. After this the deployment will continue with the startup.

And now you have a started appliance. Next we need the install wizard for vIDM. Go to the vIDM URL that is shown on the blue screen in the console, for example https://hostname.example.com. If this is the first time, it will start the install wizard. Put in the passwords you want, select your database and finish.

After that you are redirected to the login screen. Log on with your login details and voila, vIDM is deployed.

Grab - Ravello vIDM

Bloody Dutch in the interface; everything on my client is English except for the region settings. Have the "wrong" order in Chrome and boom, vIDM is in Dutch. For the preparation, and the simple fact that I cannot find anything in the user interface when it's in Dutch, I want to change this. Change the order in Chrome://settings – advanced settings – Languages – Language and input Settings button – drag English in front of Dutch to change the order. Refresh or click on a different tab, and voila, vIDM talks the language required for the VCAP-DTM, or to find stuff…

Grab - Ravello vIDM English

Aaand the same goes for vROPS?

You can do the same with the vROPS deployment. Ravello doesn't support the OVF properties normally used for setting the vROPS appliance configuration. You miss that nifty IP address setting for the vROPS appliance. At the same time you have the issue that vROPS doesn't like changes too much; it breaks easily. But follow more or less the same procedure as for vIDM. For vROPS set the Ravello network to DHCP. Put in a reservation so the IP is not shared within your lab and is shown with the remote console. The IP reservation is used in the appliance itself. It is very important that the IP is set correctly on first boot, else it will break 11 out of 10 times. I have also noticed that setting a static IP in Ravello is not always copied to the appliance; using DHCP for vROPS works more often.

And now for vROPS:

  • Press any key to continue the boot sequence.
  • The initial screen needs you to press ALT+F1 to go to the prompt.
  • The vROPS console password of root is blank the first time you log on to the console. You will have to set the password immediately, and it's a little strict compared to, for example, the vIDM appliance.
  • The appliance (hopefully) starts with DHCP configured. And you can open a session to the hostname.
  • [Optional, if you don't trust the DHCP reservation] Within the vROPS appliance, change the IP to manual so it stays fixed within vROPS and will not break when changing IPs. Use the IP it received from DHCP; do not change it, or you will have to follow the change-IP configuration procedure for the master IP (see a how-to blog post here: http://imallvirtual.com/change-vrops-master-node-ip-address/):

Changing vROPS from DHCP to static:
Run /opt/vmware/share/vami/vami_config_net. Choose option 6 and put in your IP values, choose option 4 and put yours in, change the hostname, etc……
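From the appliance console that session looks something like this (the menu numbers may differ per vROPS version, so read the menu):

    # VAMI network configuration tool on the vROPS appliance
    /opt/vmware/share/vami/vami_config_net
    #  6) IP Address Allocation for eth0 -> static, reuse the IP handed out by DHCP
    #  4) DNS                            -> set the DNS servers
    #  3) Hostname                       -> set the hostname
    #  0) Show Current Configuration to verify, then exit and reboot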

Next, reboot the appliance and verify that the boot-up and IP address are correct. If you get to the initial cluster configuration you're ready and set.

Other issues that fail the deployment are resolved by redeploying the VM, sometimes by first re-downloading and re-importing the OVA in Ravello.

Grab - vROPS First Start

Do choose New installation and set it up for the VCAP-DTM objectives.

If you happen to have enough patience, and your application is not set to stop during the initial configuration, you will have a vROPS appliance to use in your Horizon preparations.

So appliances are no issue for Ravello?

Well, I do not know for all appliances, but for Horizon the appliance-only components that are needed for a VCAP-DTM lab can be deployed on Ravello.

 

-Happy Labbing in Ravello Cloud!

 

Sources: ravellosystems.com, vmware.com

EUC Toolbox: Don’t wanna be your monkey wrench, use Flings

To remind some of you who have had previous experience with Flings, or to explain Flings to newbies, if there still are any: in a few words, Flings are apps and tools built by VMware engineers that are intended to be played with and explored. Even more, they are cool ideas worked out in cool apps and tools. Which are not only there to play with, but are very useful.
And, with no official production support from VMware.
This doesn't mean a Fling will tear a hole in the space-time continuum or your environment will randomly blow up at places; just be a little cautious when using a Fling untested in production. Like with everything in production. Not officially supported doesn't mean the engineers stop working on the products as soon as they are published on the Flings page. They often respond to comments and with updates, to make their cool ideas even better. And at times a Fling makes it into the product, like the vSphere HTML5 Web Client or ViewDBChk in Horizon.

Tools?

home_improvement

Anyway. Below is a list of my five most used EUC Flings. Because, well… it is an often overheard question: what do you or other customers use? And a listing disclaimer: don't stop at number five, there are other very cool Flings out there and new ones emerging. So keep an eye out. Hey, I won't stop at 5 either…..

VMware OS Optimization Tool aka OSOT

Guest OS systems are often designed for other form factors than virtual machines, thus being very bloated to include every variable choice and eensy-weensy little device supported. When running these in virtual machines we have to optimize the OS so it won't waste resources on unneeded options, features or services. Optimize to improve performance. One of these use cases is Horizon VDI or published applications. But personally I would like to see server components a bit more optimized as well.

With the VMware OS Optimization Tool you can use templates to analyze and optimize Windows templates. Use the provided templates, make your own, or use the public templates to share knowledge with the community. Made an oops? There is a rollback option.

OSOT.png

Get the VMware OS Optimization Tool here: https://labs.vmware.com/flings/vmware-os-optimization-tool.

Horizon Toolbox

The Horizon Toolbox is a set of helpful extensions to the Horizon Administrator page. The tools are provided in a Tomcat web portal that is installed next to the Horizon Administrator. There the downside is visible straight away: yet another portal/console in the spaghetti western of Horizon suite consoles. But the extensions for operations, and no Flash, are worth it.

The Horizon Toolbox adds:

  • Auditing of user sessions, VM snapshots and used client versions.
  • Remote assistance to user sessions.
  • Access to the desktops VM remote console.
  • Power policies for Horizon pools.

Get the Horizon Toolbox here: https://labs.vmware.com/flings/horizon-toolbox-2.

VMware Access Point Deployment Utility

When we have use cases that need external access, we have a design decision to use the Access Point in the DMZ to tunnel those external access sessions. The Horizon Access Point is an appliance that is deployed via an OVF. With the deployment you can use several methods to add the configuration options to the appliance: the web client, ovftool and PowerShell for example. Another option is to use the Access Point Deployment Utility fling. Especially when redeploying the appliance is faster than debugging or reconfiguring.

The VMware Access Point Deployment Utility is a wrapper around ovftool. The utility lets you input configuration values in a human-friendly interface, with the certificate in PEM format. It will create the ovftool string, execute that string, and deploy and configure the Access Point. It will export the certificate and keys to the required JSON format. And it allows your input to be saved to XML and imported at a later time. This minimizes the amount of re-input required, and as a result the number of failures with reconfiguration or redeployment.

Get the VMware Access Point Deployment Utility here: https://labs.vmware.com/flings/vmware-access-point-deployment-utility.

App Volumes Backup Utility

App Volumes AppStacks are read-only VMDKs that are stored on a datastore and attached to a user session or desktop VM that has the App Volumes agent running. When we need to back up the AppStacks we have the option to use a backup solution that backs up the datastore. But not all backup solutions have this option. A lot of VADP-compatible backups look at the vCenter inventory to do their backup. AppStacks, and writable volumes for that matter, are not available as directly selectable objects in the vCenter inventory. The AppStacks are only attached when a session or desktop is active, and non-persistent desktops are not in the backup in the first place.

App Volumes Backup Utility to the rescue. In short, what this tool does is connect App Volumes and vCenter, create a dummy VM object and attach the AppStack and writable volume VMDKs to that VM. And presto, the backup tool can do its magic. A little heads-up for writable volumes: be sure to include pre and post actions to automatically detach, and re-attach, any writable volumes which are in use while the backup is running. A utility for that is included in the fling.

Get the App Volumes Backup Utility here: https://labs.vmware.com/flings/app-volumes-backup-utility.

VMware Logon Monitor

The VMware Logon Monitor fling monitors Windows 7 and 10 user logons and reports a wide variety of performance metrics. It is firstly intended to help troubleshoot slow logon performance. But it can also be used for insights, if you happen to miss vROPS for Horizon for example. Or when you want to find out how your physical desktop is doing in this same process when assessing the environment.

Some of the metric categories include logon time, shell load, profile and policy load times, redirection load times, resource usage, and the list goes on and on and on. VMware Logon Monitor also collects metrics from other VMware components used in the desktop. This provides even more insight into what is happening during the logon process. For example, what are those App Volumes AppStacks adding to the logon process……

Install Logon Monitor in your desktop pool and let the collection of metrics commence. Note that the logs are stored locally and not in a central location. The installer will create and start the VMware Logon Monitor service.

logonmonitor

VMware Logon Monitor will log to C:\ProgramData\VMware\VMware Logon Monitor\Logs.

Get the VMware Logon Monitor here: https://labs.vmware.com/flings/vmware-logon-monitor.

And there’s more where that came from…..

And probably some that would make your order of appearance a little bit different. Just take a look at https://labs.vmware.com/flings/?product=Horizon+View for the Horizon View tagged flings. And be sure to also check without this tag, as for example the App Volumes related flings are not in this tag listing.

– Enjoy the flings!

Sources: labs.vmware.com/flings

EUC Toolbox: Helpful tool Desktop Info

As somebody who works with all different kinds of systems, preferably from one client device, at the initial look all those connected desktops look a bit the same. I want a) to see on what specific template I am doing the magic, b) to directly see what that system is doing, and c) to not break the wrong component. And trust me, the latter will happen sooner rather than later to us all.

dammit-jim

Don't like having to open even more windows, or searching for metrics in some monitoring application when it does not make sense at this time? Want to see some background information on what the system you are using is doing, right next to the look and feel of the desktop itself? Or keep an eye on the workload of your synthetic load testing? See what, for example, the CPU of your Windows 7 VDI does at the time an assigned AppStack is direct attached? And want to keep test and production easily apart in all those clients you are running from your device?

Desktop Info can help you there.

Desktop Info you say?

Desktop Info displays system information on your desktop, in a similar way to BGInfo for example. But unlike BGInfo, the application stays resident in memory and continually updates the display in real time with the information that interests you. It looks like wallpaper and has a very small footprint of its own. It fits perfectly for quick identification of test desktop templates with some real-time information. Or for keeping production infrastructure servers apart, or….

And remember it’s for information. Desktop Info does not replace your monitoring toolset, it gives the user information on the desktop. So it’s not just a clever name……..

How does it work?

Easy: just download, extract and configure how you want Desktop Info to show you the …well… info. For example, put it in your desktop template for a test with the latest application release.

It can be downloaded at http://www.glenn.delahoy.com/software/files/DesktopInfo151.zip. There is no configuration program for Desktop Info. Options are set by editing the ini file in a text editor such as Notepad, or whatever you have lying around. The ini file included in the downloaded zip shows all the available options you can set. Think of the layout, top/bottom placement, colors, items to monitor and WMI counters for the specific stuff. Using the Nvidia WMI counters here to see what the GPU is doing would be an excellent option. Just don't overdo it.
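Before wiring a WMI counter into the ini, it can pay off to verify that the query actually returns data on the target desktop. A quick sketch using the third-party Python wmi package against a standard processor counter; substitute your own (for example Nvidia) counter class, which you will need to look up for your driver.

```python
# Sanity check a WMI performance counter before adding it to Desktop Info.
# Requires the third-party "wmi" package (pip install wmi) on Windows.
import wmi

c = wmi.WMI()
# Standard Win32 processor counter; swap in your own class/property here
for cpu in c.Win32_PerfFormattedData_PerfOS_Processor(Name="_Total"):
    print(f"CPU load right now: {cpu.PercentProcessorTime}%")
```

If the query prints a value here, the counter should be usable from the ini as well.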

The readme.txt that is also included in the zip has some more explanation and examples. Keep that one close by.


Test and save your configuration. Put Desktop Info in a place or tool so that it is started with the user session that needs this information. For example in a startup script, a shortcut or as a response to an action.

Capturing data

You have the option to use Desktop Info with data logging for reference. Adding csv:filename to items will output the data to a CSV formatted file. Just keep in mind that the output data is the display-formatted data.
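Since the output is plain CSV, a few lines of Python are enough to summarize a test run afterwards. A small sketch; the column position and the "%" formatting are assumptions, as the file simply mirrors whatever items you configured in the ini.

```python
# Summarize a Desktop Info csv log after a test run, e.g. peak/average CPU.
# Assumption: one column holds the CPU item formatted like "43%"; adjust
# CPU_COLUMN and the parsing to match your own configured items.
import csv

CPU_COLUMN = 1  # hypothetical position of the CPU item in your log

values = []
with open("desktopinfo.csv", newline="") as f:
    for row in csv.reader(f):
        try:
            values.append(float(row[CPU_COLUMN].strip().rstrip("%")))
        except (IndexError, ValueError):
            continue  # header row or an item that is not a plain number

if values:
    print(f"samples={len(values)} "
          f"avg={sum(values) / len(values):.1f}% peak={max(values):.1f}%")
```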

– Enjoy!

Digital Workspace Transformation: information security

Yes…. it has been a while since I posted on this blog, but I’m still alive 😉

For a 2016 starter (what?!? is it June already), I want to ramble on about information security in the digital workspace. With a growing number of digital workspace transformations going on, information security is more important than ever. With the growing variety of client endpoints and methods of access in personal and corporate environments, users are becoming increasingly independent of physical company locations. That makes it interesting to centrally manage storage of data, passwords, access policies, application settings and network access (just examples, not the complete list). For any-place, any-device, any-information and any-application environments for your users (or do we want any user in there?), it is not just a couple of clicks in some super-duper secure solution and we're done.


Storing data on, for example, virtual desktop servers in the data center (hello VMware Horizon!) is hopefully a bit more secure than storing it locally on the user's endpoint. At the same time, allowing users to access virtual desktops remotely puts your network at a higher risk than local-only access. But it's not all virtual desktops. We have mobile users who would like to have their presentations or applications directly on the tablet or handheld. I, for instance, don't want to have to open a whole virtual desktop for just one application. Ever tried a virtual desktop on an iPhone? Technically possible, yes, but it works crappy. Erm, forgot my MacBook HDMI USB-C converter for this presentation? Well, I'll just send it to my Gmail or Dropbox for access with the native mobile apps in your conference room. And the information is gone, out of the company sphere….. (a hypothetical situation of course..)

Data Leak

Great ideas, all those ways to get company information in and out. But but but….. they also pose challenges that a lot of companies have not started thinking about. That sounds a bit foolish, as information is probably the biggest asset of a company. But unfortunately it's a fact (or maybe it's just the companies I visit). Sure, these companies have IT departments or IT vendors who think a bit about security. And in effect they mostly make their users' lives miserable with all sorts of technical barriers installed in the infrastructure. Barriers that the users, business and IT (!) users alike, will find all sorts of ways to get around. Why? First of all to increase their productivity, while effectively decreasing security, and secondly because they were never told the important why. And then those barriers are just a nuisance.

Break down the wall

IT’s Business

I have covered this earlier in my post (https://www.pascalswereld.nl/2015/03/31/design-for-failure-but-what-about-the-failure-in-designs-in-the-big-bad-world). The business needs to have full knowledge of its required processes and information flows, the ones that take information in and out of the services supporting the business strategy, and of the people that are part of the business and operate those services. And it needs to know what to do with this information in what different ways: is it allowed for certain users to access the information outside of the data center, and so on. Compliance with, for example, certain local privacy laws. Governance with policies and choices, and risk management: do we do this part or not, how do we mitigate some risk if we take approach y, and what are the consequences if we do (or don't)?

Commitment from the business and the people in the business is of utmost importance for information security. Start explaining, start educating and start listening.
If scratch is the starting point, start writing on a global level first. What does the business mean by working from everywhere and every place, what is this digital workspace, and so on. What are the risks, how do we approach IAM, what do we have for data loss prevention (DLP), is it allowed for IT to inspect SSL traffic (decrypt, inspect and re-encrypt), etc. etc.
Don't get too detailed at first; it is not necessary, and it can take a long time to reach a version 1.0. We can work on it. And to be fair, information security and the digital workspace are, as a fact, continually evolving and moving. A continual improvement process must be in place. Be sure to check with legal that there are no loopholes in what has been written in the first iteration.
Then map to logical components (think from the information: why is it there, where does it come from and where does it go, and think of the apps and the users). When you have defined the logical components, IT can add the physical components (insert the providers, vendors, building blocks). Evaluate together: what works, what doesn't, what's needed and what is not. And rinse and repeat…..

Furthermore, a target of a 100% safe environment, all the time, will just not cut it. Mission impossible. Think about and define how to react to information leaks, and how to minimize the surface and impact of a compromise.

Design Considerations

With the above we should have a good starting point for the business requirements phase of a design and deployment of the digital workspace. And there will also be information from IT flowing back to the business for continual improvement.

Within the design of an EUC environment we have several software components where we can take action to increase (or decrease, but I will leave that part out ;-)) security in the layers of the digital workspace environment. And yes, when software-defined is not an option, there is always hardware…
And from the previous phase we have some idea of the technical choices that can be made to conform to the business strategy and policies.

If we think of the VMware portfolio and the technical software layers where we need to think about security, we can go from AirWatch/Workspace ONE, Access Point, Identity Manager, Security Server and Horizon, via App Volumes, to User Environment Manager. And, and….. two-factor authentication, one-time passwords (OTP), Microsoft Security Compliance Manager (SCM) for Windows-based components, anti-virus and anti-malware, and network segmentation and access policies with NSX for Horizon in the SDDC. And what about business continuity and disaster recovery plans, SRM and vDP?
Enterprise management with vROps, and Log Insight integration with, for example, a SIEM. vRealize for automating and orchestrating, to mitigate workarounds or faults in manual steps. And so on and so on. We have all sorts of layers where we can implement, or help implement, security and access policies. And how will all of these interact? A lot to think about. (It could be that a new blog post series subject is born…)

But the justification should start with the business… Start explaining and start acting! That is probably 80% of the success of implementing information security. The technical components can be made to fit, but… only after the strategy, policies and information architecture are somewhat clear….

And after the people in the business support the need for information security in the workspace. (Am I repeating myself a bit? 😉

Ideas, suggestions, conversation, opinions. Love to hear them.

Get up, stand up and get your Windows XP out of there

April 8, 2014.

Ring a bell? No? Really? That is a date that should ring a bell. You are either in the "ah, don't worry, I've got my things sorted" camp, or it will send a shiver down your spine.
Been lying under a rock, or away on a travel to the end of the universe? Well, this April day in 2014 is the day that Microsoft ends support (to be precise: end of extended support) for Windows XP. See this nice matrix that Microsoft has up on its site: http://windows.microsoft.com/en-us/windows/lifecycle.

Ah, but that is no problem for us, because we have a perfectly running system and never had to worry about Microsoft support. Okay, well, congratulations on your perfect operating environment. But it will grind to a halt. Why? Because you probably need applications on there, and the suppliers of those will also stop (or have already stopped) supporting their products on Windows XP. And if that doesn't get you started, what do you think of this: attacks on vulnerabilities. You have a bunch of cyber attackers waiting around, and they will be able to target vulnerabilities in Windows XP without fear that these flaws will be patched. And there will not be anything you can do to protect yourself besides upgrading to a newer operating system.

[Edit] Okay, apparently not quite nothing; you're not completely on your own. Microsoft announced on 15 January 2014 that it will extend its antimalware support for Windows XP through 14 July 2015 (http://blogs.technet.com/b/mmpc/archive/2014/01/15/microsoft-antimalware-support-for-windows-xp.aspx). This means anti-malware signatures and engine updates for the Security Essentials suite. The EOL of Windows XP still stands at 8 April 2014. If you have Essentials in your environment you will have some support, but keep in mind this doesn't fix vulnerabilities in the OS itself. The effectiveness of anti-malware/anti-virus protection, or whatever solution, is limited on an outdated OS. You may have a little more protection than nothing, but be aware that this shouldn't give a false "oh, I'm okay". The urge to move away from Windows XP still exists.
[/Edit]

What are your upgrade/migration paths?

Got your attention? Well, there is still some time to make your plans and start moving. To help you get started, I will highlight some of the paths you can explore to get that pesky Windows XP out of there.

– Assess. Check your environment: what is out there? Check what applications you have and what they require, what hardware there is, how your distribution is done and whether there are any central management solutions out there. (A minimal inventory sketch follows at the end of this list.)
Involve your users. This is key! They are the ones using the current environment, the ones that will use the target environment, and they know their applications. They are also the ones that test and accept the new environment. Don't have them in your project? Well, you are bound to fail. Go back a few steps and include those users!
Tools: use the assessment tools out there. You have the Microsoft Assessment and Planning Toolkit, Flexera or AppDNA, for example. Do you use System Center Configuration Manager for deployment? Well, also use it to gather the information from your clients. At this point you should have a good picture of what is in your environment. Check with your application suppliers what their support on newer OS versions is. Do you need a new version? No problem: add it to your new deployment and migrate the data as described by those suppliers. Make sure you start collecting your application installation disks plus any necessary product keys. And check with your business what their plans are. Are you currently in a fat client environment? Maybe this is the time to move to a VDI or hybrid environment (I would say the perfect time, but this is up to your organization).

– Pick a new OS version. You have a picture of your environment, what is lying around and what your suppliers' support for newer OSes looks like. Take your pick. Part of making your migration plan is picking the new OS, as this influences the way to go. As most Windows XP users will want to go to a newer version of Windows, decide on going to Windows 7 or Windows 8/8.1. You will have to do a fresh install, as there is no direct upgrade path. You will also have to take care of the personal settings currently on the systems; there is no direct interchangeability between Windows XP user settings and, for example, Windows 7. You probably have a good view of your application support by now, so check which OS has the most support. And for Windows 7, be aware that mainstream support will also stop in 2015; Windows 8.x may be a better option when your application suppliers support it.

– Virtual Desktops. You will have a picture of your business strategy and your current vs. future hardware and application support. This is probably a good time to start thinking about a virtual desktop (VD) as your target. [Edit] To clarify a bit on the ways to deliver a VD (as the comments show this was needed): depending on the types of users in your organization's landscape, this can be either a shared, a one-to-one or a hybrid virtual desktop environment. What is the difference? A shared VD is a desktop that is shared by several users on one server (same server OS install base, same resources), where a one-to-one VD is a desktop to which one user connects. And yes, those desktops also run on a shared hypervisor host, but separated from the other user desktops; changes to one desktop don't influence the other users. In most organizations it will either be a large portion shared, or hybrid: shared with a small portion of one-to-one. But here again you must decide what is best for your organization. There is no one size that fits all organizations; there is a design choice that can easily be expanded to another solution when needed. [/Edit] If your hardware is still able to support Windows 7 and Windows 8, the need for VD is a little lower. But when you would have to invest a lot in new hardware, a VD is the perfect place to go. It gives you a centrally managed environment with upgrade methods for future OS life cycles.

– Legacy applications. There may be a custom application that won't work on newer versions of Windows. Okay, but here we also have options other than leaving Windows XP out there. There is application virtualization, for example: sandbox the application in a previous-Windows support mode with ThinApp or App-V. No application virtualization initiatives yet? There is also some virtualization support in Windows 7, as there are ways to run virtual machines on your desktop. A virtualization feature called Windows XP Mode is included in Windows 7 Professional, and VMware Workstation and the like are available as well. Just run your legacy application in the Windows XP VM and work on a plan to replace it later on.

– Persona migration. Users of Windows XP will have set up their applications and workspace to their needs. Preferably they want a seamless move to a new OS with the possibility to retain those settings. As we have the application support sorted, we need to think about a way to get those settings to the new OS. What options do we have here? We can virtualize the profile, via RES Workspace Manager for example. This decouples (or abstracts) the profile and settings from the Windows profile (which has changed from XP to 7, so again no direct path). Deploy it on the current Windows XP base and gather the settings needed; when going to Windows 7/8 these settings will be applied there as well. There is a little catch to this method (and to be clear, to all migration options): none of the solutions will translate settings from applications that changed over time. Your Office 2003 settings will not be applied straight onto Office 2010; some conversion will be needed.
Another option is to use Windows Easy Transfer to transfer your settings from XP to Windows 7. Use your network or a USB hard disk to save the settings and sneakernet them to the new system. 32-bit to 64-bit will be harder to migrate, but there will be some backup/restore options. Yet another option is the layered-image management upgrade approach (see desktop layer management below).

– Design with workspace layers. Make your design one that is easily upgraded in the future. OS life cycles will be shorter; Windows 9 is already rumored to be released in 2015. By treating your OS as one of your workspace layers, you will be able to migrate more easily in the future. These layers can be transported, migrated and recovered more easily. Lose your corporate notebook? Well, here is a VD image with your persona, data and application layers restored from a previous (central) point. And off you go! Decouple those layers and they are easier to manage. What are workspace layers? You have your hardware layer, driver layer, base image (OS) layer, one or more application layers and the user layer. Your corporate data will be in several layers, but if you have good insight and working data management (not often seen, to be honest) you can even have a dedicated data layer. With these layers you can have different owners, managers and responsibilities (IT vs. user vs. business).

– Desktop layer management. VMware Horizon Mirage is a layered image management solution that separates the desktop into logical layers that are owned and managed by either the IT organization or the end user (persona/applications). You can update the IT-managed layers while maintaining end-user files and personalization. With the centralized management provided by Horizon Mirage you can perform all of the snapshot, migration and recovery tasks remotely. This will significantly reduce the manual migration steps and accelerate the migration project, which significantly decreases IT costs. When set up and captured correctly, this is the preferred tool to do an online and seamless migration from Windows XP to Windows 7.

– But my business wants to go even further and include all those buzzwords like BYOD, mobility and such. Yup, and why are you still stuck with Windows XP? Take the simple approach: first get your infrastructure up to the right OS and running. Take one step at a time instead of a giant leap. Yes, of course you will have to design with the future in mind (VDI with workspaces will open the environment to mobility), but you first have to make this big change a successful one. Let the infrastructure sink in, get your issues out, and let the organization get used to this change. After that, with the design in mind, it should be a nice easy project to add mobility to your upgraded environment.
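As promised in the assess step above, a starter sketch for a quick OS inventory sweep. It assumes the third-party Python wmi package, remote WMI access to the clients and a hosts.txt with one hostname per line, all assumptions for illustration; the real assessment tools mentioned earlier go much further than this.

```python
# Sweep a list of hostnames and record which ones still run Windows XP.
# Requires the "wmi" package (pip install wmi) and remote WMI/DCOM access;
# hosts.txt (one hostname per line) is a hypothetical input file.
import csv
import wmi

with open("hosts.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

with open("os_inventory.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["host", "os_caption", "os_version"])
    for host in hosts:
        try:
            conn = wmi.WMI(computer=host)
            os_info = conn.Win32_OperatingSystem()[0]
            writer.writerow([host, os_info.Caption, os_info.Version])
        except Exception as exc:  # unreachable host, access denied, etc.
            writer.writerow([host, "ERROR", str(exc)])
```

Windows XP machines will report a 5.1 version number, which gives you the first column of your migration scoping sheet.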

—-

For those on Windows XP: it was time to act, and there is still time to act. But you will have to do it now!

Else it is Tick Tick Boom! (just to get some more earworm out of my head)

– Happy migration!