Product Evaluation: Inuvika Open Virtual Desktop (OVD)

Occasionally I get a request, or some urge bubbles up in me, to look at vendor X with its product Y. And there is nothing wrong with that, as I like to keep a broader view on things and not just bet on one horse.

And so a request from Inuvika found me, asking to look at their evolution of the Open Virtual Desktop (OVD) solution. Okay, pulling the virtual desktop and application delivery triggers will get my attention for sure. Kudos for that. On top of that, the name Inuvika gets my curiosity running in a somewhat higher gear again. No problem, I will take a peek and see if I can brew up a blog article at the same time. That same time was almost a year ago… But I still wanted to take that peek. You will probably figure out that letting you read about OVD is a little bit overdue. Sorry for the delay…

A little notice up front: this blog post is my view only and not paid for, pre-published or otherwise influenced by the vendor. Their opinion might differ. Or not.

Wait what… Inuvika you say?

Yes, Inuvika (ĭ-noo′vĭk-ă). If you open up your browser you could learn that the company name is based on the Canadian town of Inuvik, where it can be very cold. And for 30 days in the year the sun doesn’t rise above the horizon (*wink* *wink*). In such a place you need a strong community and a collaborative approach to be able to live in such a harsh environment. Their product strategy is the same: offering an open-source solution and collaborating with the community out there (however, the separate community version and site is dead).
The Inuvika mothership is based in Toronto, so hopefully that doesn’t lose a bit of the magic just introduced ;). But wherever they are based, it does not change the approach of Inuvika.

Main thing: the guys and gals from Inuvika are where you can get the Open Virtual Desktop from. Go to their site to download your version, or take a peek around.

Open Virtual Desktop sounds interesting enough, show me

Glad you asked. Let’s find out. We have the option to use a trial version for evaluation purposes, an enterprise license or the cloud version. I like it when we can find out a little about the bits and bytes ourselves, so I will be downloading OVD. But first up, some architecture to know what screws and bolts we need, or can opt out from.


The following diagram has been taken from the architecture and system requirements document and shows the components and the network flow of the system.

OVD-Architecture Overview

The OVD Roles:

  • The OVD Session Manager is the first required component. The OSM will be installed prior to the other components. As the master of puppets it is the session broker, administration console and centralized management point for the other OVD components.
  • The OVD Application Server is one of the slave servers that communicates with the OSM. The OAS is the component that serves the application and desktop pools to the users, accessed from either the web portal or the OVD Enterprise client. OAS is available in a Linux or Windows flavor. OAS servers can be pooled together and load balanced from the OSM. However, you will need Enterprise for that, as Foundation is limited to one application server (seriously, just one?).
  • The OVD Web Access. OWA is responsible for managing and brokering web sessions. Now where did we see that abbreviation before… Either using Java (going away in a next release) or HTML5, SSL tunneled if required. If using OVD clients only, this component is not needed. OWA also offers a JavaScript API to integrate OVD with other web-based applications.
  • The OVD File Server. The OFS component offers a centralized network file system to the users of the OAS servers, keeping access to the same data regardless of which OAS the user is on. The data can be user profiles, application data or other company data. The data is only accessible from the OAS sessions and is not published in another way, like a content locker or Dropbox.
  • ESG (hey, wait, no O-something-something). The Enterprise Secure Gateway is used as a unified access layer for external, but optionally also internal, connections. ESG tunnels all the OVD connections between the client and itself over an HTTPS session. So from any location, users that have access to HTTPS (443) will also be able to start an OVD session. If not using ESG tunnels, the OVD client will need HTTPS and RDP open to the OAS. ESG requires the Enterprise license.
  • Furthermore, 2.3.0 brings a tech preview of OWAC, the Web Application Connector, which offers SSO integration as an identity appliance.

All components run on a Linux distribution, supporting the flavors RHEL, CentOS and Ubuntu LTS. The only component where Windows will be used is when OAS is offering Windows desktops or Windows-based applications on RDS services. Supported RDS OS versions are Windows 2K8R2, W2012 and W2012R2. Isn’t it time for Windows Server 2016 by now?

In the OVD architecture we see familiar sorts of components that we see in similar virtual desktop solutions, only with a bit of different naming. At first overview the OVD architecture seems like what we are used to; no barriers to cross here.

In a production environment the Inuvika OVD installation will use several servers, each doing their specific role. Some roles you will always see in an OVD deployment. Others are optional or can be configured to run together with other roles. And external dependencies enter the mix as well, with load balancers in front of OWA for example. Small shops will have some roles combined while running a smaller number of OAS instances.

It all depends on the environment size and requirements you have for availability, scalability, resilience, security and so on.

Into the Bat-lab

Come on Robin, to the Bat Cave! I mean the test lab. Time to see OVD in action and take it for a spin. Lab action that is; however, Inuvika also offers access to a hosted demo platform if you don’t have a lab or test environment lying around. From the download page you can download the Demo Appliance or register for the OVD full installation. I will use the demo appliance for this blog post, as I would probably be installing multiple roles on the same virtual machine anyway. The Demo Appliance is a virtual machine with the following OVD roles installed:

  • OVD Session Manager (OSM)
  • OVD Web Access (OWA)
  • OVD Application Server for Linux (OAS)
  • OVD File Server (OFS)

I will be using my Ravello Cloud vTestlab to host the OVD. So first I have to upload the OVA into the Ravello library. Once available in Ravello I can create a lab environment. I could just import the OVD, but I also want to see some client and AD integration if possible, so I added my vTestlab domain controller and Windows 10 clients into the mix.

Invuvika Demo Lab

Let’s see if I can use them both, or whether I am wasting CPU cycles in Ravello. Good thing April is halfway through and I still have 720 CPU hours remaining this month, so not much of a problem in my book.

When starting the OVD demo appliance it will boot into the Inuvika Configuration Tools. Choose your keyboard settings (US). And presto, the appliance starts up with the IP I configured while deploying the application.

OVD - Demo Console after start

Here you can also capture the login details for the appliance: inuvika/inuvika. The default user for the administration console is admin/admin. Open up a browser and point to the FQDN or IP for web access: http://&lt;your appliance&gt;/. Here we are greeted by a page where we can start a user session, open the administration console, the documentation, and the installer bits for the Windows AS and the clients.

The user sessions offered in the demo appliance are based on the internal users and an internal Ubuntu desktop and applications. The client can be set to desktop mode, which is a virtual desktop with the applications published to the user. Or it can be set to portal mode, where the user is presented with a portal (so it’s not just a clever name) with all their application entitlements. The client starts with Java to allow for redirecting drives; using HTML5 will not allow a drive to be redirected. The demo appliance is populated with demo users where the password is the same as the user name. Just enter cholland with password cholland in the client screen and you will be presented with a user session.

OVD Web login.png

And see the portal with the user’s entitlements and the file browser for data exchange between sessions.

OVD Demo - Client Portal

Start up a Firefox browser session and open my blog. Yup, it all works.

OVD - Client Firefox Blog

For using the Enterprise client the demo appliance needs to be switched to Enterprise. And you need a license for that! Via the admin console you need to set the system in maintenance mode. Via the appliance console, after logging in, you get the menu where you can choose option 3, Install OVD Enterprise. After this you can set the system back to production, are greeted by a subscription error, and via Configuration – Subscription Keys you can upload the license file. When a valid license is installed you can run the Enterprise client for your evaluation. The client options are somewhat similar to those of the web client, besides adding the site name in the client instead of a browser URL.

OVD Ent Client Login

We also have the administration console. While this has a bit more options, I am not trying to rewrite the documentation, so I will show some of the parts. Basically, try out the options yourself to see what the differences are.

We are greeted by an index page with an environment overview and user/application publications. These will be the main actions when using the product. Of course we also have some menu options for reporting and configuration.

OVD - Admin Index

Let’s see if we can get some AD users in and entitle them to the demo. It seems like a lot of organizations have their identity source already in place, and Microsoft Active Directory is often what is used there. The Configuration option seems like a logical place to start. And here we have the domain integration settings. Currently it is set to the internal database. Let’s get some information into the Microsoft option to see if we can get the AD part in.

OVD - Configuration

I am using the internal users to keep it simple and leave in the support for Linux. This is a demo, not production.

When the information is filled in and added, push the test button to see if the LDAP connect and bind work. Save when all is green. Problems here? Go to Status – Logs to see wtf is happening. The main issues can be DNS, time offset, or the configured account not having the correct information or UPN in the domain. The OVD Linux bind command tries [email protected], hardcoded.

And voilà, Administrator from the vTestlab domain has a session connected:

OVD - Administrator Session

My opinion about OVD

It works out of the box with any HTML5 browser. Or you can of course use the Enterprise client, but this will require an Enterprise license and RDP or i-RDP to the client desktops (or ESG to be SSL tunneled).

[Edit] I must correct my previous statement that Inuvika is using RDP as an enterprise display protocol. That is not entirely true. OVD uses RemoteFX with the Enterprise Desktop Client and Windows Application Servers. RemoteFX is a set of technologies on top of RDP that enhances the visual experience significantly in comparison with older (non-RemoteFX) RDP. Indeed better for the user experience; how much better we will leave up to the users. For Linux Application Servers there is no RemoteFX support yet; this is forthcoming.
[Close Edit]

For HTML browser user connections, or when using the Enterprise client in combination with the ESG, OVD utilizes HTTPS (tcp/443) and thus is roadwarrior friendly. With roadwarrior friendly I mean a service that is firewall friendly and makes hotel, Starbucks or airport WiFi a place to use the environment without blockages, changing ports, VPN tunnels, or not being able to use the service remotely from that location.

For IT operations, administration happens in a single console. No scattered consoles or admin tools all over the place. And no dependencies, like the disliked Flash plugin of some other solution out there ;). Furthermore, the expected components are there in a logical location.

Cross-publishing apps between distributions is a very nice feature. Windows apps in a Linux desktop or Linux apps with Windows, great. Or add web applications to the mix. Furthermore, Inuvika is not bound by a stack choice or hypervisor. VMware vSphere yes, Nutanix (Nutanix Ready AHV) yes, KVM, etc., yes.

The use cases, applications and desktops still have to be assessed and designed accordingly. And these will be the most important bits for the users. This is what makes or breaks an EUC environment. I don’t see a lot of users currently on Windows-based desktops and applications going to Linux desktops and apps without more or less resistance and opposition. That Windows will be in there for now. But this is the same for the other vendors, not much difference here.

I personally don’t know what the user experience is when doing your day-to-day working throughout the business cycle. I haven’t come across Inuvika OVD in the wild.

One of the strong points of going open source is that the product will be improved by the contributions of the community (if there still is a community version…). That will mitigate some of the above, but it also requires the OVD community to have a footprint of some sort for the required input and change. If the community is too small, it will not be able to help Inuvika and the OVD user base.

I think cost-wise it will be interesting for some shops out there looking to replace their EUC solutions and in the meantime looking for ways to cut costs. These shops have probably already had some issues and bad experiences with their current solution along the way. I do not think organizations happy with VMware Horizon or Citrix will be lining up to replace their EUC with Inuvika. Yet… that is.
This is a fast world, and it is interesting to see that there are vendors thinking outside of the paved roads. It makes their solution, but also others, a better place for the users. It’s the community and open source that is really interesting here. So just give it a go and see for yourself. Don’t forget to share your experience with the community.

– Happy using your OVD from Inuvika!


Stories from the VMworld Solutions Exchange: PernixData FVP: The what and installation in the lab.

This year my plan is to write up some blog posts about some of the solutions of the partners at the VMworld Solutions Exchange. Next to the VMworld general sessions, technical sessions, hanging space and Hands-on Labs, an important part of VMworld is the partner ecosystem at the Solutions Exchange. I have visited the Solutions Exchange floor several times this year (not only for the hall crawl) and I wanted to make a series about some of the companies I spoke with. Some I already know and some are new to me. The series is in absolutely no order of importance, but about companies that all offer cool products/solutions and present them with a whole lotta love to help businesses and technologies get happy. I have been a bit busy these last couple of weeks, so this is probably a bit later than I first wanted, but here goes…

This time it is about getting familiar with PernixData’s FVP. I have seen enough of the concepts on the communities and the big bad Intarweb, and I had a chance to see it in real action in some demos and the shots in the technical presentations by PernixData (thanks for those sessions, guys 🙂). Time to take it for a spin in the test lab. But first a little why and what before the how.

PernixData Logo

What is PernixData FVP?

To know about PernixData FVP you first have to start at the issue that this software solution is trying to solve: IO bottlenecks in the primary storage infrastructure. These IO bottlenecks add serious latency to the application workloads the virtual infrastructure is serving out to the users. Slow responses or unusable applications are the result of these latency issues. This leads to frustrated and suffering end users (which leads to anger and the dark side), but it also creates increased work for the IT department, such as increased help desk calls, troubleshooting/analyzing performance issues, and extra costs to compensate (cost in personnel and hardware to patch up the issues).

One of the options often used to try and solve the IO puzzle is to add flash, at first mostly to the storage infrastructure as a caching layer or as all-flash storage arrays. Flash has microsecond response times and delivers more performance than magnetic spinning disks with their millisecond response times. The problems with adding flash to the storage infrastructure are recurring costs and not really solving the problem. Sure, giving the storage processors faster IO response and more flash capacity will bring an improvement of some sort versus traditional storage, but this needs constant investment when the limit is reached again, and the IO is still far from the workload. The IO still must travel through the busses, to the host adapter, over a network and through the storage processor to reach the flash, and back the same way for a response to the operation or the requested data. Each component adds some handling and its own response time. Not to mention adding workload to the storage processors with all the additional processing. Flash normally does its responses in microseconds. That seems to be a waste.
Okay, no problem, we add flash to the compute layer at the host. That is close to the application workloads. Yes, good, performance needs to cuddle with the workloads. We decouple storage performance and capacity into performance in the host and storage capacity in the traditional storage infrastructure. Only, just putting flash in the host does not solve it as a whole. The flash still needs to be presented to the workload, and locality issues need handling for VM mobility requirements (fault tolerance, HA, DRS and such). PernixData is not the only one trying to solve this issue, but some others present the local acceleration resources with a VSA (Virtual Storage Appliance) architecture. This in itself introduces yet another layer of complexity and additional IO handling, as those appliances act as an intermediate between workload, hypervisor and flash. Furthermore, as they are part of the virtual infrastructure, they may have to battle with other workloads (that are also using the VSA resources for IO) when needing host resources. We need a storage virtualization layer that solves the mobility issue, optimizes IO for flash and talks as directly as possible, or a protection mechanism or smart storage software of some sort for IO appliances (there are some solutions out there that also handle these). The first is where PernixData FVP comes into play.
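The latency argument above is easy to put into numbers. A back-of-the-napkin sketch (all figures are illustrative assumptions, not benchmarks) of why host-local flash wins on path length:

```python
# Illustrative per-component latencies in microseconds (assumed, not measured).
array_flash_path_us = {
    "guest + hypervisor": 50,
    "host adapter": 20,
    "storage network": 200,
    "storage processor": 300,
    "array flash media": 100,
}
host_flash_path_us = {
    "guest + hypervisor": 50,
    "local flash media": 100,
}

def path_latency(path):
    # Total latency is roughly the sum of each hop's handling time
    return sum(path.values())

print(path_latency(array_flash_path_us))  # 670 us: flash behind the array
print(path_latency(host_flash_path_us))   # 150 us: flash next to the workload
```

Whatever the real numbers in your environment, the shape of the argument holds: the media itself is fast, it is the path that eats the microseconds.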

PernixData Overview


The architecture of FVP is simple. All the intuition, magic and smartness is in the software itself. It uses flash and/or RAM on the host, a host extension on those hosts and a management server component. It currently works only with the VMware hypervisor (a lot of the smart people at PernixData come from previous work at VMware). It can work with backend storage in block (FC, iSCSI) or file (NFS) form, as well as direct attached storage. As long as it is on the VMware HCL.
The host extension is installed as a VMware VIB. The management server requires a Windows server and a database. The management server is installed as an extension to vCenter and uses a service account (with rights to the VMware infrastructure and the database) or a local user (can be SSO only). With the latter it uses Local System as the service account.
When adding a second FVP host, this host is automatically added to the default fault domain. By default, local acceleration is replicated to one peer in the same fault domain (with the write-back policy). This works out of the box, but you will probably need to match the domains (add your own) and settings to the architecture you are using. The default fault domain cannot be renamed, removed, or given explicit associations.


After installing the flash or RAM to use for acceleration in the hosts, we can install the host extension with access to the ESXi shell (local or remote with SSH). I downloaded the FVP components and placed the zip with the host extension on the local datastore, as I’m not installing across a lot of hosts. To install a VIB the host must be in maintenance mode.

~ # esxcli system maintenanceMode set --enable true
~ # esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-exte
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: PernixData_bootbank_pernixcore-vSphere5.5.0_2.0.0.2-32837
VIBs Removed:
VIBs Skipped:
~ # esxcli system maintenanceMode set --enable false

Next up is the management server. The Windows installer itself is pretty straightforward. Have a service account, a database, and access to the database and the vCenter inventory for that service account set up beforehand, and you are ready to roll.

Install - FVP Management Server IP-port Install - SQL Express Install - vCenter

When installing with the Web Client, the FVP client add-on is added when registering with vCenter. Restart any active session by logging off and on, and FVP will show up in the object navigator.

Web client Inventory

Next up, create your FVP cluster as a transparent IO acceleration tier and associate it with a vCenter cluster of the hosts to accelerate (with the host extension and local resources). The hosts in this vCenter cluster will be added to the FVP cluster. With the cluster created and the hosts visible in the FVP cluster, we add the local acceleration resources (the flash and/or RAM) to use. Next we add datastores and VMs. At this level we can set the policies to use; depending on your environment, certain policies will be in effect. In the manage advanced tab we can blacklist VMs to be excluded from the acceleration (VADP or other reasons). In the advanced tab we can also define which network to use for acceleration traffic; by default FVP automatically chooses a single vMotion network. In the manage advanced tab we can also put in the license information or create a support bundle if we ever need one. The manage fault domain tab is not just a clever name: here we can see the default domain and add our own when needed.

Add Flash Devices Add FVP Cluster Fault Domain
The monitoring tab is where we have the opportunity to look at what your environment is doing. An overview of what FVP is doing is shown in the summary tab. These are great places to get some more insight on what your workloads’ IO is doing. My test lab is getting acceleration from the moment it is started.

Monitor Tab Performance results writethrough and writeback two login sessions

I can also show a first comparison of the write policies on a Login VSI workload. (Keep in mind my test lab isn’t that much.) LoginVSI-Policies

But that is more something for another blog post about FVP.


When an application issues a write IO, the data is committed to the flash device; however, this data must always be written to the storage infrastructure as well. The timing of the write operation to the storage infrastructure is controlled by the write policies in FVP. There are two policies to choose from: the write-through and the write-back policy.

Write-Through. When a virtual machine application workload issues an IO operation, FVP will determine if it can serve it from flash. When it’s a write IO, the write goes straight to the storage infrastructure and the data is copied to the flash device. FVP acknowledges the completion of the operation to the application after it receives the acknowledgement from the storage system. In effect, write IOs are not accelerated by the flash devices with this write policy, but all subsequent reads of that particular data are served from flash. Write IOs still benefit from the read acceleration, as those read operations/requests no longer hit the storage infrastructure, leaving more resources available there to serve the write IOs.

Write-Back. This policy accelerates both read and write IO. When a virtual machine application workload issues a write IO operation, FVP forwards the command to the flash device. FVP acknowledges the write to the application first and then handles the write operation to the storage system in the background. With these delayed writes we have a situation where, for a small time window, the data is on the host and not yet written to the backend storage infrastructure. If something happens to the vSphere host, this could end in data loss. For this, replicas of the data are used. With replicas, the data is forwarded to the local and one or more remote acceleration devices. This results in the application having flash-level latencies while FVP deals with fault tolerance, and the latency/throughput performance levels of the storage infrastructure.

FVP allows you to set policies on datastores or on a per-VM level. You can have an environment where a great number of virtual machines run in write-through mode, while others run in write-back.
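The two policies above can be sketched as a toy cache model (my own illustration, not PernixData code): write-through commits to the backend before acknowledging, while write-back acknowledges from flash and destages later, with peer replicas protecting the not-yet-destaged data.

```python
class ToyWriteCache:
    """Toy sketch of write-through vs write-back acceleration policies."""

    def __init__(self, policy="write-through"):
        self.policy = policy
        self.flash = {}      # host-side acceleration tier
        self.backend = {}    # storage infrastructure
        self.pending = []    # destaging queue (write-back only)
        self.replicas = []   # peer copies guarding un-destaged data

    def write(self, key, data):
        self.flash[key] = data                # data always lands on flash
        if self.policy == "write-through":
            self.backend[key] = data          # ack only after backend commit
        else:                                 # write-back
            self.pending.append((key, data))  # ack now, destage later
            self.replicas.append((key, data)) # peer replica vs host failure

    def destage(self):
        # Background flush of delayed writes to the storage infrastructure
        while self.pending:
            key, data = self.pending.pop(0)
            self.backend[key] = data

    def read(self, key):
        # Reads are served from flash when possible, else from the backend
        return self.flash.get(key, self.backend.get(key))
```

With `policy="write-back"` a write returns before the backend has the data, which is exactly the small loss window the replicas exist to cover.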


You notice from the installation and usage of FVP that this product is meant to give simplicity to its operators. Just add a VIB and some initial configuration, and your FVP solution is running and showing improvements within a few minutes (when you have some flash/RAM; else it’s installing those first). No calculating, sizing and deploying virtual appliances that IO flows through; with FVP’s extension it is talking to the right place. Yes, you will have to have a Windows server for the management component, but this is out of band of the IO flow.
If you have some experience in the VMware product line and understand the way the PernixData product is set up, the level of entry to using FVP is super low. You just have to familiarize yourself with the way FVP handles your workload performance/IO (the policies, settings and the places to set them), next to actually knowing some of the workloads that you have in the environment and what they are doing with IO. And there FVP can be of assistance as well: next to accelerating your IO workloads, it will give you lots of insight into what storage IO is doing in your environment by presenting several metrics in the management interface of vSphere. And that’s another big simplicity, integrating seamlessly into one management layer.



Atlantis ILIO – RAM Based storage matched for VDI

I personally am very fond of solutions that handle IO close to the source and therefore give more performance to your virtual machine workload and minimize (or preferably skip) the storage footprint downstream. I previously wrote a blog post about solutions you can use at the host. One of these solutions is Atlantis ILIO.
As the company I currently work for, Qwise, is also a partner for consulting on and delivering Atlantis ILIO solutions, I thought one plus one is… three.

If you’re not familiar with Atlantis ILIO: it works by running an Atlantis appliance (VSA) on each of your hypervisor hosts (dedicated to VDI, for example) and presenting an NFS or iSCSI datastore that all the VMs on that host use. For this datastore it uses a configured part of the host’s RAM to handle all reads and writes directly from that RAM (that is, when you deploy the VM there and you have reserved this RAM for this kind of usage). The IO traffic is first analyzed by Atlantis to reduce the amount of IO, then the data is de-duplicated and compressed before being written to server RAM. When needed, Atlantis ILIO converts small random IO blocks into larger blocks of sequential IO before sending them to storage, increasing storage and desktop performance. This counters the IO blender effect.
The OS footprint is minimized to a rather small one in RAM; numbers of 90 percent reduction can be reached depending on the type of workload. Any data that is written to the external storage (outside of RAM) also undergoes write coalescing before it is written.
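Write coalescing itself is a simple idea: merge small adjacent random writes into larger sequential extents before they hit storage. A toy sketch (my own illustration, not Atlantis code; the flush-size limit is an assumption):

```python
def coalesce(writes, max_extent=64 * 1024):
    """Merge adjacent/overlapping (offset, size) writes into larger
    sequential extents. Offsets and sizes in bytes; max_extent is an
    assumed upper bound on a merged extent."""
    merged = []
    for off, size in sorted(writes):
        if merged:
            last_off, last_size = merged[-1]
            adjacent = off <= last_off + last_size
            fits = (off + size) - last_off <= max_extent
            if adjacent and fits:
                # Extend the previous extent instead of issuing a new IO
                merged[-1] = (last_off,
                              max(last_off + last_size, off + size) - last_off)
                continue
        merged.append((off, size))
    return merged
```

Four scattered 4 KB writes at offsets 0, 4096, 8192 and 65536 become two IOs: one 12 KB sequential extent and one lone 4 KB write.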

Since Atlantis will only store each data block once, regardless of how many VMs on that host use that block, you can run dozens or hundreds of VMs off just a tiny amount of memory.
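That single-instancing claim is basically a content-addressed block store. A toy sketch of the idea (my own illustration, not Atlantis code), combining dedup with compression as described above:

```python
import hashlib
import zlib

class DedupStore:
    """Toy content-addressed block store: each unique block is stored
    (compressed) once, however many VMs reference it."""

    def __init__(self):
        self.blocks = {}  # digest -> compressed block, stored once
        self.refs = {}    # (vm, lba) -> digest

    def write(self, vm, lba, block: bytes):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:          # store unique content only
            self.blocks[digest] = zlib.compress(block)
        self.refs[(vm, lba)] = digest          # every VM just keeps a pointer

    def footprint(self):
        # Bytes actually held in RAM for block data
        return sum(len(b) for b in self.blocks.values())
```

A hundred VMs writing the same OS block cost one compressed copy plus a hundred tiny references, which is where the 90-percent-class footprint reductions come from.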

And what does RAM give? A warp-speed user experience and faster deployment.

Atlantis ILIO can be used for stateless VDI (completely in RAM), persistent VDI (out of server memory or shared-storage backed), XenApp, and can also be used with virtual server infrastructures.

Atlantis ILIO Architecture


Like written before, Atlantis ILIO is deployed as an appliance on each host, or on a host that serves a complete rack. This appliance is an Atlantis ILIO (or ILIO for short) controller or instance. The Atlantis ILIO appliance uses a defined part of the host’s RAM to present an NFS or iSCSI datastore via the hypervisor. Here you can place the VDs, XenApp servers or other VMs that need accelerating. ILIO sits in the IO stream between your VM, hypervisor and storage. You need the correct Atlantis product to use the optimized features for the wanted solution workload, currently VDI and XenApp. Keep an eye out for other server solutions; they’re bound to come out this half of 2014.
In the above model the hypervisor is VMware vSphere with a stateless VDI deployment, but this can be Citrix XenServer or Microsoft Hyper-V, as Atlantis supports these also. The Atlantis-presented storage can easily be used to accelerate PVS or MCS for XenDesktop provisioning. Or in combination with some form of local or shared storage for persistent desktops’ unique user data.

Atlantis ILIO Management Center.

The Atlantis ILIO Management Center will setup, discover and manage one or more Atlantis ILIO instances. The ILIO Center is a virtual appliance that is registered with a VMware vCenter cluster. Once ILIO Center is registered with a vCenter, ILIO Center can discover Atlantis ILIO instances that are in the same vCenter management cluster and selectively install a Management Agent on Atlantis ILIO instances. If additional vCenter clusters with Atlantis ILIO instances exist, then an ILIO Center virtual machine can be created and registered for each cluster.
The ILIO Center can be used for provisioning of ILIO instances, monitoring and alerting, maintenance (patching and updates) and for (probably the most important part) reporting on the status of the ILIO process and the handled IO offload (for example, what amount of blocks is de-duped). The ILIO Center can also be used to fast clone a VD image. This clones full desktop VMs in as little as 5 seconds without network or storage traffic.

Hosts and High Availability (mainly for persistent deployments)

Atlantis supports creating a synchronous FT cluster of Atlantis ILIO virtual machines on different hosts to provide resiliency and “zero downtime” during hardware failures. Atlantis supports using HA across multiple hosts or automatically restarting virtual machines on the same host. 

A host that is offering resources to a specific workload, for example the VDs, is called a session host. This session host can use local or shared storage for its unique data storage. With shared storage, when a failure happens you can use the hypervisor HA (together with a DRS VM rule to keep appliance and VDs together). When using local storage in vSphere this is not an option, as HA requires a form of shared storage. For this you can use ILIO clustering with replication.

For the availability of unique local host data, a replication host and a standby host come into the picture.

In an Atlantis ILIO persistent VD solution, a replication host is a centrally placed Atlantis ILIO instance that maintains the master copy of each user’s unique data. The desktop reads and writes its IO in the RAM of the session host. The session host (after handling the IO) then replicates any unique compressed data over to the replication host. This replication is called Fast Replication and is handled over the internal out-of-band Atlantis ILIO network. The replication host is shared-storage backed, and that is where the unique user data is written. There is also a standby host that stands by for the replication host. This standby host has the same access to the shared storage location as the replication host. In case the replication host fails, the standby host takes over and has access to the same unique user data on the shared storage. Keep in mind that, depending upon your workload, between 5 and 8 session hosts can share a single replication host.
Disk-backed configurations that leverage external shared storage do not need a Replication Host as ILIO Fast Replication mirrors the desktop data directly from the external storage to this shared storage
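The write path described above can be mimicked in a few lines. All class and method names below are my own invention to illustrate the flow, not Atlantis interfaces: the IO is acknowledged from session host RAM, and only unique data is compressed and shipped to the replication host.

```python
import hashlib
import zlib

class ReplicationHost:
    """Toy stand-in for the shared-storage-backed replication host."""
    def __init__(self):
        self.master = {}  # content hash -> compressed block (master copy)

    def receive(self, digest, payload):
        self.master[digest] = payload

class SessionHost:
    """Writes land in RAM first; only unique data is compressed and replicated."""
    def __init__(self, replica):
        self.ram = {}          # offset -> block, the in-RAM datastore
        self.replica = replica

    def write(self, offset, block):
        self.ram[offset] = block  # IO is acknowledged from RAM
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.replica.master:  # replicate unique data only
            self.replica.receive(digest, zlib.compress(block))

replica = ReplicationHost()
host = SessionHost(replica)
for off in range(10):
    host.write(off, b"\x00" * 4096)  # ten writes of the same block
print(len(host.ram), len(replica.master))  # → 10 1
```

Ten writes reach RAM, but only one unique block crosses the replication network, which is why a single replication host can keep up with 5 to 8 session hosts.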

For non-persistent, stateless VDs the data stays purely in RAM. VMware Horizon View or Citrix XenDesktop will notice that the VDs are down when the host fails, and will make new VDs available on another host. Users will temporarily experience a disconnect, but their workspaces will reconnect when available again.


With interesting RAM pricing and reduced infrastructure complexity, Atlantis ILIO is the perfect solution to use in (re)building a VD infrastructure with already in place solutions or solution components. You can provision lightning-fast VDs and engage your workforce at warp speed productivity, at a cost of around 150-200 euro per user. You can host hundreds of VDs with just a small amount of configured RAM on one host. Next to this you will have a much smaller unique IO data footprint on your shared storage. No need to go with expensive accelerated storage infrastructure controllers; you can easily go with a cheaper SAN/NAS/JBOD or a no-SAN solution.

– Happy Atlantis ILIO’ing!

Evaluations – Veeam ONE v7

A few blog posts back I did an evaluation of Veeam Backup and Replication v7. A logical step onward from a backup and replication solution is a management solution, to manage your backup and replication and monitor your virtual infrastructure from one place. Veeam has the Veeam ONE product for that.
Veeam actually has another solution for management and monitoring of your backup solution, targeted at enterprise customers: the Veeam Backup Management Suite. With the latter you will miss the monitoring of the virtual infrastructure (and probably gain some other things, but that comparison is outside this scope). For now I will concentrate on Veeam ONE.

What is Veeam ONE?

Veeam ONE is a single solution for managing virtual infrastructures and Veeam Backup and Replication. This solution enables real-time monitoring, capacity planning, documentation, mapping and reporting for virtual infrastructures based on VMware vSphere or Microsoft Hyper-V, and Veeam Backup and Replication.

Veeam ONE comes in a licensed full edition (per CPU socket of a managed/monitored host) and a free edition. The free edition includes all the core functionality of the full one, but is restricted in some of the features (either a lower threshold or not available at all). These restrictions limit the scale of the monitored infrastructure, the amount of historical data and the reporting, and thus limit your capabilities to thoroughly analyse, trend and forecast your environment. For small deployments this is less of a problem (as there are other means) than for bigger environments (where those means are not automated or not in a single management solution).


So we now know the why; now for the what: some Veeam ONE architecture. The Veeam ONE architecture is composed of the Veeam ONE components and the components of the monitored infrastructures. As stated above, Veeam ONE can monitor virtual infrastructures from VMware vSphere and Hyper-V. Veeam ONE is deployed either as a virtual machine in these environments (probably in a back-end or infrastructure cluster) or as a physical server outside these infrastructures. The virtual infrastructure nodes can be monitored as hosts or via a management component such as vCenter or SCVMM.
Secondly, Veeam ONE monitors Veeam Backup and Replication, so it needs to be able to access the Veeam Backup server.


But Veeam ONE of course has its own architecture as well. Veeam ONE is a client-server architecture and incorporates the following structural components:
– Veeam ONE Server – a virtual or physical server responsible for collecting data from the virtual infrastructure components (hosts, vCenter or vCloud Director) and Veeam Backup and Replication, and storing this in a SQL database. The Veeam ONE Server is actually composed of two parts: the Veeam ONE Monitoring Server and the Veeam ONE Reporting Server.
– Veeam ONE Web UI – the client part that communicates with the SQL database to access data for viewing reports and customizing infrastructure views. This client is composed of the Reporting Client and the Business View Client.
– Veeam ONE Monitor Client – the client part that is used to connect to the Veeam ONE Monitoring Server. This is the primary tool for monitoring your environment.
– Veeam ONE Database – internally MS SQL Express 2008 R2, or an MS SQL Server (2005 up to 2012) outside the environment. For reporting, MS SQL 2008 Reporting Services can be included.

Deployment of Veeam ONE can be done in a typical setup with all components on one server, or an advanced setup with components separated on several servers.


For the evaluation I’m doing a simple typical deployment with a Server 2012 host as Veeam backup server and repository plus Veeam ONE server, and a VMware ESXi 5.5 host managed by a vCenter Server 5.5 Appliance (which is not yet supported, but I will find out if it works; do not use this in a production environment).


The wizard starts when you push the appropriate installer option, in my case the Veeam ONE server. You can input your license file for the full edition, or use the free edition when you have none. I’m using an NFR license for lab/demo purposes.


Next up I’m selecting the typical setup option. After this, the prerequisites are checked and you are shown the results. If there are failed items in there (my setup wasn’t prepared), you have the option to let the installer install them for you (push that button).


Wait for those to be installed; a re-check is then performed. Continue when the status is passed.

I’m using the defaults for the locations; be sure to change them to your needs. Add a service account, preferably from the domain. My lab consists of a single Windows 2012 server to which I haven’t added the AD DS role, so I’m going for a local account.
For the database, use an existing SQL instance or let the installer add an MS SQL Express 2008 R2 one for you. I’m going for an existing database instance: the Express instance I installed with the local Veeam Backup and Replication installation.


When using an existing one, be sure your service account is granted access and the required permissions.

Ports can remain the default ones.
You can now connect to your virtual infrastructure from the installer. The same goes for your Veeam Backup and Replication. I’m doing it from the installer, connecting to the VCSA (as accessing vCenter has not changed from 5.1 to 5.5, Veeam connects to vCSA 5.5 just fine) and the local Veeam Backup and Replication.

You can add or change these connections at a later time, in that case just select the skip options.

And hit the install button to start going for distance….


To finish the installation you will have to log off.

This gives us the three application icons.

Opening the Business View icon will open Internet Explorer (or another preferred browser) with the application URL. In IE you probably have to trust the application, or it will be blocked by default. Veeam ONE Monitor gives you insight into your infrastructure, business view (after defining it), data protection (backup and replication, like shown below) and alerting.

Next up is defining notifications, rules, categories (for example SLA), groups, etc. to have your environment monitored according to organizational needs. But for now we will stop here.

This concludes the introduction of Veeam ONE and the typical (and basic) installation of Veeam ONE.

– Enjoy Veeam ONE with your environment!

Evaluations – Veeam Backup and Replication version 7 – What and Installation

And now for something completely different… Well, different; it still has to do with a virtual infrastructure. Evaluating version 7 of Veeam Backup and Replication.

What is Veeam Backup and replication?

Veeam Backup and replication is a data protection and disaster recovery solution for virtual infrastructures. It supports virtual infrastructures from VMware or Hyper-V.
It brings features such as instant file-level recovery and VM recovery, scalability, backup & replication, built-in de-duplication, and centralized backup and replication management for your infrastructure.

To produce a backup, Veeam Backup leverages VMware snapshot capabilities. The VMware snapshot technology lets you back up VMs without suspending them; this is also known as online hot backup.
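The snapshot-based hot backup flow can be sketched as follows. The classes are toy stand-ins of my own, not the Veeam or vSphere API; the point is the shape of the flow: snapshot, copy the frozen state while the VM keeps running, and always remove the snapshot afterwards.

```python
class Snapshot:
    """Frozen, read-only view of the VM's disks at a point in time."""
    def __init__(self, disks):
        self.disks = list(disks)
        self.removed = False

    def remove(self):
        self.removed = True  # merge the delta back into the running VM

class VM:
    def __init__(self, name, disks):
        self.name, self.disks = name, disks

    def create_snapshot(self):
        return Snapshot(self.disks)

def hot_backup(vm, repo):
    """Online hot backup: snapshot, copy the frozen state, always clean up."""
    snap = vm.create_snapshot()
    try:
        # the VM keeps running while we read the read-only snapshot disks
        repo[vm.name] = list(snap.disks)
    finally:
        snap.remove()  # even if the copy fails, never leave a snapshot behind
    return snap.removed

repo = {}
vm = VM("web01", [b"disk0-data"])
print(hot_backup(vm, repo))  # → True
```

The try/finally is the essential bit: a lingering snapshot keeps growing a delta file, so real backup tools remove it whether the copy succeeded or not.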



The picture above (picture credits to the Veeam Evaluation Guide) shows the components that make up the Veeam Backup and Replication infrastructure:

  • Veeam Backup server—a physical or virtual machine. The Veeam Backup server is the core component: responsible for configuration and management.
  • Backup proxy—a “data mover” component used to process VM data and transfer it to the target datastores.
  • Backup repository—a storage location for storing backup files, VM copies and replicas.
  • Virtual infrastructure servers—ESXi or Hyper-V hosts which are sources and targets for backup and replication operations.
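The division of labour between these components can be mimicked with toy classes (my own sketch, not Veeam's actual interfaces): the backup server orchestrates the job, the proxy moves the data, and the repository stores the resulting backup files.

```python
class Repository:
    """Storage location for backup files."""
    def __init__(self):
        self.files = {}

    def store(self, name, data):
        self.files[name] = data

class Proxy:
    """The data mover: reads from the source host, writes to the repository."""
    def transfer(self, host, vm_name, repo):
        repo.store(vm_name + ".vbk", host[vm_name])

class BackupServer:
    """Core component: holds the configuration and drives the job."""
    def __init__(self, proxy, repo):
        self.proxy, self.repo = proxy, repo

    def run_job(self, host, vms):
        for vm in vms:
            self.proxy.transfer(host, vm, self.repo)

# a "host" with two VM disk images
esxi_host = {"web01": b"disk-image", "db01": b"disk-image"}
server = BackupServer(Proxy(), Repository())
server.run_job(esxi_host, ["web01", "db01"])
print(sorted(server.repo.files))  # → ['db01.vbk', 'web01.vbk']
```

Splitting the mover out of the orchestrator is what lets real deployments scale: you add proxies near the data without touching the backup server.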


For the evaluation I’m doing a simple deployment with a Server 2012 host as backup server and repository, and a VMware ESXi host managed by a vCenter Server Appliance. I’m not using multiple cores/processors, so you will get a warning about data processing times.


The wizard starts when you push the appropriate installer. You can input your license file, or use the free edition when you have none. I’m using an NFR license for demo purposes.


I’m doing the complete setup, not changing the default install locations; I currently just have one disk connected. Prerequisite software checks are done next. If you are not compliant, push the install button to get the required software.

Connect with a local admin account (from the domain or not) and use an existing SQL instance or let the installer add an MS SQL Express 2008 R2 one for you (I’m currently going for the latter). Ports can remain the default ones. The same goes for the locations; be sure to change them to your needs. And hit the install button to start the engines….


And have a little patience for the installer to finish. And lift off..


Now let’s add the virtual servers. Go to Backup Infrastructure – Managed Servers and right-click to select Add Server. You can select vSphere, vCloud, Hyper-V and Windows hosts. Add the VCSA via the vSphere option.

Add the VCSA credentials to the wizard (in my case the standard root/vmware combo). It takes a while, as my lab does not have enough resources…
The wizard will create a new VMware object in the backup inventory.


Next up, the backup proxy. As described earlier, this is the data mover and needs access to the source and destination datastores. This is a Windows server, either physical (with LUNs attached) or a VM. Add it as a managed server (add a Windows server at Managed Servers) and assign the backup proxy role (add it at Backup Proxies). I am using the same server for all roles, so it is already added to the server list and to the VMware proxies by the Veeam wizard.


Next up: a backup repository. This can be:
– a Windows server with storage attached.
– a Linux server with local or NFS-mounted storage.
– a CIFS share.

I have added a VMDK to my server and am using this as the backup repository. So I add a repository on a Microsoft Windows server, point it to this server and use Populate to find the appropriate disk. For additional features I’m also adding this server as a vPower NFS server.

And boom, your Veeam infrastructure is up and running in minutes. Just know the architecture components and prepare in advance. Surely this test lab is not sufficient for production, as I haven’t taken retention, archiving, access and RTO/RPO into account.

Next up is creating some jobs and filling up the repository. Go to the Backup & Replication pane, and add a backup job.

Adding a backup job is straightforward. Select the source machines (the what). Select the destination and which proxy to use.


One of the important screens is the Advanced Settings.


Here the backup mode can be selected, along with the storage settings and methods (use vSphere CBT).
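Changed Block Tracking (CBT) is what makes incremental runs cheap: instead of scanning every block of every disk, the job asks the hypervisor which blocks changed since the last run and reads only those. A toy illustration of the idea (my own sketch, not the actual VMware CBT API):

```python
class TrackedDisk:
    """Disk that records which block offsets changed since the last backup."""
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.changed = set()

    def write(self, offset, data):
        self.blocks[offset] = data
        self.changed.add(offset)  # tracking happens on every write

    def changed_since_last_backup(self):
        """Hand over the change set and reset tracking for the next cycle."""
        delta, self.changed = self.changed, set()
        return sorted(delta)

disk = TrackedDisk([b"\x00"] * 1000)
disk.changed_since_last_backup()         # full backup done; tracking resets
disk.write(7, b"\x01")
disk.write(42, b"\x02")
print(disk.changed_since_last_backup())  # → [7, 42]
```

An incremental run over this 1000-block disk only has to read 2 blocks, which is why enabling CBT is the recommended setting here.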

And voilà, start your engines: a test job can be run.


This concludes the Veeam Backup and Replication introduction and basic installation.

– Enjoy Veeaming across your virtual infrastructure.