PowerShell: Patching XenServer

And now for something different…
Last week I was asked whether applying multiple patches to a XenServer could be automated, instead of admins doing the patching manually. Sure, no problem: we can do this with PowerShell.


I am using xe.exe from the XenCenter installation to control the XenServers. xe.exe needs to be in the Path environment variable. If you installed XenCenter to D:\Program Files (x86)\XenCenter, you need to add that directory to the current path.

Path XenCenter
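As a quick sketch (the install path is an example, adjust it to your own installation), you can add the XenCenter directory to the Path for the current PowerShell session and verify that xe.exe resolves:

```powershell
# Add the XenCenter directory to the Path for this PowerShell session only.
# Example path - adjust to your own XenCenter installation.
$env:Path += ";D:\Program Files (x86)\XenCenter"

# Verify that xe.exe can now be found.
Get-Command xe.exe
```

For a permanent change, set the Path via the System Properties dialog or with [Environment]::SetEnvironmentVariable.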

Next up is PowerShell. As XenCenter needs to run on a Windows machine, be sure to install PowerShell if you are not running a version already; it should be present on a management server by default.

Download the required patches to a location on the server. This location is set in the script via the $UpdatePath variable, for example “E:\Patches\NEW-XEN-6.1\xen-6.1”.


Next up, set your pool master IP (or the IP of the standalone server). Currently the script takes two values: the pool master IP, hard coded, and the name of the server you want to patch, via user input. They can point to the same host, but the IP cannot be that of a slave; xe won’t connect to a slave in a pool setup. In the environment this script was written for, the hosts did not have DNS records, therefore both IP and name are used. Hosts without DNS records are not advisable!
The script also needs the root user and password. At this time these are hard coded.
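A quick way to check the hard-coded settings before running the full script is to ask the pool master for its pool UUID; a sketch, using the same variables as the script below:

```powershell
# Quick connectivity check against the pool master before patching.
# Uses the same hard-coded settings as the patch script.
$XenServerMst = "<pool master IP>"
$XenServer_username = "<user>"
$XenServer_password = "<passwd>"

# pool-list returns the pool UUID when the connection and credentials are good.
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password pool-list --minimal
if ($LASTEXITCODE -ne 0) {
    Write-Host "Cannot connect to $XenServerMst - check IP and credentials" -ForegroundColor Red
}
```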

What does the script do?

The script takes the name of the server to patch as user input (with Read-Host); this is the name of the server as it is known in XenCenter. Next, the script finds the host’s UUID via the host-list command.

The patching procedure is to disable the host (aka maintenance mode, via host-disable) and evacuate running VM’s via host-evacuate. The evacuate command is wrapped in an error check to be sure the evacuation succeeded; if not, the script exits here. We won’t apply patches and reboot with running VM’s. If you happen to run this on a non-pooled server, you will need to power off the VM’s yourself. If the script exits here, you can find the reason in the evacuate txt file in the working directory. Most of the time this is a memory-related error, but it can also be networking not available on another host, storage, mounted ISO’s or other items that make an evacuation fail.

If all is well, the script loops through the patch directory (Get-ChildItem of $UpdatePath) and each patch is uploaded to the server. The UUID of the patch is saved and the update is installed. And on to the next one.

Last is a server reboot to finish the updates. The script sleeps for five minutes to let the host start up, and then enables the host again. From this point you can distribute workloads back to this server and optionally run the script against the next one.

Status of the patching is shown in the execution window, and two files are saved during execution: Hostname_evacuate.txt and Hostname_uuid.txt. The first saves the host-evacuate output (if the file is empty, all went okay); the second is used to save the UUID’s of the patches.

XenServer Patch script

Note: Citrix recommends rebooting the server before applying a patch. Currently this is not in the script, as we were doing some other actions before patching that included a reboot. But this is easily added by copying the host-reboot line and pasting it just after the evacuation. Be sure to check whether the reboot re-enables the host or not.
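If you want that pre-patch reboot, a minimal sketch using the variables from the script below (note that a reboot can take longer than the five-minute sleep, so check the host state before continuing):

```powershell
# Optional pre-patch reboot, to be placed just after the evacuation step.
# Assumes $XenServerMst, $XenServer_username, $XenServer_password and $SrvUUID
# are set as in the script below.
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-reboot host=$SrvUUID
Start-Sleep -s 300  # give the host time to come back up; adjust as needed
```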

Putting it all together

The script:

# This script connects to the defined pool master, gets the server to patch, enters maintenance mode, patches the server, reboots and enables it again.
# Uses xe.exe from the XenCenter installation; requirement is to have this in the path of the system where this is run.
# Patches must be downloaded and placed in the UpdatePath location.

# The script leaves txt files for the evacuate and patch UUID actions. When the evacuation fails the script will stop; you can find the error in the text file.
# Mostly to do with networking or storage, or mounted tools on VM's...

# Citrix highly recommends rebooting XenServers prior to installing an update; this is not implemented in this script.

# Settings
$XenServerMst = "<pool master IP>" # Pool master IP must be filled in! xe cannot connect to a slave.
$XenServer_Name = (Read-Host "XenServer name to update") # Server to update.

$XenServer_username = "<user>"
$XenServer_password = "<passwd>"
$UpdatePath = "E:\Patches\NEW-XEN-6.1\xen-6.1"

# nothing should be changed below this comment

# Get the UUID of the host to patch.
Write-Host "Getting host uuid" -ForegroundColor Green
$SrvUUID = (xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-list name-label=$XenServer_Name --minimal).split(',') | % { $_.Trim() }

# Procedure per host: maintenance mode, evacuate, upload and apply patches, reboot, end maintenance.
Write-Host "$SrvUUID will be disabled..." -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-disable host=$SrvUUID

Write-Host "$SrvUUID will be evacuated... (this can take a long time)" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-evacuate host=$SrvUUID | Out-File ($XenServer_Name + "_evacuate.txt")

# Exit the script when the evacuation fails.
# xe.exe sets $LASTEXITCODE: 0 when the last operation succeeded, non-zero when it failed.
if ($LASTEXITCODE -eq 0) {
    Write-Host "Migrated. Continuing with update" -ForegroundColor Green
} else {
    Write-Host "Error with migration. Exiting script at this point. Updates are not installed" -ForegroundColor Red
    exit
}

foreach ($Update in Get-ChildItem $UpdatePath) {
# Loop through the updates.
# Every patch is applied with the same procedure: upload, apply, clean up.

Write-Host "$SrvUUID will be patched with $Update. Preparing to upload..." -ForegroundColor Green

Write-Host "$Update will be uploaded" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password patch-upload file-name=$UpdatePath\$Update\$Update.xsupdate | Out-File ($XenServer_Name + "_uuid.txt") -Append

$uuid_patch = (xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password patch-list name-label=$Update --minimal)

Write-Host "$Update will be installed on this server" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password patch-apply uuid=$uuid_patch host-uuid=$SrvUUID

Write-Host "After patching, remove patch files to protect against running out of disk space" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password patch-clean uuid=$uuid_patch

Write-Host "=======================================================" -ForegroundColor Green
} # End update loop

Write-Host "Post-patch reboot" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-reboot host=$SrvUUID

# Wait 300 seconds (5 minutes) for the reboot.
Write-Host "Waiting five minutes for the reboot before continuing" -ForegroundColor Green
Start-Sleep -s 300

Write-Host "Enable $SrvUUID again" -ForegroundColor Green
xe.exe -s $XenServerMst -u $XenServer_username -pw $XenServer_password host-enable host=$SrvUUID

Write-Host "=======================================================" -ForegroundColor Green

# You need to redistribute the VM's yourself.


– Happy patching



Managing multi-hypervisor environments, what is out there?

A small part of the virtualization world I visit is in the phase of running multi-hypervisor environments. But I expect more and more organizations to stop being one-type-only shops and to be open to using a second line of hypervisors next to their current install base. Some will choose based on specific features, on product lines for specific workloads, or on a changing strategy, towards open source for example.

Some hypervisor vendors have, or are bringing, multi-hypervisor support to their product lines. VMware NSX brings support for multi-hypervisor network environments via the Open vSwitch support in NSX (as a separate product choice, that is), while XenServer offers the Open vSwitch as a standard virtual switch option. Appliances are delivered in the OVF format as standard. Several suites out there claim single management for multiple hypervisors.

But how easily is this multi-hypervisor environment managed, and from what perspective? Is there support in only a specific management plane? Is multi-hypervisor bound to multiple management products, thus adding extra complexity? Let’s try to find out what is currently available for the multi-hypervisor world.

What do we have?

Networking, Open vSwitch; a multi-layer virtual switch licensed under the open-source Apache 2.0 license. Open vSwitch is designed to enable network automation through programmatic extension, while still supporting standard management protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). Furthermore, it is designed to support distribution across multiple physical servers, similar to VMware’s distributed vSwitch concept. It is included in many Linux kernels by default and available for KVM, XenServer (default option), VirtualBox, OpenStack and VMware NSX for multi-hypervisor infrastructures. Hyper-V can use the Open vSwitch, but needs a third-party extension (for example via OpenStack). Specific to networking, but it is a real start for supporting true multi-hypervisor environments.

Transportation, open format OVF/OVA; possibly the oldest of the open standards in the virtual world. Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances, or more generally software to be run in virtual machines. Used for offline transportation of VM’s, and widely used for transporting appliances of all sorts. Supported by multiple hypervisor parties, but sometimes conversions are needed, especially for the disk types: an OVF with a VHD disk needs to be converted to VMDK to be used on VMware (and vice versa). Supported by XenServer, VMware, VirtualBox and such. OVF is also supported for Hyper-V, but not all versions of System Center Virtual Machine Manager support the import/export functionality. OVF allows a virtual appliance vendor to add items like a EULA, comments about the virtual machine, boot parameters, minimum requirements and a host of other features to the package. Specifically for offline transportation.

VMware vCenter Multi-Hypervisor Manager; a feature of vCenter to manage other hypervisors next to ESXi hosts from the vCenter management plane. Started as a VMware Labs fling, but now a VMware-supported product (support for the product only; underlying Hyper-V issues are still for the Microsoft corporation), available as a free download with a standard license. Currently at version 1.1. Offers host management and provisioning actions for third-party hypervisors. Support for hypervisors other than VMware’s is limited to Hyper-V. And to be honest, it is not primarily marketed as a management tool but more as a conversion tool to vSphere.

vCloud Automation Center (vCAC); vCloud Automation Center focuses on managing multiple infrastructure pools at the cloud level. You can define endpoints other than vSphere and collect information, or add these computing resources to an enterprise group. For certain tasks (like destroying a VM) manual discovery is still necessary for these endpoints to be updated accordingly, but you can leverage vCAC’s workflow capabilities to get around this. It uses vCAC agents to support resource provisioning on vSphere, XenServer, Hyper-V or KVM hypervisors. Hypervisor management is limited to vSphere and Hyper-V (via SCVMM) only. vCAC does offer integration of different management applications, for example server management (iLO, DRAC, blades, UCS), PowerShell, VDI connection brokers (Citrix/VMware), provisioning (WinPE, PVS, SCCM, kickstart) and cloud platforms from VMware and Amazon (AWS), into one management tool, thus providing a single interface for the delivery of infrastructure pools. Support and management are limited, as the product is focused on workflows and automation for provisioning, not management per se. But I am interested to see what the future holds for this product. Not primarily for organisations that manage their own infrastructure and service only themselves. Specifically for automated delivery of multi-tenant infrastructure pools, but limited.

System Center Virtual Machine Manager (SCVMM); a management tool with the ability to manage VMware vSphere and Citrix XenServer hosts in addition to those running Hyper-V. But just as the product title says, it is primarily for the management of your virtual machines. SCVMM is able to read and understand configurations and do VM migrations leveraging vMotion. But to do management tasks on networking, datastores, resource pools, VM templates (SCVMM only imports metadata into its library), host profile compliance (and more), or to fully use distributed cluster features, you will need to switch to or rely on vCenter for these tasks. Some actions can be done by extending SCVMM with a vCenter system, but that is again limited to managing VM tasks. Interesting that there is support for more than one other hypervisor, with both vSphere and XenServer covered. And leveraging the System Center suite gives you a broader data center management suite, but that is out of scope for this subject. Specifically for virtual machine management, and another attempt to get you to convert to the primary hypervisor (in this case Hyper-V).

Other options?; Yes, automation! Not a single management solution, but more a way to close the gap between management tasks and what the management suites support. Use automation and orchestration tools together with scripting extensions to solve these management task gaps. Yes, you still have to have multiple management tools, but you can automate repetitive tasks between them (if you can repeat it, automate it). PowerShell/CLI for example is a great way to script tasks in your vSphere, Hyper-V and XenServer environments. Use an interface like WebCommander (read a previous blog post at https://www.pascalswereld.nl/post/65524940391/webcommander) to present a single management interface to your users. But yes, here some work and effort is expected to solve the complexity issue.
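As a hedged sketch of that idea (the module and server names are assumptions; PowerCLI must be installed and the vSphere side needs a Connect-VIServer session), one script can gather VM names from two platforms into a single list:

```powershell
# Sketch: collect VM names from vSphere (PowerCLI) and local Hyper-V into one list.
# Both modules export a Get-VM cmdlet, so module-qualified names avoid the clash.
Import-Module VMware.VimAutomation.Core   # assumes PowerCLI is installed
Import-Module Hyper-V                     # assumes the Hyper-V module is available

Connect-VIServer -Server vcenter.example.local | Out-Null  # example server name

$allVMs = @(VMware.VimAutomation.Core\Get-VM | Select-Object -ExpandProperty Name) +
          @(Hyper-V\Get-VM | Select-Object -ExpandProperty Name)

$allVMs | Sort-Object
```

From here the combined list can feed whatever single interface you present to your users.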

Third parties?; Are there any out there? Yes. They provide ways to manage multi-hypervisor environments as add-ons/extensions that use the management already in place. For example, HotLink SuperVISOR adds management of Hyper-V, XenServer and KVM hosts from a single vCenter inventory, and HotLink Hybrid Express adds Amazon cloud support to SCVMM or vCenter. The big advantage is that HotLink uses the tools in place and integrates with them, so there is just a minimal learning curve to worry about. But why choose a third party when the hypervisor vendors are moving their products into the same open scope? Will an add-on add extra troubleshooting complexity? How is support when using multiple products from multiple vendors; where does one end and the other start? Well, it’s up to you whether these are pros or cons. And the maturity of the product counts, of course.


With the growing number of organisations adopting a multi-hypervisor environment, these organisations still rely on multiple management interfaces/applications, bringing extra complexity to the management of their virtual environments. Complexity adds extra time and extra costs, and that isn’t what the larger portion of organisations want. At this time, simply don’t expect a true single management experience if you bring in different hypervisors; be prepared to close the gaps yourself (the community can be of great help here) or use third-party products like HotLink.
We are getting closer with the adoption of open standards, hybrid clouds and growing support for multiple hypervisors in the management suites of the hypervisor players. But one step at a time. Let’s see when we get there, at true single management of multi-hypervisor environments.

Interested in sharing your opinion, or have an idea or a party I missed? Leave a comment. I’m always interested in the view of the community.

– Happy (or happily) managing your environment!