vSphere: Working with traffic filtering in the vNetwork Distributed Switch


Within a physical and virtual infrastructure there are several options to limit the inbound and outbound traffic to and from a network node, a part of the network or an entire network (security zone). A limit can be a filter (allowing or dropping certain traffic) or a prioritization of traffic (QoS / DSCP tagging of the data), where a defined type of traffic is limited in favor of a kind of traffic with a higher priority.

Options include filtering with ACLs, tagging and handling sorts of traffic with QoS / DSCP devices, firewalling (physical or virtual appliances), physical or logical separation, or Private VLANs (PVLAN for short). Furthermore, an often overlooked point: keep all your layers in view when designing the required security. Say you need to filter traffic from a specific data source to a specific group of hosts, with the requirement that those VM’s are not allowed to see or influence the other hosts. Traffic filters set up on the physical network layer will not always be able to “see” that traffic: blade servers in certain blade chassis can access the same trunked switch ports / VLAN, and VM’s in the same portgroup / VLAN can connect to each other’s network without the traffic ever reaching the physical network infrastructure where those filters are in place. That is, when not using a local firewall in the OS. You could say this is bad design, but I have seen these described “flaws” pop up a little too often.

Options in the VMware virtual infrastructure

You have the option to use third-party virtual appliances as firewalls, vCloud suite components or network virtualization via NSX (SDN), for example. These are not always implemented, due to constraints often heard around, like: overhead of the traffic handled by the virtual firewall (sizing), a single point of failure when using just one appliance, added complexity for certain IT Ops where networking and virtualization are strictly separated (bad bad bad), or simply no budget/intention to implement a solution that goes further than the host virtualization the organization is at (as they probably just started). These are just a few, and not all are valid in my opinion….

From vSphere 5.5 there is another, mostly unknown and therefore little used, option: the traffic filtering and tagging engine in the vNetwork Distributed Switch (vDS or dvSwitch). That is, when you have an Enterprise Plus edition, but hey, without this a vDS is not available in the first place. Traffic filtering was introduced in version 5.5 and can therefore only be implemented on vSphere 5.5+ members of a 5.5+ version vDS. This vDS option is the one I want to show you in this blog post.

Traffic filters, or ACLs, control which network traffic is allowed to enter or return (ingress and/or egress rules) from a VM, a group of VM’s or a network via the port group, or an uplink (vmnic). The filters are configured at the uplink or port group level, and allow for an unlimited number of rules to be set there. These handle the traffic from VM to portgroup and/or the traffic from portgroup to the physical uplink port, and vice versa. The rules are processed in the VMkernel, which is fast, and no external appliance is needed. With outgoing traffic, rule processing happens before the traffic leaves the vSphere host, which can also save on ACLs at the physical layer and on network traffic when only certain types of traffic, or traffic to a specific destination, are allowed.

With the traffic filter we have the option to set allow or drop rules (for ACL) based on the following qualifiers:

vDS - image1

The tag action allows setting the traffic tags. For this example we don’t use the tag action.

System Traffic covers the vSphere traffic types you will likely see around, where we can allow a certain type of traffic to a specific network. MAC lets us filter on layer 2, on specific source and/or destination MAC addresses or VLAN IDs. IP lets us filter on layer 3, for the TCP/UDP/ICMP traffic types over IPv4 and IPv6.

The following System traffic types are predefined:

vDS - image2

Make it so, number One

I will demonstrate the filtering option by creating a vDS and adding an ESXi host and a VM to this configuration. Just a simple setup to get the concept across.

My testlab vDS is set up with a VM as in this screenshot:

vDS - image3

I have a DSwitch-Testlab vD switch with a dvPortgroup VM-DvS (tsk tsk, I made a typo and am therefore not consistent with cases; please don’t follow this example ;-)). A Windows Server 2012 VM – SRDS – is connected to this portgroup.

The VM details are as follows:

vDS - image4

Note the IP address; that is what we will be looking at.

At the VM-DvS portgroup, going to the Manage tab, we can choose Policies. When we push the Edit button we can add or change the traffic filtering (just look for the clever name).

vDS - image5
vDS - image6

As you can see I have already created an IP ICMP rule whose action currently says the complete opposite of the rule name. This is on purpose, to show the effect when I change this action. When I ping the VM from a network outside of the ESXi host, I get a nice ICMP response:

vDS - image7

When we change the ICMP rule to the drop action, we get the following response:

vDS - image8


That’s what we want from the action. Other protocols are still available as there are no other rules yet; I can open an RDP session to this Windows Server.

vDS - image9

When you want to allow certain traffic and not other traffic, you will have to create several rules. The applied network traffic rules are processed in a strict order (which you can change). If a packet already satisfies a rule, the packet might not be passed to the next rule in the policy. This concept does not differ from filtering on most physical network devices. Document and draw out your rules and traffic flows carefully, or implementation/troubleshooting will be a pain in the $$.
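
To make the ordering explicit, here is a small Python sketch of the first-match evaluation that this kind of filtering policy (and most physical ACLs) applies. The rule structure and field names are my own simplification, not a VMware API:

```python
# Toy first-match ACL evaluation: rules are checked in order, the first
# rule the packet satisfies decides the action, later rules are skipped.

def evaluate(rules, packet, default="allow"):
    """Return the action of the first rule the packet satisfies."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]  # first match wins
    return default  # no rule matched, fall back to the default action

# Hypothetical rule set: drop ICMP, explicitly allow RDP.
rules = [
    {"match": {"protocol": "icmp"}, "action": "drop"},
    {"match": {"protocol": "tcp", "dst_port": 3389}, "action": "allow"},
]

print(evaluate(rules, {"protocol": "icmp"}))                   # drop
print(evaluate(rules, {"protocol": "tcp", "dst_port": 3389}))  # allow
```

Note how swapping the rule order can change the outcome for overlapping matches, which is exactly why drawing out the flows first pays off.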

This concludes my simple demonstration.

 – Enjoy!

Sources: vmware.com

Hey come out and play. Join the vSphere Beta Program

On June 30th VMware announced the launch of the latest vSphere Beta Program. The program is now open for anyone to register and participate. The Beta program used to be for just a select group, but with the VSAN Beta VMware started to allow a wider group of participants. In my opinion this is good: with a larger group of software-testing participants, the amount of feedback, lessons learned and experience will be greater. The community and its testers will make the product and its features even better. Hopefully more Betas will also be open to a larger group of participants.

A larger group will make it harder to keep information within the group and not publicly shared on, for example, the big bad Intarweb. While this Beta program seems open to everyone, it is still bound to NDA rules. Details are in the VMware Master Software Beta Test Agreement (MSBTA) and the program rules, which you are required to accept before joining. After that, share your comments, feedback and information in the private community that is offered with the program.

What can you expect from the vSphere Beta program?

When you register with a My VMware account, you can expect to download, install, and test vSphere Beta software in your own environment. The vSphere Beta Program has no established end date. VMware strongly encourages participation and feedback in the first 4-6 weeks of the program (starting on June 30 2014). What are you waiting for?

Some of the reasons to participate in the vSphere Beta Program are:

  • Receive access to the vSphere Beta products
  • Gain knowledge of and visibility into product roadmap before others.
  • Interact with the VMware vSphere Beta team, a chance to interact with engineers and such.
  • Provide direct input on product functionality, configuration, usability, and performance.
  • Provide feedback influencing future products, training, documentation, and services.
  • Collaborate with other participants, learn about their use cases, and share advice and learned lessons of your own.

What is expected from the participants?

Provide VMware with valuable insight into how you use vSphere in real-world conditions and with real-world test cases, enabling VMware to better align the products with business needs.


Sign up and join the vSphere Beta Program today at: https://communities.vmware.com/community/vmtn/vsphere-beta.

– Go ahead, come out and play. Join the vSphere Beta program now!

Source: vmware.com

Managing multi-hypervisor environments, what is out there?

A small part of the virtualization world I visit is in the phase of running multi-hypervisor environments. But I expect more and more organizations to not stay one-type-only and to be open to using a second line of hypervisors next to their current install base. Some will choose based on specific features, on product lines for specific workloads, or on changing strategies, to open source for example.

Some providers of hypervisors have, or are bringing, multi-hypervisor support to their product lines. VMware NSX brings support for multi-hypervisor network environments via the Open vSwitch support in NSX (with a separate product choice, that is), and XenServer leverages the Open vSwitch as a standard virtual switch option. Appliances are delivered in the OVF format as standard. Several suites are out there that claim single management for multiple hypervisors.

But how easily is this multi-hypervisor environment managed, and from what perspective? Is there support in only a specific management plane? Is multi-hypervisor bound to multiple management products, thus adding extra complexity? Let’s try and find out what is currently available for the multi-hypervisor world.

What do we have?

Networking, Open vSwitch; a multi-layer virtual switch licensed under the open source Apache 2.0 license. Open vSwitch is designed to enable network automation through programmatic extension, while still supporting standard management protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). Furthermore it is designed to support distribution across multiple physical servers, similar to VMware’s distributed vSwitch concept. It is distributed as standard in many Linux kernels and is available for KVM, XenServer (default option), VirtualBox, OpenStack and VMware NSX for multi-hypervisor infrastructures. Hyper-V can use the Open vSwitch, but needs a third-party extension (for example via OpenStack). Specifically for networking, but it is a real start for supporting true multi-hypervisor environments.

Transportation, Open format OVF/OVA; possibly the oldest of the open standards in the virtual world. Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances, or more generally software to be run in virtual machines. Used for offline transportation of VM’s and widely used for transporting appliances of all sorts. Supported by multiple hypervisor parties, but conversions are sometimes needed, especially for the disk types. An OVF with a VHD disk needs that disk converted to VMDK to be used on VMware (and vice versa). Supported by XenServer, VMware, VirtualBox and such. OVF is also supported for Hyper-V, but not all versions of System Center Virtual Machine Manager support the import/export functionality. OVF allows a virtual appliance vendor to add items like a EULA, comments about the virtual machine, boot parameters, minimum requirements and a host of other features to the package. Specifically for offline transportation.
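
Under the hood an OVF package is just an XML descriptor next to the disk files, which is also where you can spot the disk-conversion issue mentioned above. A Python sketch with a hand-made, heavily trimmed descriptor (the format URL is a placeholder, not the real specification URI):

```python
import xml.etree.ElementTree as ET

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

# Hypothetical minimal descriptor; a real one also carries hardware,
# network, EULA and product sections.
descriptor = """\
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File id="file1" href="disk1.vhd"/>
  </References>
  <DiskSection>
    <Disk fileRef="file1" format="http://example.org/specifications/vhd"/>
  </DiskSection>
</Envelope>
"""

root = ET.fromstring(descriptor)
for disk in root.iter(OVF_NS + "Disk"):
    fmt = disk.get("format")
    print("disk format:", fmt)
    # A VHD-backed package first needs its disk converted to VMDK for ESXi.
    print("needs conversion for ESXi:", "vmdk" not in fmt)
```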

VMware vCenter Multi-Hypervisor Manager; a feature of vCenter to manage other hypervisors next to ESXi hosts from the vCenter management plane. Started as a VMware Labs fling, but now a VMware supported product (support for the product only; underlying Hyper-V issues are still for the Microsoft corporation) available as a free download with a standard license. Currently at version 1.1. It offers management of hosts and provisioning actions to third-party hypervisors. Support for hypervisors other than VMware’s is limited to Hyper-V. And to be honest, it is not primarily marketed as a management tool but more as a conversion tool to vSphere.

vCloud Automation Center (vCAC); vCloud Automation Center focuses on managing multiple infrastructure pools at the cloud level. You can define endpoints other than vSphere and collect information or add these computing resources to an enterprise group. For certain tasks (like destroying a VM) manual discovery is still necessary for these endpoints to be updated accordingly, but you can leverage vCAC workflow capabilities to get around this. It uses vCAC agents to support resource provisioning on vSphere, XenServer, Hyper-V or KVM hypervisors. Hypervisor management is limited to vSphere and Hyper-V (via SCVMM) only. vCAC does offer integration of different management applications, for example server management (iLO, DRAC, blades, UCS), PowerShell, VDI connection brokers (Citrix/VMware), provisioning (WinPE, PVS, SCCM, kickstart) and cloud platforms from VMware and Amazon (AWS), into one management tool, and thus provides a single interface for the delivery of infrastructure pools. Support and management are limited, as the product is focused on workflows and automation for provisioning, not management per se. But I am interested to see what the future holds for this product. It is not primarily for organisations that manage their own infrastructures and service only themselves. Specifically for automated delivery of multi-tenant infrastructure pools, but limited.

System Center Virtual Machine Manager (SCVMM); a management tool with the ability to manage VMware vSphere and Citrix XenServer hosts in addition to those running Hyper-V. But just as the product title says, it is primarily about the management of your virtual machines. SCVMM can read and understand configurations, and do VM migrations leveraging vMotion. But to do management tasks on networking, datastores, resource pools, VM templates (SCVMM only imports metadata to its library) or host profile compliance (and more), or to fully use distributed cluster features, you will need to switch to or rely on vCenter for those tasks. Some actions can be done by extending SCVMM with a vCenter system, but that is again limited to managing VM tasks. Interesting that there is support for more than one other hypervisor, with vSphere and XenServer support. Leveraging the System Center suite gives you a broader data center management suite, but that is out of scope for this subject. Specifically for virtual machine management, and another attempt to get you to convert to the primary hypervisor (in this case Hyper-V).

Other options?; Yes, automation! Not a single management solution, but more a way to close the gap between management tasks and the support of the management suites. Use automation and orchestration tools together with scripting extensions to solve these management task gaps. Yes, you still have to have multiple management tools, but you can automate repetitive tasks between them (if you can repeat it, automate it). PowerShell/CLI for example is a great way to script tasks in your vSphere, Hyper-V and XenServer environments. Use an interface like Webcommander (read a previous blog post at https://www.pascalswereld.nl/post/65524940391/webcommander) to present a single management interface to your users. But yes, here some work and effort is expected to solve the complexity issue.
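
The scripting idea can be as simple as a thin dispatch layer that maps one logical task onto the per-hypervisor tooling. A Python sketch; the returned command strings are illustrative stand-ins, so check the exact cmdlet/CLI syntax for your own versions:

```python
# One "snapshot" task, three hypervisor backends. In real life each
# method would invoke PowerCLI, Hyper-V PowerShell or the xe CLI.

class VSphere:
    def snapshot(self, vm):
        return f"PowerCLI: New-Snapshot -VM {vm}"

class HyperV:
    def snapshot(self, vm):
        return f"PowerShell: Checkpoint-VM -Name {vm}"

class XenServer:
    def snapshot(self, vm):
        return f"xe vm-snapshot vm={vm}"

BACKENDS = {"vsphere": VSphere(), "hyperv": HyperV(), "xen": XenServer()}

def snapshot(platform, vm):
    """Dispatch the same task to whichever hypervisor runs the VM."""
    return BACKENDS[platform].snapshot(vm)

print(snapshot("hyperv", "SRDS"))  # PowerShell: Checkpoint-VM -Name SRDS
```

The point is not the three toy classes but the shape: one interface for your users, per-platform plumbing hidden behind it.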

Third parties?; Are there any out there? Yes. They provide ways to manage multi-hypervisor environments as add-ons/extensions that use the management already in place. For example, HotLink SuperVISOR adds management of Hyper-V, XenServer and KVM hosts to a single vCenter inventory. And HotLink Hybrid Express adds Amazon cloud support to SCVMM or vCenter. The big advantage is that HotLink uses the tools in place and integrates with them, so there is just a minimal learning curve to worry about. But why choose a third party when the hypervisor vendors are moving their products to the same open scope? Will an add-on add extra troubleshooting complexity? How is support when using multiple products from multiple vendors; where does one end and the other start? Well, it’s up to you whether these are pros or cons. And the maturity of the product of course.


With the growing number of organisations adopting a multi-hypervisor environment, these organisations still rely on multiple management interfaces/applications, which brings extra complexity to the management of the virtual environments. Complexity adds extra time and extra costs, and that isn’t what the larger portion of organisations want. At this time, simply don’t expect a true single management experience if you bring in different hypervisors, or be prepared to close the gaps yourself (the community can be of great help here) or to use third-party products like HotLink.
We are getting closer with the adoption of open standards, hybrid clouds and growing support for multiple hypervisors in the management suites of the hypervisor players. But one step at a time. Let’s see when we get there, at the true single management of multi-hypervisor environments.

Want to share your opinion, or have an idea or a party I missed? Leave a comment. I’m always interested in the view of the community.

– Happy (or happily) managing your environment!

Dissecting vSphere – Data protection

An important part of business continuity and disaster recovery plans is the way you protect your organisation’s data. One way to do this is to have a back-up and recovery solution in place. This solution should be able to get your organisation back into production within the set RPO/RTO’s. The solution also needs to be able to test your back-ups, preferably in a sandboxed testing environment. I have seen situations at organisations where the backup software was reporting green lights on the backup operation, but when a crisis came up they couldn’t get the data out, and recovery failed. Panicking people all over the place….

A back-up and recovery solution can be (a mix of) commercial products that protect the virtual environment, like Veeam, or in-guest agents like Veritas or DPM, or features of the OS (return to a previous version with snapshots). Other ways include solutions on the storage infrastructure. But what if you’re budget constrained….

Well, VMware has vSphere Data Protection (VDP), which is included from the Essentials Enterprise Plus kit. This is the standard edition. The vSphere Data Protection Advanced edition is available from the Enterprise license.
So there are two flavours; what does standard give, and what does it lack compared to Advanced?
First the what: as previously stated, VDP is the backup and recovery solution from VMware. It is an appliance that is fully integrated with vCenter and is easy to deploy. It performs full virtual machine and file-level restore (FLR) without installing an agent in every virtual machine. It uses data deduplication for all backup jobs, reducing disk space consumption.


VDP standard is capped at a 2TB backup data store, where VDP Advanced allows dynamic capacity growth to 4TB, 6TB or 8TB backup stores. VDP Advanced also provides agents for specific applications: agents for SQL Server and Exchange can be installed in the VM guest OS. These agents provide selection of individual databases or stores for backup or restore actions, application quiescing, and advanced options like truncating transaction logs.
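
The deduplication that makes these capacity caps workable can be illustrated with a toy fixed-block store in Python: identical blocks are stored once and only referenced afterwards. VDP's real engine uses variable-length segments, so treat this purely as a concept sketch:

```python
import hashlib

def dedup_store(data, block_size=4):
    """Split data into blocks; store each unique block once, keep refs."""
    store, refs = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha1(block).hexdigest()
        store.setdefault(digest, block)  # unique blocks stored once
        refs.append(digest)              # the backup is a list of references
    return store, refs

store, refs = dedup_store(b"AAAABBBBAAAABBBB")
print(len(refs), "blocks referenced,", len(store), "stored")
# 4 blocks referenced, 2 stored
```

With lots of similar VM's (same guest OS, same patches) the stored/referenced ratio gets much better than in this tiny example, which is where the disk savings come from.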


At VMworld 2013 further capabilities of VDP 5.5 were introduced:

– Replication of backup data to EMC.
– Direct-to-Host Emergency Restore (without the need for vCenter, so perfect for backing up your vCenter).
– Backup and restore of individual VMDK files.
– Specific schedules for multiple jobs.
– VDP storage management improvements, such as selecting separate backup data stores.

Sizing and configuration

The appliance is configured with 4 vCPU’s and 4GB RAM. The available backup store capacities of 500GB, 1TB and 2TB consume respectively 850GB, 1.3TB and 3.1TB of actual storage. There is a 100 VM limit, so beyond that you need another VDP appliance (with a maximum of 10 VDP appliances per vCenter).

After deployment, the appliance needs to be configured via the VDP web service. The first time, it is in installation mode. Items such as IP, hostname, DNS (if you haven’t added these with the OVF deployment), time and vCenter need to be configured. After completion (and successful testing) the appliance needs to be rebooted. A heads up: the initial configuration reboot can take up to 30 minutes to complete, so have your coffee machine nearby.

After this you can use the web client connected to your VDP-connected vCenter to create jobs. Let the created jobs run controlled for the first time; the first backup of a virtual machine takes time, as all of the data for that virtual machine is being backed up. Subsequent backups of the same virtual machine take less time, as changed block tracking (CBT) and dedup are performed.
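
Conceptually CBT looks like the sketch below. The real mechanism is a change map the VMkernel keeps per virtual disk, so the backup product asks which blocks changed rather than comparing them itself, but the effect on the amount of data transferred is the same:

```python
def incremental_backup(previous, current):
    """Return only the blocks that differ from the previous backup."""
    return {i: blk for i, blk in enumerate(current)
            if i >= len(previous) or previous[i] != blk}

full = ["a", "b", "c", "d"]  # first backup: every block is read
changed = incremental_backup(full, ["a", "x", "c", "d"])
print(changed)  # {1: 'x'} -- one block transferred instead of four
```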


Well, this depends on the kind of storage you are going to use as the backup data store. If you go for low-cost storage (let’s say most SMBs would want that), you’re paying in performance (or lacking it most of the time).

Storage Offsite

Most organizations want their backup data stored offsite in some way. VDP does not offer replication (or, with VDP 5.5, only to EMC), so you want to have some offsite replication or synchronization in place (and a plan for how you will restore from this data if your VDP is lost as well). vSphere Replication only protects VM’s and not your backup data store. Most SMBs don’t have a lot of replication-capable storage devices in place, and when they do, they use them for production and not as a backup datastore. Keep this in mind when researching this product for your environment.

– Enjoy data protecting!

Dissecting vSphere 5.5 Enhancements – HA improvement and App HA

With the introduction of vSphere 5.5 two major HA improvements were announced:

– vSphere App HA, on the Intarweb also known as App aware HA; High Availability at the application layer.
– vSphere HA detecting VM antiaffinity rules.

I’ll start with the latter.

HA detecting VM Anti-affinity rules

With vSphere DRS… Hey wait, isn’t the subject supposed to be HA? Well yes, but the anti- and affinity rules are DRS rules, so a bit of DRS rule explanation: these rules help maintain the order of placement of VM’s on hosts throughout the cluster. Affinity rules place VM’s together on certain hosts. Anti-affinity rules place VM’s separate from the other VM’s in the rule. Think of VM’s that are already in a software availability service, such as the nodes of a cluster; you typically don’t want those nodes on one physical host.
With vSphere 5.1 and earlier, vSphere HA did not detect a violation of these rules (they are unknown to vSphere HA). After a HA failover the VM’s could be placed on the same host, after which vSphere DRS would kick in and vMotion the VM’s so that the anti-affinity rules are satisfied again (DRS needs to be in fully automated mode to enable the automatic vMotion). Applications with high sensitivity to latency would not like this vMotion, and there is a (very slight) window in which the HA application clustering service is at higher risk, as both VM’s are on the same physical host. A failure of that physical host before the vMotion completes would mean a downed service.
In an application cluster service you could also choose to use VM Overrides to disable the HA restart for the VM cluster nodes, as the application service handles the application HA actions. After a failure you would have to manually bring the failed node online (or add a new one) in the application service. But that loses automation…

With vSphere 5.5, HA has been enhanced to conform to the anti-affinity rules. In case of a host failure, the VM’s are brought up according to the anti-affinity rules without the need for a vMotion action. This enhancement is configured as an advanced option.
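
The new behaviour boils down to rule-aware restart placement. A simplified Python sketch (the host and VM names are made up, and real HA of course also weighs capacity, admission control and restart priority):

```python
def place(vms, hosts, anti_affinity, placement):
    """Restart vms on hosts while avoiding anti-affinity violations."""
    for vm in vms:
        group = anti_affinity.get(vm, set())  # VMs this one must avoid
        for host in hosts:
            # Pick the first host running nothing from the same rule group.
            if placement.get(host, set()).isdisjoint(group):
                placement.setdefault(host, set()).add(vm)
                break
    return placement

# Two cluster nodes in one anti-affinity rule; their host just failed.
rule = {"node1": {"node1", "node2"}, "node2": {"node1", "node2"}}
result = place(["node1", "node2"], ["esx2", "esx3"], rule,
               {"esx2": set(), "esx3": set()})
print(result)
```

The nodes end up on different surviving hosts right away, instead of landing together and waiting for a DRS vMotion to separate them.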

vSphere App HA aka App aware HA

We already have host and VM monitoring; vSphere 5.5 lifts this to application monitoring. vSphere App HA can be configured to restart an application service when an issue is detected with that service. It is possible to monitor applications such as IIS, MSSQL, Apache Tomcat and vCenter. When the application service restart fails, App HA can also reset the virtual machine. Service actions can be configured with the use of App HA policies. VM monitoring must be enabled to use application monitoring.

App HA policies define the number of times vSphere App HA will attempt to restart a service, the number of minutes it will wait for the service to start, and the options to reset the virtual machine if the service fails to start or when the service is unresponsive. They can also be configured to use other triggers, such as e-mail notifications or vCenter alerts.
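
The policy behaviour can be sketched as a simple retry loop. Here restart_service and reset_vm are hypothetical stand-ins for what the Hyperic agent and vCenter actually do:

```python
def enforce_policy(restart_service, reset_vm, max_restarts=3):
    """Retry the service restart; fall back to a VM reset if it keeps failing."""
    for attempt in range(1, max_restarts + 1):
        if restart_service():
            return f"service healthy after {attempt} restart(s)"
    reset_vm()  # the policy's escalation step
    return "service failed; VM reset triggered"

attempts = iter([False, False, True])  # fails twice, then comes up
print(enforce_policy(lambda: next(attempts), lambda: None))
# service healthy after 3 restart(s)
```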

When a configured App HA policy is assigned to a specific application service, vSphere App HA is enabled for that service.

Pretty nice.

But what’s needed:

For App HA to work, two appliances are needed in the environment (per vCenter): vSphere App HA and vFabric Hyperic. The latter is used by the App HA architecture to monitor applications and is a vFabric Hyperic Server that communicates with vFabric Hyperic agents.
The roles of the two appliances are as follows: the vSphere App HA virtual appliance stores and manages vSphere App HA policies, while the vFabric Hyperic appliance monitors applications and enforces the assigned vSphere App HA policies. To monitor the applications of a VM, vFabric Hyperic agents must be installed inside the VM’s running these applications. These agents are the communication brokers between the applications in the VM’s and the vFabric Hyperic appliance.

The vFabric Hyperic agents are supported on Linux and Windows OSes for 32-bit or 64-bit applications. How and what exactly is supported for vSphere 5.5 HA is not yet completely clear (service support for IIS 6/7/8, MSSQL 2005/2008/2012, Apache Tomcat, Apache HTTP and vCenter). Following the currently supported vFabric Suite OSes, these include Windows 2003, Windows 2008 R2, Red Hat Enterprise Server and SUSE Enterprise Linux.



Well, good question. App HA is part of the vSphere Enterprise Plus edition only. The costs of vSphere 5.5 are expected to be around the current vSphere 5.1 costs. But with what options, constraints and limits… unknown. The General Availability date of vSphere 5.5 is also still unknown.

Separately, VMware vFabric Suite is currently available as a one-time perpetual license under which support and subscription (SnS) contracts can be renewed annually. See more at: http://www.vmware.com/products/vfabric/buy.html

For how the two are combined, and at what options/editions/prices, keep a lookout for further vSphere 5.5 product announcements.

– Exciting. I have the HA session BCO5047 – vSphere High Availability – What’s New and Best Practices in my Barcelona schedule to get some more insight at VMworld EU 2013.

Dissecting vSphere 5.5 Enhancements – vCenter Server Appliance ready for lift off

A real concern with implementing the vCenter Server Appliance (vCSA) 5.1 in production environments is the environment limits when using the embedded database. The embedded database in 5.1 has a limit of 5 managed hosts and 50 managed VM’s. That is not the size of production in a lot of setups.
The maximums could be lifted, but that required an external database service, which was limited to Oracle only (and with the 5.5 version still is limited to Oracle as an external database). Not a lot of organizations use Oracle for infrastructure services, so that option is not widely used (or at least I haven’t seen it around much). I’ve only used the vCSA in lab / testing environments.

With the introduction of vCSA 5.5, one of the most important parts is a reengineered embedded database (vPostgres). This lifts the configuration maximums of the vCSA to 100 managed hosts and 3000 managed VM’s (edited; see the note at the end of this post). Well, that’s more like it; this is production-worthy.

Table with specification vCSA 5.1 vs vCSA 5.5.

*) The minimum-requirement specifications depend on your environment. Don’t expect a different approach: 2 vCPU’s and memory according to your environment size. Scale the standards up or down.

The 5.5 release makes the vCSA a production-ready appliance. It is deployed fast and, in comparison to a Windows vCenter Server, does not need a Windows license.

– I’m gonna see a lot more of the vCSA 5.5 in customer environments.

— This post has been edited to change the number of managed hosts to 100 and managed VM’s to 3000. Earlier numbers were unofficial.

Learned Lessons – Nexus 1000V and the manual VEM installation

At an implementation project we implemented 24 ESXi hosts and used the Nexus 1000V to have a consistent network configuration, feature set and provisioning throughout the physical and the virtual infrastructure. When I tried to add the last host to the Nexus, it failed on me with the InstallerApp (the Cisco-provided Java installer that adds the VEM and adds the ESXi host to the configured DVS and groups). Another option is to use Update Manager (the Nexus VEM is an appliance that can be installed by Update Manager), but that one threw an error code at me. I will describe the symptoms a little later; first some quick Nexus product architecture, so you will have a bit of understanding of how the components work, where they live and how they interact.

Nexus 1000V Switch Product Architecture

Cisco Nexus 1000V Series Switches have two major components: the Virtual Ethernet Module (VEM), which runs inside the hypervisor (in other words, on the ESXi host), and the external Virtual Supervisor Module (VSM), which manages the VEMs (see figure below). The VSM can be either a virtual appliance or an appliance in a hardware device (for example in a physical Nexus switch).
The Nexus 1000V replaces the VMware virtual switches and adds a Nexus Distributed Switch (requires Enterprise Plus). Uplink and portgroup definitions are bound to the Cisco Ethernet profile configurations. The VEM and VSM use a control and data link to exchange configuration items.

Configuration is performed on the VSM and is automatically propagated to the VEMs. Virtualization admins can pick up these configurations to select the portgroups at VM provisioning.


Symptoms to a failing installation 

The problem occurred as follows:

– Tried to install the VEM with the InstallerApp. The installer app finds the host, and when the deployment is done, it stops when adding the host to the existing Nexus DVS. This happens somewhere during the move of the existing vSwitches to the DVS. The error presented is: got (vim.fault.PlatformConfigFault) exception.

– Checked the status of the host in Update Manager, and this showed a green, compliant Cisco Nexus Appliance. This probably delayed me a bit, because it really wasn’t compliant.

– Tried to manually add the host to the Nexus DVS in the vSphere Web Client. This gave an error in the task. Further investigation led me to this line in vmkernel.log: invalid Net_Create: class cisco_nexus_1000v not supported. Say what?

– With the Cisco support site and a network engineer, I tried some VEM CLI on the host. But hey, wait: vem isn’t there. An esxcli software vib list | grep cisco doesn’t show anything either (duh), while on an ESXi host with the VEM installed this shows the installed VEM software version. So Update Manager was messing with me.

Manual Installation

That leaves me with manually installing the VEM. With the Cisco Nexus 1000V installation guides, the working solution is as follows:

  • The preferred Update Manager route does not work. It fails with an error 99.
  • Copy the vib file containing the VEM software from the VSM homepage using the following URL: http://Your_VSM_IP_Address/. Check an ESXi host that has the VEM installed for the running version (esxcli software vib list | grep cisco). Download this file (save as).
  • Upload this file to a location the host can access, either on the host itself or on a datastore accessible from the host. I did the latter, as the host did not have direct storage. I used WinSCP to transfer the file to a datastore directory ManualCiscoNexus.
  • On the host I added this VIB by issuing the following command:

esxcli software vib install -v /vmfs/volumes/<datastore>/ManualCiscoNexus/Cisco_bootbank_cisco-vem-v160-esx_4.

  • vem status -v now gives output. Look for "VEM Agent is running" in the output of the vem status command.
  • vemcmd show port vlans only shows the standard switches at this point; communication with the VSM is not there yet.
  • I added the host manually to the Nexus DVS, and success. After migrating the standard vmkernel management port to the DVS port groups, the host also became visible on the VSM. Communication is flowing and the host is part of the Nexus 1000v.
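Put together, the manual installation and verification steps above look roughly like this (the datastore name and VIB filename are placeholders; substitute the version that matches your VSM):

```
# Transfer the VEM VIB to a datastore the host can reach
# (WinSCP or scp both work; SSH must be enabled on the host)
scp Cisco_bootbank_cisco-vem.vib root@esxi-host:/vmfs/volumes/<datastore>/ManualCiscoNexus/

# On the ESXi host: install the VIB
esxcli software vib install -v /vmfs/volumes/<datastore>/ManualCiscoNexus/<vem-vib-file>.vib

# Verify the installation
esxcli software vib list | grep cisco   # the cisco-vem package should now be listed
vem status -v                           # look for "VEM Agent is running"
vemcmd show port vlans                  # shows DVS ports once the host talks to the VSM
```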

I hope this post will help when you experience the same problem, and that it also teaches you a little about the Nexus 1000V product architecture.

VMware IO Analyzer – lab Flings

Flings on VMware Labs is a great place for (very) useful tools and applications. This time I want to blog about a Fling I often use in the test phase of implementation projects or in health assessments: seeing what synthetic load an environment can handle and whether your vSphere design is up to the required IO characteristics and capacity.

Important in these kinds of tests is your test methodology and plan: assess, filter, test, plan, collect, analyse and report. In several of these steps IO Analyzer can be the player.

IO Analyzer can configure, schedule and run several IOmeter workloads or replay vSCSI traces.

Download at:


Import the OVF to your environment. Start with more than one appliance, so you have some dedicated workers throughout your environment.

After deployment, change the second VMDK from the default "small" configuration to approximately 4 GB or more (and Thick Eager Zeroed). Why? Because the small disk is used as the test disk and fits in most storage caches. We need to get out of the cache and hit some real scenarios.
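Growing the second disk can be done in the vSphere client, or from the ESXi shell with vmkfstools; a minimal sketch, assuming the appliance is powered off and the paths are placeholders for your own environment:

```
# Extend the IO Analyzer data disk to 4 GB (ESXi shell, VM powered off)
vmkfstools -X 4G "/vmfs/volumes/<datastore>/<ioanalyzer-vm>/<ioanalyzer-vm>_1.vmdk"
```

Note that extending a disk this way does not eager-zero the new space; if you want Thick Eager Zeroed, deleting the small disk and adding a new one of the right type in the vSphere client is the simpler route.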

One (or yes, two) more things: log on to the consoles of all the appliances. Open a console, choose the first option or press Enter, and log in with root and password vmware. *ssst, a very secret VMware user* Another use of the console is checking or monitoring the running IOmeter tests.


Write down one of the IPs or hostnames of the appliances; we will use that one as the controller.

Open a browser (Chrome or Firefox) and type http:// followed by the controller's address, and you will reach IO Analyzer in your environment.

There we have the following options:


For this post I will use the workload configuration to add two tests to two appliances and check the results. The test scheduler is not used; the tests will run immediately.

In this screen we first add the hosts where our test machines are, using the root password to connect to the hosts. When a connection is established, the VMs on that host are visible in the Add Workload Entry. Here you can find all kinds of IOmeter tests.


I have created two workloads: an Exchange 2007 workload on our first appliance and a SQL 64K-block workload on the other appliance. The duration is changed from the default 120 seconds to a 5-minute (300 seconds) schedule. This configuration is saved as Demo config.

Click on Run Now to let the test run. After an initialization phase you can see the progress in the console of one of the appliances.


After completion of the tests you can view the test results in View Test Results (so it is not just a clever name :)).

Here you can check the two tests and the different VM and host metrics saved from IOmeter and esxtop (if there are any other VMs on the hosts, you get their metrics as a bonus). For more detailed information about these metrics, see the following URL: https://communities.vmware.com/docs/DOC-9279. Duncan Epping also has a good article about esxtop metrics and more; go see his site while waiting for the test to finish: http://www.yellow-bricks.com/esxtop/

Here are the results of our Demo tests (I'm not going into detail in this post).


Enjoy stress testing.