SMEP and KVM – sounds interesting

Recently a patch was posted to the KVM community adding support for the Intel SMEP CPU feature (where available on the CPU). I thought to myself: what the hell is SMEP?

According to the Intel Software Developer’s Manual it is “Supervisor-Mode Execution Prevention” – this sounds like a great thing, as the kernel is prevented from executing ‘user’ pages while in kernel mode. i.e. if an exploit delivers a page of data and tricks the kernel into executing it, then this won’t happen and a fault will be triggered instead. This sounds like a neat piece of work, and as it’s all h/w based there should be little overhead.

If, like me, you’re wondering whether your system has the SMEP CPU feature, a quick check of the CPU flags will show you. Don’t be disappointed if your CPU doesn’t have it – it’s a very new feature and I can’t even find which CPUs implement it.
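On Linux the check is a one-liner – SMEP shows up as the “smep” flag in /proc/cpuinfo (a minimal sketch; on non-Linux systems you’d check CPUID leaf 7, EBX bit 7 instead):

```shell
# Look for the "smep" flag in the CPU feature list (Linux).
if grep -qw smep /proc/cpuinfo 2>/dev/null; then
    echo "SMEP supported"
else
    echo "SMEP not supported (or /proc/cpuinfo not available)"
fi
```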

Anyway, it’s a step in the right direction, and that direction will hopefully allow hypervisors to be that little bit more secure from un-trusted VMs and to provide a VM ‘shell’ environment that’s a little more secure for the VMs. Unfortunately, as things currently stand, the usefulness for KVM is unlikely to be immediately realised, as Intel engineers suggest enabling SMEP without a guest VM’s knowledge is likely to be ‘problematic’.


Tidbit #1 – Managing other people’s stuff with your tools.

Interesting things I’ve found this week, and of course my comments.

vSphere OVF tool

Firstly, for those wondering what the OVF tool is then you can go here.

If you’re wondering what OVF is, well here is a good introduction. The short summary is the Open Virtualization Format (OVF) describes an open, secure, portable, efficient, and flexible format for the packaging and distribution of one or more virtual machines.

So how do you create an OVF file from a VM? It’s simple.

Select the VM you want to export in your vSphere client, then from the File menu select Export.

Then it’s just a case of following your nose and saving the OVF export to somewhere with sufficient disk space.

At the end of the process the OVF export is complete. The exported contents look like this:

~/ovf$ ls
winxp-sp3-disk1.vmdk  winxp-sp3.mf  winxp-sp3.ovf

The .mf file is a set of SHA1 hashes for the OVF and any of the VMDK files.
The .ovf file is an XML file that attempts to describe the virtual machine in an independent/open format, which is in theory importable into any virtualisation product that fully supports OVF files – I must try that 🙂
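For the curious, the manifest format is simple enough to reproduce by hand with sha1sum – each line is `SHA1(filename)= <digest>`. A small sketch (the function name is mine, not part of any tool):

```shell
# make_mf: print OVF-manifest (.mf) style SHA1 lines for each file given.
make_mf() {
    for f in "$@"; do
        printf 'SHA1(%s)= %s\n' "$(basename "$f")" "$(sha1sum "$f" | cut -d' ' -f1)"
    done
}

# e.g.  make_mf winxp-sp3.ovf winxp-sp3-disk1.vmdk > winxp-sp3.mf
```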

To display the OVF file you can run the ovftool in probe mode.

~/ovf$ ovftool winxp-sp3.ovf
Opening OVF source: winxp-sp3.ovf
OVF version: 1.0
Name: winxp-sp3

Download Size: 20.35 GB

Deployment Sizes:
Flat disks: 37.27 GB
Sparse disks: Unknown
Name: VM Network
Description: The VM Network network

Virtual Hardware:
Family: vmx-07
Disk Types: SCSI-buslogic

Completed successfully

To import the OVF you use a command like the following (the syntax for the vSphere locator is a bit odd – I highly suggest you read the OVF Tool Guide):

$ ovftool --powerOn --datastore=NFS1 winxp-sp3.ovf vi://
Opening OVF source: winxp-sp3.ovf
Please enter login information for target vi://
Username: geoff
Password: ********
Opening VI target: vi://geoff@
Deploying to VI: vi://geoff@
Disk progress: 1%

and on the VC you’ll see the deployment task progressing. Eventually you’ll get:

Powering on VM: winxp-sp3
Completed successfully

Sure, you can deploy from templates, but what if you have multiple environments in a variety of network locations and you’d like a common set of templates? Enter the OVF. With a repository full of OVFs accessible via HTTP, you can centrally store and distribute standard images out into all of your virtual environments.

Of course, this is quite a simplistic example of creating/deploying from an OVF file. In the future maybe all application servers will be deployed from vApp appliances built with VMware Studio – do you really need systems administrators poking around on individually customised VMs when in most cases they can be stateless appliances (well, stateless apart from the configuration information used at deployment time)? Something to ponder.

vSphere VM hot plug CPU script

I was teaching myself how to code scripts using the vSphere SDK for Perl.

I was running all this on an Ubuntu 10.04 system.

It’s not the fanciest script in the world – it was just to demonstrate the concept of modifying a VM configuration on the fly and seeing what happened in the VM (in this case the VM is a SLES 11 x86_64 system).

Note: not all guest systems support hot plugging memory or CPUs, and the VM needs the hot plug option enabled – this of course has to be set while the VM is powered off. Once set, you’re OK for the future.

The script is called with the following options:

--server : the vCenter server you want to connect to
--vmname : the name of the VM you want to modify
--cpu : the number of vCPUs to add to or remove from the VM, given as a positive or negative number

There are plenty of other options as set by the SDK itself.

The best way to run this is after you’ve created a credstore so you don’t have to constantly re-enter the username / password of the VC account.

As you can see from this screenshot, the VM in question (a SLES 11 system) only has one CPU.

and this is confirmed by top on the system

If I run my hotplug script

the VC shows some activity

and if we look at the VM settings once the script has run (note: this VM was powered on when we did this).

What’s this? The guest still shows 1 CPU!

If we look at /var/log/messages we can see the CPU being added.

But to make it active we need to bring it online:
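On a Linux guest the new CPU arrives offline; you bring it online through the kernel’s CPU-hotplug sysfs interface. A hedged sketch – cpu1 is an assumption (use whichever CPU number appeared in the logs), and the write needs root:

```shell
# Bring the hot-added vCPU online via the CPU-hotplug sysfs interface.
CPU=/sys/devices/system/cpu/cpu1
if [ -w "$CPU/online" ]; then
    echo 1 > "$CPU/online"
else
    echo "cannot write $CPU/online (need root, or cpu1 not hot-pluggable)"
fi
grep -c '^processor' /proc/cpuinfo    # count of CPUs Linux is now using
```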

and now we get

So what happens if we try to remove a CPU?

If we check the VM, it doesn’t support hot removal of CPUs 😦

The best we can do is to mark the CPU offline in Linux:
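The same sysfs file is used for this – writing 0 takes the CPU away from the scheduler even though the virtual hardware keeps it. Again, cpu1 is an assumption and the write needs root, so the sketch below only shows the command and reads the current state:

```shell
# Take a vCPU offline (run as root inside the guest):
#   echo 0 > /sys/devices/system/cpu/cpu1/online
# Check the current state first: 1 = online, 0 = offline.
cat /sys/devices/system/cpu/cpu1/online 2>/dev/null || echo "cpu1 not present"
```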

and we can see in /var/log/messages that the CPU has gone offline.

The script is here in case you wondered. I haven’t cleaned it up – I was just trying to work out the API for myself, so the code isn’t pretty. You can find plenty of examples supplied with the SDK; that’s how I got the start for this script.

#!/usr/bin/perl -w

use strict;
use warnings;

use FindBin;
use lib "$FindBin::Bin/../";

use VMware::VIRuntime;
use AppUtil::VMUtil;

$Util::script_version = "1.0";

sub customize;

my %opts = (
   'vmname' => {
      type => "=s",
      help => "The name of the virtual machine",
      required => 1,
   },
   'cpu' => {
      type => "=s",
      help => "The number of cpus to add or remove",
      required => 1,
   },
);

Opts::add_options(%opts);
Opts::parse();
Opts::validate();

my $cpucount = Opts::get_option('cpu');
my $vmname = Opts::get_option('vmname');

# connect to the server, do the work, disconnect
Util::connect();
customize();
Util::disconnect();

sub customize {
   my $vm_views = Vim::find_entity_views(view_type => 'VirtualMachine',
                                         filter => {'name' => $vmname});
   if (@$vm_views) {
      foreach (@$vm_views) {
         if ($_->runtime->powerState->val eq 'poweredOff') {
            Util::trace(0, "For hot(un)plugging cpus, VM '$vmname' should be powered on\n");
         }
         else {
            # new CPU count = current count plus the (possibly negative) delta
            my $num_cpu = $_->config->hardware->numCPU + $cpucount;
            Util::trace(0, "VM '$vmname' CPUs = $num_cpu\n");
            my $vmConfig = VirtualMachineConfigSpec->new(numCPUs => $num_cpu);

            eval {
               Util::trace(0, "Updating cpu allocation...\n");
               $_->ReconfigVM(spec => $vmConfig);
            };
            if ($@) {
               if (ref($@) eq 'SoapFault') {
                  if (ref($@->detail) eq 'CustomizationFault') {
                     Util::trace(0, "\nCannot perform this operation: system error\n");
                  }
                  elsif (ref($@->detail) eq 'NotSupported') {
                     Util::trace(0, "\nThe operation is not supported on the object\n");
                  }
                  elsif (ref($@->detail) eq 'HostNotConnected') {
                     Util::trace(0, "\nUnable to communicate with the remote host, since it is disconnected\n");
                  }
                  elsif (ref($@->detail) eq 'InvalidState') {
                     Util::trace(0, "\nThe operation is not allowed in the current state\n");
                  }
                  elsif (ref($@->detail) eq 'InvalidPowerState') {
                     Util::trace(0, "\nThe attempted operation cannot be performed in the current state\n");
                  }
                  elsif (ref($@->detail) eq 'UncustomizableGuest') {
                     Util::trace(0, "\nCustomization is not supported for the guest operating system\n");
                  }
                  else {
                     Util::trace(0, "\n" . $@ . "\n\n");
                  }
               }
               else {
                  Util::trace(0, "\n" . $@ . "\n\n");
               }
            }
         }
      }
   }
   else {
      Util::trace(0, "No Virtual Machine Found With Name '$vmname'\n");
   }
}


RHEV vs VMware – DPM

Well, it seems there have been a few blog posts about the relative merits of RHEV power-save modes versus VMware DPM.

A couple of the better examples are here and supported by a blog post here.

If you read those articles then it seems that you’re far better off with DPM – but would you be?

Don’t get me wrong, I’m a big fan of vSphere and I’m also a fan of RHEV. Competition is a good thing and ultimately the consumer wins – well, hopefully 🙂

As things currently stand, vSphere DPM is certainly more efficient (power-wise) than RHEV – powering off servers has to be more power efficient than even the most aggressive cpu frequency scaling.

So what am I going on about here? If you look at the competitive pricing guide between RHEV and vSphere and actually do a quick dollar analysis of the RHEV/vSphere solutions, it can be quite revealing. I should point out I have no idea if the prices in the whitepaper are accurate – I’m just referring to them to demonstrate another way to look at the numbers.

In the Windows scenario presented in the whitepaper there are 9 systems running 100 Windows VMs. Over a 3 year period the costs are given as $205,980 using RHEV and $284,382 using vSphere, a difference of $78,402 in favour of RHEV.

How much of an impact could DPM have on this price difference?

Of the 9 systems, I’m going to assume an aggressive 6 systems could be powered down (vSphere DPM) or put into an idle state (RHEV) for 5 hrs in a 24 hr period.

Using the power consumption numbers from the above linked blogs (I don’t have my own numbers) then an example active server would run at approx 300 W and an idle server would run at 140 W.

If all 9 servers are on continuously we get 9 * 300 * 24 = 64.8 kWh

If 6 are idle for 5 hrs, then in the case of DPM they would be powered off saving

6 * 5 * 300 = 9 kWh

In the case of RHEV they would run at the lower power consumption, giving us a saving of

6 *5 * (300-140) = 4.8 kWh

Clearly DPM saves us an extra 4.2 kWh per day in the above contrived case.

Over 3 years that would save us

3 * 365 * 4.2 = 4599 kWh

over the RHEV solution – certainly good for the environment.

How much money that saves you depends on how much you pay for power. If I use an expensive rate of $0.50 / kWh then that would be

0.50 * 4599 = $2299.50 over 3 years – nothing to sneeze at.
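The arithmetic above is easy to double-check in one go (same figures as in the text):

```shell
awk 'BEGIN {
    dpm  = 6 * 5 * 300;          # Wh/day saved powering 6 hosts off for 5 h
    rhev = 6 * 5 * (300 - 140);  # Wh/day saved idling the same hosts at 140 W
    diff = dpm - rhev;
    printf "DPM advantage: %.1f kWh/day\n", diff / 1000;
    printf "Over 3 years:  %.0f kWh\n", 3 * 365 * diff / 1000;
    printf "At $0.50/kWh:  $%.2f\n", 0.50 * 3 * 365 * diff / 1000;
}'
```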

However, vSphere is $78,402 more expensive over 3 years and I’ve only saved $2299.50 due to the more efficient DPM.

Hmm, $78,000 can buy me a whole lot of power!

Maybe I should buy RHEV and donate the difference to charity 🙂

Of course, everything above is contrived, but I just wanted to see how the numbers stacked up given the sales and marketing material going around. You have to look at the complete picture in either case as it applies to YOU. If the only differentiator for you is DPM and you’re interested in saving money, then why wouldn’t you go RHEV? If there are features you *need* that only exist in vSphere then you’ll have to go that way until RHEV catches up (assuming it does).

Is any of the above data accurate? No idea – the costings come from Red Hat, and the power savings were just example figures from people kind enough to measure their servers and put the data on the net. The rest is up to you!

SUSE Linux Enterprise Server for VMware

According to this announcement you can now get

When you make a qualifying purchase of VMware vSphere, the industry’s leading virtualization platform, you will be entitled to receive SUSE Linux Enterprise Server (SLES) for VMware and a subscription to patches and updates at no additional cost (see terms and conditions below). By running SLES for VMware, you’ll also have the option to purchase technical support services directly through VMware.

The terms and conditions are on the above web page. It will be very interesting to see how this pans out. This may be just what the Novell doctor ordered.

vmdk to kvm (qemu)

I finally decided to migrate the last of my vmware-server systems to KVM.

The process is pretty simple and this is what I did.

  1. As my vmdk file was split into many 2 GB chunks, I first had to convert it into a monolithic file.  This is easily achieved with the vmware-vdiskmanager utility supplied with vmware-server (or at least with the 1.0.x version of vmware-server I was running 🙂 )

    vmware-vdiskmanager -r winxp.vmdk -t 2 winxp-full.vmdk

    The -t 2 is the important part, taking all the 2 GB chunks *referenced* by the vmdk file and creating an equivalent single pre-allocated vmdk file.

  2. Once you have that file then it’s a simple qemu-img command to convert it to something that KVM is happy with.  In my case I wanted QCOW2 format.

    qemu-img convert winxp-full-flat.vmdk -O qcow2 winxp.img

    You will notice that I said winxp-full-flat.vmdk – this is the pre-allocated flat file referred to by the new winxp-full.vmdk descriptor.

  3. That’s it 🙂
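If you want a belt-and-braces check before booting the result under KVM, qemu-img can confirm the conversion worked (a sketch assuming qemu-img is on your PATH and you’re in the directory holding the image):

```shell
# Confirm the converted image really is qcow2 before pointing KVM at it.
img=winxp.img
if command -v qemu-img >/dev/null 2>&1 && [ -f "$img" ]; then
    qemu-img info "$img"    # look for: file format: qcow2
else
    echo "skipping check: qemu-img or $img not available here"
fi
```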