Cloud Computing is ACE!

<sermon>

Talk about hype! I am a firm believer that the principles underpinning cloud computing are sound; let’s face it, a lot of them have been around for quite some time. I’m also pretty sure that marketing rules this environment, and that marketing is usually full of crap, so it is best to keep a skeptical hat handy during marketing presentations.

Cloud computing does bring some new concepts to the table. It is effectively a dynamic Adaptive Computing Environment (ACE). That is, it is sufficiently flexible to allow applications to reconfigure the environment in a way that best delivers the required functionality. How is this different to plain old virtualisation? Well, IaaS (Infrastructure as a Service) is a major concept in most cloud deployments, so it isn’t surprising that cloud computing builds incrementally on top of that methodology. Cloud computing is about providing the required level of services and resources when they’re required. It isn’t just about provisioning another Linux or Windows system; it is about giving applications what they need, when they need it.

Can everything in a cloud ‘space’ be wonderfully dynamic? Currently, no, not really. Some problem sets require specific application and system/network topologies to be solved well. Over time, though, I fully expect IT to evolve away from the current system-centric (or sometimes network-centric) view of the datacentre. With this evolution we will see solution components that can be rapidly deployed and that integrate seamlessly into the cloud space.

Cloud administrators, who I firmly believe will be very different from existing systems administrators, will be responsible for managing a complex and highly dynamic environment. They will need a far better understanding of the requirements of the deployed applications. They will have to facilitate database solutions ranging from simple files to multi-system query engines; provision disk storage when and where it is required, with backup policies that meet application requirements; integrate highly performant, CPU-intensive applications and potentially unique processing capabilities; and, of course, do all of it securely.

All this cannot be done at the systems administrator level, where we still see constant bickering about whether my operating system is better than your operating system. Big news flash: the applications don’t give a crap, and the businesses just want them to run and meet their requirements. I fully expect to see some form of application enclaving, a technique that will see an application component instance running on one system seamlessly migrated to another system, possibly even one running a completely different operating system (you could argue this is possible now with Java and something like Terracotta).

In the cloud, all operating systems and architectures have to co-exist in harmony. They will all play a role in delivering application functionality, and guess what: a lot of the reconfiguration of cloud resources will be triggered by the applications themselves, whether scaling up or down according to workload or reconfiguring themselves in response to changing business requirements. The processing and connectivity elements in the cloud do not exist for their own sake; without the application they are meaningless. The operating system is merely a vehicle for providing services to the application, a concept that has definitely been forgotten.

In the cloud, the power shifts to the application developers, who, in close conjunction with cloud administrators, will deploy and maintain application computing solutions.

As far as I can see, one of the biggest issues with cloud computing is ensuring the locality of the data and the computing elements. If you have to transfer petabytes of data considerable network distances to reach the computing resources and back, then you need to question the cost-effectiveness of doing so. Under these circumstances your cloud application design may see you creating a private cloud component that interacts with other private clouds (say, an outsourcer’s) or public clouds. To solve these issues, agreed interoperability standards are extremely important, and you could easily argue that they are equally important just for ad-hoc consumption of computing resources.

If you ignore the hype and look at the concepts, then there is significant value in an Adaptive Computing Environment.

</sermon>

Subway get your act together!

Come on Subway, this is pretty pathetic.

I went to lunch today expecting to have a nice 30cm (aka footlong) Veggie Patty roll, and what do I find? Oh, we’re all out of veggie patties. OK, I think, there are 3 other Subway outlets in my area, I’ll go to another. All 3 of them were out. Not only that, this is the second time in 2 months I’ve had the same story from Subway.

So, in response, I thought I’d provide them with some help.

If you have only one box of veggie patties left – please order some more pronto 🙂

Maybe there is a worldwide plague and veggie patties are the natural defence; maybe they’re being stockpiled and therefore there is a global shortage. Somehow I suspect not, just poor stock management.

So get your act together – oh, and you owe me 40 minutes of lunchtime and a couple of dollars of petrol 😉

Is KVM a type 1 or a type 2 Hypervisor? – aka – My Hypervisor is better than your Hypervisor!

There are many virtualisation solutions on the market today. As a result, we now get companies telling us that their solution is better than their competitors’ – nothing new there. Of course, no one seems to provide any benchmarks, and some companies even get their knickers in a twist if you suggest you’re going to perform and openly document any benchmarks.

Firstly, if you remember that it is all done with smoke and mirrors you’ll be fine.

What is a Hypervisor?

Some people/vendors say hypervisors (also called Virtual Machine Monitors) are new technology that enables multiple operating systems to co-exist on a single system. This is incorrect: hypervisors have been around since system virtualisation started back in the 1970s with IBM’s CP-370, a reimplementation of CP-67 for the System/370, known as VM/370. VM/370 has evolved over the years, is now known as z/VM, and is fundamental to large-scale virtualisation of Linux (and OpenSolaris) on System z.

Hypervisors use a small layer of code to achieve fine-grained, dynamic resource sharing of the underlying system, though you could easily argue that z/VM is not a small layer of code. In general, a piece of code that provides fine-grained, dynamic resource sharing of the underlying system sounds awfully like an operating system to me. Admittedly this operating system allows you to run other operating systems simultaneously. Again, that doesn’t sound terribly different to an operating system running processes, so clearly there is something more to it.

The issue revolves around how an operating system manages the underlying hardware resources. It does so in a privileged state, where it handles all requests to access the hardware on behalf of the user processes. That is, my user-mode process cannot directly access the hardware; it delegates that request to code running in a more privileged state. This is where the smoke and mirrors come out. The hypervisor provides each guest operating system with the appearance of full control over a complete computer system (memory, CPU, and all the peripheral devices). Fundamentally, hypervisors work by intercepting and safely emulating sensitive operating system operations (such as page table manipulation) in the guest.

Hypervisors are historically classified into two types:

  • Type 1 hypervisor (Bare-Metal Architecture) – This is a hypervisor that runs directly on a given hardware platform; a guest OS then runs at the second level above the hardware. The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s. Often-quoted examples of this type are Xen, VMware’s ESX Server and IBM’s LPAR hypervisor (PR/SM).
  • Type 2 hypervisor (Hosted Architecture) – This is a hypervisor that runs within an OS environment; a guest OS then runs at the third level above the hardware. Often-quoted examples of this type are VMware Server and Linux KVM.

Is KVM Type 1 or Type 2?

Vendors will often be seen bagging their competition: “Oh, they’re a type 2 hypervisor, we’re a type 1. Since we’re type 1, we must be better.”

Is the distinction between types even relevant anymore?

If we look at the type 1 hypervisors above, we see IBM’s PR/SM and the son of CP/CMS known as z/VM. Both System z PR/SM and z/VM are classified as type 1 hypervisors, yet you can run z/VM, itself a type 1 hypervisor, inside a PR/SM logical partition. Does it make sense to keep the distinction of type 1 versus type 2 in this case? Probably not.

What about x86?

If we look more closely at x86-style architecture, we see that it is divided into four hardware privilege levels, aka rings. The operating system kernel runs in privilege level 0 (aka ring 0), giving it complete control over the system. In the case of Linux, ring 0 is also known as kernel space, with user mode running in ring 3.

So where do the hypervisor and virtualisation fit into this?

Virtualisation effectively puts the hypervisor into ring 0, which in turn presents a ring 0 lookalike to the guest operating systems, fooling them into believing they are running on the native hardware.

In this context you could argue that a type 1 hypervisor runs directly in ring 0 and a type 2 hypervisor runs in ring 3, but as we’ve seen above with PR/SM and z/VM, the distinction between type 1 and type 2 hypervisors is fuzzy at best.

Now, coming back to the reason for my blathering on: is KVM a type 2 or a type 1 hypervisor? Many people will flatly say type 2, as it is loaded by a host operating system (in this case Linux), and of course those running type 1 hypervisors will say that theirs is better. Others will say KVM is type 1.

I’m not so sure that KVM is a type 2 hypervisor. Sure, it does use a Linux operating system, but what is the difference between a dedicated hypervisor microkernel and a dedicated Linux-based hypervisor? I don’t think the amount of code should really be a determining factor.

KVM makes the hardware virtualisation extensions (AMD SVM or Intel VT-x) available to the Linux kernel, effectively making the kernel a ring 0 hypervisor. In this mode the ring 0 hypervisor (VMX root operation, in Intel’s terms) has full privileges, and the guest operating systems run in what is known as a deprivileged ring 0. Sure, guest VM creation is performed from user space via /dev/kvm device ioctls, but you could just as easily ask: do you want VM creation and management to be performed by a privileged microkernel?
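To see just how mundane that user-space interface is, here’s a minimal sketch, assuming a Linux box with KVM enabled and the <linux/kvm.h> header available: open /dev/kvm, check the API version, then ask the in-kernel hypervisor to create an empty VM and a virtual CPU via ioctls. A real VMM such as QEMU obviously does far more (guest memory regions, register setup, the run loop), so treat this as an illustration of the interface rather than a working virtual machine.

    /* Minimal sketch: creating an empty VM from user space via /dev/kvm.
     * Assumes a Linux system with KVM enabled and <linux/kvm.h> installed. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        /* The KVM control interface is just a character device. */
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return EXIT_FAILURE;
        }

        /* Sanity-check the API version (stable at 12 since kernel 2.6.22). */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* Ask the in-kernel hypervisor to create a new, empty VM. */
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vmfd < 0) {
            perror("KVM_CREATE_VM");
            close(kvm);
            return EXIT_FAILURE;
        }

        /* Add one virtual CPU; guest memory, registers and the run loop
         * would follow in a real VMM. */
        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);
        if (vcpufd < 0)
            perror("KVM_CREATE_VCPU");
        else
            close(vcpufd);

        close(vmfd);
        close(kvm);
        return EXIT_SUCCESS;
    }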

Personally, I think distinguishing between hypervisors based on their ‘type’, which is in itself a historical artefact of long-extinct technology, is a waste of time; perhaps that is why it is used in marketing campaigns 🙂

Perhaps we’d be better off looking at performance and interoperability, and making those the things to argue about. Let’s all forget about type 1 and type 2; it’s all too fuzzy to be bothered with those terms anymore.

Nagios forked

I normally don’t get that fussed by open-source projects going off on tangents. However, I think the fork of Nagios to Icinga is a good thing, much in the same way that Quagga was a great fork of Zebra.

Nagios is OK, but it’s not great. There are many areas where it can improve, and now that the future of the tool is directly in the hands of the community, I’m hoping it can make some big leaps forward.

Of course, the new team has to step up and deliver the goods. I’m happy to support them where I can by replacing my Nagios deployment with Icinga.