Monday, November 01, 2010

IE6: Will no one rid me of this turbulent browser?

Internet Explorer 6 was released in 2001 and was deployed widely until the end of 2006, when IE7 appeared. Although a 2001 browser should be ancient history, IE6 has proved hard to dislodge from corporate desktops, in spite of concerted efforts by Mozilla, Google, and Microsoft itself. Internet Explorer 9 went into beta a month and a half ago.

If you work for a young company, or keep in-house development to LAMP or pure Java, the persistence of IE6 will seem baffling to you. Why not just migrate to a later version of IE? Better still (in our opinion), try Google Chrome.

The reason IE6 has such staying power has a lot to do with the level of Intranet development activity between 2001 and 2007. This period was a boom time for in-house development, and at one point IE6 had around 90% market share. Today that has dwindled to 20%, but what really prevents some of the world's largest companies from killing IE6 is the millions of lines of code and thousands of man-hours of testing they have invested in their IE6-era Intranet applications.

Unfortunately, in the effort to make their applications run and render smoothly on IE6, many developers gave up browser portability. Your Intranet works on IE6, but nobody is testing the applets, HTML, JavaScript, ActiveX, frames, and everything else on IE7, let alone Firefox or Chrome.

The feedback we have had from our corporate clients is that IE6 Intranets and applications are going to be around for a few years yet, probably outlasting Windows XP itself, which is only supported by Microsoft until April 2014. To give you an idea of just how strong IE6 inertia is, consider the following:
  • 360is know of around 30 unpatched security vulnerabilities in IE6.
  • Performance running certain content is 10x slower than a modern alternative.
  • IE6 doesn't support CSS v2, a cornerstone of modern web site design.
...and yet your corporate desktop is still running a ten-year-old browser. So how is an IT department to continue supporting IE6 for the foreseeable future, without impacting other desktop projects like Windows 7 and VDI, and without exposing the organisation to any more risk than is absolutely unavoidable? On the face of it there are a few options, but most of them do not survive the acid tests of user acceptance (too fiddly) or licensing (you aren't allowed to do that), or will not be supported by Microsoft. Given the importance of these Intranet applications (typically business-critical pricing tools, order entry systems, customer databases, and stock keeping), running without support is something we find hard to get comfortable with.
  • Putting XP in a VDI and asking users to switch to it just for legacy applications is going to be too much hard work for most users.
  • Running IE6 on VMware using ThinApp is not supported by Microsoft.
  • Session Virtualization using Microsoft RDS is a better solution, but not everyone is experienced with this technology.
If you need to support IE6 into 2011 while migrating from Windows XP to a modern desktop operating system, and need expertise and support for your project, just let us know.

Tuesday, September 14, 2010

Citrix XenServer Certified at EAL2 Common Criteria

Back in April we discussed XenServer's submission to the Common Criteria (CC) scheme. Based upon previous evaluations, we predicted a 6-month end-to-end process. True to form, the guys at SiVenture have delivered on time.

As is common with other virtualization technologies that have been through the CC process, the investigation work centred on the separation of virtual machines, their memory, their virtual disks, and their execution on the CPU(s). The method of secure administration was also investigated.

This is good news, particularly for those UK Government buyers who need a CC-approved virtualization platform that will help to reduce OPEX through lower power consumption and CAPEX through XenServer's lower license cost.

There is going to be a lot more white space on this chart in 2011-2012 than there was in 2007-2008 when it was compiled, so a lower-cost method of virtualization will be a welcome addition to the procurement department's product list.

Thursday, July 15, 2010

Business Continuity Planning, Disaster Recovery, and Continued Operations

Sooner or later someone important asks you the awkward question.

     "So what happens if…"

     …nobody can get in to our office?
     …the power goes out in our building?
     …there is a flood?
     …the datacentre is ransacked?
     …our network is cut?

Power companies, telcos, and data centre operators all strive for 100% uptime; their marketing literature is littered with phrases like resilience, N+1, self-healing, and high availability. What this really means is that they have accepted that equipment is fallible and that people make mistakes. They have considered what we call the modes of failure in their service.


     "So how come the redundant, self-healing, service went down twice this quarter?"


Unfortunately, failures in IT can be both subtle and complex. The interaction of so many intricate systems, hardware, software, and networks, gives rise to an almost infinite number of ways in which things can go wrong. Sometimes, somehow, failures in a complex system can confound even the most prepared service providers. If you have not already experienced an outage of this sort, you may be surprised to learn that downtime due to something as simple as a power cut can run into many days. In the last 14 months there have been at least 7 major power incidents in the UK. The longest saw homes and businesses in London without power for 4 days. What is your plan for coping without power, networking, or staff on site for almost a week? It happens.

Experienced systems administrators and network engineers know that adding additional software or hardware to an already complex system, with the intention of achieving higher uptime, can sometimes have quite the opposite effect. At 360is we come into contact with far more misconfigured multi-path storage fabrics than we do faulty cables or broken network cards. Why? Because complexity is the enemy of availability. Additional network cards, cables, and even switches are relatively inexpensive. The expertise to make them all work properly is not.

     "If you don't have it twice, you don't have it"

As a response to the fallibility of complex systems, IT infrastructure managers have long sought to replicate their data and applications, both within a given data centre and further afield to a secondary Disaster Recovery site. While this setup sounds impressive, it need not be costly: 360is replicates its Manchester systems to just a few units of rack space in Munich.

Replication across geographic distances was once prohibitively expensive for all but large financial institutions (who coupled mainframes in one location to remote disk drives in another); today it is within the reach of even the Small to Medium Enterprise.

Today's systems manager will find there are now many paths to replication, each with its own pros and cons in terms of price, functionality, level of integration, ease of deployment, and ease of use.
Our consultants are here to help you navigate a path through the technology to the right solution. Get in touch to find out how you can take advantage of recent hardware-agnostic advances in software clustering, data replication, and fault tolerance (one such approach is sketched after this list) to...
  • Increase Availability
  • Reduce Complexity
  • Make better use of your capital and operating budget
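
To give a flavour of how inexpensive off-site replication has become, here is a minimal sketch of snapshot-based replication between two ZFS hosts. The pool and dataset names, and the host "drhost", are hypothetical placeholders; this is one of many possible approaches, not a recommendation for every workload:

     # Take a consistent point-in-time snapshot of the dataset to protect.
     SNAP=rep-$(date +%Y%m%d%H%M)
     zfs snapshot tank/appdata@${SNAP}

     # First run: stream the full snapshot to the DR site over SSH.
     # -F lets the receiving side roll back or overwrite as needed.
     zfs send tank/appdata@${SNAP} | ssh drhost zfs recv -F backup/appdata

     # Subsequent runs: send only the changes since the previous snapshot
     # ("@prev" stands for whatever you named the last one).
     zfs send -i tank/appdata@prev tank/appdata@${SNAP} | ssh drhost zfs recv -F backup/appdata

Scheduled from cron, a few lines like these give an SME asynchronous off-site copies without any specialised hardware.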

Monday, April 26, 2010

Free Virtualization Assessments?

It's a familiar story.

The client had wanted to move to a virtualised data centre for some time, but non-essential IT spending had been frozen. They had taken a 10% haircut across IT in the past 18 months and most of their contractors had been dropped, but after a long cold winter they were coming out of hibernation as business improved.

The call came in first to the notional head of IT. As in so many UK companies the IT function did not have a CIO or board-level sponsor, but found itself reporting instead to facilities or operations. The call was from one of VMware's 2000-or-so UK partners.

They offered a free virtualization assessment.

Although not an IT professional, the head of operations knew his IT team wanted to virtualise and had been stopped from doing so the previous year. The assessment would take 30 days, required 1 brief visit to site, and wouldn't cost a penny. It would kick-start this year's biggest IT project, virtualising the data centre.

After a month the client was handed a spreadsheet, and a quote for VMware licenses and 20 new HP servers valid for 30 days. Not being a technology company, and being apprehensive about making the leap to virtualization, the client wasn't ready to just sign on the dotted line, at least not without getting some questions answered:
  • What about re-using some or all of their servers?
  • How would this protect their investment in Network Attached Storage?
  • Did they need a paid-for product, what about free software like ESXi, Hyper-V, or XenServer?
  • How would a virtualised estate cope with growth coming out of recession?
  • How would all this fit in with the existing Disaster Recovery plan?
  • Did they have the required skills to operate a newly virtualised estate?
  • How firm was support for their critical ERP and CRM software in a virtual environment?
The spreadsheet didn't say. The reseller, seeing the prospect of 20 licenses and HP servers disappearing, didn't say either. They just seemed to lose interest. At this point the client found our website, registered to download a case study, and recounted his tale to us. Unfortunately, a virtualization assessment isn't just a maths problem that can be solved by dividing workloads by servers and multiplying the result by the cost of a VMware/XenServer/Hyper-V license. Automated tools and spreadsheets won't answer any of the points above, at least not the tools we have seen. A comprehensive Virtualization Assessment will, however, provide you with the foundation you need to adopt this technology in the most appropriate way for your business. Our comprehensive virtualization assessments will:
  • Reduce the risks inherent in adopting new technology
  • Increase the accountability of IT to the business over this project
  • Help you set budgets appropriately
  • Predict skills gaps and identify a plan to fill them
  • Support your IT staff with product-neutral advice and vendor evaluation
  • Accurately set expectations for timescales and functionality
  • Identify where virtualization impacts DR & BC processes
Where does that leave the average software reseller and his automated tool? Ever heard the saying "when the only tool you have is a hammer, everything looks like a nail"? If you want to virtualize mission-critical systems in a safe, planned way, but need more than a spreadsheet and a quote for some software to guide you, then let us know.

Thursday, April 15, 2010

360is Welcomes New Consultants

As a result of our growth last year, we'd like to welcome 2 new consultants to 360is.

Wynn was a contractor running the virtualised infrastructure for a financial services company, and is taking on part of the responsibility for customer support. After only a month he has proven an invaluable addition to the team. Iain, who has extensive VMware and virtual desktop experience, we discovered as an independent contractor attending one of our training courses. With their first couple of client projects already completed, we look forward to many more. It's rare that we are lucky enough to find individuals who fit the profile so well.

Welcome!

Wednesday, April 14, 2010

Citrix XenServer & VMware ESX Common Criteria Certification

As some of you already know, Citrix have sponsored XenServer, XenDesktop, XenApp, and NetScaler into the Common Criteria program for Information Technology Security Evaluation (CC). We have had several questions about the announcement and what it means for VMware and for XenServer in particular. Since 1 or 2 of us at 360is were there way back when Common Criteria & ITSEC first started seeing mainstream IT products submitted for evaluation, we thought we would take this chance to answer some of your questions in this posting.

What is Common Criteria (CC)?
CC is an attempt to reduce duplication of effort across the IT security evaluation functions of several governments (6 in all). It is an international standard that describes how product vendors may make claims about their security software or hardware, and have independent laboratories investigate these claims and certify that the product has been designed and built in a way that meets the vendor's claims and can be relied upon to function as described.

What is EAL?
Within CC, products are examined to an Evaluation Assurance Level (EAL). EALs currently run from 1 to 7, with 7 being the most detailed, most stringent level of scrutiny that a product is put under. VMware ESX and ESXi 3.5 were certified to EAL4+ in February 2010. Citrix have submitted their products for the EAL2 process this month.

So An EAL4 Product Is More Secure Than An EAL2 Product?
No. This is probably the most common misconception about CC. A higher EAL number means only that the product passed a deeper level of scrutiny of the vendor's claims. For example, I might have a simple, weak encryption application that passes EAL7 because it was found to meet my claims without fault, and its design and execution were found to be exemplary even when "put under the microscope" of EAL7. A much stronger encryption application, one that would protect my data better using a strong algorithm, might only be submitted for EAL2, because I want to get some kind of basic certification quickly so I can sell to my government customers. There are also a number of misconceptions around how vendor claims are tested; in our experience, code review is only done at EAL6 or 7, for example.

What Claims Might A Vendor Make?
The scheme allows vendors to tailor their claims based upon their product and the way it is to be used. This means that a Firewall is not subject to the same investigation as an Email system or a Desktop OS. A vendor with a Firewall might claim that in order to administer the device you must pass 2-factor authentication, that you can only do so over a strongly encrypted connection, and that there is no other possible way of gaining admin access. Such a claim would be investigated to the required depth as part of the CC certification. Another example of a popular claim might be "the admins can't automatically read everyone's Email". CC tests that these claims are true, to a certain depth. Documentation is a vital part of passing an evaluation.

Does It Matter What Version Gets Certified?
Yes, it matters very much. Just because version 1 of a product received certification, it doesn't mean that v2 or even v1.0.1 is certified. The product must be resubmitted into the evaluation process to be re-assessed. This is because CC evaluates the vendor's claims for a given version, and even a given configuration, of the product. It is normal for a product to be obsolete by the time it passes certification. You could argue this is made worse by the pace of change in commercial software, with many companies pushed to make 1 major release per year and 2 functionality patches, alongside 4 or so critical security-related hotfixes, all of which take a product outside its certified condition.

How Long Does It Take?
For products of similar size and complexity, the higher the level of assurance, the longer the evaluation takes. Expect to see XenServer (we presume v5.0 or v5.5) certified within the next 6 months. A CC certification can be an expensive business; in our experience of the process (mainly CheckPoint FW-1 and Harris CyberGuard) the cost is £200K-£400K.

Who Cares If A Product Is Certified?
Mostly it is government buyers, or those who have to work closely with government agencies, exchanging information with them or connecting directly to them. Often such customers are restricted to choosing products from the catalogue of evaluated solutions. However, depending on the sensitivity of the information being handled by the IT, an EAL-certified product may not even be required.

Where Can I Find Out More?
As ever, Wikipedia is a good start.
Check the Portal for certified products.
Or talk to us.

Updated 14-09-2010: XenServer has now been granted its certification.

Friday, March 19, 2010

Virtual Machine Company Webinar Recording

The Virtual Machine Company webinar recording is now available for download. For details of the agenda and material covered, please see the original invite here.

We are making it available as a 40MB PowerPoint (pps) file.
Download it here (registration required).

Thursday, March 04, 2010

Paravirtualized OpenSolaris & Solaris on XenServer


NOTE: This entry was written for XenServer v5.0 and 5.5 and may not be applicable to 5.6. If you would like 5.6 compatibility then let us know and we'll consider updating this post.

We are big fans of Sun Solaris at 360is, particularly for anything mission critical, or when infrastructure is needed that requires little or no maintenance. Features like Zones, DTrace, ZFS, and Live Upgrade make it a joy to work with, especially in large, complex environments. We are also big advocates of virtualization, including the XenServer product, which brings enterprise virtualization features to the world free of charge. We aren't alone; the world's largest VM platform runs on Xen technology.

Although Solaris and OpenSolaris run on XenServer, historically they have done so using something called HVM. HVM is slow for IO operations like disk or network activity. It works fairly well for pure CPU/RAM-based tasks, thanks to features AMD and Intel built into recent CPUs, but forget about doing anything else, and forget about using an HVM-mode VM in a real production environment. HVM is particularly unsuitable if low IO latency is a requirement.

Paravirtualization is a much more efficient method of virtualization than HVM, but requires the co-operation of the guest OS in order to play nicely with the hypervisor. Unfortunately Citrix don't supply a paravirtualized OpenSolaris template for XenServer, and there are some bugs in the PV interface that prevent regular Solaris running at full speed too. What a shame!

For those of you not already familiar with the differences between OpenSolaris and Solaris, check out Jim's blog. If you thought Solaris was just a George Clooney film, you can safely ignore this whole posting and probably the entire blog, you are in the wrong place.

Benchmarking HVM Versus Paravirtualized OpenSolaris
The chart at the top shows HVM versus PV OpenSolaris storage performance on a puny Dell system with 1 local hard disk and no optimisations (besides enabling a PV kernel). A single vCPU was used, with 512MB RAM. Although the absolute numbers behind this chart are not exciting (we report just the relative figures), the percentage jump from HVM to PV certainly is. Our production systems (VMCo Virtual Appliances), running PV OpenSolaris/Solaris, easily exceed 100MB/sec on a single iSCSI Gigabit Ethernet link, again with 1 vCPU allocated. The paravirtualized guest has about 2x the performance of the HVM guest in our simple filesystem test above.
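
We don't reproduce the exact test harness here, but as a rough stand-in, a simple sequential-throughput check run inside each guest looks like the following. The /tank/scratch path is hypothetical; use any filesystem with a gigabyte or so free, and remember that filesystem caching will flatter the read figure:

     # Sequential write: 1GB of zeroes in 1MB blocks.
     dd if=/dev/zero of=/tank/scratch/testfile bs=1024k count=1024

     # Sequential read of the same file back.
     dd if=/tank/scratch/testfile of=/dev/null bs=1024k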

Considering CPU & RAM Performance
Besides advocating Solaris and XenServer, we are also fans of Geekbench, the independent benchmarking software for... well... geeks. Both HVM and PV virtual machines clocked up almost identical scores of around 2400 on Geekbench2 (a CPU/RAM-based benchmark), thanks to the Dell's AMD-V processors and XenServer's efficiency. Again, not exciting as an absolute measure of performance, but the near identical Geekbench score for HVM and PV virtual machines confirms the vendors' assertions about AMD-V/Intel-VT technology. For computationally intensive (pure RAM/CPU) work, there is little to split HVM and PV.

Getting Solaris & OpenSolaris Performing on XenServer
Getting the best performance from either kind of Solaris guest requires running it in PV mode. This is achieved via a slightly different route depending on whether it's Solaris or OpenSolaris you are using. First up, we will take the easier of the two.

  • Solaris
    We have provided a trivial template to get around a bug with XenServer/Solaris that would otherwise prevent the guest running at full speed. It runs Solaris on XenServer using Sun's own hybrid drivers, described here: "Performance of PV drivers in HVM domain looks similar to that of a fully PV guest domain". Download the template, import it as a custom template, and you are ready to start creating Solaris VMs with better performance than standard.




  • OpenSolaris
    Things are not quite so simple for OpenSolaris as they are for Solaris. In order to get OpenSolaris using PV drivers we need to have it boot to a PV-enabled kernel from cold, and set a couple of critical XenServer options on the VM. The first of these tasks involves making a simple change to the Domain0 machine of each physical server you want to run PV-OpenSolaris on.
    Health Warning: In the extremely unlikely event that you break your XenServer, it's your problem. Yes, 360is provides famously good XenServer support contracts, but this is not an attempt to break your system and then sell you one. This blog entry was originally written for XenServer v5.0. Don't experiment with production systems.
Step By Step Guide, Running Paravirtualized OpenSolaris On XenServer
  1. Locate 2 files on the OpenSolaris install CD and copy them to the Domain0 of each and every XenServer you want to run a PV OpenSolaris instance on (if you can't find these 2 files, you have hold of the wrong media):

     1. Copy /boot/x86.microroot from your OpenSolaris media into /opt/opensolaris/ on your XenServer Dom0.

     2. Copy /platform/i86xpv/kernel/unix into the same directory.
  2. Create a new VM in XenCenter using the "Other Install Media" template. Do not start it up yet.

  3. Allocate enough RAM to the VM; 800MB to 1024MB is plenty.

  4. Identify the uuid of your new soon-to-be-OpenSolaris VM using the command:

     xe vm-list

  5. Execute the command:


    xe vm-param-set uuid=vm_uuid PV-kernel='/opt/opensolaris/unix'
    Our VM now knows to load this kernel from the Dom0 filesystem.

  6. Now tell the VM where to read the PV RAM disk from:


    xe vm-param-set uuid=vm_uuid PV-ramdisk='/opt/opensolaris/x86.microroot'


  7. We set up our boot arguments for the VM:


    xe vm-param-set uuid=vm_uuid PV-args='/platform/i86xpv/kernel/unix -B console=ttya'


  8. Clear the VM boot policy, forcing it into PV operation:


    xe vm-param-set uuid=vm_uuid HVM-boot-policy=


  9. Also clear the bootloader value:


    xe vm-param-set uuid=vm_uuid PV-bootloader=


  10. Use the following command to identify the uuid of the vbd for our new VM:


    xe vbd-list


  11. Set our VM to boot from the vbd:


    xe vbd-param-set uuid=vbd_uuid bootable=true


  12. Finally we are ready to begin the OpenSolaris install proper. Attach the OpenSolaris install CD/ISO to your VM in the normal way.

  13. Boot the VM, and begin your OpenSolaris install. When finished, shut it down.

  14. Once your OpenSolaris install has completed, we need to change some settings for the VM in XenServer to get it to boot from the newly created OpenSolaris root filesystem. The trouble with this is that the path to the root filesystem depends partly on the name you gave your new OpenSolaris system in the installer. For this example we are going to assume you called it simply "opensolaris".


    xe vm-param-set uuid=vm_uuid PV-args='/platform/i86xpv/kernel/unix -B console=ttya,zfs-bootfs=rpool/ROOT/opensolaris,bootpath="/xpvd/xdf@51712:a"'


  15. Disconnect your OpenSolaris ISO image from the VM and you are ready to go.
We still don't have any xen-tools for OpenSolaris, so things like live migration will not work. If you need to move the VM then XenServer can't help, and I haven't looked at OpenSolaris suspend-to-disk capabilities (suspend-to-RAM is no good to us). Randy is probably the man who knows whether you can suspend OpenSolaris to disk, and then shut down and restart the VM on a different XenServer in the pool.
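
For repeat installs, the VM-side settings from steps 4 to 11 can be applied in one pass from Dom0. The following is a minimal sketch, not part of the original procedure; "opensolaris-vm" is a hypothetical name-label, and it assumes the VM has only one VBD (check with xe vbd-list if in doubt):

    # Look up the VM and its single virtual block device by name-label.
    VM=$(xe vm-list name-label=opensolaris-vm --minimal)
    VBD=$(xe vbd-list vm-uuid=${VM} --minimal)

    # Point the VM at the PV kernel and RAM disk staged in Dom0.
    xe vm-param-set uuid=${VM} PV-kernel='/opt/opensolaris/unix'
    xe vm-param-set uuid=${VM} PV-ramdisk='/opt/opensolaris/x86.microroot'
    xe vm-param-set uuid=${VM} PV-args='/platform/i86xpv/kernel/unix -B console=ttya'

    # Clear the HVM boot policy and bootloader, forcing PV operation.
    xe vm-param-set uuid=${VM} HVM-boot-policy=
    xe vm-param-set uuid=${VM} PV-bootloader=

    # Mark the VM's disk as bootable.
    xe vbd-param-set uuid=${VBD} bootable=true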


Solaris Vs OpenSolaris (PV) Performance on XenServer
Geekbench has the Solaris VM scoring around 2400, so CPU/RAM performance is similar to OpenSolaris, and sequential read and write speeds are also very similar to OpenSolaris-PV. So Sun's claim about Solaris with hybrid drivers being close to the performance of a fully PV (OpenSolaris) domain holds true. Over the next few weeks we are going to be investigating further and finding ways of improving the performance of virtualized Solaris and OpenSolaris under VMware, XenServer, and Sun's own xVM for our clients. For now, our advice is simple:
  • If the VM workload is purely CPU/RAM based then Solaris or OpenSolaris as PV or HVM performs pretty much equally.
  • If the workload is IO-centric then you should use either Solaris with hybrid drivers or a paravirtualized OpenSolaris built following our instructions.
Links
  • To find out how we transform IT performance for clients who are capital constrained, read more.
  • For Solaris & OpenSolaris tips, read the blog of Mark, one of our founders.

Monday, March 01, 2010

Performance Expert Services

Over the last 12 months we have seen a sharp increase in clients targeting the poor performance of systems and applications, particularly around storage, virtualization, and wide area networking. As a result, we have formalised our professional services for performance investigation, reporting, and remediation. We call this our Performance Expert Service.

We provide fixed-fee projects where our consultants work either independently of your vendors, or with their assistance, to get the performance you need from mission critical systems. When more performance cannot be liberated from existing assets, we are able to provide a quantified case for additional investment, couched in business terms.

While everyone strives for more performance, it is only since the credit crunch and economic slowdown in the UK that there has been a significant increase in these projects for 360is. We put this down to factors impacting IT departments: reduced staffing, frozen budgets, and lack of visibility into the future. Not since the great Y2K spending freeze has there been such a focus on making do with what you have and ensuring it runs efficiently. Bad news for product vendors, but not necessarily bad for end users. Many found the Y2K hiatus in new IT deployments to be no bad thing; some even said IT had never worked so well.

Benefits

  • Get next year's hardware performance now (useful if your capex budget has been frozen).
  • Free up staff from nursing overloaded systems (good if your team has recently shrunk).
  • Reduce license costs through higher utilization of fewer systems (interesting if you just got your support renewal quotes).

Our consultants are there for when performance problems defeat your IT team’s efforts, and are beyond the scope of vendor patches and support contracts. Contact us to find out how we can solve your performance problems.

Monday, February 22, 2010

Web Conference, Virtualization Hardware

We are holding a 15 minute webinar on choosing dedicated virtualization hardware. Some of you will have already received the invite directing you to register.

Click on the date below to register for a place and receive the calendar appointment and PDF agenda via Email. Nearer the date and time you will also receive a brief Email reminder.

Save the date: 10am, Friday 19th March 2010, to be held online.

Agenda

"How Virtualization Demands Are Changing Hardware Choices, The Emergence Of A Virtualization Appliance"

Registered attendees will receive further information on the event and a full agenda. A recording of the material will be available for a period after the initial broadcast.

Updated 19-03-10 10am

A recording of the webinar (40MB) is now available. If you cannot read PowerPoint (.pps) files then please contact us for an alternative format.

Wednesday, February 03, 2010

Solaris vs RedHat vs Windows

It's been a while since our last Solaris/OpenSolaris post. Now, like buses, suddenly 2 come along at once. Both are related to performance. The first (this one) is about benchmarking Solaris against other mainstream Operating Systems; the second will follow in a few days, and is about maximising Solaris performance when virtualized on XenServer.

Most people see Solaris as extremely reliable, but one misconception that many hold about Sun's OS is that it is slow. Unfortunately, few take the trouble to assess their choice of OS with an open mind, so Solaris remains a "well kept secret", known only to those who work regularly with all the major server Operating Systems.

One of our consultants has been doing some benchmarking for a client, and came up with the results we published here.


Solaris x86 was almost always the best performer, ahead of Linux (RedHat) and Windows, on HP hardware. From 2 cores to 24 cores, with a couple of exceptions, it outperformed the others on Geekbench, a CPU- and memory-intensive benchmark.

Not a lot of people know that.