
Automated Failover and Recovery in RHEL 5.1's Advanced Platform

You've long relied on clustering to provide failover for physical servers, but when you started adding virtual machines, no one thought about failover. It's never even been possible to do that, right? Well, with RHEL Advanced Platform, now you can.

You've spent the last several years trying to reduce your IT costs. You've consolidated labs and saved money on floor space, heating, cooling, and electricity. You've automated system administration processes and reduced sysadmin staffing costs. You've cut hardware costs by replacing large numbers of small servers with a smaller number of blade servers. And finally, you've begun to replace physical servers with virtual machines.

Virtualization has become the promised land for you and your CIO. You're running more and more software, including mission-critical applications, on virtual machines. Everything is going great. Your costs keep dropping. And then someone trips over a power cord in a lab and brings down one physical server and all the virtual machines it hosted. Now you've got a problem, and it's not a virtual one. Some of those virtual machines supported customer-facing applications, while others ran the company's inventory control system. What you need is a failover capability for virtual hosts. Clustering has long provided that for physical servers, but it was never an option for virtual machines. Right? Well, with RHEL Advanced Platform, now it is.

The solution – cluster failover

Let's examine what it looks like to provide failover protection for a virtual machine environment using Red Hat Enterprise Linux 5.1 Advanced Platform. In the diagram below, we have two Enterprise Linux guests on each of three machines. On the left, we see a server physically fail. Ordinarily, guest instances A and B would stay down until an administrator could take action. But because we have Advanced Platform configured to manage these instances, the physical machine failure is automatically detected, and guests A and B are automatically restarted on the two other servers.

[Figure: auto failover]

Once the administrator repairs the problem with the machine on the left, we can ask for those two guests to be moved, live, back to the now-functioning system. This is an example of zero-downtime failback. So not only were the virtual guests quickly and automatically restarted, but there was no disruption to the application when rebalancing the configuration.

Details, please…

Let's look at how this magic is constructed and then see the system in action. The fundamental change to the machine setup is to treat the three physical machines as a pool of shared resources. In this discussion, we'll use the new web-based management tool, Conga, to set up the cluster, shared storage, and failover configuration. We'll also use iSCSI SAN devices, which are a great alternative to Fibre Channel since they use standard Ethernet adapters, switches, and connections. Here's the big picture:

[Figure: virtual hosts]

This solution makes use of GFS2, Advanced Platform's cluster filesystem, to provide block-level, high-performance operation across a set of machines. The /guest_roots GFS2 filesystem is where the guest configuration files and boot images are stored. This lets us start a guest on any of the machines as well as perform live migration. The physical machines et-virt05, et-virt06, and et-virt07 host three virtual instances: guest1, guest2, and guest3.

Here are the five steps we’ll follow in creating and validating our example:

1. Form a cluster of the three physical machines.
2. Create the shared area for maintaining the virtual guests.
3. Move the virtual guests to a shared area, visible to all the nodes.
4. Have the cluster control the guests.
5. Try an example failover and recovery.

Step 1: Form a cluster

[Figure: create cluster]

Using Conga’s web-based management interface to form the cluster is easy. Make sure that your machines are on the same subnet, and then log in to Conga and navigate to the “Create a new cluster” page. Enter the machine names and root password, make sure “Enable shared storage” is selected, and click Submit. Conga will deal with installing packages and establishing the cluster. And don’t worry, the root password is encrypted over the network.

Submitting will restart the machines, and within a few minutes the cluster will be established. Next, we'll create the shared filesystem that holds the guest configurations and boot image files.
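If you want to confirm the new cluster from a shell as well, the command-line tools installed with the cluster packages report membership and quorum from any node:

[root@et-virt05 ~]# cman_tool status    # membership, quorum, and cluster name
[root@et-virt05 ~]# clustat             # node and (later) service status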

Step 2: Create the shared area for maintaining virtual guests

Conga makes it easy to create a shared filesystem across the machine set using the Cluster Logical Volume Manager (CLVM). CLVM provides volume management across a set of machines and enables concatenation, striping, mirroring, and expansion of storage underneath a filesystem. Start from the Storage tab and then select one of the machines, in our case et-virt05. Note that when initially viewing the storage page for a server, Conga probes for a current, accurate view of the disk and volume configurations. This takes a few seconds.

[Figure: shared drives]

A volume group is the logical storage area from which we draw space for the creation of our filesystem. Use of a volume group instead of a raw partition allows us to later extend the volume and the filesystem in case we need more space.

Once created, the volume group has a common name, managed by CLVM, that is visible across the machine set and persists between reboots. For this case we'll create a 100 GB volume group and use a portion of it for the shared area. Specifically, we'll build the volume group from two LUNs (storage partitions) on the array, /dev/sdd2 and /dev/sdg2, use the default extent size of 4 MB, and name it guest_vg. And since this is shared storage, we set the clustered state to "true."
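If you prefer the command line, the equivalent CLVM steps look roughly like this sketch of what Conga does for you (run once, on any node, with clvmd running cluster-wide; -c y marks the volume group as clustered):

[root@et-virt05 ~]# pvcreate /dev/sdd2 /dev/sdg2
[root@et-virt05 ~]# vgcreate -c y guest_vg /dev/sdd2 /dev/sdg2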

Note: It's always a best practice to keep your servers' clocks synchronized to a common source, such as an NTP server. This is even more important for clusters and for machines migrating guests.
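On RHEL 5, pointing each node at your time source in /etc/ntp.conf and enabling the service is all it takes:

[root@et-virt05 ~]# chkconfig ntpd on
[root@et-virt05 ~]# service ntpd start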

[Figure: create new volume]

After clicking "Create," we see all the details of the new volume group.

[Figure: volume group]

Now click "New Logical Volume" to build the filesystem specification. There is a one-to-one correspondence between logical volumes and filesystems, so in one step we'll create a new logical volume with a filesystem on top. Initially, the plan is to create three guests. Select GFS2 as the filesystem type. Since each guest needs 6 GB for its root volume and we will need more guests later, we'll carve out 60 GB from the volume group. The filesystem is called guest_roots, and we can use the same name for the logical volume, the filesystem, and the mount point. Let Conga create the fstab entry and also set the mount to occur at boot.

GFS2 uses journals for filesystem crash recovery. Conga already knows that there are three machines involved, so it defaults to three journals, one for each server. Simple journal addition is one of the new features of GFS2, so more journals can be added later when expanding the cluster.
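For example, if a fourth machine joins the cluster later, one more journal can be added to the mounted filesystem with gfs2_jadd. A quick sketch; the filesystem must be mounted when you run it:

[root@et-virt05 ~]# gfs2_jadd -j 1 /guest_roots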

Now click “Create”.

[Figure: unused space]

You can see that we've got our filesystem. Note that we can later remove the filesystem from this page. Next, create a mount point with the same name (/guest_roots) and add an /etc/fstab entry on each machine.
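On each node that should look roughly like the following sketch; the device path assumes the guest_vg and guest_roots names from above (/dev/mapper/guest_vg-guest_roots works equally well):

[root@et-virt06 ~]# mkdir -p /guest_roots
[root@et-virt06 ~]# echo "/dev/guest_vg/guest_roots /guest_roots gfs2 defaults 0 0" >> /etc/fstab
[root@et-virt06 ~]# mount /guest_roots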

[Figure: logical volume]

Step 3: Move the virtual guests to the shared area

At this point the guests can be created and placed in the shared area. This article isn't focused on the creation of virtual machine guests, but let's review the settings our configuration needs. Virt-manager is the easiest way to create new guests. In particular, specify that the disk image resides in the shared area, /guest_roots in our example. Also, indicate that the system is in a networked environment; that is what creates the network bridge to your guest. Take a look at the configuration summary screen for virt-manager.

[Figure: ready install]
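If you'd rather script guest creation than click through virt-manager, virt-install can do the same job. Here's a hedged sketch for a paravirtualized guest1; the install-tree URL is just a placeholder, and the exact flags can vary a bit between virt-install versions:

[root@et-virt05 ~]# virt-install --paravirt --name guest1 --ram 512 \
    --file /guest_roots/guest1.img --file-size 6 \
    --network bridge:xenbr0 \
    --location http://installserver.example.com/rhel5/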

Once a guest has been constructed, its configuration file is created in /etc/xen. In this case, it is /etc/xen/guest1. We need this file to also reside in the shared area, so copy it over to /guest_roots, right next to the disk image. Note that if you use virt-manager to modify the configuration, the /etc/xen configuration file needs to be re-copied to the shared area. Now /guest_roots will look like the listing shown.

[Figure: guest root]
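In command form, that copy (run on the machine where the guest was built) is simply:

[root@et-virt05 ~]# cp /etc/xen/guest1 /guest_roots/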

Step 4: Have the cluster control the guests

We are now ready to place the guests under cluster management. This includes starting a guest OS, restarting it if it crashes, and failing it over to another node in the event of machine failure. There are a few parts to this setup: disabling Xen's automatic guest startup, enabling live migration, and turning control of the guests over to the cluster manager.

First, disable the Xen script that starts guests at boot time on the physical machine. On each of the servers, issue the commands:

[root@et-virt06 ~]# chkconfig xendomains off
[root@et-virt06 ~]# service xendomains stop

This will immediately stop the service and prevent it from starting at the next boot time. Note that this will also stop currently running guests on the machine.

Next, enable live migration of a guest from one machine to another. This capability is off by default. On each of the machines, edit /etc/xen/xend-config.sxp. Uncomment the two relocation lines and set xend-relocation-server to “yes.”

(xend-relocation-server yes)
(xend-relocation-port 8002)
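After saving the file, restart xend on each machine so the relocation settings take effect (rebooting the host works too):

[root@et-virt06 ~]# service xend restart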

Now place the management of the virtual guests under the cluster. The cluster manager deals with services, whether a web service, a database, or, as in this case, a virtual machine. Since we are now in a cluster, we will specify how and where each guest OS is to be started. The first task is to configure a failover domain.

The failover domain simply states which machines can run the guest and which machine we prefer it to run on. We will pick one of the three machines to bias the guest toward, while still allowing it to run on any other machine. You can get fancier with this configuration, but let's keep it simple here. Look at the following failover domain configuration.

We construct three in total, one for each physical machine. When adding more guests later, associate each of them with one of these three failover domains.

We don't need fine-grained prioritization; only one machine is preferred and all others are treated equally, so we don't check "Prioritized" or set the Priority fields. And since we allow the guest to run on any machine, we don't check "Restrict failover to the domain's members." This specification simply sets a bias toward one machine. So we end up with three failover domain configurations: "bias-et-virt05", "bias-et-virt06", and "bias-et-virt07".

[Figure: add failover]

[Figure: failover list]
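For the curious, Conga records these choices in /etc/cluster/cluster.conf. Since the domain is unordered and unrestricted, listing just the preferred node is enough to bias a service toward it while still letting it run anywhere. A rough sketch of what the bias-et-virt05 entry might look like; your generated file may differ slightly:

<failoverdomains>
    <failoverdomain name="bias-et-virt05" ordered="0" restricted="0">
        <failoverdomainnode name="et-virt05" priority="1"/>
    </failoverdomain>
</failoverdomains>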

Next, construct the virtual machine service entries. Enter the name of the guest (guest1), the shared area (/guest_roots), that we want the service to start automatically, the failover domain (bias-et-virt05), and the recovery policy (restart the service if it fails).

[Figure: properties for guest1]
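Behind the form, the guest shows up in /etc/cluster/cluster.conf as a vm resource managed by rgmanager. A hedged sketch of roughly what that entry looks like; path points at the directory holding the copied Xen configuration file:

<rm>
    <vm name="guest1" path="/guest_roots" autostart="1"
        domain="bias-et-virt05" recovery="restart"/>
</rm>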

That's it. Once you click "Update Virtual Machine Service," the guest is under cluster management. If it's not currently running, it will be started immediately. Notice that the names are green, indicating the guests are running, and that the status lists which physical machine each guest is running on. Each is shown running on its "bias" machine: guest1 on et-virt05, and so on. Here's what the service list now looks like:

[Figure: services list]

Step 5: Try an example failover and recovery

Now it's time to validate our setup and see how the system responds to various failure scenarios. First, let's simulate a guest crash. This is easily done with the "xm destroy" command. Log in to one of the physical machines and try it. Here we used xm destroy to kill guest2.

[root@et-virt06 ~]# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0     1505     8 r-----   6972.4
guest2                                     5      499     1 -b----     21.0
[root@et-virt06 ~]# xm destroy guest2
[root@et-virt06 ~]# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0     1505     8 r-----   6975.7
[root@et-virt06 ~]# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0     1505     8 r-----   6988.7
guest2                                     6      499     1 -b----     17.7

Note that about 10 seconds after destroying the guest, it restarts. The cluster manager has detected the failure and restarted the guest for you.

Let’s now simulate a physical machine failure. For this we’ll simply reboot one of the machines in the cluster. First note the report that Conga provides about the state and location of the guests. Also note that guest1 is running on et-virt05.

Now, by simply logging in to et-virt05 and issuing a reboot, we can watch the recovery kick in. With et-virt05 absent, guest1 is restarted on et-virt07.

[Figure: recovery]
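You can watch the same recovery from a shell on any surviving node; clustat (part of rgmanager) shows the guest1 service move from et-virt05 to et-virt07 as it is recovered:

[root@et-virt06 ~]# clustat          # one-shot view of nodes and services
[root@et-virt06 ~]# clustat -i 5     # refresh the view every 5 seconds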

Once et-virt05 has rebooted, we can use the same screen and choose the migrate task for guest1 to move it back to et-virt05. Remember, this failback happens while the guest is active, with no further disruption in service.
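The same failback can be driven from the command line. rgmanager's clusvcadm has a migrate operation for virtual machine services; a sketch, where the vm: prefix is how clustat and clusvcadm name the guest service:

[root@et-virt07 ~]# clusvcadm -M vm:guest1 -m et-virt05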

Wrap Up

This article has shown the power of combining clustering and virtualization. We've demonstrated the construction of a robust, high-availability system while taking advantage of the utilization efficiencies of virtualization. This was made easy by GFS2, the shared cluster filesystem, which stores the boot images and configuration files for the guest OSes, and by Conga, the web-based interface that simplifies configuration and management of the Advanced Platform.

What’s Next

In an upcoming article, we’ll cover:

  • Creating a virtualized cluster.
  • Managing shared partitions using CLVM across the host OSes.
  • Sharing GFS2 filesystems across a cluster of guests.

Acknowledgments

I'd like to tip my hat to the hard-working teams at Red Hat and the open source community. The synergy of Xen virtualization, Linux, failover clustering, and GFS makes for a truly powerful operational environment. Thanks also to the reviewers of this article. In particular, Len DiMaggio was effectively a co-author as he sharpened the text and provided the "readers' eye." Be sure to check out Len's earlier Red Hat Magazine article on using Conga!

About the Author

Rob Kenna is Senior Product Manager for Red Hat's Storage and Clustering Products, including GFS (cluster filesystem) and RHCS (application failover). He brings a rich background as a developer and manager in the creation of storage software.

Comments on "Automated Failover and Recovery in RHEL 5.1's Advanced Platform"

datadink

It is also possible to use an LV as the "virtual hard drive" for each virtual machine. The main advantage this has over using files is that, if you ever need to increase the size of the partition, you can use the LVM tools to resize the LV on the host server. Inside the VM the same holds true: fdisk and the LVM tools can be used to resize partitions.
