Red Hat Cluster High Availability Installation – Part 3


In this post, we are going to see how to add the fence device, resources, and service group to the cluster. Before doing that, follow the steps below to identify the iSCSI shared disks.
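
These shared LUNs come from the iSCSI target configured earlier in this series. If they do not show up in the listings below, first confirm that the initiator session to the target is alive (a quick check; iscsiadm ships with the iscsi-initiator-utils package):

[root@cnode1 ~]# iscsiadm -m session        # should list the 192.168.1.100:3260 target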

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:00:40 2015
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 192.168.1.201         1 Online, Local
 192.168.1.202         2 Online

[root@cnode1 ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *        1        64      512000   83  Linux
/dev/sda2           64      2611    20458496   8e  Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes

[root@cnode1 ~]# ls -l /dev/disk/by-path/
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:haqdisk-lun-1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:hawebdisk-lun-1 -> ../../sdb

..
..

In the above example, the iSCSI hawebdisk LUN maps to the /dev/sdb device. To identify the iSCSI disk ID, run the command below.

[root@cnode1 ~]# ls -l /dev/disk/by-id
..
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00010001 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00020001 -> ../../sdb

..
..

Here, the /dev/sdb disk ID is scsi-1IET_00020001. Note down this ID; we will use it for the cluster file-system resource.
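
If the by-id listing is crowded, you can confirm the same ID straight from the device with scsi_id (the RHEL 6 path is shown below; on older releases the binary lives in /sbin):

[root@cnode1 ~]# /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb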

Before using the /dev/sdb disk, we need to complete a few steps.

Step 1: Create the GFS2 Cluster File System

[root@cnode1 ~]# mkfs.gfs2 -p lock_dlm -t Linux_HA:GFS -j 2 /dev/disk/by-id/scsi-1IET_00020001
This will destroy any data on /dev/disk/by-id/scsi-1IET_00020001.
It appears to contain: symbolic link to `../../sdb'
Are you sure you want to proceed? [y/n] y
Device: /dev/disk/by-id/scsi-1IET_00020001
Blocksize: 4096
Device Size 5.00 GB (1310720 blocks)
Filesystem Size: 5.00 GB (1310718 blocks)
Journals: 2
Resource Groups: 20
Locking Protocol: "lock_dlm"
Lock Table: "Linux_HA:GFS"
UUID: 36d25bd4-9d7c-8ed4-3339-57d948bf0fca
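
Two notes on the flags: -j 2 creates one journal per cluster node, and the -t value must be <cluster name>:<fs name>, where the cluster name matches the one in cluster.conf (here, Linux_HA). Before handing the disk over to the cluster, you can sanity-check the new file system with a manual mount on one node (lock_dlm mounts require the cluster stack to be up, which the clustat output above confirmed):

[root@cnode1 ~]# mount -t gfs2 /dev/disk/by-id/scsi-1IET_00020001 /mnt
[root@cnode1 ~]# mount | grep gfs2          # verify the gfs2 type and lock_dlm protocol
[root@cnode1 ~]# umount /mnt                # unmount; rgmanager will manage mounts from here on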

Step 2: Install the Apache package on both nodes

[root@cnode1 ~]# yum -y install httpd

[root@cnode2 ~]# yum -y install httpd
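
Do not enable httpd at boot on either node; the cluster's script resource will start and stop it. Keeping the init script out of the runlevels avoids both nodes starting Apache on their own (standard practice for rgmanager-managed services):

[root@cnode1 ~]# chkconfig httpd off
[root@cnode2 ~]# chkconfig httpd off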

Step 3: Configure the Fence Device, Resources, and Service Group

Open the web browser and type “https://192.168.1.100:8084”, then enter the cluster server’s root credentials. There you can see that the Linux_HA cluster is up and running.

  1. Click the Linux_HA cluster. In the “Nodes” tab, you can see that cnode1 and cnode2 are connected to the cluster.
  2. Click the “Fence Devices” tab, then click “Add”. In the popup window, select “Fence virt (Multicast Mode)” and enter the fence name “Linux_Fence”.
  3. Now click the “Nodes” tab, click cnode1, scroll down, and click “Add Fence Method”. In the new popup window, select “xvm Virtual Machine Fencing”, enter your domain “example.com”, and click Submit. Repeat the same steps for the cnode2 server.
  4. Click the “Failover Domains” tab and click “Add”. In the new popup window, enter the failover domain name “Linux_Failover”, tick the “Prioritized” checkbox, and set priorities 1 and 2 for the two nodes.
  5. Now click the “Resources” tab, click “Add”, and add the following resource types:
    “IP Address”: enter an unused IP address, “192.168.1.203”
    “File System”: enter the file system name “Linux_Filesystem”, the mount point “/var/www/html/”, and “/dev/disk/by-id/scsi-1IET_00020001” as the Device, FS Label, or UUID value
    “Script”: enter the script name “Linux_HA_Script” and the script file path “/etc/init.d/httpd”
  6. Finally, click the “Service Groups” tab and click “Add”. In the dialogue window, enter the service name “Linux_GFS”, tick “Automatically Start This Service”, and select the failover domain “Linux_Failover”.
    Click “Add Resource” and attach the IP Address, File System, and Script resources. (The cluster.conf these clicks generate is sketched after this list.)
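
luci writes everything above into /etc/cluster/cluster.conf on both nodes. The resulting <rm> (resource manager) section should look roughly like the sketch below; this is an illustration based on the standard rgmanager schema, and the exact attribute set your luci version generates may differ slightly:

<rm>
  <failoverdomains>
    <failoverdomain name="Linux_Failover" ordered="1">
      <failoverdomainnode name="192.168.1.201" priority="1"/>
      <failoverdomainnode name="192.168.1.202" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="192.168.1.203" monitor_link="on"/>
    <clusterfs name="Linux_Filesystem" fstype="gfs2"
               mountpoint="/var/www/html" device="/dev/disk/by-id/scsi-1IET_00020001"/>
    <script name="Linux_HA_Script" file="/etc/init.d/httpd"/>
  </resources>
  <service name="Linux_GFS" autostart="1" domain="Linux_Failover" recovery="relocate">
    <ip ref="192.168.1.203"/>
    <clusterfs ref="Linux_Filesystem"/>
    <script ref="Linux_HA_Script"/>
  </service>
</rm>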

If you did the above steps properly, the Apache web server will start automatically. To check this, open the web browser and type “http://192.168.1.203”; Apache should answer (with its default test page until we add an index file below).

Step 4: Check the Cluster Status from the Command Line

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:22:14 2015
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 192.168.1.201         1 Online, Local, rgmanager
 192.168.1.202         2 Online, rgmanager

 Service Name        Owner (Last)       State
 ------- ----        ----- ------       -----
 service:Linux_GFS   192.168.1.201      started

[root@cnode1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   19G 1003M   17G   6% /
tmpfs                         246M   32M  215M  13% /dev/shm
/dev/sda1                     485M   33M  427M   8% /boot
/dev/sdb                      5.0G  259M  4.8G   6% /var/www/html

[root@cnode1 ~]# vi /var/www/html/index.html

Add the line below:

Welcome to Redhat HA Cluster with GFS

Save and exit (Esc :wq!)

Now open your web browser and type “http://192.168.1.203”; you will see the index content above.
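
You can verify the same thing from the shell (curl is assumed to be installed; wget works just as well):

[root@cnode1 ~]# curl http://192.168.1.203
Welcome to Redhat HA Cluster with GFS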

By executing the command below, you can relocate the running web service to the other node.

[root@cnode1 ~]# clusvcadm -r Linux_GFS -m 192.168.1.202
Trying to relocate service:Linux_GFS to 192.168.1.202...Success
service:Linux_GFS is now running on 192.168.1.202

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:30:43 2015
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 192.168.1.201         1 Online, Local, rgmanager
 192.168.1.202         2 Online, rgmanager

 Service Name        Owner (Last)       State
 ------- ----        ----- ------       -----
 service:Linux_GFS   192.168.1.202      started

That’s all. If cnode1 goes down, the Red Hat cluster will run the Apache service on cnode2 without service interruption.
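
To simulate a real node failure rather than a clean relocation, stop rgmanager on the node that currently owns the service and watch clustat from the other node; the service should move over automatically (rgmanager can be restarted afterwards with “service rgmanager start”):

[root@cnode2 ~]# service rgmanager stop     # service relocates to cnode1
[root@cnode1 ~]# clustat                    # confirm Linux_GFS is started on 192.168.1.201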