Redhat Cluster High Availability Installation – Part 3


In this post, we are going to see how to add the fence device, resources, and service group to the cluster. Before doing that, run the steps below to identify the iSCSI shared disk.

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:00:40 2015
Member Status: Quorate

Member Name                         ID   Status
------ ----                         ---- ------
192.168.1.201                          1 Online, Local
192.168.1.202                          2 Online

[root@cnode1 ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes

[root@cnode1 ~]# ls -l /dev/disk/by-path/
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:haqdisk-lun-1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:hawebdisk-lun-1 -> ../../sdb

..
..

In the above example, the iSCSI hawebdisk LUN is presented as the /dev/sdb device. To identify the iSCSI disk ID, run the below command.

[root@cnode1 ~]# ls -l /dev/disk/by-id
..
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00010001 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00020001 -> ../../sdb

..
..

Here, the disk ID of /dev/sdb is scsi-1IET_00020001. Note down this ID; we will use it for the cluster file system resource.
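
As a quick sanity check, you can confirm that the same by-id name is visible on the second node as well (this assumes cnode2 has already logged in to the same iSCSI targets):

[root@cnode2 ~]# ls -l /dev/disk/by-id/ | grep 1IET_00020001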

Before the cluster can use the /dev/sdb disk, a few preparation steps are needed.

Step 1: Create the GFS Cluster File System

[root@cnode1 ~]# mkfs.gfs2 -p lock_dlm -t Linux_HA:GFS -j 2 /dev/disk/by-id/scsi-1IET_00020001
This will destroy any data on /dev/disk/by-id/scsi-1IET_00020001.
It appears to contain: symbolic link to `../../sdb'
Are you sure you want to proceed? [y/n] y
Device: /dev/disk/by-id/scsi-1IET_00020001
Blocksize: 4096
Device Size 5.00 GB (1310720 blocks)
Filesystem Size: 5.00 GB (1310718 blocks)
Journals: 2
Resource Groups: 20
Locking Protocol: "lock_dlm"
Lock Table: "Linux_HA:GFS"
UUID: 36d25bd4-9d7c-8ed4-3339-57d948bf0fca
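
Optionally, you can test-mount the new GFS2 file system on one node before handing it over to the cluster. This is just a sketch and assumes the cluster stack (cman) is already running on cnode1, since lock_dlm needs a running cluster to mount:

[root@cnode1 ~]# mount -t gfs2 /dev/disk/by-id/scsi-1IET_00020001 /mnt
[root@cnode1 ~]# mount | grep gfs2
[root@cnode1 ~]# umount /mnt

Remember to unmount it again; the cluster will mount it on /var/www/html later through the file system resource.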

Step 2: Install the Apache package on both nodes

[root@cnode1 ~]# yum -y install httpd

[root@cnode2 ~]# yum -y install httpd
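
Since the cluster will start and stop Apache through the script resource, httpd should not be started by init on its own; otherwise the init script and rgmanager can fight over the service. Disable it on both nodes:

[root@cnode1 ~]# chkconfig httpd off
[root@cnode2 ~]# chkconfig httpd off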

Step 3: Configure the Fence Device, Resources and Service Group

Open a web browser, go to “https://192.168.1.100:8084” and log in with the cluster server’s root credentials. There you can see that the Linux_HA cluster is up and running.

  1. Click the Linux_HA cluster; in the “Nodes” tab you can see that cnode1 and cnode2 are joined to the cluster.
  2. Click the “Fence Devices” tab and click “Add”. In the popup window, select “Fence virt (Multicast Mode)” and enter the fence name “Linux_Fence”.
  3. Now go back to the “Nodes” tab, click cnode1, scroll down and click “Add Fence Method”. In the new popup window, select “xvm Virtual Machine Fencing”, enter your domain “example.com” and click Submit. Repeat the same steps for cnode2.
  4. Click the “Failover Domains” tab and click “Add”. In the new popup window, enter the failover domain name “Linux_Failover”, tick the “Prioritized” check box and set priorities 1 and 2 for the two nodes.
  5. Now click the “Resources” tab, click “Add” and add the below resource types:
    “IP Address”: enter the unused IP address “192.168.1.203”
    “File System”: enter the file system name “Linux_Filesystem”, the mount point “/var/www/html/” and the device, FS label or UUID value “/dev/disk/by-id/scsi-1IET_00020001”
    “Script”: enter the script name “Linux_HA_Script” and the script file path “/etc/init.d/httpd”
  6. Finally, click the “Service Groups” tab and click “Add”. In that dialog window, enter the service name “Linux_GFS”, select “Automatically Start This Service” and choose the failover domain “Linux_Failover”.
    Click “Add Resource” and add the IP Address, File System and Script resources created above.

If you performed the above steps properly, the Apache web server will start automatically. To check this, open a web browser and go to “http://192.168.1.203”; the page served by the cluster will load.
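
For reference, luci writes the configuration made above into /etc/cluster/cluster.conf and propagates it to both nodes. The resource manager section ends up looking roughly like the sketch below; attribute names and defaults vary between versions, and for a GFS2 volume luci may generate a <clusterfs> element instead of <fs>, so treat this as an illustration rather than a file to copy:

<rm>
  <failoverdomains>
    <failoverdomain name="Linux_Failover" ordered="1">
      <failoverdomainnode name="192.168.1.201" priority="1"/>
      <failoverdomainnode name="192.168.1.202" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="192.168.1.203"/>
    <fs name="Linux_Filesystem" device="/dev/disk/by-id/scsi-1IET_00020001" mountpoint="/var/www/html" fstype="gfs2"/>
    <script name="Linux_HA_Script" file="/etc/init.d/httpd"/>
  </resources>
  <service name="Linux_GFS" autostart="1" domain="Linux_Failover">
    <ip ref="192.168.1.203"/>
    <fs ref="Linux_Filesystem"/>
    <script ref="Linux_HA_Script"/>
  </service>
</rm>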

Step 4: Check the Cluster Status from the Command Line

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:22:14 2015
Member Status: Quorate

Member Name                         ID   Status
------ ----                         ---- ------
192.168.1.201                          1 Online, Local, rgmanager
192.168.1.202                          2 Online, rgmanager

Service Name                 Owner (Last)                 State
------- ----                 ----- ------                 -----
service:Linux_GFS            192.168.1.201                started

[root@cnode1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 19G 1003M 17G 6% /
tmpfs 246M 32M 215M 13% /dev/shm
/dev/sda1 485M 33M 427M 8% /boot
/dev/sdb 5.0G 259M 4.8G 6% /var/www/html

[root@cnode1 ~]# vi /var/www/html/index.html

Add the below line:

Welcome to Redhat HA Cluster with GFS

Save and exit (Esc :wq!)

Now open your web browser and type “http://192.168.1.203“, you will get the above index content.
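
You can also check it from the command line with curl (assuming curl is installed on the node):

[root@cnode1 ~]# curl http://192.168.1.203
Welcome to Redhat HA Cluster with GFS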

By executing the below command, you can relocate the running web service to the other node.

[root@cnode1 ~]# clusvcadm -r Linux_GFS -m 192.168.1.202
Trying to relocate service:Linux_GFS to 192.168.1.202...Success
service:Linux_GFS is now running on 192.168.1.202

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:30:43 2015
Member Status: Quorate

Member Name                         ID   Status
------ ----                         ---- ------
192.168.1.201                          1 Online, Local, rgmanager
192.168.1.202                          2 Online, rgmanager

Service Name                 Owner (Last)                 State
------- ----                 ----- ------                 -----
service:Linux_GFS            192.168.1.202                started

That’s all. If cnode1 goes down, the Red Hat cluster will run the Apache service on cnode2 without service interruption.
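
If you want to test that behaviour without actually crashing a node, one simple sketch is to stop rgmanager on the current service owner (cnode2 in the output above); in most configurations the service is relocated to the surviving node, which you can confirm with clustat:

[root@cnode2 ~]# /etc/init.d/rgmanager stop
[root@cnode1 ~]# clustat | grep Linux_GFS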

Redhat Cluster High Availability Installation – Part 1


In this article, we are going to see how to set up Linux HA on a two-node cluster. I am not going to explain the concepts of HA here, only the installation and configuration steps.

As we are going to implement a two-node cluster, we need at least the hardware configuration below.

Cluster Server:

OS: Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 1GB RAM / 2 NIC Cards / 3 HDD: 1 x 20GB OS disk + 2 x 5GB shared disks )
Hostname: cluster.example.com
IP Address: 192.168.1.100/24

Node 1:

OS : Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 512MB RAM / 2 NIC Cards / 1 HDD with 20GB )
Hostname: cnode1.example.com
IP Address: 192.168.1.201/24

Node 2:

OS : Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 512MB RAM / 2 NIC Cards / 1 HDD with 20GB )
Hostname: cnode2.example.com
IP Address: 192.168.1.202/24
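
All three machines should be able to resolve each other's hostnames. If you are not using DNS, a minimal /etc/hosts (built from the hostnames and addresses above) on each machine is enough:

192.168.1.100   cluster.example.com   cluster
192.168.1.201   cnode1.example.com    cnode1
192.168.1.202   cnode2.example.com    cnode2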

Step 1: Yum Server Configuration

Mount the installation DVD on the /mnt filesystem

[root@cluster ~]# mount -t iso9660 /dev/sr0 /mnt/
mount: block device /dev/sr0 is write-protected, mounting read-only

Install the required packages

[root@cluster ~]# cd /mnt/Packages/

[root@cluster Packages]# rpm -ivh deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm
warning: deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:deltarpm ########################################### [100%]

[root@cluster Packages]# rpm -ivh python-deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm
warning: python-deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:python-deltarpm ########################################### [100%]

[root@cluster Packages]# rpm -ivh createrepo-0.9.9-18.el6.noarch.rpm
warning: createrepo-0.9.9-18.el6.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:createrepo ########################################### [100%]

[root@cluster Packages]# rpm -ivh vsftpd-2.2.2-11.el6_4.1.x86_64.rpm
warning: vsftpd-2.2.2-11.el6_4.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:vsftpd ########################################### [100%]

[root@cluster ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]

[root@cluster ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Unloading modules: [ OK ]

[root@cluster ~]# chkconfig iptables off
[root@cluster ~]# chkconfig ip6tables off
[root@cluster ~]# chkconfig vsftpd on

Copy the DVD contents to the /var/ftp/pub/ directory to set up the local YUM server

[root@cluster ~]# cp -r /mnt/* /var/ftp/pub/

Create the YUM Repository

[root@cluster ~]# createrepo -v /var/ftp/pub/
Spawning worker 0 with 3763 pkgs
..
..
Sqlite DBs complete

[root@cluster ~]# vi /etc/yum.repos.d/rhel-source.repo

Edit the baseurl and enabled lines as below

baseurl=file:///var/ftp/pub/
enabled=1
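
For reference, after the edit the stanza in /etc/yum.repos.d/rhel-source.repo looks roughly like the following; only baseurl and enabled are changed, the other lines are the stock defaults and are shown here just as an illustration:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=file:///var/ftp/pub/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release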

Check the YUM repository

[root@cluster ~]# yum clean all
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: rhel-source
Cleaning up Everything

[root@cluster ~]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel-source | 2.9 kB 00:00 ...
rhel-source/primary_db | 3.2 MB 00:00 ...
repo id repo name status
rhel-source Red Hat Enterprise Linux 6Server - x86_64 - Source 3,763
repolist: 3,763

Step 2: Install and Configure the iSCSI Target

[root@cluster ~]# yum -y install scsi-target-utils

[root@cluster ~]# mkdir -p /etc/tgt/mytargets/

[root@cluster ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 26.8 GB, 26843545600 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3264 25701376 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes

Here we are going to use /dev/sdb and /dev/sdc as the shared storage pool.

[root@cluster ~]# vi /etc/tgt/mytargets/san.conf
<target iqn.2015-09.com.tcs:hawebdisk>
    backing-store /dev/sdb
</target>
<target iqn.2015-09.com.tcs:haqdisk>
    backing-store /dev/sdc
</target>
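
As written, any initiator that can reach the server may log in to these targets. If you want to restrict access to the two cluster nodes, tgt supports an initiator-address directive inside each target block; a sketch for the web disk:

<target iqn.2015-09.com.tcs:hawebdisk>
    backing-store /dev/sdb
    initiator-address 192.168.1.201
    initiator-address 192.168.1.202
</target>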

[root@cluster ~]# vi /etc/tgt/targets.conf

Change the below line

From:

#include /etc/tgt/temp/*.conf

To:

include /etc/tgt/mytargets/*.conf

Start the iSCSI target service

[root@cluster ~]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]

[root@cluster ~]# chkconfig tgtd on

Check whether the iSCSI target is working.

[root@cluster ~]# tgtadm --mode target --op show | grep /dev/sd
Backing store path: /dev/sdc
Backing store path: /dev/sdb
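
To see the target names (IQNs) along with their backing stores, filter the same output on the Target lines instead:

[root@cluster ~]# tgtadm --mode target --op show | grep "^Target"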

Step 3: Install the luci package

[root@cluster ~]# yum -y install luci

[root@cluster ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `cluster.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd: [ OK ]
Start luci... [ OK ]
Point your web browser to https://cluster.example.com:8084 (or equivalent) to access luci

[root@cluster ~]# chkconfig luci on

Now you can open the Conga console in a web browser by going to https://192.168.1.100:8084 (or https://cluster.example.com:8084).
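
If the browser cannot reach the console, first confirm that luci is actually listening on port 8084 (netstat is available by default on RHEL 6):

[root@cluster ~]# netstat -tlnp | grep 8084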