Redhat Cluster High Availability Installation – Part 3


In this post, we are going to see how to add the fence device, resources and service group to the cluster. Before doing that, run the steps below to identify the iSCSI shared disks.

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:00:40 2015
Member Status: Quorate

Member Name                  ID   Status
------ ----                  ---- ------
192.168.1.201                 1   Online, Local
192.168.1.202                 2   Online

[root@cnode1 ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes

[root@cnode1 ~]# ls -l /dev/disk/by-path/
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:haqdisk-lun-1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 ip-192.168.1.100:3260-iscsi-iqn.2015-09.com.tcs:hawebdisk-lun-1 -> ../../sdb

..
..

In the above example, the iSCSI hawebdisk target maps to the /dev/sdb device. To identify the iSCSI disk ID, run the command below.

[root@cnode1 ~]# ls -l /dev/disk/by-id
..
..
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00010001 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 5 14:12 scsi-1IET_00020001 -> ../../sdb

..
..

Here, the disk ID of /dev/sdb is scsi-1IET_00020001. Note down this ID; we will use it for the cluster file-system resource.

Before using the /dev/sdb disk, we need to complete a few steps.
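
Since both nodes must see the same shared LUN, it is also worth confirming that the same by-id name is visible on cnode2 (the /dev/sd* letter may differ between nodes, the SCSI ID should not). A quick check:

[root@cnode2 ~]# ls -l /dev/disk/by-id/ | grep scsi-1IET_00020001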

Step 1: Create the GFS Cluster File System

[root@cnode1 ~]# mkfs.gfs2 -p lock_dlm -t Linux_HA:GFS -j 2 /dev/disk/by-id/scsi-1IET_00020001
This will destroy any data on /dev/disk/by-id/scsi-1IET_00020001.
It appears to contain: symbolic link to `../../sdb'
Are you sure you want to proceed? [y/n] y
Device: /dev/disk/by-id/scsi-1IET_00020001
Blocksize: 4096
Device Size 5.00 GB (1310720 blocks)
Filesystem Size: 5.00 GB (1310718 blocks)
Journals: 2
Resource Groups: 20
Locking Protocol: "lock_dlm"
Lock Table: "Linux_HA:GFS"
UUID: 36d25bd4-9d7c-8ed4-3339-57d948bf0fca
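
A quick note on the options: -t takes clustername:fsname, and the cluster name part must match the cluster name (Linux_HA) exactly, otherwise the nodes will refuse to mount the file system; -j 2 creates one journal per node. If you want to confirm the lock table written to the superblock, gfs2_tool from gfs2-utils can print it; a hedged example, assuming gfs2_tool is available on your build:

[root@cnode1 ~]# gfs2_tool sb /dev/disk/by-id/scsi-1IET_00020001 table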

Step 2: Install the Apache package on both nodes

[root@cnode1 ~]# yum -y install httpd

[root@cnode2 ~]# yum -y install httpd
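
Since rgmanager will start and stop Apache through the script resource, httpd should not be enabled as a normal boot-time service on either node:

[root@cnode1 ~]# chkconfig httpd off

[root@cnode2 ~]# chkconfig httpd off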

Step 3: Configure the Fence Device, Resources and Service Group

Open a web browser, go to “https://192.168.1.100:8084”, and log in with the cluster server’s root credentials. There you can see that the Linux_HA cluster is up and running.

  1. Click the Linux_HA cluster. In the “Nodes” tab, you can see that cnode1 and cnode2 are joined to the cluster.
  2. Click the “Fence Devices” tab, then click “Add”. In the popup window, select “Fence virt (Multicast Mode)” and enter the fence name “Linux_Fence”.
  3. Now click the “Nodes” tab, click cnode1, scroll down and click “Add Fence Method”. In the new popup window, select “xvm Virtual Machine Fencing”, enter your domain “example.com” and click Submit. Repeat the same steps for the cnode2 server.
  4. Click the “Failover Domains” tab and click “Add”. In the popup window, enter the failover domain name “Linux_Failover”, tick the “Prioritized” check box, and set priorities 1 and 2 for the two nodes.
  5. Now click the “Resources” tab, click “Add”, and add the following resource types:
    “IP Address”: enter an unused IP address, “192.168.1.203”
    “File System”: enter the file system name “Linux_Filesystem”, the mount point “/var/www/html/”, and “/dev/disk/by-id/scsi-1IET_00020001” in the device/UUID field
    “Script”: enter the script name “Linux_HA_Script” and the script file path “/etc/init.d/httpd”
  6. Finally, click the “Service Groups” tab and click “Add”. In that dialogue window, enter the service name “Linux_GFS”, select “Automatically Start This Service”, and choose the failover domain “Linux_Failover”.
    Click “Add Resource” and attach the IP Address, File System and Script resources. The resulting configuration is sketched after this list.
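
For reference, luci writes this configuration to /etc/cluster/cluster.conf on both nodes. A rough sketch of what the file should resemble is shown here; treat it as illustrative only, since the exact attributes (config_version, fence method name, recovery policy, and whether the file-system resource appears as <fs> or <clusterfs>) depend on your luci version and the options you selected:

<?xml version="1.0"?>
<cluster config_version="..." name="Linux_HA">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="192.168.1.201" nodeid="1">
      <fence>
        <method name="Method1">
          <device domain="example.com" name="Linux_Fence"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="192.168.1.202" nodeid="2">
      <fence>
        <method name="Method1">
          <device domain="example.com" name="Linux_Fence"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="Linux_Fence"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="Linux_Failover" ordered="1">
        <failoverdomainnode name="192.168.1.201" priority="1"/>
        <failoverdomainnode name="192.168.1.202" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.1.203"/>
      <clusterfs device="/dev/disk/by-id/scsi-1IET_00020001" fstype="gfs2" mountpoint="/var/www/html" name="Linux_Filesystem"/>
      <script file="/etc/init.d/httpd" name="Linux_HA_Script"/>
    </resources>
    <service autostart="1" domain="Linux_Failover" name="Linux_GFS">
      <ip ref="192.168.1.203"/>
      <clusterfs ref="Linux_Filesystem"/>
      <script ref="Linux_HA_Script"/>
    </service>
  </rm>
</cluster>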

If you performed the above steps correctly, the Apache web server will start automatically. To check this, open a web browser and go to “http://192.168.1.203”; it should respond with the default Apache test page (we will add an index page shortly).
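
If you prefer to verify from the command line, a simple request from any machine on the network works as well, assuming curl is installed; any HTTP response here confirms that the virtual IP and Apache are up:

[root@cnode1 ~]# curl -I http://192.168.1.203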

Step 4: Check Cluster status command line

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:22:14 2015
Member Status: Quorate

Member Name                  ID   Status
------ ----                  ---- ------
192.168.1.201                 1   Online, Local, rgmanager
192.168.1.202                 2   Online, rgmanager

Service Name                 Owner (Last)        State
------- ----                 ----- ------        -----
service:Linux_GFS            192.168.1.201       started

[root@cnode1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 19G 1003M 17G 6% /
tmpfs 246M 32M 215M 13% /dev/shm
/dev/sda1 485M 33M 427M 8% /boot
/dev/sdb 5.0G 259M 4.8G 6% /var/www/html

[root@cnode1 ~]# vi /var/www/html/index.html

Add the below lines

Welcome to Redhat HA Cluster with GFS

Save and exit (Esc :wq!)

Now open your web browser and go to “http://192.168.1.203”; you should see the index content shown above.

By executing the command below, you can relocate the running web service to the other node.

[root@cnode1 ~]# clusvcadm -r Linux_GFS -m 192.168.1.202
Trying to relocate service:Linux_GFS to 192.168.1.202...Success
service:Linux_GFS is now running on 192.168.1.202

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Mon Oct 5 15:30:43 2015
Member Status: Quorate

Member Name                  ID   Status
------ ----                  ---- ------
192.168.1.201                 1   Online, Local, rgmanager
192.168.1.202                 2   Online, rgmanager

Service Name                 Owner (Last)        State
------- ----                 ----- ------        -----
service:Linux_GFS            192.168.1.202       started

That’s all. If cnode1 goes down, the Red Hat cluster will run the Apache service on cnode2 without interrupting the service.
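
To see that failover in action, you can simulate a node outage by stopping rgmanager on the node that currently owns the service (cnode2 at this point in the walkthrough) and watching clustat on the other node. A minimal sketch:

[root@cnode2 ~]# /etc/init.d/rgmanager stop

[root@cnode1 ~]# clustat

clustat on cnode1 should show service:Linux_GFS restarting on 192.168.1.201. Start rgmanager again on cnode2 afterwards (/etc/init.d/rgmanager start) so it rejoins as a failover target.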

Redhat Cluster High Availability Installation – Part 2


In the previous article, we discussed how to set up the Red Hat HA cluster server. Here we are going to see how to add the nodes to the cluster.

Please follow the below steps in both nodes (cnode1 & cnode2).

Step 1: YUM client configuration

[root@cnode1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 cluster.example.com cluster
192.168.1.201 cnode1.example.com cnode1
192.168.1.202 cnode2.example.com cnode2

[root@cnode1 ~]# mount -t iso9660 /dev/sr0 /mnt/
mount: block device /dev/sr0 is write-protected, mounting read-only

[root@cnode1 ~]# rpm -ivh /mnt/Packages/vsftpd-2.2.2-11.el6_4.1.x86_64.rpm
warning: /mnt/Packages/vsftpd-2.2.2-11.el6_4.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:vsftpd ########################################### [100%]
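
For the nodes to pull the cluster packages from the server we built in Part 1, each node also needs a yum repository definition pointing at that server’s FTP export. The article does not show the exact file, so the following is a minimal sketch, assuming the /var/ftp/pub export from Part 1; the repo id, file name and gpgcheck setting are illustrative:

[root@cnode1 ~]# vi /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=RHEL 6.5 local repository
baseurl=ftp://192.168.1.100/pub/
enabled=1
gpgcheck=0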

[root@cnode1 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]

[root@cnode1 ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Unloading modules: [ OK ]

[root@cnode1 ~]# chkconfig iptables off
[root@cnode1 ~]# chkconfig ip6tables off
[root@cnode1 ~]# chkconfig vsftpd on

Put SELinux into permissive mode

[root@cnode1 ~]# setenforce 0
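
setenforce 0 only switches SELinux to permissive mode until the next reboot. To make the change persistent, also edit /etc/selinux/config on both nodes and set:

SELINUX=permissive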

Step 2: Install ricci package

Install ricci and set the password for the ricci user

[root@cnode1 ~]# yum -y install ricci

It is very important to set a password for the ricci user; if you don’t, the node cannot be added to the cluster. For simplicity in this lab setup, you can assign your root password to the ricci user.

[root@cnode1 ~]# passwd ricci

[root@cnode1 ~]# /etc/init.d/ricci start
Starting system message bus: [ OK ]
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]

[root@cnode1 ~]# chkconfig ricci on
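
ricci listens on TCP port 11111, which is what luci connects to when it adds the node. A quick way to confirm it is listening:

[root@cnode1 ~]# netstat -tlnp | grep 11111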

Step 3: Install and configure iSCSI initiator

[root@cnode1 ~]# yum -y install iscsi-initiator-utils

[root@cnode1 ~]# iscsiadm --mode discovery -t st -p 192.168.1.100
Starting iscsid: [ OK ]
192.168.1.100:3260,1 iqn.2015-09.com.tcs:haqdisk
192.168.1.100:3260,1 iqn.2015-09.com.tcs:hawebdisk

[root@cnode1 ~]# iscsiadm --mode node --portal 192.168.1.100:3260 --login
Logging in to [iface: default, target: iqn.2015-09.com.tcs:haqdisk, portal: 192.168.1.100,3260] (multiple)
Logging in to [iface: default, target: iqn.2015-09.com.tcs:hawebdisk, portal: 192.168.1.100,3260] (multiple)
Login to [iface: default, target: iqn.2015-09.com.tcs:haqdisk, portal: 192.168.1.100,3260] successful.
Login to [iface: default, target: iqn.2015-09.com.tcs:hawebdisk, portal: 192.168.1.100,3260] successful.
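
After a successful login, the shared LUNs should show up as local block devices on the node. A quick check:

[root@cnode1 ~]# fdisk -l | grep /dev/sd

You should now see two additional 5368 MB disks (e.g. /dev/sdb and /dev/sdc) alongside the local /dev/sda.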

Step 4: Add the nodes to HA Cluster

Open a web browser, go to “https://192.168.1.100:8084” (or “https://cluster.example.com:8084”), and accept the certificate warning message.

Now, log in to the Conga console with the cluster host’s root credentials.

  1. Click “Manage Clusters” in the right-hand column
  2. Click the “Create” button
  3. Enter the cluster name, i.e. Linux_HA
  4. Enter each node’s name and ricci password in the respective columns, using the “Add Another Node” button for additional nodes, i.e. 192.168.1.201 and 192.168.1.202
  5. Select “Download Packages”
  6. Tick “Enable Shared Storage Support”
  7. Finally, click “Create Cluster”

The nodes will now start joining the cluster. Using “tail -f /var/log/messages” you can watch the package installation and cluster join progress on both nodes.

[root@cnode1 ~]# tail -f /var/log/messages
...
...
Sep 22 15:45:54 cnode1 yum[1485]: Installed: libtalloc-2.0.7-2.el6.x86_64
Sep 22 15:45:55 cnode1 yum[1485]: Installed: libgssglue-0.1-11.el6.x86_64
Sep 22 15:45:55 cnode1 yum[1485]: Installed: libtevent-0.9.18-3.el6.x86_64
...
...
Sep 22 15:47:12 cnode1 rgmanager[3208]: Initializing Services
Sep 22 15:47:12 cnode1 rgmanager[3208]: Services Initialized
Sep 22 15:47:12 cnode1 rgmanager[3208]: State change: Local UP
Sep 22 15:47:12 cnode1 rgmanager[3208]: State change: 192.168.1.202 UP

[root@cnode1 ~]# clustat
Cluster Status for Linux_HA @ Tue Sep 22 16:18:29 2015
Member Status: Quorate

Member Name                  ID   Status
------ ----                  ---- ------
192.168.1.201                 1   Online, Local
192.168.1.202                 2   Online
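
Besides clustat, cman_tool gives a lower-level view of membership and quorum if you want to cross-check:

[root@cnode1 ~]# cman_tool status

[root@cnode1 ~]# cman_tool nodes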

That’s all. Both nodes are now members of the cluster.

Redhat Cluster High Availability Installation – Part 1


In this article, we are going to see how to set up Linux HA on a two-node cluster. I’m not going to explain HA concepts here, only the installation and configuration steps.

As we are going to implement a two-node cluster, we need at least the following hardware configuration.

Cluster Server:

OS: Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 1GB RAM / 2 NIC Cards / 3 HDD with 20GB )
Hostname: cluster.example.com
IP Address: 192.168.1.100/24

Node 1:

OS : Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 512MB RAM / 2 NIC Cards / 1 HDD with 20GB )
Hostname: cnode1.example.com
IP Address: 192.168.1.201/24

Node 2:

OS : Redhat Enterprise Linux - 6.5 ( x64 )
Hardware: Oracle VirtualBox ( 1 CPU / 512MB RAM / 2 NIC Cards / 1 HDD with 20GB )
Hostname: cnode2.example.com
IP Address: 192.168.1.202/24

Step 1: Yum Server Configuration

Mount the CD-ROM on the /mnt filesystem

[root@cluster ~]# mount -t iso9660 /dev/sr0 /mnt/
mount: block device /dev/sr0 is write-protected, mounting read-only

Install the required packages

[root@cluster ~]# cd /mnt/Packages/

[root@cluster Packages]# rpm -ivh deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm
warning: deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:deltarpm ########################################### [100%]

[root@cluster Packages]# rpm -ivh python-deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm
warning: python-deltarpm-3.5-0.5.20090913git.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:python-deltarpm ########################################### [100%]

[root@cluster Packages]# rpm -ivh createrepo-0.9.9-18.el6.noarch.rpm
warning: createrepo-0.9.9-18.el6.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:createrepo ########################################### [100%]

[root@cluster Packages]# rpm -ivh vsftpd-2.2.2-11.el6_4.1.x86_64.rpm
warning: vsftpd-2.2.2-11.el6_4.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ########################################### [100%]
1:vsftpd ########################################### [100%]

[root@cluster ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]

[root@cluster ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Unloading modules: [ OK ]

[root@cluster ~]# chkconfig iptables off
[root@cluster ~]# chkconfig ip6tables off
[root@cluster ~]# chkconfig vsftpd on

Copy the DVD contents to the /var/ftp/pub/ directory to set up the local YUM server

[root@cluster ~]# cp -r /mnt/* /var/ftp/pub/

Create the YUM Repository

[root@cluster ~]# createrepo -v /var/ftp/pub/
Spawning worker 0 with 3763 pkgs
..
..
Sqlite DBs complete

[root@cluster ~]# vi /etc/yum.repos.d/rhel-source.repo

Edit the baseurl and enabled lines as below

baseurl=file:///var/ftp/pub/
enabled=1
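
Putting it together, the rhel-source stanza should end up looking roughly like this; the repo id, name and gpgkey lines stay as they are in the stock file, only baseurl and enabled change:

[rhel-source]
name=Red Hat Enterprise Linux 6Server - x86_64 - Source
baseurl=file:///var/ftp/pub/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release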

Check the YUM repository

[root@cluster ~]# yum clean all
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: rhel-source
Cleaning up Everything

[root@cluster ~]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel-source | 2.9 kB 00:00 ...
rhel-source/primary_db | 3.2 MB 00:00 ...
repo id repo name status
rhel-source Red Hat Enterprise Linux 6Server - x86_64 - Source 3,763
repolist: 3,763

Step 2: Install and configure ISCSI target

[root@cluster ~]# yum -y install scsi-target-utils

[root@cluster ~]# mkdir -p /etc/tgt/mytargets/

[root@cluster ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 26.8 GB, 26843545600 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3264 25701376 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes

Here we are going to use /dev/sdb and /dev/sdc as the shared storage pool.

[root@cluster ~]# vi /etc/tgt/mytargets/san.conf
<target iqn.2015-09.com.tcs:hawebdisk>
backing-store /dev/sdb
</target>
<target iqn.2015-09.com.tcs:haqdisk>
backing-store /dev/sdc
</target>

[root@cluster ~]# vi /etc/tgt/targets.conf

Change the below line

From:

#include /etc/tgt/temp/*.conf

To:

include /etc/tgt/mytargets/*.conf

Start the iSCSI target service

[root@cluster ~]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]

[root@cluster ~]# chkconfig tgtd on

Check whether the iSCSI targets are exported.

[root@cluster ~]# tgtadm --mode target --op show | grep /dev/sd
Backing store path: /dev/sdc
Backing store path: /dev/sdb

Step 3: Install luci package

[root@cluster ~]# yum -y install luci

[root@cluster ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `cluster.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd: [ OK ]
Start luci... [ OK ]
Point your web browser to https://cluster.example.com:8084 (or equivalent) to access luci

[root@cluster ~]# chkconfig luci on

Now you can open the Conga console in a web browser by typing https://ipaddress:8084.