How to speed up your website


Varnish is an HTTP accelerator designed for content-heavy dynamic web sites as well as heavily consumed APIs.

The steps below explain how to install and configure the Varnish cache server.

Operating System: CentOS 6.4
Hostname: server
IP Address: 192.168.1.6

Step 1: Install the varnish package

[root@server ~]# rpm -ivh http://repo.varnish-cache.org/redhat/varnish-3.0/el5/noarch/varnish-release/varnish-release-3.0-1.el5.centos.noarch.rpm

[root@server ~]# yum install -y varnish

Step 2: Start the Apache server

[root@server ~]# /etc/init.d/httpd start
Starting httpd: [ OK ]

[root@server ~]# chkconfig --level 35 httpd on

Step 3: Configuring Varnish

[root@server ~]# vi /etc/varnish/default.vcl

Change the following parameters.

From:

backend default {
    .host = "127.0.0.1";
    .port = "80";
}

To:

backend default {
    .host = "127.0.0.1";
    .port = "88";
}

Save and Exit (:wq!)
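Beyond pointing the backend at Apache, default.vcl can also shape what gets cached. The following is a minimal sketch in Varnish 3 VCL syntax; the rules and the one-hour TTL are illustrative assumptions, not part of the original setup:

```vcl
sub vcl_recv {
    # Never cache authenticated requests
    if (req.http.Authorization) {
        return (pass);
    }
}

sub vcl_fetch {
    # Cache static assets for one hour (illustrative TTL)
    if (req.url ~ "\.(css|js|png|jpe?g|gif)$") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 1h;
    }
}
```

Restart or reload Varnish after editing the VCL for the changes to take effect.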

[root@server ~]# vi /etc/sysconfig/varnish

# Uncomment the settings below and change the parameters as needed

VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_ADDRESS=
VARNISH_LISTEN_PORT=80
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=1
VARNISH_MAX_THREADS=1000
VARNISH_THREAD_TIMEOUT=120
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=512M
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120

Save and Exit (:wq!)

Step 4: Apache Configuration

Change the Apache listen port from 80 to 88.

[root@server ~]# vi /etc/httpd/conf/httpd.conf

From:

Listen 80

To:

Listen 88

Step 5: Run the Apache and Varnish daemons

[root@server ~]# /etc/init.d/varnish start

[root@server ~]# /etc/init.d/varnish status
varnishd (pid 2587) is running...

[root@server ~]# chkconfig --level 35 varnish on

[root@server ~]# /etc/init.d/httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]

[root@server ~]# chkconfig --level 35 httpd on

Step 6: Test your website

[root@server ~]# curl -I http://localhost
HTTP/1.1 200 OK
Server: Apache/2.2.15 (CentOS)
Last-Modified: Sun, 12 Apr 2015 07:15:32 IST
ETag: "3942-2a-51381c35c67f1"
Content-Type: text/html; charset=UTF-8
Content-Length: 42
Accept-Ranges: bytes
Date: Sun, 12 Apr 2015 07:16:55 IST
X-Varnish: 612706973
Age: 0
Via: 1.1 varnish
Connection: keep-alive

[root@server ~]# varnishlog

[root@server ~]# varnishstat
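To confirm that repeat requests are actually served from cache, compare the Age header across two requests: a value greater than 0 means the object came from Varnish rather than Apache. The helper below is an illustrative sketch based on standard Age-header semantics, fed here with sample headers; in practice you would pass it the output of `curl -sI http://localhost`.

```shell
#!/bin/sh
# is_cache_hit: succeed when the Age header in the given response
# headers is greater than 0, i.e. the object was served from cache.
is_cache_hit() {
    age=$(printf '%s\n' "$1" | awk -F': ' 'tolower($1) == "age" {print $2}' | tr -d '\r')
    [ -n "$age" ] && [ "$age" -gt 0 ] 2>/dev/null
}

# Sample headers in the shape curl -sI returns
miss_headers="HTTP/1.1 200 OK
Age: 0
Via: 1.1 varnish"
hit_headers="HTTP/1.1 200 OK
Age: 17
Via: 1.1 varnish"

is_cache_hit "$miss_headers" && echo "first request: HIT" || echo "first request: MISS"
is_cache_hit "$hit_headers" && echo "second request: HIT" || echo "second request: MISS"
```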

Enjoy… Cheers 🙂

LUN Centralized Storage – Using Logical Partition – Part II


In our previous post, we discussed how to export an LVM partition as an iSCSI LUN. Now we need to assign that LUN to a server/initiator. Follow the steps below to attach the LUN to the client machine.

Client/Initiator:
Operating System: CentOS release 6.4
Hostname: client
IP Address: 192.168.1.6

Step 1: Install ISCSI initiator packages

[root@client ~]# yum install iscsi-initiator-utils.x86_64

Step 2: Initiator configuration setup

[root@client ~]# vi /etc/iscsi/iscsid.conf

#Uncomment and change this parameter

node.session.auth.authmethod = CHAP

node.session.auth.username = chapuser
node.session.auth.password = chappwd
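The node.session.* parameters above cover CHAP at login time. If the target also protects the discovery phase, the corresponding discovery settings in /etc/iscsi/iscsid.conf must be uncommented as well (only needed when discovery CHAP is enforced on the target side):

```
# CHAP for the discovery phase (only if the target requires it)
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = chapuser
discovery.sendtargets.auth.password = chappwd
```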

Step 3: Discover the storage LUN

[root@client ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0-873.2.el6
node.name = iqn.2008-09.com.example:server.target1
node.tpgt = 1
node.startup = automatic
........
........
node.discovery_address = 192.168.1.5
node.discovery_port = 3260
node.discovery_type = send_targets
........
........
node.session.auth.authmethod = CHAP
node.session.auth.username = chapuser
node.session.auth.password = ********

........
........
node.conn[0].address = 192.168.1.5
node.conn[0].port = 3260
........
........
# END RECORD

[root@client ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.5 --discover
192.168.1.5:3260,1 iqn.2008-09.com.example:server.target1
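When scripting against several targets, the discovery output above ("portal:port,tpgt iqn") can be split into portal and IQN fields. A small sketch, fed here with the sample line from this article:

```shell
#!/bin/sh
# parse_discovery: turn iscsiadm sendtargets lines ("portal,tpgt iqn")
# into "portal iqn" pairs, one per line.
parse_discovery() {
    awk '{split($1, a, ","); print a[1], $2}'
}

echo "192.168.1.5:3260,1 iqn.2008-09.com.example:server.target1" | parse_discovery
```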

Step 4: Add/Attach the LUN

[root@client ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 192.168.1.5,3260] successful.

Now check the client: the new storage disk (/dev/sdb) has been attached.

[root@client ~]# fdisk -l | grep /dev
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM
Disk /dev/mapper/VolGroup-lv_root: 3833 MB, 3833593856 bytes
Disk /dev/mapper/VolGroup-lv_swap: 4227 MB, 4227858432 bytes
Disk /dev/sdb: 104 MB, 104857600 bytes

Format and use the LUN

[root@client ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xb960394f.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1024, default 1024): +50M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@client ~]# fdisk -l | grep /dev
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM
Disk /dev/mapper/VolGroup-lv_root: 3833 MB, 3833593856 bytes
Disk /dev/mapper/VolGroup-lv_swap: 4227 MB, 4227858432 bytes
Disk /dev/sdb: 104 MB, 104857600 bytes
/dev/sdb1 1 513 51275 83 Linux
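After formatting /dev/sdb1 (for example with mkfs.ext4), a persistent mount in /etc/fstab should use the _netdev option, so the mount waits until the network and the iSCSI service are up, and preferably a UUID, since /dev/sdX names can change between logins. An illustrative entry; the mount point is an example and the UUID placeholder must be taken from `blkid /dev/sdb1`:

```
# /etc/fstab entry for an iSCSI-backed filesystem
UUID=<uuid-from-blkid>  /mnt/lun  ext4  _netdev  0 0
```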

That’s all. Cheers 🙂

LUN Centralized Storage – Using Logical Partition – Part I


A logical volume can be shared with another machine using iSCSI. In this way, an LVM-based machine can act as a storage server and export LUNs to clients.

Server/Target:
Operating System: CentOS release 6.4
Hostname: server
IP Address: 192.168.1.5

Step 1: Install ISCSI target package

[root@server ~]# yum install scsi-target-utils

Step 2: Create PV/VG/LV

[root@server ~]# pvcreate /dev/sdb{1,2}
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created

[root@server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 VolGroup00 lvm2 a-- 19.53g 2.88g
/dev/sdb1 lvm2 a-- 509.84m 509.84m
/dev/sdb2 lvm2 a-- 109.82m 109.82m

[root@server ~]# vgcreate vg01 /dev/sdb{1,2}
Volume group "vg01" successfully created

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 2 0 0 wz--n- 616.00m 616.00m

[root@server ~]# lvcreate -L 100M -n lv01 /dev/vg01
Logical volume "lv01" created

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-a---- 100.00m

Step 3: Target configuration

[root@server ~]# vi /etc/tgt/targets.conf


<target iqn.2008-09.com.example:server.target1>
    backing-store /dev/vg01/lv01
    initiator-address 192.168.1.6
    incominguser chapuser chappwd
</target>

Start the ISCSI daemon

[root@server ~]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]

Check the target status

[root@server ~]# tgtadm --mode target --op show
Target 1: iqn.2008-09.com.example:server.target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 105 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/vg01/lv01
Backing store flags:
Account information:
chapuser
ACL information:
192.168.1.6

Migrate the logical partition to new logical disk drive


LVM can migrate an existing logical volume to a new disk without data loss or downtime.

With the help of this feature we can move data off a failing disk onto a new one.

Let's assume one of our hard disks is failing and we need to move its data to a new disk. Follow the steps below to migrate it.

Step 1: Create the new PV/VG/LV

[root@server ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

[root@server ~]# vgcreate vg01 /dev/sdb1
Volume group "vg01" successfully created

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 1 0 0 wz--n- 508.00m 508.00m

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)

[root@server ~]# lvcreate -L 100M -n lv01 /dev/vg01
Logical volume "lv01" created

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-a---- 100.00m

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-a---- 100.00m /dev/sdb1(0)

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)
vg01 1 1 0 wz--n- 508.00m 408.00m /dev/sdb1(0)

Format the logical volume and create some files.

[root@server ~]# mkfs.ext4 /dev/vg01/lv01

[root@server ~]# mkdir /data

[root@server ~]# mount /dev/vg01/lv01 /data
[root@server ~]# cd /data
[root@server data]# touch a1.txt
[root@server data]# echo "LVM Disk Migration" > a1.txt
[root@server data]# cat a1.txt
LVM Disk Migration

Step 2: Migrate the disk from /dev/sdb1 to /dev/sdc1

Create new PV from new HDD ( here it is sdc hard drive)

[root@server ~]# pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created

Extend the volume group

[root@server ~]# vgextend vg01 /dev/sdc1
Volume group "vg01" successfully extended

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 2 1 0 wz--n- 1016.00m 916.00m

Check which hard drive the logical volume uses. Here the logical volume (lv01) uses the /dev/sdb1 disk.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-ao--- 100.00m /dev/sdb1(0)

Mirror the existing logical volume's data onto the new disk.

[root@server ~]# lvconvert -m 1 /dev/vg01/lv01 /dev/sdc1
vg01/lv01: Converted: 4.0%
vg01/lv01: Converted: 100.0%

Now the logical volume is mirrored across the two disks /dev/sdb1 and /dev/sdc1, so all the data from /dev/sdb1 has been mirrored to /dev/sdc1.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 mwi-aom-- 100.00m lv01_mlog 100.00 lv01_mimage_0(0),lv01_mimage_1(0)

Remove the failing disk from the logical volume with the following command:

[root@server ~]# lvconvert -m 0 /dev/vg01/lv01 /dev/sdb1
Logical volume lv01 converted

The logical volume lv01 has now been migrated to the new disk /dev/sdc1.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-ao--- 100.00m /dev/sdc1(0)

Check that the data is still available in the logical partition.

[root@server ~]# cd /data
[root@server data]# cat a1.txt
LVM Disk Migration

Remove the old disk from the volume group.

[root@server ~]# vgreduce vg01 /dev/sdb1
Removed "/dev/sdb1" from volume group "vg01"

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)
vg01 1 1 0 wz--n- 508.00m 408.00m /dev/sdc1(0)

[root@server ~]# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped

That’s all. We have replaced the failing disk with a new hard drive without any downtime.

Backup and Restore LVM partition using Snapshot


A snapshot captures the state of a volume at a particular point in time, like a photograph. LVM supports snapshots, so we can easily take a backup of a live logical partition and move it elsewhere.

This is mainly useful in live production environments: if you need an exact copy of a production volume, this is the method to prefer.

Here we are going to back up one logical volume and restore it on another machine on the same network.

Step 1: Create a new partition

[root@server ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (265-652, default 265):
Using default value 265
Last cylinder, +cylinders or +size{K,M,G} (265-652, default 652): +1G

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Step 2: Run “partprobe” command to inform the OS/Kernel of partition table changes

[root@server ~]# partprobe

Step 3: LVM setup for taking snapshot for existing logical partition

Create the physical volume (PV)

[root@server ~]# pvcreate /dev/sdb3
Physical volume "/dev/sdb3" successfully created

Extend Volume Group (VG)

[root@server ~]# vgextend vg01 /dev/sdb3
Volume group "vg01" successfully extended

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 3 1 0 wz--n- 3.02g 2.54g

Creating snapshot of existing partition

[root@server ~]# lvcreate -L 1G -s -n lv_snap /dev/vg01/lv01
Logical volume "lv_snap" created

Note: The snapshot only needs to be large enough to hold the blocks that change on the origin volume while the snapshot exists; if it fills up completely, it becomes invalid. Here we allocated more than the origin's size to be safe.

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 3 2 1 wz--n- 3.02g 1.54g

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 owi-a-s-- 500.00m
lv_snap vg01 swi-aos-- 1.00g lv01 0.00
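Because a snapshot that fills to 100% becomes invalid, the Data% column from lvs is worth monitoring while the snapshot is in use. A small sketch; here it is fed sample text, but in practice you would pipe in `lvs --noheadings -o lv_name,data_percent`:

```shell
#!/bin/sh
# check_snap_usage: print a warning line for every snapshot whose
# allocated space usage is at or above the given percentage threshold.
check_snap_usage() {
    threshold=$1
    awk -v t="$threshold" '{ if ($2 + 0 >= t) print $1 " is " $2 "% full" }'
}

# Sample input in the shape of: lvs --noheadings -o lv_name,data_percent
printf '%s\n' "lv_snap 82.50" "other_snap 10.00" | check_snap_usage 80
```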

[root@server ~]# mkdir /snapdata

Check that all the data from the existing logical volume (/dev/vg01/lv01) is visible through the LVM snapshot (/dev/vg01/lv_snap).

[root@server ~]# mount /dev/vg01/lv_snap /snapdata
[root@server ~]# cd /snapdata/
[root@server snapdata]# ls
fstab lost+found passwd

The output above shows that the data is accessible through the snapshot volume. Now we need to compress this backup and send it to the other machine.

[root@server ~]# dd if=/dev/vg01/lv_snap | gzip > /data/lvdata.gz
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 8.07202 s, 65.0 MB/s
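Before transferring the image, it is worth recording a checksum so the restore can be verified on the client. A self-contained demonstration of the same backup/restore pipeline on an ordinary scratch file (the idea carries over to /dev/vg01/lv_snap, where you would compare md5sum output on both machines):

```shell
#!/bin/sh
# Demonstrate backup -> compress -> restore -> verify on a scratch file.
set -e
src=$(mktemp); out=$(mktemp); restored=$(mktemp)

dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null   # stand-in for the LV
dd if="$src" 2>/dev/null | gzip > "$out"                    # backup, as in the article
gzip -dc "$out" | dd of="$restored" 2>/dev/null             # restore, as on the client

# Verify: checksums of the source and the restored copy must match
sum1=$(md5sum "$src" | awk '{print $1}')
sum2=$(md5sum "$restored" | awk '{print $1}')
[ "$sum1" = "$sum2" ] && echo "restore verified"

rm -f "$src" "$out" "$restored"
```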

Transfer the backup file to client machine

[root@server ~]# scp -r /data/lvdata.gz client:/tmp

Client Side:

Step 1: Create PV, VG, LV

[root@client ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

[root@client ~]# vgcreate vgclient /dev/sdb1
Volume group "vgclient" successfully created

[root@client ~]# lvcreate -L 500M -n restoredata /dev/vgclient
Logical volume "restoredata" created

Step 2: Restore the data to the new logical volume

[root@client ~]# gzip -dc /tmp/lvdata.gz | dd of=/dev/vgclient/restoredata
1024000+0 records in
1024000+0 records out
524288000 bytes (524 MB) copied, 63.7254 s, 8.2 MB/s

Run the following command to make the LVM changes visible:

[root@client ~]# pvscan && vgscan && lvscan

Step 3: Mount the partition

[root@client ~]# mkdir /backup
[root@client ~]# mount /dev/vgclient/restoredata /backup
[root@client ~]# cd /backup
[root@client backup]# ls
fstab lost+found passwd

That’s it!! Enjoy..