How to recover deleted LVM partitions


Sometimes a system administrator mistakenly deletes LVM partitions while working on production servers. Using the vgcfgrestore command, we can recover deleted LVM partitions.

How it works:

A Linux server stores backup copies of the LVM configuration in the /etc/lvm/archive directory. For a practical example, I have created a new 2G logical volume and then deleted it.
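
To see what the server has archived on your own machine, you can list the directory directly; each .vg file records the volume group layout just before a change, together with the command that triggered it (a quick check, assuming the default LVM paths):

# List archived LVM metadata and show which command each snapshot preceded
ls -l /etc/lvm/archive/
grep -H 'description' /etc/lvm/archive/*.vg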

Step 1: Create the physical volume (PV), extend the volume group (VG) and create a new logical volume (LV)

[root@sclient ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM

[root@sclient ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@sclient ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@sclient ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@sclient ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
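
Writing "- - -" to a host adapter's scan file asks the kernel to rescan that SCSI bus for new devices. Rather than echoing into each host by hand, a small loop covers every adapter (a sketch assuming the sysfs layout shown above):

# Rescan all SCSI hosts so a newly presented disk appears without a reboot
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done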

[root@sclient ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

[root@sclient ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 7.51g 0

[root@sclient ~]# vgextend VolGroup /dev/sdb
Volume group "VolGroup" successfully extended

[root@sclient ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 2 2 0 wz--n- 12.50g 5.00g

[root@sclient ~]# lvcreate -L 2G -n lvtest VolGroup
Logical volume "lvtest" created.

[root@sclient ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 6.71g
lv_swap VolGroup -wi-ao---- 816.00m
lvtest VolGroup -wi-a----- 2.00g

[root@sclient ~]# mkdir -p /data
[root@sclient ~]# mkfs.ext4 /dev/VolGroup/lvtest
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@sclient ~]# mount -t ext4 /dev/VolGroup/lvtest /data
[root@sclient ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
6.5G 1.8G 4.5G 28% /
tmpfs 242M 0 242M 0% /dev/shm
/dev/sda1 477M 66M 382M 15% /boot
/dev/mapper/VolGroup-lvtest
2.0G 3.0M 1.8G 1% /data

[root@sclient ~]# cp /etc/passwd /etc/group /data
[root@sclient ~]# ls -l /data/
total 24
-rw-r--r-- 1 root root 495 Sep 11 22:57 group
drwx------ 2 root root 16384 Sep 11 22:57 lost+found
-rw-r--r-- 1 root root 1080 Sep 11 22:57 passwd

Step 2: Remove the newly created logical volume

The command below is destructive and should not be executed on a production Linux server; it is run here only to set up the recovery scenario. Be careful while executing such commands.

[root@sclient ~]# umount /data
[root@sclient ~]# lvremove /dev/VolGroup/lvtest
Logical volume "lvtest" successfully removed

We have now removed the newly created logical volume. At this moment the data in the volume appears to be lost; in reality, lvremove only deletes the LVM metadata, and the underlying data blocks stay on disk until they are overwritten. That is what makes the recovery below possible.

[root@sclient ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 6.71g
lv_swap VolGroup -wi-ao---- 816.00m

Step 3: Recover the Logical Volume by using vgcfgrestore command

Whenever a volume group is modified, the Linux server takes a backup of the existing configuration and saves it in the /etc/lvm/archive directory.
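
You can also trigger a metadata backup yourself at any time with vgcfgbackup, which writes the current layout to /etc/lvm/backup (shown here as a side note, not part of the recovery):

# Take a manual backup of the current VG metadata
vgcfgbackup VolGroup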

[root@sclient ~]# vgcfgrestore --list VolGroup

File: /etc/lvm/archive/VolGroup_00000-304429941.vg
VG name: VolGroup
Description: Created *before* executing '/sbin/vgs --noheadings -o name --ignoreskippedcluster --config 'log{command_names=0 prefix=" "}''
Backup Time: Sun Jul 9 03:23:31 2017

File: /etc/lvm/archive/VolGroup_00001-1788852006.vg
VG name: VolGroup
Description: Created *before* executing 'lvcreate -L 2G -n lvtest VolGroup'
Backup Time: Mon Sep 11 22:55:26 2017

File: /etc/lvm/archive/VolGroup_00003-2103841493.vg
VG name: VolGroup
Description: Created *before* executing 'lvremove /dev/VolGroup/lvtest'
Backup Time: Mon Sep 11 23:04:52 2017

File: /etc/lvm/backup/VolGroup
VG name: VolGroup
Description: Created *after* executing 'lvremove /dev/VolGroup/lvtest'
Backup Time: Mon Sep 11 23:04:53 2017

In the above example, the server saved the LVM configuration backup in the file /etc/lvm/archive/VolGroup_00003-2103841493.vg just before the lvremove was executed. This file is human-readable plain text.
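
Since the archive is plain text, it is worth sanity-checking that it still contains the deleted volume before restoring from it (an illustrative check using the file name listed above):

# Confirm the archived metadata still defines lvtest
grep -n 'lvtest' /etc/lvm/archive/VolGroup_00003-2103841493.vg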

The command below recovers the deleted logical volume:

[root@sclient ~]# vgcfgrestore -f /etc/lvm/archive/VolGroup_00003-2103841493.vg VolGroup
Restored volume group VolGroup

where,

/etc/lvm/archive/VolGroup_00003-2103841493.vg – the LVM configuration backup file to restore from
VolGroup – the volume group to restore

[root@sclient ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 6.71g
lv_swap VolGroup -wi-ao---- 816.00m
lvtest VolGroup -wi------- 2.00g

Now it’s time to mount the logical volume and check that the existing data is intact.

[root@sclient ~]# mount -t ext4 /dev/VolGroup/lvtest /data
mount: special device /dev/VolGroup/lvtest does not exist

What happened? We are not able to mount the logical volume because the recovered volume is not activated yet (note the lvs attributes above show it inactive). We need to activate the volume before mounting it. The command below activates it.

[root@sclient ~]# lvchange -ay /dev/VolGroup/lvtest

[root@sclient ~]# mount -t ext4 /dev/VolGroup/lvtest /data
[root@sclient ~]# ls -l /data
total 24
-rw-r--r-- 1 root root 495 Sep 11 22:57 group
drwx------ 2 root root 16384 Sep 11 22:57 lost+found
-rw-r--r-- 1 root root 1080 Sep 11 22:57 passwd

LUN Centralized Storage – Using Logical Partition – Part II


In our previous post, we discussed how to export our LVM partition as an iSCSI LUN. Now we need to assign that LUN to a server/initiator. Follow the steps below to attach the LUN to the machine.

Client/Initiator:
Operating System: CentOS release 6.4
Hostname: client
IP Address: 192.168.1.6

Step 1: Install ISCSI initiator packages

[root@client ~]# yum install iscsi-initiator-utils.x86_64

Step 2: Initiator configuration setup

[root@client ~]# vi /etc/iscsi/iscsid.conf

# Uncomment and set these parameters:

node.session.auth.authmethod = CHAP

node.session.auth.username = chapuser
node.session.auth.password = chappwd
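
After changing iscsid.conf, restart the initiator daemon so the new CHAP settings take effect, and enable it at boot (CentOS 6 init scripts; service names may differ on other distributions):

# Restart the initiator daemon and enable it at boot
/etc/init.d/iscsid restart
chkconfig iscsid on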

Step 3: Discover the storage LUN

[root@client ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0-873.2.el6
node.name = iqn.2008-09.com.example:server.target1
node.tpgt = 1
node.startup = automatic
........
........
node.discovery_address = 192.168.1.5
node.discovery_port = 3260
node.discovery_type = send_targets
........
........
node.session.auth.authmethod = CHAP
node.session.auth.username = chapuser
node.session.auth.password = ********

........
........
node.conn[0].address = 192.168.1.5
node.conn[0].port = 3260
........
........
# END RECORD

[root@client ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.1.5 --discover
192.168.1.5:3260,1 iqn.2008-09.com.example:server.target1

Step 4: Add/Attach the LUN

[root@client ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 192.168.1.5,3260] successful.
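
For later reference: when the LUN is no longer needed, the session can be detached cleanly with a logout (shown as a sketch, not run here):

# Log out of all iSCSI sessions on this initiator
iscsiadm -m node --logout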

Now check your server; the new storage disk (/dev/sdb) is attached to the server.

[root@client ~]# fdisk -l | grep /dev
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM
Disk /dev/mapper/VolGroup-lv_root: 3833 MB, 3833593856 bytes
Disk /dev/mapper/VolGroup-lv_swap: 4227 MB, 4227858432 bytes
Disk /dev/sdb: 104 MB, 104857600 bytes

Format and use the LUN

[root@client ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xb960394f.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1024, default 1024): +50M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@client ~]# fdisk -l | grep /dev
Disk /dev/sda: 8589 MB, 8589934592 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1045 7875584 8e Linux LVM
Disk /dev/mapper/VolGroup-lv_root: 3833 MB, 3833593856 bytes
Disk /dev/mapper/VolGroup-lv_swap: 4227 MB, 4227858432 bytes
Disk /dev/sdb: 104 MB, 104857600 bytes
/dev/sdb1 1 513 51275 83 Linux
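
fdisk only writes the partition table; before the LUN can hold files, the new partition still needs a filesystem and a mount point (a minimal sketch, using ext4 and a hypothetical mount point /mnt/lun):

# Create a filesystem on the new partition and mount it
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/lun
mount /dev/sdb1 /mnt/lun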

That’s all. Cheers 🙂

LUN Centralized Storage – Using Logical Partition – Part I


We can share a logical partition with another machine using the iSCSI protocol, so an LVM host can act as a storage server and export LUNs to clients.

Server/Target:
Operating System: CentOS release 6.4
Hostname: server
IP Address: 192.168.1.5

Step 1: Install ISCSI target package

[root@server ~]# yum install scsi-target-utils

Step 2: Create PV/VG/LV

[root@server ~]# pvcreate /dev/sdb{1,2}
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created

[root@server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 VolGroup00 lvm2 a-- 19.53g 2.88g
/dev/sdb1 lvm2 a-- 509.84m 509.84m
/dev/sdb2 lvm2 a-- 109.82m 109.82m

[root@server ~]# vgcreate vg01 /dev/sdb{1,2}
Volume group "vg01" successfully created

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 2 0 0 wz--n- 616.00m 616.00m

[root@server ~]# lvcreate -L 100M -n lv01 /dev/vg01
Logical volume "lv01" created

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-a---- 100.00m

Step 3: Target configuration

[root@server ~]# vi /etc/tgt/targets.conf

<target iqn.2008-09.com.example:server.target1>
    backing-store /dev/vg01/lv01
    initiator-address 192.168.1.6
    incominguser chapuser chappwd
</target>

Start the iSCSI target daemon:

[root@server ~]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
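
To make the target come back after a reboot, enable the service as well (CentOS 6 init tooling):

# Start tgtd automatically on boot
chkconfig tgtd on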

Check the target status

[root@server ~]# tgtadm --mode target --op show
Target 1: iqn.2008-09.com.example:server.target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 105 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/vg01/lv01
Backing store flags:
Account information:
chapuser
ACL information:
192.168.1.6
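
iSCSI listens on TCP port 3260, so if a firewall is running on the target the port has to be opened before the initiator can connect (an iptables sketch for CentOS 6; adjust to your own ruleset):

# Allow initiators to reach the iSCSI target port and persist the rule
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
service iptables save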

Migrate the logical partition to a new disk drive


LVM has the ability to migrate existing logical volumes to a new disk without any data loss or downtime.

With the help of this feature we can move our data from a failing disk to a new disk.

Let’s assume one of our hard disks is failing; we then need to move its data to a new disk. Follow the steps below to migrate it.

Step 1: Create the new PV/VG/LV

[root@server ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

[root@server ~]# vgcreate vg01 /dev/sdb1
Volume group "vg01" successfully created

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 1 0 0 wz--n- 508.00m 508.00m

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)

[root@server ~]# lvcreate -L 100M -n lv01 /dev/vg01
Logical volume "lv01" created

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-a---- 100.00m

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-a---- 100.00m /dev/sdb1(0)

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)
vg01 1 1 0 wz--n- 508.00m 408.00m /dev/sdb1(0)

Format the logical volume and create some files.

[root@server ~]# mkfs.ext4 /dev/vg01/lv01

[root@server ~]# mkdir /data

[root@server ~]# mount /dev/vg01/lv01 /data
[root@server ~]# cd /data
[root@server data]# touch a1.txt
[root@server data]# echo "LVM Disk Migration" > a1.txt
[root@server data]# cat a1.txt
LVM Disk Migration

Step 2: Migrate the disk from /dev/sdb1 to /dev/sdc1

Create a new PV on the new HDD (here it is the sdc hard drive):

[root@server ~]# pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created

Extend the volume group

[root@server ~]# vgextend vg01 /dev/sdc1
Volume group "vg01" successfully extended

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 2 1 0 wz--n- 1016.00m 916.00m

Check which hard drive is used by the logical volume. Here the logical volume (lv01) uses the /dev/sdb1 hard disk.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-ao--- 100.00m /dev/sdb1(0)

Mirror the existing logical volume data onto the new disk:

[root@server ~]# lvconvert -m 1 /dev/vg01/lv01 /dev/sdc1
vg01/lv01: Converted: 4.0%
vg01/lv01: Converted: 100.0%

Now the logical volume is mirrored across the two disks /dev/sdb1 and /dev/sdc1, so all the data from /dev/sdb1 has been mirrored to /dev/sdc1.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 mwi-aom-- 100.00m lv01_mlog 100.00 lv01_mimage_0(0),lv01_mimage_1(0)

Remove the failing disk from the logical volume with the following command:

[root@server ~]# lvconvert -m 0 /dev/vg01/lv01 /dev/sdb1
Logical volume lv01 converted

The logical volume lv01 has now migrated to the new disk /dev/sdc1.

[root@server ~]# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
LogVol00 VolGroup00 -wi-ao--- 14.65g /dev/sda1(512)
LogVol01 VolGroup00 -wi-ao--- 2.00g /dev/sda1(0)
lv01 vg01 -wi-ao--- 100.00m /dev/sdc1(0)

Check whether the data is still available in the logical partition.

[root@server ~]# cd /data
[root@server data]# cat a1.txt
LVM Disk Migration

Remove the old disk from the volume group.

[root@server ~]# vgreduce vg01 /dev/sdb1
Removed "/dev/sdb1" from volume group "vg01"

[root@server ~]# vgs -o+devices
VG #PV #LV #SN Attr VSize VFree Devices
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(0)
VolGroup00 1 2 0 wz--n- 19.53g 2.88g /dev/sda1(512)
vg01 1 1 0 wz--n- 508.00m 408.00m /dev/sdc1(0)

[root@server ~]# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped
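
For completeness: the same migration is more commonly done in a single step with pvmove, which relocates every allocated extent off the old PV while the volume stays online (a sketch of the alternative, not what was run above):

# Move all extents from the failing disk to the new one,
# then drop the old disk from the volume group
pvmove /dev/sdb1 /dev/sdc1
vgreduce vg01 /dev/sdb1
pvremove /dev/sdb1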

That’s all. We have now replaced the failing disk with a new hard drive without any downtime.

Extend/Shrink Logical Volume


In our previous section, we discussed how to create and use an LVM file system. Now we will see how to extend and reduce the file system on the fly.

Step 1: Check your existing lvm partition.

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 1 0 0 wz--n- 508.00m 508.00m

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-a---- 100.00m

[root@server ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext4 15G 4.4G 9.4G 32% /
tmpfs tmpfs 499M 0 499M 0% /dev/shm
/dev/sda2 ext4 97M 29M 64M 31% /boot
/dev/mapper/vg01-lv01
ext4 97M 5.6M 87M 7% /data

The above three commands show the volume group, logical volume, and mounted file system sizes respectively.

Step 2: Extend the Volume Group & Logical Volume

Initialize the new disk partition as a physical volume:

[root@server ~]# pvcreate /dev/sdb2
Physical volume "/dev/sdb2" successfully created

Extend the volume group by executing the command below:

[root@server ~]# vgextend vg01 /dev/sdb2
Volume group "vg01" successfully extended

Check the volume group size after the extend; it has grown by roughly 100M (the size of /dev/sdb2).

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 2 1 0 wz--n- 616.00m 516.00m

To extend the logical volume, run the command below:

[root@server ~]# lvextend -L +100M /dev/vg01/lv01
Extending logical volume lv01 to 200.00 MiB
Logical volume lv01 successfully resized

The above command extends the LV by 100M, so the total LV size is now 200M.

[root@server ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
LogVol00 VolGroup00 -wi-ao--- 14.65g
LogVol01 VolGroup00 -wi-ao--- 2.00g
lv01 vg01 -wi-ao--- 200.00m

Check whether the usable file system size increased. The file system does not grow automatically when the LV is extended; we need to resize it manually.

[root@server ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext4 15G 4.4G 9.4G 32% /
tmpfs tmpfs 499M 0 499M 0% /dev/shm
/dev/sda2 ext4 97M 29M 64M 31% /boot
/dev/mapper/vg01-lv01
ext4 97M 5.6M 87M 7% /data

In the above example you can see the file system did not grow automatically after the logical volume was extended, so run the command below to resize it manually.

[root@server ~]# resize2fs /dev/vg01/lv01
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg01/lv01 is mounted on /data; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vg01/lv01 to 204800 (1k) blocks.
The filesystem on /dev/vg01/lv01 is now 204800 blocks long.

Check it now; the /data file system has grown by roughly 100M.

[root@server ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext4 15G 4.4G 9.4G 32% /
tmpfs tmpfs 499M 0 499M 0% /dev/shm
/dev/sda2 ext4 97M 29M 64M 31% /boot
/dev/mapper/vg01-lv01
ext4 194M 5.6M 179M 3% /data
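
On reasonably recent LVM versions the two steps can be combined: the -r (--resizefs) option of lvextend grows the filesystem for you via fsadm (a sketch of the shortcut, not what was run above):

# Grow the LV and the ext4 filesystem on it in one command
lvextend -r -L +100M /dev/vg01/lv01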

Step 3: Reduce/Shrink the Logical Volume size

Before reducing the logical volume, unmount the file system and run a file system check; ext4 cannot be shrunk while mounted, and skipping the check risks data loss.

[root@server ~]# umount /data

[root@server ~]# e2fsck -f /dev/vg01/lv01
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg01/lv01: 11/49400 files (9.1% non-contiguous), 11884/204800 blocks

The command below shrinks the file system on the logical volume down to 50M:

[root@server ~]# resize2fs /dev/vg01/lv01 50M
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg01/lv01 to 51200 (1k) blocks.
The filesystem on /dev/vg01/lv01 is now 51200 blocks long.

Now reduce the logical volume itself with the following command. Note that lvreduce rounds the 50M up to 52.00 MiB (a whole number of physical extents), so the 50M file system still fits safely inside the LV:

[root@server ~]# lvreduce -L 50M /dev/vg01/lv01
Rounding size to boundary between physical extents: 52.00 MiB
WARNING: Reducing active logical volume to 52.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv01? [y/n]: y
Reducing logical volume lv01 to 52.00 MiB
Logical volume lv01 successfully resized
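
The whole shrink can likewise be done in one guarded step with lvreduce -r, which shrinks the filesystem first and keeps the two sizes consistent (a sketch; take a backup before any shrink regardless):

# Shrink the filesystem and the LV together; fsadm runs e2fsck/resize2fs for you
lvreduce -r -L 50M /dev/vg01/lv01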

[root@server ~]# mount /dev/vg01/lv01 /data

[root@server ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext4 15G 4.4G 9.4G 32% /
tmpfs tmpfs 499M 0 499M 0% /dev/shm
/dev/sda2 ext4 97M 29M 64M 31% /boot
/dev/mapper/vg01-lv01
ext4 49M 5.1M 41M 11% /data

To shrink the volume group, remove the now-unused physical volume:

[root@server ~]# vgreduce vg01 /dev/sdb2
Removed "/dev/sdb2" from volume group "vg01"

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.53g 2.88g
vg01 1 1 0 wz--n- 508.00m 456.00m

Step 4: Remove/Delete logical partition

[root@server ~]# umount /data

[root@server ~]# lvremove /dev/vg01/lv01
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed

[root@server ~]# vgremove vg01
Volume group "vg01" successfully removed

[root@server ~]# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped