Building on the environment set up earlier, we continue testing:
Extending the LVM:
Download a test file from the web and place it in the LV2 volume:
[root@DanCentOS67 LV2]# wget http://daneaststorage.blob.core.chinacloudapi.cn/demo/Azure.pdf
--2017-03-09 15:13:21--  http://daneaststorage.blob.core.chinacloudapi.cn/demo/Azure.pdf
Resolving daneaststorage.blob.core.chinacloudapi.cn... 42.159.208.78
Connecting to daneaststorage.blob.core.chinacloudapi.cn|42.159.208.78|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7670041 (7.3M) [application/pdf]
Saving to: "Azure.pdf"

100%[=============================================================================================================================>] 7,670,041   --.-K/s   in 0.06s

2017-03-09 15:13:21 (123 MB/s) - "Azure.pdf" saved [7670041/7670041]

[root@DanCentOS67 LV2]# ll
total 7508
-rw-r--r--. 1 root root 7670041 Jul  6  2016 Azure.pdf
drwx------. 2 root root   16384 Mar  9 15:02 lost+found
Extend Volume Group VolGroup1 with the previously created Physical Volume /dev/md125:
[root@DanCentOS67 LV2]# vgextend VolGroup1 /dev/md125
  Volume group "VolGroup1" successfully extended
Check the Volume Group status; the newly added space is now visible:
[root@DanCentOS67 LV2]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               24.94 GiB
  PE Size               4.00 MiB
  Total PE              6385
  Alloc PE / Size       5108 / 19.95 GiB
  Free  PE / Size       1277 / 4.99 GiB
  VG UUID               dsTlXU-D3Y5-Fqb3-wCOl-LHmj-7Qyf-hXU186
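The extent accounting in vgdisplay can be sanity-checked by hand: each Physical Extent (PE) is 4 MiB, so Total PE times PE Size should equal the VG size, and Alloc plus Free should equal Total. A quick sketch using the numbers from the output above:

```shell
# Sanity-check vgdisplay's extent accounting (values taken from the output above).
pe_size_mib=4
total_pe=6385
alloc_pe=5108
free_pe=1277

total_mib=$((total_pe * pe_size_mib))   # 6385 * 4 MiB = 25540 MiB ~= 24.94 GiB
echo "VG size: ${total_mib} MiB"
echo "Alloc + Free PE: $((alloc_pe + free_pe))"   # should equal Total PE
```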
Extend the Logical Volume:
[root@DanCentOS67 LV2]# lvextend /dev/mapper/VolGroup1-LogicalVol2 /dev/md125
  Size of logical volume VolGroup1/LogicalVol2 changed from 10.19 GiB (2608 extents) to 15.18 GiB (3885 extents).
  Logical volume LogicalVol2 successfully resized
You can also extend a volume to a specific size (the following command extends the volume to 1024 MiB):
lvextend -L1024 /dev/mapper/VolGroup1-LogicalVol2
Or grow the volume by a fixed amount on top of its current size (the following adds another 100 MiB):
lvextend -L+100 /dev/mapper/VolGroup1-LogicalVol2
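The `-L` size argument defaults to MiB; a leading `+` makes it relative, and suffixes such as `M` or `G` select the unit explicitly. A hypothetical dry-run helper (it only prints the command it would run, using the same device path as above) makes the two forms easy to compare:

```shell
# Hypothetical dry-run helpers: print the lvextend invocation instead of running it.
lv=/dev/mapper/VolGroup1-LogicalVol2

extend_to() { echo "lvextend -L$1 $lv"; }    # grow to an absolute size
extend_by() { echo "lvextend -L+$1 $lv"; }   # grow by a relative amount

extend_to 1024M   # extend the LV to exactly 1024 MiB
extend_by 100M    # extend the LV by another 100 MiB
```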
After the extension, note that the size of /mnt/LV2 has not actually changed yet:
[root@DanCentOS67 LV2]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda1                           30G  2.0G   26G   7% /
tmpfs                              6.9G     0  6.9G   0% /dev/shm
/dev/sdb1                          133G   60M  126G   1% /mnt/resource
/dev/mapper/VolGroup1-LogicalVol1  9.5G   22M  9.0G   1% /mnt/LV1
/dev/mapper/VolGroup1-LogicalVol2   10G   33M  9.4G   1% /mnt/LV2
Next, use the resize2fs tool to grow the filesystem itself:
[root@DanCentOS67 LV2]# resize2fs /dev/mapper/VolGroup1-LogicalVol2
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/VolGroup1-LogicalVol2 is mounted on /mnt/LV2; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/VolGroup1-LogicalVol2 to 3978240 (4k) blocks.
The filesystem on /dev/mapper/VolGroup1-LogicalVol2 is now 3978240 blocks long.
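resize2fs grew the filesystem to 3978240 blocks of 4 KiB, which should match the new size of the logical volume; the arithmetic is easy to verify. (As an aside, `lvextend -r`/`--resizefs` performs the filesystem resize in the same step, so the separate resize2fs call can be avoided; it was not used in this walkthrough.)

```shell
# Verify resize2fs's block count against the logical-volume size.
block_size=4096       # 4k blocks, per the resize2fs output
block_count=3978240
fs_bytes=$((block_size * block_count))
echo "Filesystem size: ${fs_bytes} bytes"   # ~15.18 GiB, matching the LV
```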
Checking again shows the filesystem has grown to the new size (extension successful):
[root@DanCentOS67 LV2]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda1                           30G  2.0G   26G   7% /
tmpfs                              6.9G     0  6.9G   0% /dev/shm
/dev/sdb1                          133G   60M  126G   1% /mnt/resource
/dev/mapper/VolGroup1-LogicalVol1  9.5G   22M  9.0G   1% /mnt/LV1
/dev/mapper/VolGroup1-LogicalVol2   15G   33M   15G   1% /mnt/LV2
The test file is neither corrupted nor lost:
[root@DanCentOS67 LV2]# ll
total 7508
-rw-r--r--. 1 root root 7670041 Jul  6  2016 Azure.pdf
drwx------. 2 root root   16384 Mar  9 15:02 lost+found
Migrating the LVM to another machine:
Detach the four disks from the original machine and attach them to a second machine in the order c, d, e, f. (The second machine already had a /dev/sdc device, so every device letter shifts back by one, making them sdd through sdg.)
Check the partition layout with fdisk -l:
[root@DanCentOS65 daniel]# fdisk -l
...
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbd293e5b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         653     5245191   fd  Linux raid autodetect
/dev/sdd2             654        1305     5237190   fd  Linux raid autodetect

Disk /dev/md127: 10.7 GB, 10716446720 bytes
2 heads, 4 sectors/track, 2616320 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup1-LogicalVol1: 10.5 GB, 10485760000 bytes
255 heads, 63 sectors/track, 1274 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/sde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x13c704fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         653     5245191   fd  Linux raid autodetect
/dev/sde2             654        1305     5237190   fd  Linux raid autodetect

Disk /dev/md126: 10.7 GB, 10716446720 bytes
2 heads, 4 sectors/track, 2616320 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/sdf: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3273d383

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1         653     5245191   fd  Linux raid autodetect
/dev/sdf2             654        1305     5237190   fd  Linux raid autodetect

Disk /dev/sdg: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6283a8a2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1         653     5245191   fd  Linux raid autodetect
/dev/sdg2             654        1305     5237190   fd  Linux raid autodetect

Disk /dev/md125: 5358 MB, 5358223360 bytes
2 heads, 4 sectors/track, 1308160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disk identifier: 0x00000000
Logical Volume 2 was not recognized, so we start checking again from the RAID layer:
[root@DanCentOS65 daniel]# mdadm --misc --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Thu Mar  9 14:43:25 2017
     Raid Level : raid5
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar  9 15:37:38 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : DanCentOS67:127
           UUID : 1a86a744:b01bd085:ee05717d:8ba86e4a
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       2       0        0        2      removed
       3       8       50        2      active sync   /dev/sdd2

[root@DanCentOS65 daniel]# mdadm --misc --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Thu Mar  9 14:43:46 2017
     Raid Level : raid5
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Mar 10 02:26:26 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : DanCentOS67:126
           UUID : 72b40d3f:78790af8:d369e502:8547a8b1
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       82        1      active sync   /dev/sdf2
       3       8       81        2      active sync   /dev/sdf1

[root@DanCentOS65 daniel]# mdadm --misc --detail /dev/md125
/dev/md125:
        Version : 1.2
  Creation Time : Thu Mar  9 14:43:55 2017
     Raid Level : raid5
     Array Size : 5232640 (4.99 GiB 5.36 GB)
  Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar  9 15:37:38 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : DanCentOS67:125
           UUID : e74d96ff:12e1173d:f2d398ce:744d5f47
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       2       8       98        1      active sync   /dev/sdg2
As shown above, /dev/md127 and /dev/md126 did not pick up the /dev/sde1 and /dev/sde2 partitions.
Examine the state of these two partitions:
[root@DanCentOS65 daniel]# mdadm --examine /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 1a86a744:b01bd085:ee05717d:8ba86e4a
           Name : DanCentOS67:127
  Creation Time : Thu Mar  9 14:43:25 2017
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 10482190 (5.00 GiB 5.37 GB)
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 10465280 (4.99 GiB 5.36 GB)
    Data Offset : 8192 sectors
   Super Offset : 8 sectors
   Unused Space : before=8104 sectors, after=16910 sectors
          State : clean
    Device UUID : 3275b1fe:c1c7935c:e3d4928e:1740fc0c

    Update Time : Fri Mar 10 02:22:57 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b236682c - correct
         Events : 23

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 1
    Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)

[root@DanCentOS65 daniel]# mdadm --examine /dev/sde2
/dev/sde2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 72b40d3f:78790af8:d369e502:8547a8b1
           Name : DanCentOS67:126
  Creation Time : Thu Mar  9 14:43:46 2017
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 10466188 (4.99 GiB 5.36 GB)
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 10465280 (4.99 GiB 5.36 GB)
    Data Offset : 8192 sectors
   Super Offset : 8 sectors
   Unused Space : before=8104 sectors, after=908 sectors
          State : clean
    Device UUID : 46862c7b:4db1cdb0:17a40a6c:91254c2e

    Update Time : Thu Mar  9 15:52:42 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : db02d450 - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
Both superblocks report a clean state; the partitions themselves are fine.
Add the two partitions back into their RAID 5 arrays:
[root@DanCentOS65 daniel]# mdadm --manage /dev/md127 --add /dev/sde1
mdadm: re-added /dev/sde1
[root@DanCentOS65 daniel]# mdadm --manage /dev/md126 --add /dev/sde2
mdadm: added /dev/sde2
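Note the different messages: mdadm "re-added" /dev/sde1 but "added" /dev/sde2. Whether a member can be re-added cheaply or needs a full rebuild depends on the superblock mdadm finds on it; comparing the Events counters from `mdadm --examine` against the array's own counter is one way to see how far a member has drifted. A small sketch that pulls the counter out of examine-style text (the sample strings mimic the output format shown above):

```shell
# Extract and compare Events counters from mdadm-style output.
# Sample strings mimic the mdadm --detail / --examine lines shown above.
array_detail='         Events : 18'
member_exam='         Events : 23'

get_events() { echo "$1" | awk -F': ' '/Events/ {print $2}'; }

a=$(get_events "$array_detail")
m=$(get_events "$member_exam")
echo "array=$a member=$m drift=$((m - a))"
```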
After the rebuild completes (it takes a while), checking again shows both RAID 5 arrays have recovered:
[root@DanCentOS65 daniel]# mdadm --misc --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Thu Mar  9 14:43:25 2017
     Raid Level : raid5
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Mar 10 05:51:01 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : DanCentOS67:127
           UUID : 1a86a744:b01bd085:ee05717d:8ba86e4a
         Events : 38

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
       3       8       50        2      active sync   /dev/sdd2

[root@DanCentOS65 daniel]# mdadm --misc --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Thu Mar  9 14:43:46 2017
     Raid Level : raid5
     Array Size : 10465280 (9.98 GiB 10.72 GB)
  Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Mar 10 05:58:03 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : DanCentOS67:126
           UUID : 72b40d3f:78790af8:d369e502:8547a8b1
         Events : 41

    Number   Major   Minor   RaidDevice State
       4       8       66        0      active sync   /dev/sde2
       1       8       82        1      active sync   /dev/sdf2
       3       8       81        2      active sync   /dev/sdf1
Rescan the Physical Volumes:
[root@DanCentOS65 daniel]# pvscan
  PV /dev/md127   VG VolGroup1   lvm2 [9.98 GiB / 0    free]
  PV /dev/md126   VG VolGroup1   lvm2 [9.98 GiB / 0    free]
  PV /dev/md125   VG VolGroup1   lvm2 [4.99 GiB / 0    free]
  Total: 3 [24.94 GiB] / in use: 3 [24.94 GiB] / in no VG: 0 [0   ]
Scan the Logical Volumes:
[root@DanCentOS65 daniel]# lvscan
  ACTIVE            '/dev/VolGroup1/LogicalVol1' [9.77 GiB] inherit
  inactive          '/dev/VolGroup1/LogicalVol2' [15.18 GiB] inherit
Logical Volume 2 is inactive, so reactivate it:
[root@DanCentOS65 daniel]# lvchange -a y /dev/VolGroup1/LogicalVol2
Check the status again after activation:
[root@DanCentOS65 daniel]# lvscan
  ACTIVE            '/dev/VolGroup1/LogicalVol1' [9.77 GiB] inherit
  ACTIVE            '/dev/VolGroup1/LogicalVol2' [15.18 GiB] inherit
Now fdisk -l recognizes Logical Volume 2:
[root@DanCentOS65 daniel]# fdisk -l
...
Disk /dev/mapper/VolGroup1-LogicalVol1: 10.5 GB, 10485760000 bytes
255 heads, 63 sectors/track, 1274 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup1-LogicalVol2: 16.3 GB, 16294871040 bytes
255 heads, 63 sectors/track, 1981 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
Mount the two volumes:
[root@DanCentOS65 daniel]# mkdir /mnt/LV1
[root@DanCentOS65 daniel]# mkdir /mnt/LV2
[root@DanCentOS65 daniel]# mount /dev/mapper/VolGroup1-LogicalVol1 /mnt/LV1
[root@DanCentOS65 daniel]# mount /dev/mapper/VolGroup1-LogicalVol2 /mnt/LV2
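Mounts made this way do not survive a reboot; to persist them they would go in /etc/fstab. A minimal sketch that only prints the candidate entries rather than editing the file (the ext4 filesystem type is an assumption based on the earlier setup; verify before appending to /etc/fstab):

```shell
# Dry run: print candidate /etc/fstab entries for the two logical volumes.
# ext4 is assumed; append to /etc/fstab by hand only after verifying.
lines=$(for n in 1 2; do
  printf '%s\t%s\text4\tdefaults\t0 0\n' \
    "/dev/mapper/VolGroup1-LogicalVol$n" "/mnt/LV$n"
done)
echo "$lines"
```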
Enter the directory and confirm the file is intact; after downloading it locally, the document opens normally:
[root@DanCentOS65 daniel]# cd /mnt/LV2
[root@DanCentOS65 LV2]# ll
total 7508
-rw-r--r--. 1 root root 7670041 Jul  6  2016 Azure.pdf
drwx------  2 root root   16384 Mar  9 15:02 lost+found
This completes the migration.
Tearing down the LVM environment:
Finally, a bit of destructive work: dismantle the whole environment.
First unmount the two volumes:
[root@DanCentOS65 daniel]# umount /dev/mapper/VolGroup1-LogicalVol1
[root@DanCentOS65 daniel]# umount /dev/mapper/VolGroup1-LogicalVol2
Then remove the Logical Volumes:
[root@DanCentOS65 daniel]# lvremove -f /dev/mapper/VolGroup1-LogicalVol1
  Logical volume "LogicalVol1" successfully removed
[root@DanCentOS65 daniel]# lvremove -f /dev/mapper/VolGroup1-LogicalVol2
  Logical volume "LogicalVol2" successfully removed
Remove the Volume Group:
[root@DanCentOS65 daniel]# vgremove VolGroup1
  Volume group "VolGroup1" successfully removed
Remove the Physical Volumes:
[root@DanCentOS65 daniel]# pvremove /dev/md125 /dev/md126 /dev/md127
  Labels on physical volume "/dev/md125" successfully wiped
  Labels on physical volume "/dev/md126" successfully wiped
  Labels on physical volume "/dev/md127" successfully wiped
Stop the RAID 5 arrays:
[root@DanCentOS65 daniel]# mdadm --stop /dev/md125
mdadm: stopped /dev/md125
[root@DanCentOS65 daniel]# mdadm --stop /dev/md126
mdadm: stopped /dev/md126
[root@DanCentOS65 daniel]# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
After stopping them, no RAID 5 devices remain:
[root@DanCentOS65 daniel]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
Note: normally you would also run the following two commands for a complete removal, but since no md devices are detected any more, they are not needed here.
mdadm --remove /dev/md125
mdadm --zero-superblock /dev/sdg1 /dev/sdg2
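If `cat /proc/mdstat` had still shown the arrays, the member superblocks would need wiping on every partition, not just the /dev/sdg pair shown above. A dry-run sketch that only prints the cleanup commands (the partition list reflects this machine's sdd-sdg layout and is an assumption; adjust it for your own disks):

```shell
# Dry run: print a zero-superblock command for every RAID member partition.
# The sdd-sdg partition list matches this walkthrough's layout; adjust as needed.
cmds=$(for part in /dev/sd{d,e,f,g}{1,2}; do
  echo "mdadm --zero-superblock $part"
done)
echo "$cmds"
```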