Server Fault Asked by David Janes on November 7, 2021
I have 25+ VMs running OL7.5. As they grow or get repurposed I sometimes need to add more disk to them.
I have added space to ol-root and ol-swap many times before, following, in summary:
fdisk to create partition
partprobe -s
pvcreate /dev/sdb1
vgextend ol /dev/sdb1
pvscan
lvextend /dev/mapper/ol-root /dev/sdb1
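In script form those steps look roughly like the sketch below. The device, VG, and LV names match this environment and would need adjusting elsewhere, and an explicit size option (`-l +100%FREE`) is added since `lvextend` requires one. It defaults to a dry run that only prints the commands; set `DRY_RUN=0` and run as root to actually apply them.

```shell
#!/bin/sh
# Sketch of the steps above in one script. /dev/sdb1, VG "ol" and
# ol-root match this environment; adjust for yours. Defaults to a dry
# run that only prints the commands; set DRY_RUN=0 (as root) to apply.
set -eu
PART=${PART:-/dev/sdb1}
VG=${VG:-ol}
LV=${LV:-/dev/mapper/ol-root}

run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run partprobe -s                 # re-read partition tables
run pvcreate "$PART"             # turn the partition into an LVM PV
run vgextend "$VG" "$PART"       # add the PV to the volume group
run pvscan                       # confirm the new free space
run lvextend -l +100%FREE "$LV"  # grow the LV into all free space
run xfs_growfs /                 # grow XFS at the mount point
```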
but on some VMs I can no longer add that ol-root space to the file system:
xfs_growfs /dev/mapper/ol-root
Error: xfs_growfs: /dev/mapper/ol-root is not a mounted XFS filesystem
I have done this many times before with no issue on the same VMs. This VM I built from scratch, and I have already added sda3 and sda4.
On this particular box I need to upgrade 11g to 18c. I need 10 GB of ol-swap and more disk space to install the 18c database.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 55G 0 disk
├─sda4 8:4 0 10G 0 part
│ └─ol-root 252:0 0 46G 0 lvm /
├─sda2 8:2 0 15G 0 part
│ ├─ol-swap 252:1 0 8G 0 lvm [SWAP]
│ └─ol-root 252:0 0 46G 0 lvm /
├─sda3 8:3 0 29G 0 part
│ ├─ol-swap 252:1 0 8G 0 lvm [SWAP]
│ └─ol-root 252:0 0 46G 0 lvm /
└─sda1 8:1 0 1G 0 part /boot
$ df -Th /dev/mapper/ol-root
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/ol-root xfs 46G 45G 1.5G 97% /
Any suggestions please?
Many thanks
Darren
P.S. The box I'm trying to expand is a clone of the VM above. After multiple attempts to grow the file system I killed the box. The source box shows the same issue with xfs_growfs even when I don't add an sdb partition. xfs_growfs should return a very different error when there is no space to allocate.
You need to target the mount point, not the LVM mapper path, in the xfs_growfs command. This seems to be a new "feature", since we used to be able to run xfs_growfs against the mapper.
But the man page specifically refers to mount-point:
xfs_growfs(8) System Manager's Manual xfs_growfs(8)
NAME xfs_growfs, xfs_info - expand an XFS filesystem
SYNOPSIS xfs_growfs [ -dilnrx ] [ -D size ] [ -e rtextsize ] [ -L size ] [ -m maxpct ] [ -t mtab ] [ -R size ] mount-point
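When scripting this, one way to avoid hard-coding the mount point is to resolve it from the device first. A minimal sketch, using findmnt from util-linux; the device path is the one from this VM, and the actual grow is left commented out:

```shell
#!/bin/sh
# Resolve the mount point backing an LV before calling xfs_growfs, since
# this xfsprogs version wants a mount point, not the mapper path.
# /dev/mapper/ol-root is the device from this VM; override DEV as needed.
DEV=${DEV:-/dev/mapper/ol-root}
if MP=$(findmnt -n -o TARGET "$DEV"); then
    echo "would grow XFS at: $MP"
    # xfs_growfs "$MP"          # uncomment to actually grow it
else
    echo "no mounted filesystem found on $DEV" >&2
fi
```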
Answered by Jason Richling on November 7, 2021
OK, so using -L and xfs_growfs has worked. The VM had 6 GB free on ol-root.
Commands from the actual VM, step by step: adding a new 16 GB HDD in vSphere.
# fdisk -l
Disk /dev/sda: 59.1 GB, 59055800320 bytes, 115343360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cbfb1
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 33554431 15727616 8e Linux LVM
/dev/sda3 33554432 94371839 30408704 8e Linux LVM
/dev/sda4 94371840 115343359 10485760 8e Linux LVM
Disk /dev/sdb: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/ol-root: 49.4 GB, 49379540992 bytes, 96444416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/ol-swap: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# ls /sys/class/scsi_device
1:0:0:0 2:0:0:0 2:0:1:0
# echo 1 > /sys/class/scsi_device/1:0:0:0/device/rescan
# echo 1 > /sys/class/scsi_device/2:0:0:0/device/rescan
# echo 1 > /sys/class/scsi_device/2:0:1:0/device/rescan
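The three rescan writes above can be generalised to every attached SCSI device, which is handy when you don't know which host:bus:target:lun the new vSphere disk appeared on. A sketch, demonstrated against a scratch directory; on the real VM you would call it with /sys/class/scsi_device as root:

```shell
#!/bin/sh
# Poke every SCSI device's rescan trigger under a given sysfs root.
set -eu
rescan_all() {
    for f in "$1"/*/device/rescan; do
        [ -e "$f" ] || continue   # glob matched nothing: no devices
        echo 1 > "$f"
    done
}

# Demo on a fake sysfs tree (stand-in for /sys/class/scsi_device).
FAKE=$(mktemp -d)
mkdir -p "$FAKE/2:0:1:0/device"
: > "$FAKE/2:0:1:0/device/rescan"
rescan_all "$FAKE"
cat "$FAKE/2:0:1:0/device/rescan"   # the trigger file now contains 1
```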
fdisk /dev/sdb, then the keystrokes n, p, 1, t, 1, 8e, w (new primary partition 1, set its type to 8e "Linux LVM", write)
# fdisk -l
Disk /dev/sda: 59.1 GB, 59055800320 bytes, 115343360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cbfb1
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 33554431 15727616 8e Linux LVM
/dev/sda3 33554432 94371839 30408704 8e Linux LVM
/dev/sda4 94371840 115343359 10485760 8e Linux LVM
Disk /dev/sdb: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x86979f60
Device Boot Start End Blocks Id System
/dev/sdb1 2048 33554431 16776192 8e Linux LVM
Disk /dev/mapper/ol-root: 49.4 GB, 49379540992 bytes, 96444416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/ol-swap: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
partprobe
# partprobe -s
/dev/sda: msdos partitions 1 2 3 4
/dev/sdb: msdos partitions 1
pvcreate
# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
vgextend
# vgextend ol /dev/sdb1
Volume group "ol" successfully extended
pvscan
# pvscan
PV /dev/sda2 VG ol lvm2 [<15.00 GiB / 0 free]
PV /dev/sda3 VG ol lvm2 [<29.00 GiB / 0 free]
PV /dev/sda4 VG ol lvm2 [<10.00 GiB / 0 free]
PV /dev/sdb1 VG ol lvm2 [<16.00 GiB / <16.00 GiB free]
Total: 4 [69.98 GiB] / in use: 4 [69.98 GiB] / in no VG: 0 [0 ]
lvextend
# lvextend -L+16380M /dev/mapper/ol-root /dev/sdb1
Size of logical volume ol/root changed from <45.99 GiB (11773 extents) to 61.98 GiB (15868 extents).
Logical volume ol/root successfully resized.
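The extent arithmetic in that lvextend output checks out: LVM uses 4 MiB extents here, so +16380M is 4095 extents, and 11773 + 4095 matches the 15868 extents reported.

```shell
#!/bin/sh
# Sanity-check of the lvextend numbers above (4 MiB extents).
echo $((16380 / 4))       # extents added by -L+16380M: 4095
echo $((11773 + 4095))    # new extent total:           15868
```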
xfs_growfs
# xfs_growfs /
meta-data=/dev/mapper/ol-root isize=256 agcount=14, agsize=877824 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0 rmapbt=0
= reflink=0
data = bsize=4096 blocks=12055552, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 12055552 to 16248832
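The xfs_growfs block counts also line up with the 16380 MiB added by lvextend: XFS is using 4 KiB blocks here (bsize=4096), i.e. 256 blocks per MiB.

```shell
#!/bin/sh
# Cross-check: blocks grown, converted back to MiB at 4 KiB per block.
echo $(((16248832 - 12055552) / 256))   # MiB grown: 16380
```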
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.8G 0 4.8G 0% /dev
tmpfs 4.8G 0 4.8G 0% /dev/shm
tmpfs 4.8G 9.4M 4.8G 1% /run
tmpfs 4.8G 0 4.8G 0% /sys/fs/cgroup
/dev/mapper/ol-root 62G 42G 21G 68% /
/dev/sda1 1014M 419M 596M 42% /boot
tmpfs 973M 12K 973M 1% /run/user/42
tmpfs 973M 0 973M 0% /run/user/0
I followed this: https://ma.ttias.be/increase-expand-xfs-filesystem-in-red-hat-rhel-7-cento7/
Why would it work before but not now ?
Answered by David Janes on November 7, 2021