How to extend a virtual drive (VMDK) and physical volume (PV) in an LVM setup on a VMware VM


Last updated on August 12th, 2022


The VMs (running in a VMware environment) serving as LXC hosts were set up in a way that the volume group (VG) used for the containers can be dynamically grown without downtime.

The classic and probably easiest way is to add a new virtual disk to the VM, create a new PV on that disk, and use vgextend to add the new PV to the existing VG.
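
As a sketch, assuming the freshly added disk shows up as /dev/sdc (a hypothetical name), that route would look roughly like this:

# pvcreate /dev/sdc
# vgextend vglxc /dev/sdc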

A nicer method however (at least imho) is to grow the virtual disk/PV that is already in use.

Before I touched anything, I checked the currently available space of the volume group:

# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  vglxc      1   5   0 wz--n- 50.00g 6.00g
  vgsystem   1   2   0 wz--n-  6.61g 1.96g

1. In VMware you can increase the disk's size, given you still have free space in the datastore.
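
If you'd rather script this step than click through the vSphere client, the govc CLI can grow a disk; a sketch, assuming a hypothetical VM name lxchost1 and that the grown disk is labeled "Hard disk 2":

# govc vm.disk.change -vm lxchost1 -disk.label "Hard disk 2" -size 70G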

2. You need to tell your OS (in my case Ubuntu 14.04) to rescan the SCSI bus to detect the change. As I increased the second disk, which shows up as /dev/sdb, I ran the following command:

# echo 1 > /sys/class/scsi_device/2\:0\:1\:0/device/rescan

If you're not sure which SCSI address your disk (again, in my case sdb) has, you can double-check it via /sys/block/sdb/device:

# ll /sys/block/sdb/device
lrwxrwxrwx 1 root root 0 Jun 28 09:08 /sys/block/sdb/device -> ../../../2:0:1:0
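
If the lsscsi tool happens to be installed, it prints the same mapping of SCSI addresses to block devices (output sketched for this host):

# lsscsi
[2:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
[2:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb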

On newer Linux releases you can run the echo command directly against the drive (sdb for example):

# echo 1 > /sys/block/sdb/device/rescan
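
Alternatively, if the sg3-utils package is installed, its rescan-scsi-bus.sh script can do the rescan for you; the -s option makes it look for resized disks:

# rescan-scsi-bus.sh -s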

3. Now you can verify that the disk's size has changed. In dmesg you should see a message about the detected capacity change:

# dmesg | tail
[17006442.602408] sdb: detected capacity change from 53687091200 to 75161927680

You can also use fdisk to check the new capacity:

# fdisk -l /dev/sdb

Disk /dev/sdb: 75.2 GB, 75161927680 bytes
255 heads, 63 sectors/track, 9137 cylinders, total 146800640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

(the previous size was 20GB smaller: 53687091200 bytes, i.e. 50 GiB)

4. All you need to do now is tell LVM that the physical volume (/dev/sdb in my case) has changed its size:

# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
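
To double-check on the LVM side, pvs should now report the grown size for this PV; sketched output, with values matching the vgs numbers in the next step:

# pvs /dev/sdb
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sdb   vglxc lvm2 a--  70.00g 26.00g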

5. Check the available space of the volume group again:

# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  vglxc      1   5   0 wz--n- 70.00g 26.00g
  vgsystem   1   2   0 wz--n-  6.61g  1.96g

Voilà. 20GB more space for my containers without downtime.
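
From here, handing the new space to one of the container LVs is the usual lvextend/filesystem-grow pair; a minimal sketch, assuming a hypothetical LV named lxc-web1 carrying an ext4 filesystem (resize2fs can grow ext4 online):

# lvextend -L +10G /dev/vglxc/lxc-web1
# resize2fs /dev/vglxc/lxc-web1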

Note: This only works "that easily" because I used the whole disk /dev/sdb as the physical volume; there are no partitions on sdb. If there were, it would be mandatory to grow the partition as well, as shown in the sketch below.
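
For that partitioned case, the resize would look roughly like this, assuming the PV sits on a hypothetical first partition /dev/sdb1 and growpart (from the cloud-guest-utils package on Ubuntu) is available:

# growpart /dev/sdb 1
# pvresize /dev/sdb1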

