
Resizing a partition and file system

This manual procedure must only be undertaken by an experienced Linux system administrator.

These are potentially destructive operations, and there is a risk that the datastore or filesystems will become unrecoverable. Any errors or warnings that are not documented here must be resolved before continuing to the next step.

The partition tables in the examples below are from a clean Discovery 11.2 deployment. Appliances upgraded from earlier versions have a /usr partition, and their partition numbers differ.


This procedure assumes the following:

  • the datastore is stored in /mnt/addm/db_data
  • /mnt/addm/db_data is mounted on the /dev/sdb2 partition
    • this is the result of moving the datastore, including the creation of a swap partition, to a second disk using Disk Management
  • LVM has not been used to partition the disk

The following examples use the host testhost, and the tideway or root user as applicable.

To resize a partition and file system

  1. Create a backup of the appliance. The backup should be stored on a different machine.
  2. Log in to the appliance command line as the tideway user.
  3. Prevent the Discovery services from starting on reboot. Enter the following commands:

    [tideway@testhost ~]$ sudo /sbin/chkconfig appliance off
    [tideway@testhost ~]$ sudo /sbin/chkconfig omniNames off
    [tideway@testhost ~]$ sudo /sbin/chkconfig cluster off
    [tideway@testhost ~]$ sudo /sbin/chkconfig tideway off

    Example output showing the current partition sizes:

    [root@testhost ~]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda9        16G  5.1G  9.4G  35% /
    tmpfs           3.9G   32K  3.9G   1% /dev/shm
    /dev/sda1       239M   35M  192M  16% /boot
    /dev/sda6       1.5G  2.3M  1.4G   1% /home
    /dev/sda3       1.9G   46M  1.8G   3% /tmp
    /dev/sda5       1.7G  141M  1.5G   9% /var
    /dev/sda7       852M   13M  795M   2% /var/log
    /dev/sda8       852M   13M  795M   2% /var/log/audit
    /dev/sdb2       187G  2.8G  174G   2% /mnt/addm/db_data
    [root@testhost ~]#
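
    Before shutting down, you can optionally confirm that the services are now disabled at boot by listing them with chkconfig; each should show "3:off". The output below is illustrative rather than a verbatim capture.

    [tideway@testhost ~]$ sudo /sbin/chkconfig --list | grep "appliance\|omniNames\|cluster\|tideway"
    appliance       0:off   1:off   2:off   3:off   4:off   5:off   6:off
    cluster         0:off   1:off   2:off   3:off   4:off   5:off   6:off
    omniNames       0:off   1:off   2:off   3:off   4:off   5:off   6:off
    tideway         0:off   1:off   2:off   3:off   4:off   5:off   6:off
    [tideway@testhost ~]$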
  4. Shut down the Discovery virtual machine and wait for it to power off completely.
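
    For example, assuming you shut the machine down from the appliance command line rather than from the hypervisor console:

    [tideway@testhost ~]$ sudo /sbin/shutdown -h now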

  5. Increase the size of the disk. In the examples, /dev/sdb is the disk that is enlarged.
  6. Start the Discovery virtual machine.
  7. Log in to the appliance command line as the tideway user and become the root user.
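
    For example, assuming the tideway user has sudo rights (as used in the other commands on this page):

    [tideway@testhost ~]$ sudo su -
    [root@testhost ~]#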
  8. Unmount the datastore partition:

    [root@testhost ~]# umount /mnt/addm/db_data
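
    If the unmount fails with a "device is busy" error, something is still accessing the filesystem. One way to identify it, using the standard fuser utility, is shown below; stop whatever it reports and retry the unmount.

    [root@testhost ~]# fuser -vm /mnt/addm/db_data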
  9. Perform a check on the datastore filesystem:

    [root@testhost ~]# e2fsck -f /dev/sdb2
    e2fsck 1.41.12 (17-May-2010)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/sdb2: 850/25247744 files (3.2% non-contiguous), 2344921/50475264 blocks
    [root@testhost ~]#

    Any errors reported must be resolved before continuing.

  10. Disable any swap on the same disk:

    1. Find any swap partitions on /dev/sdb:

      [root@testhost ~]# swapon --summary
      Filename                                Type            Size    Used    Priority
      /dev/sda2                               partition       8388604 0       -1
      /dev/sdb1                               partition       7811068 0       -2
      [root@testhost ~]#
    2. The swap partition /dev/sdb1 shares the disk /dev/sdb with the datastore, so it must be disabled.

      [root@testhost ~]# swapoff /dev/sdb1
      [root@testhost ~]# swapon --summary
      Filename                                Type            Size    Used    Priority
      /dev/sda2                               partition       8388604 0       -1
      [root@testhost ~]#
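
      The swap partition itself is not deleted by this procedure, so its entry remains in /etc/fstab and swap on /dev/sdb1 should be re-enabled automatically when the appliance is rebooted later. To confirm the entry is still present:

      [root@testhost ~]# grep swap /etc/fstab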
  11. Recreate the partition, using all available space at the end of the disk.

    1. Find the required disk information. Take note of the partition Number and the Start sector from the output, which are 2 and 15626240 respectively in the example. The example shows user input after the (parted) prompt.

      [root@testhost ~]# parted /dev/sdb
      GNU Parted 2.1
      Using /dev/sdb
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) unit s
      (parted) print
      Model: VMware Virtual disk (scsi)
      Disk /dev/sdb: 536870912s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      
      Number  Start      End         Size        File system     Name     Flags
       1      2048s      15624191s   15622144s   linux-swap(v1)  primary
       2      15626240s  419428351s  403802112s  ext4            primary
      

      You may see the error and warning shown below. They are expected when the disk size has been increased; enter Fix (followed by Enter) in response to both prompts. The example shows user input after the Fix/Cancel prompt and after the Warning: text.

      Error: The backup GPT table is not at the end of the disk, as it should be.  This
      might mean that another operating system believes the disk is smaller.  Fix, by
      moving the backup to the end (and removing the old backup)?
      Fix/Cancel? Fix
      Warning: Not all of the space available to /dev/sdb appears to be used, you can
      fix the GPT to use all of the space (an extra 117440512 blocks) or continue with
      the current setting? Fix
    2. Remove the partition Number noted above and find the new End of the free space, 536870878 in the example. The example shows user input after the (parted) prompt.

      (parted) rm 2
      (parted) print free
      Model: VMware Virtual disk (scsi)
      Disk /dev/sdb: 536870912s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      
      Number  Start      End         Size        File system     Name     Flags
              34s        2047s       2014s       Free Space
       1      2048s      15624191s   15622144s   linux-swap(v1)  primary
              15624192s  536870878s  521246687s  Free Space
      
    3. Create a new partition, using the Start and End sectors found in the examples above. The example shows user input after the (parted) prompt.

      (parted) mkpart primary 15626240 536870878
      (parted) print
      Model: VMware Virtual disk (scsi)
      Disk /dev/sdb: 536870912s
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      
      Number  Start      End         Size        File system     Name     Flags
       1      2048s      15624191s   15622144s   linux-swap(v1)  primary
       2      15626240s  536870878s  521244639s  ext4            primary
      
      (parted) quit
      Information: You may need to update /etc/fstab.
      
      [root@testhost ~]#
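
      The kernel normally picks up the rewritten partition table as soon as parted exits. If a later command reports that /dev/sdb2 does not exist, you can ask the kernel to re-read the table; one way, using the partprobe utility, is:

      [root@testhost ~]# partprobe /dev/sdb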
    4. Check the partition's filesystem for errors.

      [root@testhost ~]# e2fsck -f /dev/sdb2
      e2fsck 1.41.12 (17-May-2010)
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      /dev/sdb2: 850/25247744 files (3.2% non-contiguous), 2344921/50475264 blocks
      [root@testhost ~]#

      Any errors reported must be resolved before continuing.

    5. Resize the /dev/sdb2 filesystem to use the entire partition:

      [root@testhost ~]# resize2fs /dev/sdb2
      resize2fs 1.41.12 (17-May-2010)
      Resizing the filesystem on /dev/sdb2 to 65155579 (4k) blocks.
      The filesystem on /dev/sdb2 is now 65155579 blocks long.
      
      [root@testhost ~]#
    6. Check the partition's filesystem to ensure the resize operation did not cause any corruption.

      [root@testhost ~]# e2fsck -f /dev/sdb2
      e2fsck 1.41.12 (17-May-2010)
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      /dev/sdb2: 850/32587776 files (3.2% non-contiguous), 2804569/65155579 blocks
      [root@testhost ~]#
    7. Confirm that the /etc/fstab file will mount the resized partition correctly. The blkid command shows the UUID of the partition, which must match the UUID used for /mnt/addm/db_data in /etc/fstab. If the UUIDs differ, update /etc/fstab (a sketch of one way to do this is shown after the example output below). If /etc/fstab refers to /dev/sdb2 by device name rather than by UUID, no update is needed.

      [root@testhost ~]# blkid /dev/sdb2
      /dev/sdb2: UUID="fe656744-9995-4494-87f6-90eed0df80e5" TYPE="ext4"
      [root@testhost ~]# cat /etc/fstab
      #
      # /etc/fstab
      # Created by tw_disk_utils on Tue Sep 12 12:04:12 2017
      #
      UUID=1576356f-176a-4b0e-b53f-c49a704715f2 /     ext4    defaults        1 1
      UUID=3416460c-f809-493e-96a3-5ae610ed4914 /boot ext4    defaults,nosuid 1 2
      UUID=7eb6064b-221b-4717-86af-9de3cc1bdd24 /home ext4    defaults,nosuid 1 2
      UUID=c7aa3648-5593-4d43-a3b0-b755748b8016 /tmp  ext4    defaults,nosuid,noexec 1 2
      UUID=2eb9f3c8-1362-4763-b932-ea747ed2e813 /var  ext4    defaults,noexec 1 2
      UUID=7ac0ea8b-4457-4b79-a4d9-99e865089361 /var/log/audit        ext4    defaults,noexec 1 2
      UUID=daecc7b0-c3ee-4de2-a85a-aa6f94e38023 /var/log      ext4    defaults,noexec 1 2
      UUID=4d3b7113-caa3-4956-9cae-653070a32599 swap  swap    defaults        0 0
      UUID=749ba802-545c-4c37-bc12-0adcdf12133e swap  swap    defaults        0 0
      UUID=fe656744-9995-4494-87f6-90eed0df80e5 /mnt/addm/db_data     ext4    defaults        1 2
      # Old fstab entries related to unidentified devices (not created by tw_disk_utils):
      tmpfs                   /dev/shm                tmpfs   defaults        0 0
      devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
      sysfs                   /sys                    sysfs   defaults        0 0
      proc                    /proc                   proc    defaults        0 0
      [root@testhost ~]#
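
      If the UUIDs do not match, update /etc/fstab before mounting. The commands below are a minimal sketch of one way to do this with sed; OLD_UUID is a placeholder for the UUID currently listed for /mnt/addm/db_data, the new UUID is the example value reported by blkid above, and the copy of the file gives you something to restore if the edit goes wrong.

      [root@testhost ~]# cp /etc/fstab /etc/fstab.bak
      [root@testhost ~]# # OLD_UUID is a placeholder - substitute the UUID currently in /etc/fstab
      [root@testhost ~]# sed -i 's/OLD_UUID/fe656744-9995-4494-87f6-90eed0df80e5/' /etc/fstab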
    8. Mount the new partition using mount -a, which mounts all filesystems listed in /etc/fstab. The example shows the mounted filesystems before and after running the command.

      [root@testhost ~]# mount
      /dev/sda9 on / type ext4 (rw)
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
      /dev/sda1 on /boot type ext4 (rw,nosuid)
      /dev/sda6 on /home type ext4 (rw,nosuid)
      /dev/sda3 on /tmp type ext4 (rw,noexec,nosuid)
      /dev/sda5 on /var type ext4 (rw,noexec)
      /dev/sda7 on /var/log type ext4 (rw,noexec)
      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
      /dev/sda8 on /var/log/audit type ext4 (rw,noexec)
      [root@testhost ~]#
      [root@testhost ~]# mount -a
      [root@testhost ~]#
      [root@testhost ~]# mount
      /dev/sda9 on / type ext4 (rw)
      proc on /proc type proc (rw)
      sysfs on /sys type sysfs (rw)
      devpts on /dev/pts type devpts (rw,gid=5,mode=620)
      tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
      /dev/sda1 on /boot type ext4 (rw,nosuid)
      /dev/sda6 on /home type ext4 (rw,nosuid)
      /dev/sda3 on /tmp type ext4 (rw,noexec,nosuid)
      /dev/sda5 on /var type ext4 (rw,noexec)
      /dev/sda7 on /var/log type ext4 (rw,noexec)
      none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
      /dev/sda8 on /var/log/audit type ext4 (rw,noexec)
      /dev/sdb2 on /mnt/addm/db_data type ext4 (rw)
      [root@testhost ~]#
    9. Confirm the new size:

      [root@testhost ~]# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda9        16G  5.1G  9.4G  35% /
      tmpfs           3.9G     0  3.9G   0% /dev/shm
      /dev/sda1       239M   35M  192M  16% /boot
      /dev/sda6       1.5G  2.3M  1.4G   1% /home
      /dev/sda3       1.9G   46M  1.8G   3% /tmp
      /dev/sda5       1.7G  139M  1.5G   9% /var
      /dev/sda7       852M   42M  766M   6% /var/log
      /dev/sda8       852M   25M  783M   4% /var/log/audit
      /dev/sdb2       241G  2.8G  226G   2% /mnt/addm/db_data
      [root@testhost ~]#
  12. Reboot the Discovery virtual machine.
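
    For example, assuming you reboot from the appliance command line rather than from the hypervisor:

    [root@testhost ~]# /sbin/shutdown -r now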

  13. Log in to the appliance command line as the tideway user.

  14. Confirm after restart that the partition has mounted as expected:

    [tideway@testhost ~]$ sudo /bin/mount
    /dev/sda9 on / type ext4 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
    /dev/sda1 on /boot type ext4 (rw,nosuid)
    /dev/sda6 on /home type ext4 (rw,nosuid)
    /dev/sda3 on /tmp type ext4 (rw,noexec,nosuid)
    /dev/sda5 on /var type ext4 (rw,noexec)
    /dev/sda7 on /var/log type ext4 (rw,noexec)
    /dev/sdb2 on /mnt/addm/db_data type ext4 (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    /dev/sda8 on /var/log/audit type ext4 (rw,noexec)
    [tideway@testhost ~]$
    
  15. Start the tideway services:

    [tideway@testhost ~]$ sudo /sbin/service appliance start
    Local appliance start up tasks
        Clearing appliance temporary files                     [  OK  ]
    [tideway@testhost ~]$ sudo /sbin/service omniNames start
    Starting omniNames                                         [  OK  ]
    [tideway@testhost ~]$ sudo /sbin/service cluster start
    Starting local Discovery cluster services
        Starting Privileged service:                           [  OK  ]
        Starting Cluster Manager service:                      [  OK  ]
    [tideway@testhost ~]$ sudo /sbin/service tideway start
    Starting local BMC Discovery application services
        Starting Security service:                             [  OK  ]
        Starting Model service:                                [  OK  ]
        Starting Vault service:                                [  OK  ]
        Starting Discovery service:                            [  OK  ]
        Starting Mainframe Provider service:                   [  OK  ]
        Starting SQL Provider service:                         [  OK  ]
        Starting CMDB Sync (Exporter) service:                 [  OK  ]
        Starting CMDB Sync (Transformer) service:              [  OK  ]
        Starting Reasoning service:                            [  OK  ]
        Starting Tomcat:                                       [  OK  ]
        Starting Reports service:                              [  OK  ]
        Starting External API service:                         [  OK  ]
        Starting Application Server service:                   [  OK  ]
        Updating cron tables:                                  [  OK  ]
        Updating baseline:                                     [  OK  ]
    [tideway@testhost ~]$
  16. Log in to the Discovery UI. If the UI is available and not showing any errors, re-enable the services on startup. The chkconfig --list output should show each service enabled at runlevel 3 ("3:on").

    [tideway@testhost ~]$ sudo /sbin/chkconfig appliance on
    [tideway@testhost ~]$ sudo /sbin/chkconfig omniNames on
    [tideway@testhost ~]$ sudo /sbin/chkconfig cluster on
    [tideway@testhost ~]$ sudo /sbin/chkconfig tideway on
    [tideway@testhost ~]$ sudo /sbin/chkconfig --list | grep "appliance\|omniNames\|cluster\|tideway"
    appliance       0:off   1:off   2:off   3:on    4:off   5:off   6:off
    cluster         0:off   1:off   2:off   3:on    4:off   5:off   6:off
    omniNames       0:off   1:off   2:off   3:on    4:off   5:off   6:off
    tideway         0:off   1:off   2:off   3:on    4:off   5:off   6:off
    [tideway@testhost ~]$
  17. Finally, reboot the appliance.

