1. Overview
This document describes the procedure for performing LVM-related activities on AIX servers in the LBG environment, covering both standalone and HACMP systems.
2. Important points to Remember
- Always adhere to the naming standards when working with VG/LV
- Make sure hcheck_interval is set to 60 on all VIO clients (VIOC) before using the disks
- Please make sure to read Notes in each section and follow them.
- Adding disks to RAC nodes is not covered in this document; it will be covered in a separate document along with the procedure to create ASM/OCR and redo devices
3. Mirroring Standards
- Both standalone and HACMP nodes mirror rootvg within the same site; data VGs are not mirrored.
- All shared volume groups are cross-site mirrored.
- rootvg and local volume groups are locally mirrored.
- Quorum should be disabled for mirrored VGs and enabled for non-mirrored VGs.
Based on the above standards, follow the respective sections below for LVM-related activities.
4. LVM Naming Standards
In general, the USPP team creates the initial VG/LV as per their tick list based on the role type (names should start with lo_/sh_ and end with the VG major number), except for Datastage servers, where IDD creates a specific volume group per project.
Please follow the link below for the Datastage LVM standards.
VG standards
Local VG: lo_<Role>vg_<VG Major Number>
Where
Role – is based on the role (was, bas, ora, http)
VG Major Number – is based on the following
ELAB – starts with 101 and so on
P2 Node – starts with 201 and so on
P3 Node – starts with 301 and so on
Example: lo_basvg_201
Shared VG: sh_<Role><Cluster ID>vg_<VG Major Number>
Where
Role – is based on the role type
Cluster ID – is the two digit number from the cluster ID
VG Major Number – is based on the following
P2 Node – starts with 201 and so on
P3 Node – starts with 301 and so on
Note: Shared volume group names must not exceed 15 characters.
Example: sh_basc84vg_201
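A minimal shell sketch of the naming rules above, including the 15-character check for shared VG names; the role, cluster ID, and major number values are examples only:

```shell
#!/bin/sh
# Sketch of the VG naming standards above; all input values are examples.

make_local_vg()  { printf 'lo_%svg_%s\n' "$1" "$2"; }          # lo_<Role>vg_<VG Major Number>
make_shared_vg() { printf 'sh_%sc%svg_%s\n' "$1" "$2" "$3"; }  # sh_<Role><ClusterID>vg_<VG Major Number>

vg=$(make_shared_vg bas 84 201)
echo "$vg"

# Shared VG names must not exceed 15 characters (see note above)
if [ ${#vg} -gt 15 ]; then
    echo "ERROR: $vg exceeds 15 characters" >&2
fi
```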
Quorum Standards
Quorum should be enabled for non-mirrored volume groups
Quorum should be disabled for mirrored volume groups
5. Adding additional storage for VIOC
Request additional storage from the storage P&C team by providing the storage request form (template below), then follow the procedure to map the LUNs from the VIOS to the VIOC.
6. Adding additional storage for Non-VIOC
Request additional storage from the storage P&C team by providing the storage request form, including the WWPNs of the server that requires the additional storage.
Once the LUNs have been assigned by the storage team, log in to the server and run cfgmgr to detect the new LUNs, then continue.
7. ELAB/Assurance Systems
If there are enough disks to create the volume group, create the new VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# mkvg -G -s 256 -V <major no> -y <vgname> <pvname1> <pvname2> ...
Note:
ENABLE/DISABLE quorum as per quorum standards for the newly created VGs
Make sure to do necessary modifications to check_lvm_mirrors.ksh script to ignore any local volume groups from checking cross site mirroring
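As a worked dry-run example of the mkvg command above; the VG name, major number, and hdisk names are hypothetical, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of creating an ELAB local VG per this document's standards.
# VG name, major number and hdisk names are hypothetical examples.
VG=lo_oravg_101     # ELAB major numbers start at 101
MAJOR=101
echo "mkvg -G -s 256 -V $MAJOR -y $VG hdisk4 hdisk5"
echo "chvg -Qn $VG"   # locally mirrored VG: disable quorum per the standards
```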
If there are enough disks in the system, extend the VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# extendvg <vgname> <pvname1> <pvname2> ...
If there is enough space in the VG, create the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# mklv -y <lvname> -t jfs2 -c2 -s's' <vgname> <no.of.LP> <1st copy disk> <2nd copy disk>   (rootvg)
# mklv -y <lvname> -t jfs2 -c1 <vgname> <no.of.LP> <pvname>   (non-rootvg)
# crfs -v jfs2 -d <lvname> -a logname=INLINE -m <mount point> -Ay -t yes
Note:
1. After creating the file system, make sure to set the correct permission/ownership
2. The default block size is 4096 bytes. If the project requires a different block size, specify it as below:
# crfs -v jfs2 -d <lvname> -a logname=INLINE -m <mount point> -Ay -t yes -a agblksize=<new blk size>
If there is enough space in the VG, extend the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# chfs -a size=+<size> <FS name>
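Before extending, it can help to confirm the VG actually has free PPs. A dry-run sketch; the VG name, mount point, and size are hypothetical, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: confirm the VG has free PPs before growing a file system.
# VG name, mount point and size are hypothetical examples.
VG=lo_oravg_101
echo "lsvg $VG | grep 'FREE PPs'"      # check free PPs first
echo "chfs -a size=+5G /lo_ora/data"   # then grow the file system by 5 GB
```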
Before removing any file system, take a backup of its contents and obtain the appropriate approvals before proceeding.
# unmount <FS name>
# rmfs -r <FS name>
Before you do this, make sure the PV you are going to reduce is free, then execute the command below:
# reducevg <vgname> <pvname>
Before you do this, make sure all the file systems in the VG are backed up in case they are needed in future. If they are not needed, remove all the file systems and then the VG as below:
# lsvgfs <vgname> | xargs -I {} unmount {}
# lsvgfs <vgname> | xargs -I {} rmfs -r {}
# reducevg <vgname> <pvname1> <pvname2> <pvname3> ...
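The removal sequence above can be collected into a guarded script. The VG and disk names are hypothetical, and with DRY_RUN=1 every command is only printed, so nothing is unmounted or removed by accident:

```shell
#!/bin/sh
# Guarded sketch of the VG removal sequence. VG and disk names are
# hypothetical; DRY_RUN=1 prints each command instead of running it.
DRY_RUN=1
VG=lo_oravg_101
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Unmount and remove every file system in the VG, then remove the VG
for fs in $(lsvgfs "$VG" 2>/dev/null); do
    run unmount "$fs"
    run rmfs -r "$fs"
done
run reducevg "$VG" hdisk4 hdisk5
```

Set DRY_RUN=0 only after reviewing the printed commands and confirming backups and approvals are in place.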
8. PROD Systems Local VG/rootvg:
If there are enough disks to create the volume group, create the new VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# mkvg -G -s 256 -V <major no> -y <vgname> <1st copy disk> <2nd copy disk> ...
Note:
ENABLE/DISABLE quorum as per quorum standards for the newly created VGs
Make sure to do necessary modifications to check_lvm_mirrors.ksh script to ignore any local volume groups from checking cross site mirroring
If there are enough disks in the system, extend the VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# extendvg <vgname> <1st copy disk> <2nd copy disk> ...
If there is enough space in the VG, create the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# mklv -y <lvname> -t jfs2 -c2 -s's' <vgname> <no.of.LP> <1st copy disk> <2nd copy disk>
# crfs -v jfs2 -d <lvname> -a logname=INLINE -m <mount point> -Ay -t yes
Note:
1. After creating the file system, make sure to set the correct permission/ownership
2. The default block size is 4096 bytes. If the project requires a different block size, specify it as below:
# crfs -v jfs2 -d <lvname> -a logname=INLINE -m <mount point> -Ay -t yes -a agblksize=<new blk size>
If there is enough space in the VG, extend the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# extendlv <lvname> <no.of.LPs> <1st copy disk> <2nd copy disk>
# chfs -a size=+<size> <FS name>
Note: ENABLE/DISABLE quorum as per quorum standards for the newly created VGs
Before removing any file system, take a backup of its contents and obtain the appropriate approvals before proceeding.
# unmount <FS name>
# rmfs -r <FS name>
Before you do this, make sure the PV you are going to reduce is free, then execute the command below:
# reducevg <vgname> <pvname>
Before you do this, make sure all the file systems in the VG are backed up in case they are needed in future. If they are not needed, remove all the file systems and then the VG as below:
# lsvgfs <vgname> | xargs -I {} unmount {}
# lsvgfs <vgname> | xargs -I {} rmfs -r {}
# reducevg <vgname> <pvname1> <pvname2> <pvname3> ...
9. PROD Systems – Shared VG:
Important
Before continuing, make sure the LUNs are assigned to both cluster nodes and the PVID is visible. If the LUNs have not been added, go to the section on adding additional storage. If the PVID is not visible, use the following command (on the primary node) to set it:
# /usr/es/sbin/cluster/sbin/cl_assign_pvids <pvname>
After assigning the PVID, run discovery from the primary node:
# /usr/es/sbin/cluster/utilities/clharvest_vg -w   (run discovery)
Then add the newly added LUNs to their respective sites as below and sync the cluster:
# /usr/es/sbin/cluster/utilities/cllssite   (identify the site names)
# /usr/es/sbin/cluster/utilities/cl_xslvmm -s'<site>' -a'<pvid> <pvid> <pvid> <pvid>'
# /usr/es/sbin/cluster/utilities/cldare -rt -V 'normal'   (sync the cluster)
Note: In some ELAB systems, HACMP is configured but without cross-site mirroring; instead, mirroring is within the same site. In such scenarios we still need to add the disks to the respective sites based on the I/O group they belong to. There should be two different I/O groups, and each LUN should be added to its respective I/O group before performing LVM operations.
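The PVID/discovery/site-assignment sequence above, collected into one dry-run sketch; the PVID, site name, and disk name are placeholders, and every command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of onboarding new LUNs on a cross-site cluster.
# PVID, site name and disk name are placeholders only.
CL=/usr/es/sbin/cluster
echo "$CL/sbin/cl_assign_pvids hdisk6"        # on the primary node
echo "$CL/utilities/clharvest_vg -w"          # run discovery
echo "$CL/utilities/cllssite"                 # identify the site names
echo "$CL/utilities/cl_xslvmm -s'siteA' -a'00c1234567890abc'"
echo "$CL/utilities/cldare -rt -V 'normal'"   # sync the cluster
```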
If there are enough disks to create the volume group, create the new VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# /usr/es/sbin/cluster/sbin/cl_mkvg -cspoc "-n<nodea>,<nodeb>" -C -G -s 256 -l true -V <major no> -y <vgname> <1st copy disk> <2nd copy disk> ...
Note: After creating the VG, add it to the RG configuration and sync the cluster as below. Wait a few minutes for the VG to be varied on.
# smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resource Group Configuration -> Change/Show Resources and Attributes for a Resource Group
Select the RG name to be changed and press enter,
Then add the new VG separated by space in the following parameter and press enter
Volume Groups [sh_basc84vg_201 ]
Then sync the cluster as below
# /usr/es/sbin/cluster/utilities/cldare -rt -V 'normal'   (sync the cluster)
Note:
ENABLE/DISABLE quorum as per quorum standards for the newly created VGs
Make sure to do necessary modifications to check_lvm_mirrors.ksh script to ignore any local volume groups from checking cross site mirroring
If there are enough disks in the system, extend the VG as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc "-n<nodea>,<nodeb>" -R<nodea> <vgname> <1st copy disk> <2nd copy disk>
If there is enough space in the VG, create the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
IMPORTANT: Disks must be mirrored across sites P2 and P3. If multiple disks are added at the same time, there is no guarantee the copies will be distributed across sites; both copies could end up in the same site. To avoid this, always add a single pair of disks at a time because of cross-site mirroring. If more than one pair of disks is needed, use the extendlv option as per section 9.4. If you do not fully understand this, seek advice from the team.
# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc "-n<nodea>,<nodeb>" -R<nodea> -tjfs2 -c2 -ss -y<lvname> <vgname> <no.of.LPs> <1st copy disk> <2nd copy disk>
Note: The -R option must be used correctly: use the corresponding hostname (nodea is where the VG is active, nodeb is where the RG is passive).
# /usr/es/sbin/cluster/sbin/cl_crfs -cspoc "-n<nodea>,<nodeb>" -v jfs2 -d<lvname> -m <mount point> -t yesstr -a logname=INLINE
Note:
1. After creating the file system, make sure to set the correct permission/ownership
2. The default block size is 4096 bytes. If the project requires a different block size, specify it as below:
# /usr/es/sbin/cluster/sbin/cl_crfs -cspoc "-n<nodea>,<nodeb>" -v jfs2 -d<lvname> -m <mount point> -t yesstr -a logname=INLINE -a agblksize=<new blk size>
If there is enough space in the VG, extend the file system as below. Otherwise, request additional storage from the storage P&C team and follow the steps in the section on adding additional storage.
IMPORTANT: Disks must be mirrored across sites P2 and P3. If multiple disks are added at the same time, there is no guarantee the copies will be distributed across sites; both copies could end up in the same site. To avoid this, always add a single pair of disks at a time because of cross-site mirroring. If you do not fully understand this, seek advice from the team.
# /usr/es/sbin/cluster/sbin/cl_extendlv -cspoc "-n<nodea>,<nodeb>" -R<nodea> <lvname> <no.of.LPs> <1st copy disk> <2nd copy disk>
# chfs -a size=+<size> <FS name>
Before removing any file system, take a backup of its contents and obtain the appropriate approvals before proceeding.
# unmount <FS name>
# /usr/es/sbin/cluster/sbin/cl_rmfs -cspoc "-n<nodea>,<nodeb>" -r <mount point>
Note: Please make sure entries for the removed file systems are also removed from /etc/filesystems.
Before you do this, make sure the PV you are going to reduce is free, then execute the command below:
# /usr/es/sbin/cluster/sbin/cl_reducevg -cspoc "-n<nodea>,<nodeb>" -R<nodea> <vgname> <1st copy disk> <2nd copy disk> ...
Before you do this, make sure all the file systems in the VG are backed up in case they are needed in future. If they are not needed, remove all the file systems and then the VG as below:
# /usr/es/sbin/cluster/sbin/cl_rmfs -cspoc "-n<nodea>,<nodeb>" -r <mount point>   (repeat for every FS in the VG)
# /usr/es/sbin/cluster/sbin/cl_reducevg -cspoc "-n<nodea>,<nodeb>" -R<nodea> <vgname> <1st copy disk> <2nd copy disk> ...
Note: After removing the VG, remove it from the RG configuration and sync the cluster as below.
# smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resource Group Configuration -> Change/Show Resources and Attributes for a Resource Group
Select the RG name to be changed and press enter,
Then remove the VG which was removed from the following parameter and press enter
Volume Groups [sh_basc84vg_201 ]
Then sync the cluster as below
# /usr/es/sbin/cluster/utilities/cldare -rt -V 'normal'   (sync the cluster)
10. Miscellaneous
In some cases we cannot extend a file system because the LV has reached its maximum number of LPs, or because its upper bound (maximum number of PVs) has been reached. In such cases, use the following commands to change the values:
# chlv -x <max LPs> <lvname>
# /usr/es/sbin/cluster/sbin/cl_chlv -cspoc "-n<nodea>,<nodeb>" -x <no of LPs> <lvname>
# chlv -u <upperbound> <lvname>
# /usr/es/sbin/cluster/sbin/cl_chlv -cspoc "-n<nodea>,<nodeb>" -u <no of PVs> <lvname>
# chvg -Q[y/n] <vgname>   (standalone systems; select y/n per the quorum standards)
# /usr/es/sbin/cluster/sbin/cl_chvg -cspoc "-n<nodea>,<nodeb>" -Q[y/n] <vgname>   (HA nodes; select y/n per the quorum standards)
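When raising the maximum LP count with chlv -x, it helps to compute how many LPs a target size needs. A small sketch, assuming the 256 MB PP size used by the mkvg commands in this document; the target size is an example value:

```shell
#!/bin/sh
# Sketch: how many LPs does a target size need? Assumes the 256 MB PP size
# used by the mkvg commands in this document; target size is an example.
PP_MB=256
TARGET_GB=100
TARGET_MB=$((TARGET_GB * 1024))
LPS=$(( (TARGET_MB + PP_MB - 1) / PP_MB ))   # round up to whole LPs
echo "$LPS"   # 100 GB at 256 MB per PP needs 400 LPs
```

Pass the result to chlv -x if the current maximum is lower than the number of LPs required.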