Linux GFS/GFS2: Sharing a Filesystem Across Multiple Servers/Nodes

Posted by kairo on Thu 09 April 2009

1 - Creating the Volume Group

# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda3  rootvg lvm2 a-   62.75G 38.97G
  /dev/dm-10        lvm2 --   70.00G 70.00G
  /dev/dm-13        lvm2 --   70.00G 70.00G
  /dev/dm-14        lvm2 --   70.00G 70.00G
  /dev/dm-9         lvm2 --   70.00G 70.00G
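Note: the multipath devices above already show up as LVM physical volumes. If yours do not appear in pvs yet, initialize them first (the device names below are the ones from my setup; adjust to your own):

```shell
# Initialize the shared LUNs as LVM physical volumes
# so they can be added to a volume group
pvcreate /dev/dm-9 /dev/dm-10 /dev/dm-13 /dev/dm-14
```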

# vgcreate vg_cluster00 /dev/dm-10 /dev/dm-13 /dev/dm-14 /dev/dm-9
  Volume group "vg_cluster00" successfully created

# pvs
  PV         VG           Fmt  Attr PSize  PFree
  /dev/dm-10 vg_cluster00 lvm2 a-   70.00G 70.00G
  /dev/dm-13 vg_cluster00 lvm2 a-   70.00G 70.00G
  /dev/dm-14 vg_cluster00 lvm2 a-   70.00G 70.00G
  /dev/dm-9  vg_cluster00 lvm2 a-   70.00G 70.00G
  /dev/sda3  rootvg       lvm2 a-   62.75G 38.97G

# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  rootvg         1   9   0 wz--n-  62.75G  38.97G
  vg_cluster00   4   0   0 wz--n- 279.98G 279.98G

2 - Creating the Logical Volumes

# lvcreate -L 180G -n lvuserapp vg_cluster00
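It is worth confirming the new volume before moving on. If you are using clustered LVM (clvmd), also make sure the volume group is active on every node:

```shell
# Check that the new logical volume exists with the expected size
lvs vg_cluster00

# Activate the volume group (run on each node if clvmd
# is not already handling activation for you)
vgchange -a y vg_cluster00
```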

3 - Making the Cluster

Personally, I like system-config-cluster for this.

This is my simple /etc/cluster/cluster.conf:

<?xml version="1.0"?>
<cluster alias="CLUSTER00" config_version="23" name="CLUSTER00">
  <fence_daemon post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="node001" nodeid="1" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
    <clusternode name="node003" nodeid="3" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
    <clusternode name="node004" nodeid="4" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
    <clusternode name="node002" nodeid="2" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices/>
</cluster>
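Note that <fencedevices/> is empty here, which is only acceptable for a test setup; a production cluster needs real fencing, or GFS2 can block indefinitely after a node failure. A hypothetical IPMI-based entry (the agent is real, but the name, address, and credentials below are made up for illustration) would look like:

```xml
<fencedevices>
  <!-- Hypothetical device: replace name, ipaddr, and credentials with yours -->
  <fencedevice agent="fence_ipmilan" name="ipmi-node001"
               ipaddr="10.0.0.101" login="admin" passwd="secret"/>
</fencedevices>
```

Each node's <method> block would then reference its device with a child element such as <device name="ipmi-node001"/>.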

4 - Making GFS2 Filesystems

# mkfs -t gfs2 -p lock_dlm -t CLUSTER00:lvuserapp -j 8 /dev/vg_cluster00/lvuserapp

The two -t options are not a mistake: the first one tells the mkfs wrapper which filesystem type to use, while the second is passed through to mkfs.gfs2 and sets the lock table name, which must be ClusterName:FSName and match the cluster name in cluster.conf.
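The -j 8 option creates eight journals; each node needs its own journal to mount the filesystem, so eight leaves headroom beyond our four nodes. If the cluster outgrows that, journals can be added later without reformatting (assuming the filesystem is mounted):

```shell
# Add two more journals to an already-mounted GFS2 filesystem
gfs2_jadd -j 2 /home/userapp
```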

5 - Mounting GFS2 Filesystems

Add this line to /etc/fstab:

/dev/vg_cluster00/lvuserapp      /home/userapp           gfs2    defaults       0 0
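A small tuning note: atime updates are expensive on a shared filesystem, so noatime is commonly recommended for GFS2. An alternative fstab line (same mount point, just different mount options):

```
/dev/vg_cluster00/lvuserapp      /home/userapp           gfs2    defaults,noatime       0 0
```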

6 - Starting the Cluster Services

Note: for a complete startup, start the services on all nodes.

# service cman start
# service rgmanager start
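To survive reboots, also enable the services at boot (RHEL 5 SysV style; clvmd only applies if you use clustered LVM):

```shell
chkconfig cman on
chkconfig clvmd on      # only if clustered LVM is in use
chkconfig rgmanager on
chkconfig gfs2 on       # the gfs2 init script mounts gfs2 entries from /etc/fstab
```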

7 - Checking the Nodes

# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    196   2009-04-09 11:57:16  node001
   2   M    216   2009-04-09 11:57:32  node002
   3   M    212   2009-04-09 11:58:02  node003
   4   M    214   2009-04-09 11:58:32  node004

8 - Mounting the Filesystems

Mount the filesystem on all nodes:

# mount /home/userapp
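A quick way to confirm the mount actually came up as GFS2:

```shell
# List mounted filesystems and keep only the gfs2 ones
mount | grep gfs2
```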

9 - Testing Read/Write on the Nodes

# touch /home/userapp/teste.txt

Check on all servers that this file exists.
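Rather than logging in to each node by hand, you can check from one node (assuming passwordless SSH between the cluster members):

```shell
# Confirm the test file is visible on every node
for n in node001 node002 node003 node004; do
    ssh "$n" ls -l /home/userapp/teste.txt
done
```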
