This example configures a cluster with three GFS nodes and two GFS file systems. In addition to the three GFS nodes, it requires one node to run the LOCK_GULM server and one node to act as the GNBD server, for a total of five nodes.
This section describes the example's key characteristics, the kernel modules that each node must have loaded, and the setup process.
This example configuration has the following key characteristics:
Fencing device — An APC MasterSwitch (single-switch configuration). Refer to Table C-15 for switch information.
Number of GFS nodes — 3. Refer to Table C-16 for node information.
Number of lock server nodes — 1. The lock server is run on a dedicated node (lcksrv) that does not mount GFS. Refer to Table C-17 for node information.
Number of GNBD server nodes — 1. Refer to Table C-18 for node information.
Locking protocol — LOCK_GULM. The LOCK_GULM server runs on the dedicated lock server node, which does not mount GFS.
Number of shared storage devices — 2. GNBD will be used as the transport layer for the storage devices. Refer to Table C-19 for storage device information.
Number of file systems — 2.
File system names — gfs01 and gfs02.
File system mounting — Each GFS node mounts the two file systems.
Cluster name — alpha.
| Host Name | IP Address | APC Port Number |
|---|---|---|
| n01 | 10.0.1.1 | 1 |
| n02 | 10.0.1.2 | 2 |
| n03 | 10.0.1.3 | 3 |

Table C-16. GFS Node Information
| Major | Minor | #Blocks | Name |
|---|---|---|---|
| 8 | 16 | 8388608 | sda |
| 8 | 17 | 8001 | sda1 |
| 8 | 18 | 8377897 | sda2 |
| 8 | 32 | 8388608 | sdb |
| 8 | 33 | 8388608 | sdb1 |

Table C-19. Storage Device Information
Notes:

- The storage must be visible only on the GNBD server node. The GNBD server node makes the storage visible to the GFS cluster nodes via the GNBD protocol.
- For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver. If the shared storage devices are not visible on each node, confirm that the device driver is loaded and that it loaded without errors.
- The small partition (/dev/sda1) is used to store the cluster configuration information. The two remaining partitions (/dev/sda2 and /dev/sdb1) are used for the GFS file systems.
- You can display the storage device information at each node by running cat /proc/partitions. Depending on the hardware configuration of the nodes, the device names may differ from node to node.
- If the output of cat /proc/partitions shows only whole-disk devices (for example, /dev/sda instead of /dev/sda1), the storage devices have not been partitioned. To partition a device, use the fdisk command.
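As an illustration of the notes above, the following is a minimal sketch of how the storage might be inspected and partitioned on the GNBD server before it is exported. It is not part of the original procedure; the storage driver name is a placeholder, and the partition layout must match your own hardware.

    gnbdsrv# cat /proc/partitions            # confirm sda1, sda2, and sdb1 are visible
    gnbdsrv# lsmod | grep <storage_driver>   # placeholder: verify your adapter driver is loaded
    gnbdsrv# dmesg | tail                    # check that the driver loaded without errors
    gnbdsrv# fdisk /dev/sdb                  # if only whole disks appear, create partitions
                                             # interactively (n = new partition, w = write and quit)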
Each node must have the following kernel modules loaded:
gfs.o
gnbd.o
lock_harness.o
lock_gulm.o
pool.o
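As a quick way to confirm that the modules listed above are in place on a node, you might use a loop like the following. This is a sketch, not part of the original procedure; it assumes the GFS packages installed the modules where modprobe can find them.

    # Run on each node: load any required module that is not already loaded.
    for mod in pool lock_harness lock_gulm gfs gnbd; do
        lsmod | grep -qw "$mod" || modprobe "$mod"
    done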
The setup process for this example consists of the following steps:
Create and export GNBD devices.
Create and export GNBD devices on the GNBD server (gnbdsrv) for the storage to be used for the GFS file systems and the CCA device. In the following example, gfs01 is the GNBD device used for the pool of the first GFS file system, gfs02 is the device used for the pool of the second GFS file system, and cca is the device used for the CCA device.

    gnbdsrv# gnbd_export -e cca -d /dev/sda1 -c
    gnbdsrv# gnbd_export -e gfs01 -d /dev/sda2 -c
    gnbdsrv# gnbd_export -e gfs02 -d /dev/sdb1 -c

Caution: The GNBD server should not attempt to use the cached devices it exports, either directly or by importing them. Doing so can cause cache coherency problems.
Import GNBD devices on all GFS nodes and the lock server node.
Use gnbd_import to import the GNBD devices from the GNBD server (gnbdsrv):

    n01# gnbd_import -i gnbdsrv
    n02# gnbd_import -i gnbdsrv
    n03# gnbd_import -i gnbdsrv
    lcksrv# gnbd_import -i gnbdsrv

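As an optional sanity check (not part of the original procedure), you can list the device nodes that the imports create under /dev/gnbd/; these are the paths that the pool configuration files in the next step refer to.

    n01# ls /dev/gnbd/
    cca  gfs01  gfs02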
Create pool configurations for the two file systems.
Create pool configuration files for each file system's pool: pool_gfs01 for the first file system, and pool_gfs02 for the second file system. The two files should look like the following:

    poolname pool_gfs01
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/gfs01

    poolname pool_gfs02
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/gfs02

Create a pool configuration for the CCS data.
Create a pool configuration file for the pool that will be used for CCS data. The pool does not need to be very large. The name of the pool is alpha_cca (the cluster name, alpha, followed by _cca). The file should look like the following:

    poolname alpha_cca
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/gnbd/cca

Create the pools using the pool_tool command.
Note: This operation must take place on a GNBD client node.
Use the pool_tool command to create all the pools as follows:

    n01# pool_tool -c pool_gfs01.cf pool_gfs02.cf alpha_cca.cf
    Pool label written successfully from pool_gfs01.cf
    Pool label written successfully from pool_gfs02.cf
    Pool label written successfully from alpha_cca.cf

Activate the pools on all nodes.
Note: This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible.
Activate the pools using the pool_assemble -a command for each node as follows:

    n01# pool_assemble -a       <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

    n02# pool_assemble -a       <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

    n03# pool_assemble -a       <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

    lcksrv# pool_assemble -a    <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

Create CCS files.
Create a directory called /root/alpha on node n01 as follows:

    n01# mkdir /root/alpha
    n01# cd /root/alpha

Create the cluster.ccs file. This file contains the name of the cluster and the name of each node where the LOCK_GULM server is run. The file should look like the following:

    cluster {
      name = "alpha"
      lock_gulm {
        servers = ["lcksrv"]
      }
    }

Create the nodes.ccs file. This file contains the name of each node, its IP address, and node-specific I/O fencing parameters. The file should look like the following:

    nodes {
      n01 {
        ip_interfaces {
          eth0 = "10.0.1.1"
        }
        fence {
          power {
            apc {
              port = 1
            }
          }
        }
      }
      n02 {
        ip_interfaces {
          eth0 = "10.0.1.2"
        }
        fence {
          power {
            apc {
              port = 2
            }
          }
        }
      }
      n03 {
        ip_interfaces {
          eth0 = "10.0.1.3"
        }
        fence {
          power {
            apc {
              port = 3
            }
          }
        }
      }
      lcksrv {
        ip_interfaces {
          eth0 = "10.0.1.4"
        }
        fence {
          power {
            apc {
              port = 4
            }
          }
        }
      }
      gnbdsrv {
        ip_interfaces {
          eth0 = "10.0.1.5"
        }
        fence {
          power {
            apc {
              port = 5
            }
          }
        }
      }
    }

Note: If your cluster is running Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 or later, you can use the optional usedev parameter to explicitly specify an IP address rather than relying on an IP address from libresolv. For more information about the optional usedev parameter, refer to the file format in Figure 6-23, the example in Example 6-26, and the syntax description in Table 6-3.
Create the fence.ccs file. This file contains information required for the fencing method(s) used by the GFS cluster. The file should look like the following:

    fence_devices {
      apc {
        agent = "fence_apc"
        ipaddr = "10.0.1.10"
        login = "apc"
        passwd = "apc"
      }
    }

Create the CCS archive on the CCA device.
Note: This step needs to be performed only once, from a single node. It should not be performed every time the cluster is restarted.
Use the ccs_tool command to create the archive from the CCS configuration files:

    n01# ccs_tool create /root/alpha /dev/pool/alpha_cca
    Initializing device for first time use... done.

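If you want to confirm what was written to the CCA device, ccs_tool provides an extract operation that copies the archive back out to a directory. Treat the exact subcommand and argument order shown here as an assumption and check ccs_tool -h on your system; this check is not part of the original procedure.

    # Hypothetical verification: pull the archive from the CCA device into /tmp/alpha-check.
    n01# ccs_tool extract /dev/pool/alpha_cca /tmp/alpha-check
    n01# ls /tmp/alpha-check
    cluster.ccs  fence.ccs  nodes.ccs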
Start the CCS daemon (ccsd) on all the nodes.
Note: This step must be performed each time the cluster is rebooted.
The CCA device must be specified when starting ccsd.

    n01# ccsd -d /dev/pool/alpha_cca
    n02# ccsd -d /dev/pool/alpha_cca
    n03# ccsd -d /dev/pool/alpha_cca
    lcksrv# ccsd -d /dev/pool/alpha_cca

At each node, start the LOCK_GULM server. For example:

    n01# lock_gulmd
    lcksrv# lock_gulmd

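To verify that the lock server is up and that nodes have logged in, you might query it with gulm_tool, which ships with the LOCK_GULM packages. The nodelist subcommand shown here is an assumption; check gulm_tool -h for the exact syntax on your release. This check is not part of the original procedure.

    # Hypothetical check: ask the lock server (lcksrv) which nodes it currently knows about.
    n01# gulm_tool nodelist lcksrv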
Create the GFS file systems.
Create the first file system on pool_gfs01 and the second on pool_gfs02. The names of the two file systems are gfs01 and gfs02, respectively, as shown in the example:

    n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01
    Device:            /dev/pool/pool_gfs01
    Blocksize:         4096
    Filesystem Size:   1963216
    Journals:          3
    Resource Groups:   30
    Locking Protocol:  lock_gulm
    Lock Table:        alpha:gfs01

    Syncing...
    All Done

    n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 3 /dev/pool/pool_gfs02
    Device:            /dev/pool/pool_gfs02
    Blocksize:         4096
    Filesystem Size:   1963416
    Journals:          3
    Resource Groups:   30
    Locking Protocol:  lock_gulm
    Lock Table:        alpha:gfs02

    Syncing...
    All Done

Mount the GFS file systems on all the nodes.
Mount points /gfs01 and /gfs02 are used on each node:

    n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n03# mount -t gfs /dev/pool/pool_gfs02 /gfs02

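Pool activation, ccsd, lock_gulmd, and the GFS mounts all have to be redone after a reboot, in that order. The following is a minimal sketch of that per-boot sequence for a GFS node, using only commands that appear in this example. Whether you place it in a local startup script or rely on the init scripts and fstab entries shipped with your GFS release is up to you, so treat it as an illustration rather than the supported boot method.

    #!/bin/sh
    # Hypothetical boot-time sequence for a GFS node (n01, n02, or n03); order matters.
    pool_assemble -a                          # activate alpha_cca, pool_gfs01, and pool_gfs02
    ccsd -d /dev/pool/alpha_cca               # start the CCS daemon against the CCA device
    lock_gulmd                                # start the LOCK_GULM daemon
    mount -t gfs /dev/pool/pool_gfs01 /gfs01
    mount -t gfs /dev/pool/pool_gfs02 /gfs02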