Friday 16 December 2011

A RHEL 6/Centos 6 HA Cluster for LAN Services switching to EXT4 (Part 1)

Introduction
Sadly I have had to revisit my cluster configuration to remove GFS2 from my setup, for several reasons. Firstly, I have recently discovered that you cannot share a GFS2 filesystem out on NFS and simultaneously use it for local access (e.g. Samba or backups). Red Hat do not support this, and it can lead to kernel panics and filesystem corruption, because of a bad interaction between the three locking systems in use (see my notes under the clustered Samba setup in my original cluster setup articles). Sadly this is exactly what I want to do, so users can share files between Linux and Windows. The Red Hat recommendation of re-sharing the NFS export in Samba seems possible, but the Samba people advise against it. Sharing other filesystems (ext3/4) on NFS alongside local access can cause individual file corruption if you rely on file locking working between NFS and local access, but none of our applications or users expect that to work. So I can live with the potential danger of individual file corruption, but not filesystem corruption.

Secondly, I have seen a certain amount of instability in GFS2 even when not sharing on NFS (purely local access, used for cups). Thirdly, the performance I have seen from GFS2 has been a problem for the sort of workloads I have (home directories and shared project directories), especially anything that requires directory traversals (e.g. backups). So perhaps GFS2 isn't ideal for my sort of workload. As a bonus, I can also save the "Resilient Storage" license on RHEL 6.

So I have decided to re-implement my cluster using a non-clustered filesystem (ext4) and have the mounts fail over to the node running each service.

Initial Setup
The hardware setup, OS install, NTP setup, partitioning, DRBD, Clustered Logical Volume Manager and Fence setup (on the APC device and DRAC) are identical to my original setup (so please refer back to my original postings).

As I'm switching to a non-clustered filesystem, Linbit would recommend using a primary/secondary DRBD setup. I haven't done this for two reasons. Firstly, I like to manage all my DRBD storage on one DRBD device for simplicity (all my cluster storage in one place). Secondly, I'd like the option of switching back to GFS2 in the future (once my issues with it are resolved).
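So my DRBD resource keeps its dual-primary settings from the original setup. For reference, that is something like this in DRBD 8.3 syntax (the resource name r0 is hypothetical and the rest of the resource is abbreviated):

resource r0 {
    net {
        allow-two-primaries;     # allow the device to be Primary on both nodes
    }
    startup {
        become-primary-on both;  # promote both nodes when the cluster starts
    }
    ...
}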

The ext4 Filesystem and Clustering
I have chosen to use ext4 in a failover setup. This means I can't mount it on both nodes simultaneously. So when a service starts, the cluster will demand-mount the service's filesystem on the node the service is starting on, and then bring the service up.

There are issues with this. Firstly, there is no protection against double mounting the filesystem on both nodes, and doing so will likely corrupt it. The cluster will not let this happen while it is managing things: if a service is moved, the cluster will attempt to umount the filesystem from the first node before mounting it on the second node. If the umount fails, the service will be marked "failed", and the administrator can then look at why it failed to umount. The admin can then disable the service (all you can do with a failed service), but this time the cluster will not check that the filesystem has been umounted, so you need to be really sure it is umounted on the previous node before starting the service elsewhere.
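To make that concrete, the recovery from a failed umount goes something like this (dhcpd as the example service; clusvcadm -d disables a service and -e enables it):

clusvcadm -d dhcpd              # disable/clear the failed service
grep /data/dhcpd /proc/mounts   # check it really is umounted (run on BOTH nodes)
clusvcadm -e dhcpd              # only then re-enable the service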

There was a resource agent that could prevent this nasty double mounting by exclusively locking the logical volume. Sadly it hasn't made it into the Red Hat supported or distributed resource agents. It is available here, however, and has been reported to work well (though I haven't tried it):

https://www.redhat.com/archives/cluster-devel/2009-June/msg00065.html

So, after all that information and those caveats, creating the service filesystems just needs the usual (as with the GFS2 cluster), e.g.:

/sbin/mkfs.ext4 /dev/cluvg00/lv00dhcpd

I would *NOT* add these filesystems to fstab: that makes double mounting inevitable if you were daft enough to mount them at boot time, and very likely even otherwise, as an fstab entry makes mounting too easy. Having to manually type in the paths when mounting these filesystems hopefully gives some extra thinking time to check that each one isn't mounted on the other node.

Then, as in the GFS2 version of this cluster setup, the mount point is created and the filesystem mounted:


mkdir /data
mkdir /data/dhcpd
mount /dev/mapper/cluvg00-lv00dhcpd /data/dhcpd


Then set up the file tree under here for this service (as in my GFS2 cluster blog).
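When the services are defined in cluster.conf in the later parts, the clusterfs resources from the GFS2 setup get replaced by rgmanager's plain fs resource, which handles the demand mounting described above. A sketch of the sort of resource line I mean (attribute names are from the stock fs resource agent; force_unmount="1" makes rgmanager kill off anything using the mount before umounting, and whether you want that is a judgement call):

<fs device="/dev/cluvg00/lv00dhcpd" fstype="ext4" mountpoint="/data/dhcpd" name="dhcpdfs" force_unmount="1"/>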

Next: replacing clustered Samba with a failover Samba setup...

Saturday 30 April 2011

A Secure PPP Dial-In Centos machine with SELinux

I did this a while ago but thought it was worth sharing. I wanted a backup connection to the office for when all other links are down, so I created a dial-up PPP modem machine. I was, however, concerned about security. To be useful this machine needs to be on the internal network, but it also has a modem connected to processes running as root (a getty, and then pppd). Finding a flaw in these could allow an attacker to obtain a root shell on this internal machine. Obviously not very likely if the machine is up to date, but let's try and help with this. So I decided to look at SELinux to address my concern.

There are two gettys shipped with RHEL/Centos: mingetty and mgetty. mgetty is more suitable for use with modems, whereas mingetty is better for the virtual consoles.

Sadly for me, in the standard targeted SELinux policy mgetty and mingetty share a policy, which is fine for most applications. But I'd like mingetty to work normally (with logins on virtual consoles etc.) and mgetty to be more limited. All I need my mgetty to do is spawn pppd, which will handle authentication and then talk PPP for me; it should not be able to spawn a login or a shell.

I started with a machine with a very cut-down Centos 5 install. I installed mgetty and selinux-policy-devel. I also installed the selinux-policy srpm so I could take getty.fc, getty.if and getty.te from it.
I put these files in a directory /root/mgettyse (not sure if this is the best-practice way of doing this) and renamed them with a prefix of lcl. I edited them until I had removed everything not needed for my application (launching pppd), I hope. I also changed all the policy names in the files to lclgetty so as not to collide with the existing policy. The files themselves are at the end of this post.
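For anyone repeating this, extracting those files from the srpm goes something like this (a sketch; the /usr/src/redhat layout and the serefpolicy tarball name are from memory of the Centos 5 packaging and may differ on your system):

rpm -ivh selinux-policy-*.src.rpm
cd /usr/src/redhat/SOURCES
tar xzf serefpolicy-*.tgz
mkdir /root/mgettyse
for f in fc if te ; do
    cp serefpolicy-*/policy/modules/system/getty.$f /root/mgettyse/lclgetty.$f
done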

Before anything else I'll set up my dial-up config. I add to /etc/inittab:


# Run mgetty for the modem
7:2345:respawn:/sbin/mgetty /dev/ttyS0

Now edit the config files under /etc/mgetty+sendfax. My dialin.config file is fully commented out. My login.config has only two lines uncommented:

/AutoPPP/ -   a_ppp /usr/sbin/pppd  
* - - /bin/false @

Basically I want mgetty to autospawn pppd if it hears PPP on the other end. If not, it will spawn /bin/false and hang up (well, it probably won't even do that with my SELinux policy, as it won't be allowed to spawn false either, but mgetty will get bored and hang up eventually). My mgetty.config has only these lines uncommented:

debug 4
fax-id 49 115 xxxxxxxx
speed 115200
port ttyS0
speed 115200
init-chat "" "\d\d\d+++\d\d\dATH" OK "AT&F" OK "ATM0" OK "AT&B1" OK 
data-only yes
login-prompt
issue-file /etc/issue.ppp

These will all likely depend on your modem, especially the init-chat (basically I force a hang-up (after an escape sequence), a reset to factory defaults, and speaker off; AT&B1 tells the modem not to match port speed to connect speed). My issue.ppp simply contains the message you want to appear if someone connects with a text session.

I also have my "emerconnect" user in /etc/ppp/chap-secrets:



# Secrets for authentication using CHAP
# client        server  secret                  IP addresses
#
emerconnect     *       "12345678"      10.1.32.20

(obviously you want to change this password)

Then a few pppd options in /etc/ppp/options:
-detach
lock
auth
crtscts
chap-max-challenge 5
ms-dns  10.1.10.10
ms-dns  10.1.10.11
asyncmap 0
idle 1800
10.1.32.1:
require-mschap-v2
debug 

Obviously change for your requirements (for example your internal DNS servers).

You need to turn on IP forwarding to allow this machine to route:
sysctl -w net.ipv4.ip_forward=1

(and it's probably easiest to add this to the bottom of /etc/rc.local so it's persistent across reboots; I didn't add it to /etc/sysctl.conf as I only wanted to turn this on after everything else was up, though I'm not sure if that is justified paranoia or not).
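In other words, something like this at the bottom of /etc/rc.local:

# Turn on routing for the dial-in PPP clients (kept out of sysctl.conf deliberately)
sysctl -w net.ipv4.ip_forward=1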


I then make the SELinux policy:

# cd /root/mgettyse

# make -f /usr/share/selinux/devel/Makefile                                                                                     
Compiling targeted lclgetty module
/usr/bin/checkmodule:  loading policy configuration from tmp/lclgetty.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 6) to tmp/lclgetty.mod
Creating targeted lclgetty.pp policy package
rm tmp/lclgetty.mod.fc tmp/lclgetty.mod


If it all builds, install the module:

/usr/sbin/semodule -i lclgetty.pp

(You will probably get some warnings about colliding with existing policies, i.e. the built-in getty policy, but it seems happy enough with itself.) If happy, you can then set the SELinux file contexts:

/sbin/restorecon -rvvF /etc/mgetty+sendfax /etc/issue.ppp /sbin/mgetty /var/log/mgetty.log.*

And verify that they are set with ls -alZ, e.g.:


-bash-3.2$ ls -alZ /sbin/mgetty 
-rwx------  root root system_u:object_r:lclgetty_exec_t /sbin/mgetty


(but look at the other files too)

If all is happy, run "init q" to reload the inittab file. Then check that mgetty is running in the new context:

-bash-3.2$ ps -efZ | grep -i mgetty                                                                                                                   
system_u:system_r:lclgetty_t    root      4903     1  0 Apr15 ?        00:00:00 /sbin/mgetty /dev/ttyS0


(so now running in the lclgetty context).

Obviously, test the normal operation of this by connecting from another machine with a PPP session (say with kppp), and ensure that everything works and you can communicate with your network. But to test the SELinux security, try replacing the /bin/false line in login.config with:


* - - /bin/login @


With SELinux off (using "setenforce 0"), try dialling in using minicom from another machine. This should bring up a login prompt (it will fail the AutoPPP test). With SELinux on (using "setenforce 1") it shouldn't. You may want to try more elaborate tests, say trying to launch a shell (/bin/bash) directly (which should fail with SELinux on), but you'll probably want to ensure a safe testing environment (no open phone line).

After testing, return the /bin/login back to /bin/false in login.config and ensure SELinux is enforcing (hopefully giving two layers of protection).
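While testing it is also worth watching the audit log for AVC denials from the new domain, with the usual tools (assuming auditd is running):

grep avc /var/log/audit/audit.log | grep lclgetty | tail
# or, with the audit tools:
ausearch -m avc -ts recent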

Hopefully everything here is correct; I'm working from memory to some extent. You should now have a reasonably secure PPP dial-in server. Below are the SELinux policy files mentioned above.

::::::::::::::
lclgetty.fc
::::::::::::::
/etc/mgetty+sendfax/(/.*)? gen_context(system_u:object_r:lclgetty_etc_t,s0)
/etc/issue.ppp -- gen_context(system_u:object_r:lclgetty_etc_t,s0)
/sbin/mgetty -- gen_context(system_u:object_r:lclgetty_exec_t,s0)
/var/log/mgetty.log.ttyS0 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.1 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.2 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.3 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.4 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.5 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.6 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.7 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/run/mgetty.pid.ttyS0 -- gen_context(system_u:object_r:lclgetty_var_run_t,s0)
::::::::::::::
lclgetty.if
::::::::::::::
## <summary>Policy for getty.</summary>

########################################
## <summary>
## Execute gettys in the getty domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`lclgetty_domtrans',`
    gen_require(`
        type lclgetty_t, lclgetty_exec_t;
    ')

    corecmd_search_sbin($1)
    domain_auto_trans($1,lclgetty_exec_t,lclgetty_t)

    allow $1 lclgetty_t:fd use;
    allow lclgetty_t $1:fd use;
    allow lclgetty_t $1:fifo_file rw_file_perms;
    allow lclgetty_t $1:process sigchld;
')

########################################
## <summary>
## Inherit and use getty file descriptors.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`lclgetty_use_fds',`
    gen_require(`
        type lclgetty_t;
    ')

    allow $1 lclgetty_t:fd use;
')

########################################
## <summary>
## Allow process to read getty log file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_read_log',`
    gen_require(`
        type lclgetty_log_t;
    ')

    logging_search_logs($1)
    allow $1 lclgetty_log_t:file { getattr read };
')

########################################
## <summary>
## Allow process to read getty config file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_read_config',`
    gen_require(`
        type lclgetty_etc_t;
    ')

    files_search_etc($1)
    allow $1 lclgetty_etc_t:file { getattr read };
')

########################################
## <summary>
## Allow process to edit getty config file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_rw_config',`
    gen_require(`
        type lclgetty_etc_t;
    ')

    files_search_etc($1)
    allow $1 lclgetty_etc_t:file rw_file_perms;
')
::::::::::::::
lclgetty.te
::::::::::::::

policy_module(lclgetty,1.2.0)

########################################
#
# Declarations
#

type lclgetty_t;
type lclgetty_exec_t;
init_domain(lclgetty_t,lclgetty_exec_t)
domain_interactive_fd(lclgetty_t)

type lclgetty_etc_t;
typealias lclgetty_etc_t alias etc_lclgetty_t;
files_config_file(lclgetty_etc_t)

type lclgetty_lock_t;
files_lock_file(lclgetty_lock_t)

type lclgetty_log_t;
logging_log_file(lclgetty_log_t)

type lclgetty_tmp_t;
files_tmp_file(lclgetty_tmp_t)

type lclgetty_var_run_t;
files_pid_file(lclgetty_var_run_t)

########################################
#
# Getty local policy
#

# Use capabilities.
#allow lclgetty_t self:capability { dac_override sys_resource sys_tty_config fowner fsetid };
dontaudit lclgetty_t self:capability chown;
#dontaudit lclgetty_t self:capability sys_tty_config;
#allow lclgetty_t self:process { getpgid getsession signal_perms };
#allow lclgetty_t self:capability { sys_tty_config };

allow lclgetty_t lclgetty_etc_t:dir r_dir_perms;
allow lclgetty_t lclgetty_etc_t:file r_file_perms;
allow lclgetty_t lclgetty_etc_t:lnk_file { getattr read };
files_etc_filetrans(lclgetty_t,lclgetty_etc_t,{ file dir })

allow lclgetty_t lclgetty_lock_t:file create_file_perms;
files_lock_filetrans(lclgetty_t,lclgetty_lock_t,file)

allow lclgetty_t lclgetty_log_t:file create_file_perms;
logging_log_filetrans(lclgetty_t,lclgetty_log_t,file)

#allow lclgetty_t lclgetty_tmp_t:file create_file_perms;
#allow lclgetty_t lclgetty_tmp_t:dir create_dir_perms;
#files_tmp_filetrans(lclgetty_t,lclgetty_tmp_t,{ file dir })

allow lclgetty_t lclgetty_var_run_t:file create_file_perms;
allow lclgetty_t lclgetty_var_run_t:dir rw_dir_perms;
files_pid_filetrans(lclgetty_t,lclgetty_var_run_t,file)

#kernel_list_proc(lclgetty_t)
#kernel_read_proc_symlinks(lclgetty_t)

#dev_read_sysfs(lclgetty_t)

#fs_search_auto_mountpoints(lclgetty_t)
# for error condition handling
#fs_getattr_xattr_fs(getty_t)

#mcs_process_set_categories(getty_t)

#mls_file_read_up(getty_t)
#mls_file_write_down(getty_t)

# Chown, chmod, read and write ttys.
term_use_all_user_ttys(lclgetty_t)
#term_use_unallocated_ttys(getty_t)
term_setattr_all_user_ttys(lclgetty_t)
#term_setattr_unallocated_ttys(getty_t)
#term_setattr_console(getty_t)
#term_dontaudit_use_console(getty_t)

auth_rw_login_records(lclgetty_t)

#corecmd_search_bin(lclgetty_t)
#corecmd_search_sbin(lclgetty_t)

#files_rw_generic_pids(lclgetty_t)
###
files_read_etc_runtime_files(lclgetty_t)
files_read_etc_files(lclgetty_t)
#files_search_spool(getty_t)

init_rw_utmp(lclgetty_t)
#init_use_script_ptys(getty_t)
#init_dontaudit_use_script_ptys(getty_t)

libs_use_ld_so(lclgetty_t)
libs_use_shared_libs(lclgetty_t)

###locallogin_domtrans(getty_t)

logging_send_syslog_msg(lclgetty_t)

miscfiles_read_localization(lclgetty_t)

###ifdef(`targeted_policy',`
### term_dontaudit_use_unallocated_ttys(getty_t)
### term_dontaudit_use_generic_ptys(getty_t)
###')

###optional_policy(`
### mta_send_mail(getty_t)
###')

# Keeps nscd quiet in logs but not running so no problem
optional_policy(`
nscd_socket_use(lclgetty_t)
')

optional_policy(`
ppp_domtrans(lclgetty_t)
')

###optional_policy(`
### udev_read_db(lclgetty_t)
###')



Monday 25 April 2011

Building a RHEL 6/Centos 6 HA Cluster for LAN Services (part 6)

Tidying up
That's mostly it. I'd recommend a monthly cron job to check that the two nodes' DRBD blocks are fully in sync. I originally thought something like this would work:

# Check DRBD integrity every third Sunday of the month
38 6 15-21 * 7 /sbin/drbdadm verify all


However, cron ORs the day-of-month and day-of-week fields, so this would run every Sunday and every day from the 15th to the 21st. So instead I have:


38 6 15-21 * * /usr/local/sbin/drbdverifysun >/dev/null 2>&1


and then this script (drbdverifysun) has:


#!/bin/bash

# Only run the verify on a Sunday; cron already restricts the date range
if [ "`date +%a`" = "Sun" ] ; then
    /sbin/drbdadm verify all
fi

so that the verify only actually runs on a Sunday.
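Note that the verify only logs what it finds, so you still need to check the result afterwards. Two ways of doing that with DRBD 8.3 (the oos field counts out-of-sync blocks):

grep oos: /proc/drbd                      # non-zero oos: means out-of-sync blocks were found
grep -i "out of sync" /var/log/messages   # the kernel log lines written by the verify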

An occasional cron job to check that the fence device is still pingable from the nodes would probably also be a good idea.
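Something simple like this would do (hypothetical; 192.168.2.3 is the fence device address from my /etc/hosts below):

# Hourly check that the fence device still answers pings
17 * * * * /bin/ping -c 3 -q 192.168.2.3 >/dev/null 2>&1 || echo "fence device 192.168.2.3 unreachable" | mail -s "fence device check" root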

I'd also make sure that you can run all the services on either node; you don't want to discover that doesn't work when a node fails! You can move services around with clusvcadm, or with the luci web console (which I haven't needed or used up to now, but it is useful for monitoring services or moving them around).
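For example, to see where everything is running and relocate a service (service and node names here are from my cluster.conf below):

clustat                              # show all services and which node owns them
clusvcadm -r httpd -m bldg1ux01n2i   # relocate the httpd service to node 2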

If you want to use iptables on the nodes (and I do), make life easy for yourself and fully open up the bond1 interface (the back-to-back connection) to ACCEPT on both nodes. There is a lot of multicasting etc. going on, and you'll just make work for yourself trying to see what needs to be opened up. I'd just tie down the services allowed on the main network interface.
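In /etc/sysconfig/iptables terms that is just a rule like this near the top of the INPUT chain (a sketch, not my full ruleset):

-A INPUT -i bond1 -j ACCEPT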

One thing missing from my cluster setup so far is kerberized NFSv4. I will hopefully get a chance to revisit that.

Here are the final key files. First, my cluster.conf:

<?xml version="1.0"?>
<cluster config_version="48" name="bldg1ux01clu">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="bldg1ux01n1i" nodeid="1" votes="1">
      <fence>
        <method name="apc7920-dual">
          <device action="off" name="apc7920" port="1"/>
          <device action="off" name="apc7920" port="2"/>
          <device action="on" name="apc7920" port="1"/>
          <device action="on" name="apc7920" port="2"/>
        </method>
        <method name="bldg1ux01n1drac">
          <device name="bldg1ux01n1drac"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="bldg1ux01n2i" nodeid="2" votes="1">
      <fence>
        <method name="apc7920-dual">
          <device action="off" name="apc7920" port="3"/>
          <device action="off" name="apc7920" port="4"/>
          <device action="on" name="apc7920" port="3"/>
          <device action="on" name="apc7920" port="4"/>
        </method>
        <method name="bldg1ux01n2drac">
          <device name="bldg1ux01n2drac"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="bldg1ux01A" ordered="1" restricted="1">
        <failoverdomainnode name="bldg1ux01n1i" priority="1"/>
        <failoverdomainnode name="bldg1ux01n2i" priority="2"/>
      </failoverdomain>
      <failoverdomain name="bldg1ux01B" ordered="1" restricted="1">
        <failoverdomainnode name="bldg1ux01n1i" priority="2"/>
        <failoverdomainnode name="bldg1ux01n2i" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <nfsexport name="bldg1cluexports"/>
      <ip address="10.1.10.25" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00dhcpd" fstype="gfs2" mountpoint="/data/dhcpd" name="dhcpdfs" options="acl"/>
      <ip address="10.1.10.26" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00named" fstype="gfs2" mountpoint="/data/named" name="namedfs" options="acl"/>
      <ip address="10.1.10.27" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00cups" fstype="gfs2" mountpoint="/data/cups" name="cupsfs" options="acl"/>
      <ip address="10.1.10.28" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00httpd" fstype="gfs2" mountpoint="/data/httpd" name="httpdfs" options="acl"/>
      <ip address="10.1.10.29" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00projects" fstype="gfs2" mountpoint="/data/projects" name="projectsfs" options="acl"/>
      <nfsclient name="nfsdprojects" options="rw" target="10.0.0.0/8"/>
      <ip address="10.1.10.30" monitor_link="1"/>
      <clusterfs device="/dev/cluvg00/lv00home" fstype="gfs2" mountpoint="/data/home" name="homefs" options="acl"/>
      <nfsclient name="nfsdhome" options="rw" target="10.0.0.0/8"/>
    </resources>
    <service autostart="1" domain="bldg1ux01A" exclusive="0" name="dhcpd" recovery="relocate">
      <script file="/etc/init.d/dhcpd" name="dhcpd"/>
      <ip ref="10.1.10.25"/>
      <clusterfs ref="dhcpdfs"/>
    </service>
    <service autostart="1" domain="bldg1ux01A" exclusive="0" name="named" recovery="relocate">
      <clusterfs ref="namedfs"/>
      <ip ref="10.1.10.26"/>
      <script file="/etc/init.d/named" name="named"/>
    </service>
    <service autostart="1" domain="bldg1ux01B" exclusive="0" name="cups" recovery="relocate">
      <script file="/etc/init.d/cups" name="cups"/>
      <ip ref="10.1.10.27"/>
      <clusterfs ref="cupsfs"/>
    </service>
    <service autostart="1" domain="bldg1ux01B" exclusive="0" name="httpd" recovery="relocate">
      <clusterfs ref="httpdfs"/>
      <clusterfs ref="projectsfs"/>
      <ip ref="10.1.10.28"/>
      <apache config_file="conf/httpd.conf" name="httpd" server_root="/data/httpd/etc/httpd" shutdown_wait="10"/>
    </service>
    <service autostart="1" domain="bldg1ux01A" exclusive="0" name="nfsdprojects" recovery="relocate">
      <ip ref="10.1.10.29"/>
      <clusterfs ref="projectsfs">
        <nfsexport ref="bldg1cluexports">
          <nfsclient ref="nfsdprojects"/>
        </nfsexport>
      </clusterfs>
    </service>
    <service autostart="1" domain="bldg1ux01B" exclusive="0" name="nfsdhome" recovery="relocate">
      <ip ref="10.1.10.30"/>
      <clusterfs ref="homefs">
        <nfsexport ref="bldg1cluexports">
          <nfsclient ref="nfsdhome"/>
        </nfsexport>
      </clusterfs>
    </service>
  </rm>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="192.168.2.3" login="apc" name="apc7920" passwd="securepassword"/>
    <fencedevice agent="fence_ipmilan" ipaddr="10.1.10.22" login="fence" name="bldg1ux01n1drac" passwd="securepassword"/>
    <fencedevice agent="fence_ipmilan" ipaddr="10.1.10.23" login="fence" name="bldg1ux01n2drac" passwd="securepassword"/>
  </fencedevices>
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
</cluster>

My fstab file:

#
# /etc/fstab
# Created by anaconda on Thu Jan 20 17:37:26 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=2ca89192-0dfa-45ab-972d-9fd15e5c6414 /                       ext4    defaults        1 1
UUID=7ab69be7-52fd-4f08-b08b-f9aea7c7ef70 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/cluvg00/lv00dhcpd    /data/dhcpd             gfs2    acl             0 0
/dev/cluvg00/lv00named    /data/named             gfs2    acl             0 0
/dev/cluvg00/lv00cups     /data/cups              gfs2    acl             0 0
/dev/cluvg00/lv00httpd    /data/httpd             gfs2    acl             0 0
/dev/cluvg00/lv00projects /data/projects          gfs2    acl             0 0
/dev/cluvg00/lv00home     /data/home              gfs2    acl             0 0
/dev/cluvg00/lv00lclu     /data/lclu              gfs2    acl             0 0


And /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.10.20 bldg1ux01n1.lan bldg1ux01n1
10.1.10.21 bldg1ux01n2.lan bldg1ux01n2
192.168.1.1 bldg1ux01n1i
192.168.1.2 bldg1ux01n2i
192.168.2.1 bldg1ux01n1f
192.168.2.2 bldg1ux01n2f
192.168.2.3 bldg1ux01fd
10.1.10.22 bldg1ux01n1drac bldg1ux01n1drac.lan.
10.1.10.23 bldg1ux01n2drac bldg1ux01n2drac.lan.
10.1.10.25 bldg1cludhcp bldg1cludhcp.lan.
10.1.10.26 bldg1cludns bldg1cludns.lan.
10.1.10.27 bldg1clucups bldg1clucups.lan.
10.1.10.28 bldg1cluhttp bldg1cluhttp.lan.
10.1.10.29 bldg1clunfsprojects bldg1clunfsprojects.lan.
10.1.10.30 bldg1clunfshome bldg1clunfshome.lan.
10.1.10.32 bldg1clusmbA bldg1clusmbA.lan.
10.1.10.33 bldg1clusmbB bldg1clusmbB.lan.

Well, that should be it. I just wanted to write this because I found no single resource online to get all of this going. Hopefully it will spare someone out there from having to grub around looking for information the way I had to.

More Power to your Penguins