Saturday 30 April 2011

A Secure PPP Dial-In CentOS machine with SELinux

I did this a while ago, but I thought it was worth sharing. I wanted a backup connection to the office for when all other links are down, so I created a dial-up PPP modem machine. I was, however, concerned about security. After all, to be useful this machine needs to be on the internal network, but it also has a modem connected to processes running as root (a getty and then a pppd). A flaw in either could let an attacker obtain a root shell on an internal machine. Not very likely if the machine is kept up to date, but let's try and help with this, so I decided to look at SELinux to address my concern.

There are two gettys shipped with RHEL/CentOS: mingetty and mgetty. mgetty is more suitable for use with modems, whereas mingetty is better for virtual consoles.

Sadly for me, in the standard targeted SELinux policy mgetty and mingetty share a policy, which is fine for most applications. But I'd like mingetty to work normally (with logins on virtual consoles etc.) and mgetty to be more limited: all I need my mgetty to do is spawn a pppd, which will handle authentication and then talk PPP for me, but it should not be able to spawn a login or a shell.

I started with a machine with a very cut-down CentOS 5 install. I installed mgetty and selinux-policy-devel. I also installed the selinux-policy SRPM so I could take getty.fc, getty.if and getty.te from it.
I put these files in a directory /root/mgettyse (not sure if this is the best-practice way of doing this) and renamed them with a prefix of lcl. I edited them until I had removed everything not needed for my application (launching pppd), I hope. I also changed all the policy names in the files to lclgetty so as not to collide with the existing policy. The finished files are reproduced at the end.
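For what it's worth, a rough sketch of that preparation (the serefpolicy path is illustrative, it depends on where you unpack the SRPM sources):

mkdir /root/mgettyse
cd /root/mgettyse
# copy the reference policy getty files out of the unpacked srpm sources
cp /usr/src/redhat/SOURCES/serefpolicy-*/policy/modules/system/getty.fc lclgetty.fc
cp /usr/src/redhat/SOURCES/serefpolicy-*/policy/modules/system/getty.if lclgetty.if
cp /usr/src/redhat/SOURCES/serefpolicy-*/policy/modules/system/getty.te lclgetty.te
# mechanically rename the policy identifiers; \b leaves the mgetty/mingetty
# path strings alone, but the result still needs the hand editing described
# above to strip everything not required for launching pppd
sed -i 's/\bgetty/lclgetty/g' lclgetty.fc lclgetty.if lclgetty.te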

Before anything else I'll set up my dial-up config. I add to /etc/inittab:


# Run mgetty for the modem
7:2345:respawn:/sbin/mgetty /dev/ttyS0

Now edit the config files under /etc/mgetty+sendfax. My dialin.config file is fully commented out. My login.config has only two lines uncommented:

/AutoPPP/ -   a_ppp /usr/sbin/pppd  
* - - /bin/false @

Basically here I want mgetty to auto-spawn pppd if it hears PPP on the other end of the line. If not, it will spawn /bin/false and hang up (well, it probably won't even do that with my SELinux policy, as it won't be allowed to spawn false either, but mgetty will get bored and hang up eventually). The only uncommented lines in my mgetty.config are:

debug 4
fax-id 49 115 xxxxxxxx
speed 115200
port ttyS0
speed 115200
init-chat "" "\d\d\d+++\d\d\dATH" OK "AT&F" OK "ATM0" OK "AT&B1" OK 
data-only yes
login-prompt
issue-file /etc/issue.ppp

These will all likely depend on your modem, especially the init-chat (basically I force a hang-up (after an escape sequence), a reset to factory defaults and speaker off, and AT&B1 tells the modem not to match port speed to connect speed). My issue.ppp just has the message you want to appear if someone connects in a text session.

I also have my "emerconnect" user in /etc/ppp/chap-secrets:



# Secrets for authentication using CHAP
# client        server  secret                  IP addresses
#
emerconnect     *       "12345678"      10.1.32.20

(obviously you want to change this password)
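For example, you could generate a random secret with something like:

openssl rand -base64 12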

Then a few pppd options in /etc/ppp/options:
-detach
lock
auth
crtscts
chap-max-challenge 5
ms-dns  10.1.10.10
ms-dns  10.1.10.11
asyncmap 0
idle 1800
10.1.32.1:
require-mschap-v2
debug 

Obviously change for your requirements (for example your internal DNS servers).

You need to turn on IP forwarding to allow this to route:
sysctl -w net.ipv4.ip_forward=1

(It's probably easiest to add this to the bottom of /etc/rc.local so it's persistent across reboots. I didn't add it to /etc/sysctl.conf as I only wanted to turn this on after everything else was up; I'm not sure if that is justified paranoia or not.)
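So the bottom of /etc/rc.local would gain something like:

# enable routing last, once everything else is up
/sbin/sysctl -w net.ipv4.ip_forward=1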


I then make the SELinux policy:

# cd /root/mgettyse

# make -f /usr/share/selinux/devel/Makefile                                                                                     
Compiling targeted lclgetty module
/usr/bin/checkmodule:  loading policy configuration from tmp/lclgetty.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 6) to tmp/lclgetty.mod
Creating targeted lclgetty.pp policy package
rm tmp/lclgetty.mod.fc tmp/lclgetty.mod


If it all builds, install the module:

/usr/sbin/semodule -i lclgetty.pp

(You will probably get some warnings about colliding with existing policies, i.e. the built-in getty policy, but it seems happy enough with itself.)
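You can confirm the module actually loaded with:

/usr/sbin/semodule -l | grep lclgetty

If all is happy you can then set the file SELinux contexts: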

/sbin/restorecon -RvF /etc/mgetty+sendfax /etc/issue.ppp /sbin/mgetty /var/log/mgetty.log.*

And verify that they are set with ls -alZ, e.g.


-bash-3.2$ ls -alZ /sbin/mgetty 
-rwx------  root root system_u:object_r:lclgetty_exec_t /sbin/mgetty


(but look at the other files too)

If all is happy, run "init q" to reload the inittab file. Then check that mgetty is running in the new context:

-bash-3.2$ ps -efZ | grep -i mgetty                                                                                                                   
system_u:system_r:lclgetty_t    root      4903     1  0 Apr15 ?        00:00:00 /sbin/mgetty /dev/ttyS0


(so now running in the lclgetty context).

Obviously test normal operation by connecting from another machine as a PPP session (say with kppp), and ensure that everything works and you can communicate with your network. But to test the SELinux security, try replacing the /bin/false line in login.config with:


* - - /bin/login @


With SELinux off ("setenforce 0"), try dialling in using minicom from another machine. This should bring up a login prompt (it will fail the AutoPPP test). With SELinux on ("setenforce 1") it shouldn't. You may want to try more elaborate tests, say launching a shell (/bin/bash) directly (which should fail with SELinux on), but you'll probably want to ensure a safe testing environment (no open phone line).
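While testing, it's also worth watching for AVC denials from the new domain, e.g.:

# show today's SELinux denials involving lclgetty (run as root)
/sbin/ausearch -m avc -ts today | grep lclgetty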

After testing, change /bin/login back to /bin/false in login.config and ensure SELinux is enforcing (hopefully two layers of protection).

Hopefully everything here is correct; I'm working from memory to some extent. You should now have a secure PPP dial-in server. Below are the SELinux policy files mentioned above.

::::::::::::::
lclgetty.fc
::::::::::::::
/etc/mgetty+sendfax(/.*)? gen_context(system_u:object_r:lclgetty_etc_t,s0)
/etc/issue.ppp -- gen_context(system_u:object_r:lclgetty_etc_t,s0)
/sbin/mgetty -- gen_context(system_u:object_r:lclgetty_exec_t,s0)
/var/log/mgetty.log.ttyS0 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.1 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.2 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.3 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.4 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.5 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.6 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/log/mgetty.log.ttyS0.7 -- gen_context(system_u:object_r:lclgetty_log_t,s0)
/var/run/mgetty.pid.ttyS0 -- gen_context(system_u:object_r:lclgetty_var_run_t,s0)
::::::::::::::
lclgetty.if
::::::::::::::
## <summary>Policy for getty.</summary>

########################################
## <summary>
## Execute gettys in the getty domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`lclgetty_domtrans',`
gen_require(`
type lclgetty_t, lclgetty_exec_t;
')

corecmd_search_sbin($1)
domain_auto_trans($1,lclgetty_exec_t,lclgetty_t)

allow $1 lclgetty_t:fd use;
allow lclgetty_t $1:fd use;
allow lclgetty_t $1:fifo_file rw_file_perms;
allow lclgetty_t $1:process sigchld;
')

########################################
## <summary>
## Inherit and use getty file descriptors.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`lclgetty_use_fds',`
gen_require(`
type lclgetty_t;
')

allow $1 lclgetty_t:fd use;
')

########################################
## <summary>
## Allow process to read getty log file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_read_log',`
gen_require(`
type lclgetty_log_t;
')

logging_search_logs($1)
allow $1 lclgetty_log_t:file { getattr read };
')

########################################
## <summary>
## Allow process to read getty config file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_read_config',`
gen_require(`
type lclgetty_etc_t;
')

files_search_etc($1)
allow $1 lclgetty_etc_t:file { getattr read };
')

########################################
## <summary>
## Allow process to edit getty config file.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`lclgetty_rw_config',`
gen_require(`
type lclgetty_etc_t;
')

files_search_etc($1)
allow $1 lclgetty_etc_t:file rw_file_perms;
')
::::::::::::::
lclgetty.te
::::::::::::::

policy_module(lclgetty,1.2.0)

########################################
#
# Declarations
#

type lclgetty_t;
type lclgetty_exec_t;
init_domain(lclgetty_t,lclgetty_exec_t)
domain_interactive_fd(lclgetty_t)

type lclgetty_etc_t;
typealias lclgetty_etc_t alias etc_lclgetty_t;
files_config_file(lclgetty_etc_t)

type lclgetty_lock_t;
files_lock_file(lclgetty_lock_t)

type lclgetty_log_t;
logging_log_file(lclgetty_log_t)

type lclgetty_tmp_t;
files_tmp_file(lclgetty_tmp_t)

type lclgetty_var_run_t;
files_pid_file(lclgetty_var_run_t)

########################################
#
# Getty local policy
#

# Use capabilities.
#allow lclgetty_t self:capability { dac_override sys_resource sys_tty_config fowner fsetid };
dontaudit lclgetty_t self:capability chown;
#dontaudit lclgetty_t self:capability sys_tty_config;
#allow lclgetty_t self:process { getpgid getsession signal_perms };
#allow lclgetty_t self:capability { sys_tty_config };

allow lclgetty_t lclgetty_etc_t:dir r_dir_perms;
allow lclgetty_t lclgetty_etc_t:file r_file_perms;
allow lclgetty_t lclgetty_etc_t:lnk_file { getattr read };
files_etc_filetrans(lclgetty_t,lclgetty_etc_t,{ file dir })

allow lclgetty_t lclgetty_lock_t:file create_file_perms;
files_lock_filetrans(lclgetty_t,lclgetty_lock_t,file)

allow lclgetty_t lclgetty_log_t:file create_file_perms;
logging_log_filetrans(lclgetty_t,lclgetty_log_t,file)

#allow lclgetty_t lclgetty_tmp_t:file create_file_perms;
#allow lclgetty_t lclgetty_tmp_t:dir create_dir_perms;
#files_tmp_filetrans(lclgetty_t,lclgetty_tmp_t,{ file dir })

allow lclgetty_t lclgetty_var_run_t:file create_file_perms;
allow lclgetty_t lclgetty_var_run_t:dir rw_dir_perms;
files_pid_filetrans(lclgetty_t,lclgetty_var_run_t,file)

#kernel_list_proc(lclgetty_t)
#kernel_read_proc_symlinks(lclgetty_t)

#dev_read_sysfs(lclgetty_t)

#fs_search_auto_mountpoints(lclgetty_t)
# for error condition handling
#fs_getattr_xattr_fs(getty_t)

#mcs_process_set_categories(getty_t)

#mls_file_read_up(getty_t)
#mls_file_write_down(getty_t)

# Chown, chmod, read and write ttys.
term_use_all_user_ttys(lclgetty_t)
#term_use_unallocated_ttys(getty_t)
term_setattr_all_user_ttys(lclgetty_t)
#term_setattr_unallocated_ttys(getty_t)
#term_setattr_console(getty_t)
#term_dontaudit_use_console(getty_t)

auth_rw_login_records(lclgetty_t)

#corecmd_search_bin(lclgetty_t)
#corecmd_search_sbin(lclgetty_t)

#files_rw_generic_pids(lclgetty_t)
###
files_read_etc_runtime_files(lclgetty_t)
files_read_etc_files(lclgetty_t)
#files_search_spool(getty_t)

init_rw_utmp(lclgetty_t)
#init_use_script_ptys(getty_t)
#init_dontaudit_use_script_ptys(getty_t)

libs_use_ld_so(lclgetty_t)
libs_use_shared_libs(lclgetty_t)

###locallogin_domtrans(getty_t)

logging_send_syslog_msg(lclgetty_t)

miscfiles_read_localization(lclgetty_t)

###ifdef(`targeted_policy',`
### term_dontaudit_use_unallocated_ttys(getty_t)
### term_dontaudit_use_generic_ptys(getty_t)
###')

###optional_policy(`
### mta_send_mail(getty_t)
###')

# Keeps nscd quiet in logs but not running so no problem
optional_policy(`
nscd_socket_use(lclgetty_t)
')

optional_policy(`
ppp_domtrans(lclgetty_t)
')

###optional_policy(`
### udev_read_db(lclgetty_t)
###')



Monday 25 April 2011

Building a RHEL 6/CentOS 6 HA Cluster for LAN Services (part 6)

Tidying up
That's mostly it. I'd recommend a monthly cron job to check that the two nodes' DRBD devices are fully in sync. I originally thought something like this would work:

# Check DRBD integrity every third Sunday of the month
38 6 15-21 * 7 /sbin/drbdadm verify all


However cron ORs these two fields, so this would run every Sunday and every day from the 15th to the 21st. So instead I have:


38 6 15-21 * * /usr/local/sbin/drbdverifysun >/dev/null 2>&1


and then this script (drbdverifysun) has:


#!/bin/bash

# Only run the verify on a Sunday; cron is already restricting the date range
if [ "`date +%a`" = "Sun" ] ; then
    /sbin/drbdadm verify all
fi

so that it only runs on a Sunday.

An occasional cron job to check that the fence device is still pingable from the nodes would probably also be a good idea.
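Something along these lines would do, a sketch only (bldg1ux01fd being the fence device's name from the /etc/hosts file below):

#!/bin/bash
# mail root if the fence device stops answering pings from this node
if ! ping -c 3 -q bldg1ux01fd >/dev/null 2>&1 ; then
    echo "Fence device bldg1ux01fd unreachable from `hostname`" | \
        mail -s "fence device check failed" root
fi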

I'd also make sure that you can run all the services on either node; you don't want to discover that doesn't work when a node fails! You can move them around with clusvcadm, or in the luci web console (which I haven't needed or used up to now, but it is useful for monitoring services or moving them around).
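For example, relocating the named service onto the second node with clusvcadm looks like this:

clusvcadm -r named -m bldg1ux01n2i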

If you want to use iptables on the nodes (and I do), make life easy for yourself and fully open up the bond1 interface (the back-to-back connection) to ACCEPT on both nodes. There is a lot of multicasting etc. going on, and you'll just make work for yourself trying to see what needs to be opened up. I'd just tie down the services allowed on the main network interface.
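On the nodes that amounts to a rule like this near the top of /etc/sysconfig/iptables (RHEL 6 uses the INPUT chain by default):

-A INPUT -i bond1 -j ACCEPT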

One thing missing from my cluster setup so far is kerberized NFSv4. I will hopefully get a chance to revisit that.

Here are the final key files at the end. First my cluster.conf:

<?xml version="1.0"?>
<cluster config_version="48" name="bldg1ux01clu">
<cman expected_votes="1" two_node="1"/>
<clusternodes>
<clusternode name="bldg1ux01n1i" nodeid="1" votes="1">
<fence>
<method name="apc7920-dual">
<device action="off" name="apc7920" port="1"/>
<device action="off" name="apc7920" port="2"/>
<device action="on" name="apc7920" port="1"/>
<device action="on" name="apc7920" port="2"/>
</method>
<method name="bldg1ux01n1drac">
<device name="bldg1ux01n1drac"/>
</method>
</fence>
</clusternode>
<clusternode name="bldg1ux01n2i" nodeid="2" votes="1">
<fence>
<method name="apc7920-dual">
<device action="off" name="apc7920" port="3"/>
<device action="off" name="apc7920" port="4"/>
<device action="on" name="apc7920" port="3"/>
<device action="on" name="apc7920" port="4"/>
</method>
<method name="bldg1ux01n2drac">
<device name="bldg1ux01n2drac"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<failoverdomains>
<failoverdomain name="bldg1ux01A" ordered="1" restricted="1">
<failoverdomainnode name="bldg1ux01n1i" priority="1"/>
<failoverdomainnode name="bldg1ux01n2i" priority="2"/>
</failoverdomain>
<failoverdomain name="bldg1ux01B" ordered="1" restricted="1">
<failoverdomainnode name="bldg1ux01n1i" priority="2"/>
<failoverdomainnode name="bldg1ux01n2i" priority="1"/>
</failoverdomain>
</failoverdomains>
<resources>
<nfsexport name="bldg1cluexports"/>
<ip address="10.1.10.25" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00dhcpd" fstype="gfs2" mountpoint="/data/dhcpd" name="dhcpdfs" options="acl"/>
<ip address="10.1.10.26" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00named" fstype="gfs2" mountpoint="/data/named" name="namedfs" options="acl"/>
<ip address="10.1.10.27" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00cups" fstype="gfs2" mountpoint="/data/cups" name="cupsfs" options="acl"/>
<ip address="10.1.10.28" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00httpd" fstype="gfs2" mountpoint="/data/httpd" name="httpdfs" options="acl"/>
<ip address="10.1.10.29" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00projects" fstype="gfs2" mountpoint="/data/projects" name="projectsfs" options="acl"/>
<nfsclient name="nfsdprojects" options="rw" target="10.0.0.0/8"/>
<ip address="10.1.10.30" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00home" fstype="gfs2" mountpoint="/data/home" name="homefs" options="acl"/>
<nfsclient name="nfsdhome" options="rw" target="10.0.0.0/8"/>
</resources>
<service autostart="1" domain="bldg1ux01A" exclusive="0" name="dhcpd" recovery="relocate">
<script file="/etc/init.d/dhcpd" name="dhcpd"/>
<ip ref="10.1.10.25"/>
<clusterfs ref="dhcpdfs"/>
</service>
<service autostart="1" domain="bldg1ux01A" exclusive="0" name="named" recovery="relocate">
<clusterfs ref="namedfs"/>
<ip ref="10.1.10.26"/>
<script file="/etc/init.d/named" name="named"/>
</service>
<service autostart="1" domain="bldg1ux01B" exclusive="0" name="cups" recovery="relocate">
<script file="/etc/init.d/cups" name="cups"/>
<ip ref="10.1.10.27"/>
<clusterfs ref="cupsfs"/>
</service>
<service autostart="1" domain="bldg1ux01B" exclusive="0" name="httpd" recovery="relocate">
<clusterfs ref="httpdfs"/>
<clusterfs ref="projectsfs"/>
<ip ref="10.1.10.28"/>
<apache config_file="conf/httpd.conf" name="httpd" server_root="/data/httpd/etc/httpd" shutdown_wait="10"/>
</service>
<service autostart="1" domain="bldg1ux01A" exclusive="0" name="nfsdprojects" recovery="relocate">
<ip ref="10.1.10.29"/>
<clusterfs ref="projectsfs">
<nfsexport ref="bldg1cluexports">
<nfsclient ref="nfsdprojects"/>
</nfsexport>
</clusterfs>
</service>
<service autostart="1" domain="bldg1ux01B" exclusive="0" name="nfsdhome" recovery="relocate">
<ip ref="10.1.10.30"/>
<clusterfs ref="homefs">
<nfsexport ref="bldg1cluexports">
<nfsclient ref="nfsdhome"/>
</nfsexport>
</clusterfs>
</service>
</rm>
<fencedevices>
<fencedevice agent="fence_apc" ipaddr="192.168.2.3" login="apc" name="apc7920" passwd="securepassword"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.1.10.22" login="fence" name="bldg1ux01n1drac" passwd="securepassword"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.1.10.23" login="fence" name="bldg1ux01n2drac" passwd="securepassword"/>
</fencedevices>
<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
</cluster>

My fstab file:

#
# /etc/fstab
# Created by anaconda on Thu Jan 20 17:37:26 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=2ca89192-0dfa-45ab-972d-9fd15e5c6414 /                       ext4    defaults        1 1
UUID=7ab69be7-52fd-4f08-b08b-f9aea7c7ef70 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/cluvg00/lv00dhcpd /data/dhcpd gfs2 acl 0 0
/dev/cluvg00/lv00named /data/named gfs2 acl 0 0
/dev/cluvg00/lv00cups /data/cups gfs2 acl 0 0
/dev/cluvg00/lv00httpd /data/httpd gfs2 acl 0 0
/dev/cluvg00/lv00projects /data/projects gfs2 acl 0 0
/dev/cluvg00/lv00home /data/home gfs2 acl 0 0
/dev/cluvg00/lv00lclu /data/lclu gfs2 acl 0 0


And /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.10.20 bldg1ux01n1.lan bldg1ux01n1
10.1.10.21 bldg1ux01n2.lan bldg1ux01n2
192.168.1.1 bldg1ux01n1i
192.168.1.2 bldg1ux01n2i
192.168.2.1 bldg1ux01n1f
192.168.2.2 bldg1ux01n2f
192.168.2.3 bldg1ux01fd
10.1.10.22 bldg1ux01n1drac bldg1ux01n1drac.lan.
10.1.10.23 bldg1ux01n2drac bldg1ux01n2drac.lan.
10.1.10.25 bldg1cludhcp bldg1cludhcp.lan.
10.1.10.26 bldg1cludns bldg1cludns.lan.
10.1.10.27 bldg1clucups bldg1clucups.lan.
10.1.10.28 bldg1cluhttp bldg1cluhttp.lan.
10.1.10.29 bldg1clunfsprojects bldg1clunfsprojects.lan.
10.1.10.30 bldg1clunfshome bldg1clunfshome.lan.
10.1.10.32 bldg1clusmbA bldg1clusmbA.lan.
10.1.10.33 bldg1clusmbB bldg1clusmbB.lan.

Well, that should be it. I just wanted to write this up because I found no single resource online to get all this going. Hopefully it will spare someone out there from having to grub around looking for information the way I had to.

More Power to your Penguins


Building a RHEL 6/CentOS 6 HA Cluster for LAN Services (part 5)

Clustered NFS Server
I'm now going to add clustered NFS services. I'm going to have a shared projects area and an area for homedirs. To provide rudimentary load balancing I'm going to serve these from separate nodes by default. Also, as these are going to be using the lion's share of my storage, I'm going to make them quite large.

I don't know for sure, but I suspect that for a large data area where performance matters it's probably not a good idea to grow these in small chunks; I'd assume that might lead to LVM fragmentation, which may hurt performance.

First step, let's bring up the storage as ever (on one node):


/sbin/lvcreate --size 200G --name lv00home cluvg00
/sbin/lvcreate --size 500G --name lv00projects cluvg00

/sbin/mkfs -t gfs2 -p lock_dlm -j 2 -t bldg1ux01clu:home /dev/cluvg00/lv00home
/sbin/mkfs -t gfs2 -p lock_dlm -j 2 -t bldg1ux01clu:projects /dev/cluvg00/lv00projects

Update fstab and mount these up on both nodes.
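The fstab additions and mounts (on both nodes) look like this, matching the full fstab shown in part 6 above:

/dev/cluvg00/lv00projects /data/projects gfs2 acl 0 0
/dev/cluvg00/lv00home /data/home gfs2 acl 0 0

mkdir -p /data/projects /data/home
mount /data/projects
mount /data/home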

We also need to add a parameter to make statd cluster- and failover-aware (or at least to help it with this, or so I'm told). So on both nodes add to /etc/sysconfig/nfs:

STATD_HA_CALLOUT="/usr/sbin/clunfslock"


In the cluster.conf file we need to add a single nfsexport resource (this ensures the daemons are working) and an nfsclient resource for each thing we are exporting.

<ip address="10.1.10.29" monitor_link="1"/>

<clusterfs device="/dev/cluvg00/lv00projects" fstype="gfs2" mountpoint="/data/projects" name="projectsfs" options="acl"/>
<nfsclient name="nfsdprojects" options="rw" target="10.0.0.0/8"/>                                      
<ip address="10.1.10.30" monitor_link="1"/>
<clusterfs device="/dev/cluvg00/lv00home" fstype="gfs2" mountpoint="/data/home" name="homefs" options="acl"/>
<nfsclient name="nfsdhome" options="rw" target="10.0.0.0/8"/>

And then two service definitions for each of these:


<service autostart="1" domain="bldg1ux01A" exclusive="0" name="nfsdprojects" recovery="relocate">
               <ip ref="10.1.10.29"/>
                <clusterfs ref="projectsfs">
                     <nfsexport ref="bldg1cluexports">
                           <nfsclient ref="nfsdprojects"/>                                                                           
                      </nfsexport>
                 </clusterfs>
</service>
 <service autostart="1" domain="bldg1ux01B" exclusive="0" name="nfsdhome" recovery="relocate">
                <ip ref="10.1.10.29"/>
               <clusterfs ref="homefs">
               <nfsexport ref="bldg1cluexports">
                    <nfsclient ref="nfsdhome"/>
             </nfsexport>
      </clusterfs>
</service>


Notice that they are in different failover domains, to direct them to be served by different nodes by default (unless one fails, that is).

There is an issue with this and NFSv3 clients. The portmapper replacement in RHEL 6, rpcbind, by default replies to incoming requests using the node IP rather than the service IP. This confuses client firewalls, so mounts fail. The only real workaround just now is to fully open up both node IPs in the client firewalls, e.g. at the bottom of the client machine's /etc/sysconfig/iptables on, say, RHEL 5:


-A RH-Firewall-1-INPUT -s 10.1.10.20 -j ACCEPT
-A RH-Firewall-1-INPUT -s 10.1.10.21 -j ACCEPT


Not great (bz#689589). Sadly the rpcbind flags used for multi-homing, which should help with this, don't seem to work when the IP isn't up at the time rpcbind is started.

Again, add these service IPs to DNS and the hosts file:

10.1.10.29   bldg1clunfsprojects bldg1clunfsprojects.lan
10.1.10.30   bldg1clunfshome bldg1clunfshome.lan

Bump the cluster.conf version number, verify the file and propagate it, and you should be able to mount the exports, either as a hard mount or via the automounter on a client machine, e.g.

mount bldg1clunfshome:/data/home /mnt
mount bldg1clunfsprojects:/data/projects /mnt

or

ls /net/bldg1clunfshome/data/home
ls /net/bldg1clunfsprojects/data/projects
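For reference, the verify-and-propagate step mentioned above is the same as in the earlier parts, roughly:

# on the node where cluster.conf was edited
ccs_config_validate
# push the bumped config_version out to the cluster; on some releases
# you need to pass the new version number explicitly
cman_tool version -r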

Clustered Samba


NOTE: I have recently discovered that Red Hat do not allow you to share a GFS2 filesystem between local access (e.g. Samba) and NFS. This is due to a lack of interoperability between local file locking and NFS file locking (flocks vs plocks). On other filesystems this may lead to FILE-level corruption; on GFS2, however, it may lead to FILESYSTEM-level corruption! And/or kernel panics (flocks, plocks and glocks not getting along, and I'm not making this up!).


So the notes below are fine if the filesystem you share out over Samba isn't also shared out over NFS (i.e. not as below, but a new and different filesystem). This is very unfortunate, as if you are anything like me it's exactly what you want to do (share files between Windows and Linux). Alternatively you can share out over Samba an NFS mount (what RH say to do), but that is not recommended by the Samba people. I'm also pretty sure my backup software won't like backing up an NFS mount!


My solution, which I will document here, was radical surgery: reimplementing my cluster using ext4 failover mounts. I can easily live with locks not working between Samba and NFS (I never expected that to work); I can't live with filesystem corruption and kernel panics.


The setup originally documented here may eventually work once bug #580863 is resolved. If you want to see the rather murky world of Linux file locking, there is a great article on it here. I have left the original text unchanged below in the hope that the bug gets resolved.




There are two ways of clustering Samba. One is to use a failover method in the cluster.conf file. But a newer way is to use Samba's relatively new built-in clustering. This provides load balancing (whereas failover Samba only provides HA). This method sits outside the standard cluster.conf, but RH ship it, so let's use it.

You need to ensure the ctdb package is installed on both nodes.

I'm going to first create a common locking filesystem; clustered Samba uses this to share a locking directory. For my own purposes outside Samba I also use it to hold locks for cron jobs etc. that can run on either node. I'm also going to site my click-to-print Windows printer drivers in here.

So for this it's the usual deal and setup (with some added steps for the Samba areas):


/sbin/lvcreate --size 2G --name lv00lclu cluvg00
/sbin/mkfs -t gfs2 -p lock_dlm -j 2 -t bldg1ux01clu:lclu /dev/cluvg00/lv00lclu

mkdir /data/lclu/samba
mkdir /data/lclu/samba/ctdb
mkdir /data/lclu/samba/drivers
chmod 775 /data/lclu/samba/drivers
chown root:itstaff /data/lclu/samba/drivers


The "itstaff" group above are the people allowed to add drivers to the server.

I now edit /etc/sysconfig/ctdb on both nodes; the options I have are:



CTDB_RECOVERY_LOCK="/data/lclu/samba/ctdb/.ctdb.lock"
CTDB_PUBLIC_INTERFACE=bond0
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
CTDB_SAMBA_CHECK_PORTS="445"
CTDB_MANAGES_WINBIND=no
CTDB_MANAGES_VSFTPD=no
CTDB_MANAGES_HTTPD=no
CTDB_NODES=/etc/ctdb/nodes
CTDB_DEBUGLEVEL=ERR

In /etc/sysconfig/samba I have (both nodes):
# Options to smbd
SMBDOPTIONS="-D -p 445"
# Options to nmbd
NMBDOPTIONS="-D"
# Options for winbindd
WINBINDOPTIONS=""

I prefer using port 445, which should be lighter weight than port 139 (with its NetBIOS wrapping).

Now I put the IP addresses of my private network into /etc/ctdb/nodes (clustered Samba will use these for internal comms), on both nodes:
192.168.1.1
192.168.1.2

Then in /etc/ctdb/public_addresses I put the public IPs that I want this service to run at, with netmask, again on both nodes:
10.1.10.32/24
10.1.10.33/24

Then just my modified /etc/samba/smb.conf:

[global]
        workgroup = MYDOMAIN
        clustering = yes
        netbios name = bldg1clusmb
        max log size = 100
        preserve case = yes
        short preserve case = yes
        security = ADS
        realm = MYDOMAIN.LAN
        password server = dclan.lan
        encrypt passwords = yes
        load printers = yes
        local master = no
        client use spnego = yes
        log level = 1
        printing = cups
        printcap = cups
        cups options = "raw"
        use client driver = no
        printer admin = @itstaff
        map to guest = Bad User
        guest account = guest

[printers]
        printable = yes
        path = /var/spool/samba
        browseable = no
        public = yes
        guest ok = yes
        writable = no
        default devmode = yes

[print$]
        comment = Windows Printer Driver Download Area
        path = /data/lclu/samba/drivers
        browseable = no
        guest ok = yes
        read only = yes
        write list = @itstaff
        force group = +itstaff
        force create mode = 0775
        force directory mode = 0775

; Local disk configurations

[projects]
        guest ok = no
        writeable = yes
        path = /data/projects
        force create mode = 0664
        force directory mode = 0775

[user]
        guest ok = no
        writeable = yes
        path = /data/home

The first thing to note is that I don't have a homes share. This is because I use the automounter: any homedirs mounted from the cluster go via NFS (as they refer to a service IP, by the name bldg1clunfshome), and Samba shares of NFS mounts tend not to work very well (due to locking issues) and will be slower. So I have created a "user" share through which people can get straight to their homedirs.
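For what it's worth, the corresponding automounter map entry would be something like this (assuming a wildcard auto.home map referenced from auto.master):

*    -fstype=nfs,rw    bldg1clunfshome:/data/home/&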

I add the public addresses to DNS and hosts. 
10.1.10.32   bldg1clusmbA bldg1clusmbA.lan
10.1.10.33   bldg1clusmbB bldg1clusmbB.lan

BUT this time we also want to add to DNS a name, bldg1clusmb, that points to two A records, one for each service IP address that I'm using for Samba, e.g.

# nslookup bldg1clusmb
Server:         10.1.10.26
Address:        10.1.10.26#53

Name:   bldg1clusmb.lan
Address: 10.1.10.32
Name:   bldg1clusmb.lan
Address: 10.1.10.33

I also have an "netbios name = bldg1clusmb" parameter, cause as I'm using AD I need to join the Samba to AD with the name the clients will refer to it as. But you'll need to start it before joining. 

Stop any Sambas running on the nodes and chkconfig them off:

/etc/init.d/smb stop
/sbin/chkconfig smb off

And on both nodes start the ctdb service and chkconfig on:

/etc/init.d/ctdb start
/sbin/chkconfig ctdb on

You can check the status of ctdb with "ctdb status". It will take a little while to settle, but eventually it should return:

Number of nodes:2
pnn:0 192.168.1.1    OK (THIS NODE)
pnn:1 192.168.1.2    OK
Generation:369849912
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1

Then you can join to AD or whatever, and connect from Windows clients using \\bldg1clusmb; DNS should now round-robin between the two nodes across the various clients.
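The join itself is the standard Samba one, something like (the admin account here is illustrative):

net ads join -U Administrator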

There is a bit of an issue with printing. Sadly the registry information about printers doesn't get copied between the nodes as yet. My hack around this is to stop ctdb on one of the nodes, then install the printers or printer drivers on the server that's up (just using the cluster name \\bldg1clusmb). When finished, copy the files ntdrivers.tdb, ntforms.tdb and ntprinters.tdb plus the printing directory (and contents), all in /var/lib/samba, to the other (down) node's /var/lib/samba, then restart ctdb on the down node. As they share the driver directory, this should now allow both nodes to perform click-to-print automatic client driver installs. Just remember to repeat this procedure every time you make printer driver changes (or change any default settings on these printers for Windows). A bit of a hassle, but it seems to work.
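As a sketch of that copy, run on the down node with its ctdb stopped (assuming root ssh between the nodes, and bldg1ux01n1i as the live node):

# pull the printing tdbs and driver metadata from the live node
scp bldg1ux01n1i:/var/lib/samba/nt{drivers,forms,printers}.tdb /var/lib/samba/
scp -r bldg1ux01n1i:/var/lib/samba/printing /var/lib/samba/
# then restart ctdb on this node
/etc/init.d/ctdb start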