
How to Set Up an NFS Server on AWS EC2

This tutorial explains how to set up an NFS server that can be used as a shared storage solution (NAS, network-attached storage) for a cluster of web server instances behind a load balancer, for example ELB. If you have a web server cluster with two or more instances serving the same web site content, then these instances must all access the same data, so that each one serves identical content no matter whether the load balancer directs the user to instance 1 or instance n. This can be achieved with an NFS share on an NFS server that all web server instances (the NFS clients) can access.

Install the nfs-utils package on the NFS server instance:

[shell]# yum install nfs-utils[/shell]

The NFS setup also needs the portmap daemon:

[shell]# yum install portmap[/shell]
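
If you also want both services to come up automatically after a reboot, you can enable them in the boot configuration. A minimal sketch, assuming a SysV-init based AMI (for example an older Amazon Linux or CentOS release that still ships the portmap service and the chkconfig tool):

[shell]
# chkconfig portmap on
# chkconfig nfs on
[/shell]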

Every file system being exported to remote EC2 instances via NFS, as well as the access level for those file systems, is listed in the /etc/exports file. When the nfs service starts, the exportfs command runs and reads this file, passes control to rpc.mountd (for NFSv2 or NFSv3) for the actual mounting process, and then to rpc.nfsd, which makes the file systems available to remote instances.

Let's say you want to export the directory /opt/nfstest on the NFS server to two remote instances with the private IPs 192.168.1.9 and 192.168.1.10.

[shell]# vi /etc/exports[/shell]

Add the following lines

[shell]
/opt/nfstest 192.168.1.9/255.255.255.0(rw,no_root_squash,async)
/opt/nfstest 192.168.1.10/255.255.255.0(rw,no_root_squash,async)
[/shell]

and save the file.
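
Note that the address/netmask form used above (192.168.1.9/255.255.255.0) actually matches every host in the 192.168.1.0/24 subnet, not just the instance itself. If you prefer to restrict the export strictly to the two client instances, a sketch of the same entries using plain host addresses would be:

[shell]
/opt/nfstest 192.168.1.9(rw,no_root_squash,async)
/opt/nfstest 192.168.1.10(rw,no_root_squash,async)
[/shell]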

Now we use the exportfs command, which maintains the current table of exported file systems for NFS. This list is kept in a separate file, /var/lib/nfs/xtab, which is read by mountd when a remote instance requests access to mount a directory.
Normally this xtab file is initialized with the list of all file systems named in /etc/exports by invoking exportfs -a.

[shell]# exportfs -a[/shell]
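
To double-check what is currently exported and with which options, you can also run exportfs in verbose mode:

[shell]# exportfs -v[/shell]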

Getting the services started

Starting the Portmapper
NFS depends on the portmapper daemon, called either portmap or rpc.portmap. It needs to be started first.

[shell]# /etc/init.d/portmap start[/shell]

It is worth making sure that it is running before you begin working with NFS:

[shell]# ps aux | grep portmap[/shell]

Now start the NFS daemon:

[shell]# /etc/init.d/nfs start[/shell]

Verifying that NFS is running

[shell]# rpcinfo -p [/shell]

You will get output similar to this:

[shell]
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    749  rquotad
    100005    2   tcp    766  mountd
    100005    3   udp    769  mountd
    100005    3   tcp    771  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    300019    1   tcp    830  amd
    300019    1   udp    831  amd
    100024    1   udp    944  status
    100024    1   tcp    946  status
    100021    1   udp   1042  nlockmgr
    100021    3   udp   1042  nlockmgr
    100021    4   udp   1042  nlockmgr
[/shell]

Please note that portmap listens on port 111 and nfs on port 2049.
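
On EC2 the instance's security group must also allow this traffic, otherwise the clients cannot reach the server even though the daemons are running. A rough sketch using the AWS CLI, where the group name my-nfs-sg and the source range are placeholders (in a VPC you would reference the group by --group-id instead; also keep in mind that the mountd, statd and nlockmgr helpers in the rpcinfo output above use additional, dynamically assigned ports):

[shell]
# aws ec2 authorize-security-group-ingress --group-name my-nfs-sg --protocol tcp --port 111 --cidr 192.168.1.0/24
# aws ec2 authorize-security-group-ingress --group-name my-nfs-sg --protocol udp --port 111 --cidr 192.168.1.0/24
# aws ec2 authorize-security-group-ingress --group-name my-nfs-sg --protocol tcp --port 2049 --cidr 192.168.1.0/24
# aws ec2 authorize-security-group-ingress --group-name my-nfs-sg --protocol udp --port 2049 --cidr 192.168.1.0/24
[/shell]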

If you later decide to add more NFS exports to the /etc/exports file, you will need to either restart the NFS daemon or run the exportfs command again:

[shell]# exportfs -ar[/shell]

Mount the remote file system on the client

First we need to create a mount point:

[shell]# mkdir /opt/localmount [/shell]

Once the NFS client and the mount point are ready, run the mount command to mount the exported NFS remote file system:

[shell]# mount -t nfs 192.168.1.3:/opt/nfstest /opt/localmount[/shell]

Here 192.168.1.3 is the IP address of the NFS server, and -t specifies the file system type.
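
If you want the mount to survive a reboot of the client, you can additionally add an entry to /etc/fstab on the client; a minimal sketch with the same server and paths as above:

[shell]
192.168.1.3:/opt/nfstest  /opt/localmount  nfs  defaults  0 0
[/shell]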

Restart the portmap daemon on the client:

[shell]# /etc/init.d/portmap restart[/shell]

Done!

Now you can verify your NFS mounts with the showmount command, which shows the exports from the NFS server.

[shell]# showmount -e 192.168.1.3[/shell]

which will show output like

[shell]
/opt/nfstest 192.168.1.9/255.255.255.0,192.168.1.10/255.255.255.0
[/shell]
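
As a final check, you can create a test file from the client and confirm that it shows up on the server, which verifies that the rw export works end to end (the file name nfs-test.txt is just an example). On the client:

[shell]# touch /opt/localmount/nfs-test.txt[/shell]

and on the NFS server:

[shell]# ls -l /opt/nfstest/nfs-test.txt[/shell]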
