
Welcome to Simone Giustetti's wiki pages.


Languages: English - Italiano



Configuring an NFS Share on Celerra

Celerra is a family of NAS (Network Attached Storage) arrays produced by Emc². It can work as a front end to independent Emc² storage, such as a Clariion or a Symmetrix / Dmx, or come equipped with its own internal disks. In the all-in-one configuration, an attached Clariion array provides the raw disks.

What follows is a procedure detailing the configuration of a file system shared over the network with client workstations through the NFS protocol. Some notions and definitions precede the actual procedure; the introduction will help in understanding the command syntax and the Celerra internal workings.

Every Celerra consists of a number of Data Movers, from a minimum of 2 to a maximum of 15, with as many as 14 active concurrently. Data Movers manage shared file system definitions and provide network connectivity to external clients. The array is controlled by means of a Control Station, which provides administrative functionality through a CLI (Command Line Interface). A Celerra can also be managed through a Web Interface reachable via the Control Station public IP address: share configuration and maintenance can be carried out through that interface from any web browser.

From now on we will presume that:

  • Network connectivity is in place and network services / daemons are well configured and running.
  • A DNS server is running and both forward and reverse lookups work flawlessly.
  • The Celerra comes with an integrated Clariion storage array.

CLI commands will be used through the character interface to configure a share with the following features:

  • Name: cvs-exportdb.
  • Size: 200 GB.
  • Located on: the second Celerra Data Mover (server_2), cl001nas.
  • Exported through the network interface: cl001nas.fe.pro.nas.
  • Accessible to client hosts: db01acvs and db01bcvs for domain pro.nas.


NFS share configuration

Connect to the Control Station, a Linux workstation, via the SSH protocol. Point to the server host name, or to the IP address if the DNS service cannot resolve the host name. To connect from a Windows machine an SSH client, such as PuTTY, is needed and should be installed; many such clients, both Free Software and Open Source, are available on the Internet. Connect using user name nasadmin. User nasadmin has enough privileges to run administration commands on the Control Station and its profile automatically loads the needed environment variables. The user password may vary according to your organization policy.
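
As a minimal connection sketch (the Control Station host name cl001cs.pro.nas is a hypothetical placeholder, to be replaced with the name or address used in your environment):

  ssh nasadmin@cl001cs.pro.nas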

Once connected, create a new file system; it will later be tied to the target Data Mover. Creating a new file system is an activity that alters the system status. Celerra commands follow a strict naming convention: all commands that alter the system share the prefix nas_, while those that alter a Data Mover configuration share the prefix server_.
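
As a quick illustration of the convention, and assuming both utilities are available on your DART release, nas_fs -list prints the file systems defined on the array, while server_df server_2 reports disk usage for the second Data Mover:

  nas_fs -list
  server_df server_2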

To avoid potential mistakes, the new file system will be named after the share:

  nas_fs -name cvs-exportdb -create size=200G pool=clar_r5_performance

The meaning of the parameters should be self-explanatory. The source of raw disk space is selected by means of the pool parameter, which is especially useful when many such sources (pools) are connected to a single Celerra. The clar_r5_performance pool is standard in any integrated storage unit; it identifies high performance Clariion disks configured in RAID 5.
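
If you are unsure which pools your array provides, the following read-only query should list them; treat it as a sketch, since option names can differ between DART releases:

  nas_pool -list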

The new file system will be allocated and formatted before the prompt returns. Once the shell is usable again, create a mount point on the target Data Mover where the file system will later be mounted:

  server_mountpoint server_2 -c /cvs-exportdb

The mount point was again named after the share for ease of use. Parameter server_2 selects the target Data Mover, while -c is short for -create.

Mount the file system:

  server_mount server_2 -o rw cvs-exportdb /cvs-exportdb

The command syntax closely mirrors the standard Unix / Linux one, mount -o <options> <device> <mount_point>, with the addition of the Data Mover. The device is the file system backing the share. The rw option controls access: it grants users both read and write privileges.

Once mounted, the file system can be accessed from the Control Station for maintenance through the path /nasmcd/quota/slot_2/cvs-exportdb. The Control Station relies on Autofs to mount a file system only when it is accessed, therefore the command "ls /nasmcd/quota/slot_2/cvs-exportdb" could return an error. To check whether a file system is mounted, run the following command:

  server_mount server_2

This will return a list of the file systems available through the second Data Mover.


Share export

The share can now be made available to the clients. Run all of the export commands that follow:

  server_export server_2 -Protocol nfs -option rw=db01acvs.fe.nas:db01bcvs.fe.nas /cvs-exportdb
  server_export server_2 -Protocol nfs -option root=db01acvs.fe.nas:db01bcvs.fe.nas /cvs-exportdb
  server_export server_2 -Protocol nfs -option access=db01acvs.fe.nas:db01bcvs.fe.nas /cvs-exportdb

As in the previously run commands, the first parameter, server_2, identifies the working Data Mover. The Protocol parameter sets the network protocol the Data Mover will use to export data. The first command exports the file system read only to every client in the attached network and with read / write privileges to the listed clients only. The colon ":" character works as a separator for list items. The root permission in the second command allows the listed clients to retain administrator (root) privileges when mounting the share. The access permission in the third command enables the listed clients to mount the share and later access it as a local file system.
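
The three calls can presumably be collapsed into a single invocation by separating the options with commas; the following one-liner is a sketch based on the usual server_export syntax and should be verified against your DART release:

  server_export server_2 -Protocol nfs -option rw=db01acvs.fe.nas:db01bcvs.fe.nas,root=db01acvs.fe.nas:db01bcvs.fe.nas,access=db01acvs.fe.nas:db01bcvs.fe.nas /cvs-exportdb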

Both host names and IP addresses can be used in the client list. For host names to work, a running DNS service is required.
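
To verify name resolution beforehand, a couple of quick lookups can be run from the Control Station or from any client; the host utility is used here and the 192.168.10.21 address is a made-up placeholder:

  host db01acvs.fe.nas
  host 192.168.10.21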

To check whether shares were correctly exported, query the server from any client using command showmount -e <hostname>:

  showmount -e cl001nas.fe.pro.nas

The mount command can be used to mount shares on client machines. For NFS shares the file system type must be passed to the command line along with some other specific options. The syntax is:

  mount -t nfs -o <options> <server>:<share> <mountpoint>

Where <server> is the Fully Qualified Domain Name of the file system exporting machine or, were the DNS unavailable, the corresponding IP address. The command to mount our file system on clients db01acvs or db01bcvs is:

  mount -t nfs -o rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp cl001nas.fe.pro.nas:/cvs-exportdb /mnt/db/cvs/export

The mount operation can be automated at boot time for both clients by updating the /etc/fstab file (/etc/vfstab for Solaris workstations) with the following line:

  cl001nas.fe.pro.nas:/cvs-exportdb /mnt/db/cvs/export nfs rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp 0 0
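
Before rebooting, the new entry can be tested in place; assuming the mount point directory already exists (create it with mkdir -p otherwise), mount reads the options from /etc/fstab when given the mount point alone:

  mount /mnt/db/cvs/export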

The share will be mounted together with the local file systems during the boot procedure. Beware: in the presence of networking issues, the machine start-up sequence could freeze in the mount phase until a timeout threshold is reached and an NFS client error is triggered. A very long boot time is to be avoided for a server that could provide invaluable services to users. To avoid service disruption and downtime it is better to install and configure autofs instead of tweaking the /etc/fstab file.
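
A minimal autofs sketch follows, assuming the same mount point as above; file locations and the service name vary between distributions, so treat it as a starting point rather than a ready-made configuration:

  # /etc/auto.master: delegate the /mnt/db/cvs directory to an indirect map
  /mnt/db/cvs   /etc/auto.cvs   --timeout=600

  # /etc/auto.cvs: mount the share on demand under /mnt/db/cvs/export
  export   -rw,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp   cl001nas.fe.pro.nas:/cvs-exportdb

After restarting the autofs service, accessing /mnt/db/cvs/export triggers the mount on demand and the share is unmounted automatically once idle.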


Share creation follow-up actions

Protecting data against accidental deletion is a must for file systems shared among many users. It is better to be prepared in advance to fix the small mishaps that arise from daily file usage.

Celerra arrays provide snapshot functionality for configured file systems. Snapshots can be used as a "fast" back-up solution providing easy recovery of unintentionally deleted data. A snapshot is a dedicated raw disk pool area meant for differential volume back-ups: it provides a copy of all data that were modified after its activation. Snapshots offer the advantage of a reduced recovery time when compared to standard tape back-ups and reduced space usage when compared to a full disk mirror. Snapshots, in fact, use a fraction of the original volume disk space: usually 15 to 20 %. Not being a full copy of the original data, a snapshot cannot be used for a full recovery; it permits a differential one only. Snapshots should be regarded as a tool to instantly freeze data to be later backed up on another medium, a tape for example. They are not a fully secure back-up solution per se.
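
As a rough sizing example based on the figures above: for the 200 GB cvs-exportdb file system, reserving 15 to 20 % of the original volume translates to roughly 30 to 40 GB of snapshot space.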

Command nas_ckpt_schedule will schedule snapshot back-up operations daily, weekly or even monthly. Suppose you want to take a snapshot every night at 03:30 a.m. and keep it for a week before freeing the corresponding disk space; the command syntax is:

  nas_ckpt_schedule -create cvs_exportdb_daily -filesystem cvs-exportdb -recurrence daily -every 1 -runtimes 03:30 -keep 7
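
To check that the schedule was registered, listing the configured snapshot schedules should be enough; the -list option is assumed here and may differ between DART releases:

  nas_ckpt_schedule -list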

After creation, snapshots will be accessible in read only mode to users mounting the share on their local machine. A hidden directory named .ckpt, located in the share root directory, is the access point to the snapshots. To enter the snapshot directory tree use the cd command. Directory .ckpt will contain 7 sub-directories numbered in decreasing order. Each sub-directory is an access point to a daily snapshot and as such contains a data image frozen at activation time. Users can move throughout the directory tree using standard shell commands: cd, ls, etc.
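
For example, on a client that mounted the share under /mnt/db/cvs/export, a deleted file could be recovered with something along these lines, where <snapshot_directory> stands for whichever of the 7 numbered sub-directories holds the wanted version:

  cd /mnt/db/cvs/export/.ckpt
  ls
  cp <snapshot_directory>/some_file /tmp/some_file.restored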


For any feedback, questions, errors and such, please e-mail me at studiosg [at] giustetti [dot] net


External links


Autofs ArchLinux

NFS protocol (FreeBSD Handbook)

NFS protocol (Linux FAQ)

NFS protocol (Wikipedia)

Emc² home page

Celerra home page



Languages: English - Italiano