Changes

From Studiosg
107 bytes added, 09:47, 29 December 2016
Added SEO tags and templates and updated links
Welcome to Simone Giustetti's wiki pages.
{{header_en|title=StudioSG - Celerra shared file system configuration| keyword={{Template:Keyword_en_nas_san}}| description=How to configure a network shared file system on a Celerra NAS by EMC | link_page=celerra_share_nfs}}
 
== Configuring a NFS Share on Celerra ==
 
To avoid potential mistakes, the new file system will be named after the share:
 
   '''nas_fs''' ''-name'' cvs-exportdb ''-create'' size=200G pool=clar_r5_performance
 
The meaning of the parameters should be self-explanatory. The source of raw disk space is selected with the '''pool''' parameter, which is especially useful when many such sources (pools) are connected to a single Celerra. The clar_r5_performance pool is standard in any integrated storage unit; it identifies high-performance Clariion disks configured as '''RAID5'''.
 
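Once the command returns, the allocation can be double checked from the Control Station. A minimal sketch, assuming the standard ''-list'' and ''-info'' options of the '''nas_fs''' command:

   '''nas_fs''' ''-list''
   '''nas_fs''' ''-info'' cvs-exportdb

The former prints a table of every configured file system; the latter reports name, size and pool for the new file system only.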
    
The new file system will be allocated and formatted before the prompt returns. Once the shell is usable again, '''create a mount point on the target Data Mover''' where the file system will later be mounted:
 
   '''server_mountpoint''' server_2 ''-c'' /cvs-exportdb
 
The mount point was again named after the share for ease of use. The ''server_2'' parameter selects the target Data Mover, while ''-c'' is short for ''-create''.
 
    
Mount the file system:
 
   '''server_mount''' server_2 ''-o'' rw cvs-exportdb /cvs-exportdb
 
The command syntax closely follows the standard Unix / Linux form: mount -o <options> <device> <mount_point>, with the addition of the Data Mover name. The device is the file system backing the share. The ''rw'' option controls access: it grants users both read and write privileges.
 
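Before exporting the share it may be worth confirming the mount succeeded. Running '''server_mount''' with the Data Mover name alone should list all mounted file systems, assuming default command behavior:

   '''server_mount''' server_2

The new cvs-exportdb entry should appear in the resulting list together with its mount point.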
The share can now be made available to the clients. Run all of the export commands that follow:
 
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' rw=db01acvs.pro.nas:db01bcvs.pro.nas /cvs-exportdb
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' root=db01acvs.pro.nas:db01bcvs.pro.nas /cvs-exportdb
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' access=db01acvs.pro.nas:db01bcvs.pro.nas /cvs-exportdb
 
As in the previously run commands, the first parameter, ''server_2'', identifies the working Data Mover. The ''Protocol'' parameter sets the network protocol the Data Mover will use to export the data. The first command exports the file system read-only to every client on the attached network and with read / write privileges to the listed clients only; the colon (":") character works as a separator for list items. The root option in the second command permits the listed clients to retain administrator privileges when mounting the share. The '''access''' option in the third command enables the listed clients to mount the share and later access it as a local file system.
 
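The exports currently active on the Data Mover can be reviewed from the Control Station as well; a sketch, assuming '''server_export''' with no further arguments lists them:

   '''server_export''' server_2

Each exported path should be printed together with its protocol and options.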
To check whether shares were correctly exported, query the server from any client using command '''showmount''' -e <hostname>:
 
   '''showmount''' ''-e'' cl001nas.fe.pro.nas
    
The '''mount''' command can be used to mount shares on client machines. For NFS shares the file system type must be passed to the command line along with some other specific options. The syntax is:
 
   '''mount''' ''-t'' nfs ''-o'' <options> <server>:<share> <mountpoint>
 
Where <server> is the Fully Qualified Domain Name of the machine exporting the file system or, should DNS be unavailable, its IP address. The command to mount our file system on clients db01acvs or db01bcvs is:
 
   '''mount''' ''-t'' nfs ''-o'' rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp cl001nas.fe.pro.nas:/cvs-exportdb /mnt/db/cvs/export
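To have clients remount the share automatically at boot, an equivalent entry can be added to ''/etc/fstab''. A sketch using the same server, path and options as the manual mount above:

   cl001nas.fe.pro.nas:/cvs-exportdb   /mnt/db/cvs/export   nfs   rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp   0 0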
Command '''nas_ckpt_schedule''' will schedule snapshot back-up operations daily, weekly or even monthly. Suppose you want to take a snapshot every night at 03:30 a.m. and keep it for a week before freeing the corresponding disk space; the command syntax is:
 
   '''nas_ckpt_schedule''' ''-create'' cvs_exportdb_daily ''-filesystem'' cvs-exportdb ''-recurrence'' daily ''-every'' 1 ''-runtimes'' 03:30 ''-keep'' 7
    
After creation, snapshots will be accessible in read-only mode to users mounting the share on their local machine. A hidden directory named '''.ckpt''', located in the share root directory, is the access point to the snapshots. To enter the snapshot directory tree use command '''cd'''. Directory .ckpt will contain 7 sub-directories numbered in decreasing order. Each sub-directory is an access point to a daily snapshot and as such contains a data image frozen at activation time. Users can move throughout the directory tree using standard shell commands: cd, ls, etc.
 
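Recovering an accidentally deleted or overwritten file then reduces to a plain copy out of the snapshot tree. A sketch, where the snapshot sub-directory name and the file name are purely hypothetical placeholders:

   ls /mnt/db/cvs/export/.ckpt
   cp /mnt/db/cvs/export/.ckpt/<snapshot_dir>/<lost_file> /mnt/db/cvs/export/

The first command lists the available snapshots; the second copies the lost file from the chosen snapshot back into the live file system.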
[http://www.emc.com/products/family/celerra-family.htm Celerra home page]
 
      
----
 
{{footer_en | link_page=celerra_share_nfs}}
