Languages: '''English''' - [[celerra_data_migration_service | Italiano]]

----
== Migration Prerequisites ==
 
Before migrating the share, a few checks need to be performed, mainly on the network infrastructure and the data source.
=== Network Connectivity ===
 
One of the main concerns is network connectivity. The data source and the Celerra must be able to communicate with each other, and the network infrastructure must be able to sustain the load of the data transfer. The throughput check becomes fundamental when you plan to use a general purpose network instead of a dedicated one: a choked network could stretch the migration over a very long period, and hosts located on the same network could suffer from low throughput and consequently poor performance. It is also wise to evaluate alternative paths for moving the data, such as a standard back-up / restore sequence.
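A rough way to gauge the available throughput is to time the transfer of a sizeable file across the network. A minimal sketch, assuming the source share is temporarily mounted on a test client at the hypothetical path '''/mnt/test''':
   '''dd''' if=/dev/zero of=/mnt/test/throughput_probe bs=1M count=1024
   '''rm''' /mnt/test/throughput_probe
'''dd''' prints the measured transfer rate on completion; repeating the test at different times of day helps spot contention on a shared network.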
* Writing to the shared source disk will be permitted until the last minute.
=== File System Sizing ===
 
The target file system must be sized large enough to contain the source one: if disk space runs out during the data transfer, the migration will fail. The Celerra Network Server family of products provides a '''Perl''' script, '''diskUsage.pl''', to calculate the exact file system size and avoid issues caused by bad planning. The '''diskUsage.pl''' script must be run on the host where the source file system is located. For a detailed description of the script and its options, please refer to the official documentation.
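As a rough cross-check with standard tools, the space actually used by the share can be measured directly on the source host; note that '''du''' does not account for the block size difference on the target:
   '''du''' ''-sh'' /SHARE/app_doc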
    
In our example we will assume that the target file system size is more than enough to contain the source one.
=== Block Size Check ===
 
Another size-related check worth performing concerns the block size. Unless instructed otherwise, a Celerra creates all new file systems with a default block size of 8 KB. The source file system could use a smaller block size, and the difference could result in excessive fragmentation on the target file system, with the performance issues that follow. To avoid any inconvenience, force a block size equal to that of the source file system.
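On a Linux source host the block size of an ext2/ext3 file system can be read with '''tune2fs'''; a minimal sketch, assuming the share lives on the hypothetical device '''/dev/sdb1''':
   '''tune2fs''' ''-l'' /dev/sdb1 | '''grep''' 'Block size'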
=== Creating the Target File System ===
With all checks successfully completed, it's now time to create the new file system that we will later export through the Celerra. In our example we will log in to the Control Station via a terminal window and use the commands provided by the Command Line Interface. For a brief description of the creation procedure please refer to a previous article of mine: [[En/celerra_share_nfs | celerra share nfs]].
    
Connect to the '''Control Station''' as user '''nasadmin''' with the proper password. To create the file system, issue the command:
   '''nas_fs''' ''-name'' cvs_app_doc ''-type'' mgfs ''-create'' size=300G pool=clar_r5_performance ''-option'' nbpi=4096
 
Option '''-type mgfs''' imposes a file system type meant for data migration. When the migration starts, the file system will index the contents of the source and then drive the data copy service so that files being accessed are given priority. Downtime will be minimal as a result.
    
Option '''-option nbpi=4096''' imposes a 4 KB block size.
 
We will now create a '''mount point''' and mount the file system there:
   '''server_mountpoint''' server_2 ''-c'' /cvs_app_doc
   '''server_mount''' server_2 ''-o'' rw cvs_app_doc /cvs_app_doc
 
----
Checks and pre-migration activities are over; we'll now copy data from the source to the target file system.
=== Early Phase ===
 
The '''CDMS''' service must be running for the copy to start. To ensure that the service is running, issue the command:
   '''server_setup''' server_2 ''-Protocol'' cdms ''-option'' start
 
Check network connectivity one last time:
 
   '''server_ping''' server_2 nfs001.pro.nas
 
The target file system should now be mounted on the '''Data Mover'''. To check the mount status, run the command:
   '''server_cdms''' server_2 ''-info'' cvs_app_doc
 
A positive feedback output should look like:
 
   server_2 :
All listed commands can be run by user '''nasadmin''' from a terminal window logged in to the Celerra Control Station.
=== Source Configuration Update ===
 
If the source share configuration has not been updated yet, it's now time to proceed. Enable the read-only option for every client host with access rights to the shared file system, in order to block write operations. The configuration change '''requires a service stop''' that will affect all clients using the shared resource, and should therefore be planned together with the application support groups. For a Red Hat Linux file server, the update is limited to the '''/etc/exports''' file. The line dedicated to the share should look somewhat like:
   /SHARE/app_doc ''-rw'' db01acvs.pro.nas db01bcvs.pro.nas
 
Update it to:
   /SHARE/app_doc ''-ro'' db01acvs.pro.nas db01bcvs.pro.nas cl001nas.be.pro.nas
 
Then '''restart the NFS server daemon''' to apply the new settings:
 
   '''service''' nfs restart
The shared file system should be unmounted from the client hosts db01acvs.pro.nas and db01bcvs.pro.nas prior to the nfs001.pro.nas service configuration update, in order to spare them any issue.
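On each client this boils down to a single command, assuming the old share is mounted at the hypothetical path '''/mnt/SHARE/app''' used later in this article:
   '''umount''' /mnt/SHARE/app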
=== File System Association ===
 
Everything is now in place to link the source and target file systems. Run the command:
   '''server_cdms''' server_2 ''-c'' cvs_app_doc ''-t'' nfsv3 ''-p'' /cvs_app_doc ''-s'' nfs001.pro.nas:/SHARE/app_doc
 
When checking the service state, the resulting output should look something like:
   '''server_cdms''' server_2 ''-info'' cvs_app_doc
 
   server_2 :
   nfs=nfs001.pro.nas:/SHARE/app_doc proto=TCP
=== Data Filter ===
 
When performing a migration, not all data needs moving. The CDMS service can filter the source shared content and exclude part of it while copying data. The filter works by reading directives from an '''exclusion file''', which contains a list of all the files and directories to ignore. Every line in the list is an item to be ignored; the first character of each line determines whether the filter target is a single file or a whole directory. A formatting example follows:
 
   f /<path>/<file>
   d /<path>/<directory>
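As a concrete illustration, a hypothetical exclusion file that skips a '''lost+found''' directory and a stray core dump on our source share could read:
   d /SHARE/app_doc/lost+found
   f /SHARE/app_doc/core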
The '''Data Mover''' log can be inspected with the command:
   '''server_log''' server_2
=== Excluding Snapshots ===
 
Sometimes you'll need to migrate data located on storage that uses snapshots to perform back-ups. Snapshots have to be excluded from the migration, otherwise you risk saturating the target file system and causing the entire process to fail. As an example, let's assume we are migrating a file system between two Celerras. The source is configured to execute a daily '''checkpoint''' (that's a snapshot in Celerra terminology) with a weekly retention: the total amount of data read would add up to almost eight times the real file system size, that is, the existing data plus the seven daily checkpoints of the previous week. The workaround consists of adding the following filter to the exclusion file:
 
   d /<file system>/<file system>/.ckpt
   d /<file system>/<file system>/.snapshot
=== Migration Start ===
 
To start the data copy procedure, issue the command:
   '''server_cdms''' server_2 ''-start'' cvs_app_doc ''-p'' /cvs_app_doc ''-l'' /cdms_log_2 ''-e'' /cdms_log_2/exclude_cvs_20110706.txt
 
To monitor the migration status, run the command:
   '''server_cdms''' server_2 ''-info'' cvs_app_doc
 
   server_2 :
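Disk space utilization on the target can also be kept under observation from the Control Station. A minimal sketch, assuming the standard Linux '''watch''' utility is available there:
   '''watch''' ''-n'' 60 '''server_df''' server_2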
That will output the disk space utilization once every minute. Use the keyboard combination ''CTRL+c'' to kill the process.
=== New File System Export ===
 
Once the data copy procedure is running, the Celerra file system can be exported to the client hosts, and the services needing the shared resource can be started again. Issue the following command sequence to export the file system:
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' rw=db01acvs.pro.nas:db01bcvs.pro.nas /cvs_app_doc/cvs_app_doc
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' root=db01acvs.pro.nas:db01bcvs.pro.nas /cvs_app_doc/cvs_app_doc
   '''server_export''' server_2 ''-Protocol'' nfs ''-option'' access=db01acvs.pro.nas:db01bcvs.pro.nas /cvs_app_doc/cvs_app_doc
 
To check the export status from the Control Station, run:
   '''server_export''' server_2 | '''grep''' cvs_app_doc
 
To perform the check from the clients instead, use:
   '''showmount''' ''-e'' cl001nas.fe.pro.nas | '''grep''' cvs_app_doc
 
To mount the file system on clients db01acvs.pro.nas and db01bcvs.pro.nas, connect to both hosts using the '''root''' account and run the command:
   '''mount''' ''-t'' nfs ''-o'' rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp \
       cl001nas.fe.pro.nas:/cvs_app_doc/cvs_app_doc /mnt/SHARE/app
 
The mount operation can be automated at boot time by updating the '''/etc/fstab''' configuration file.
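Assuming the old share was mounted at boot with the same options shown above (a hypothetical pre-migration entry), the relevant line changes from:
   nfs001.pro.nas:/SHARE/app_doc  /mnt/SHARE/app  nfs  rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp  0 0
to:
   cl001nas.fe.pro.nas:/cvs_app_doc/cvs_app_doc  /mnt/SHARE/app  nfs  rw,bg,hard,intr,nfsvers=3,rsize=8192,wsize=8192,proto=tcp  0 0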
The service downtime period stretches from the instant the source share configuration is switched to read-only mode to the subsequent mount of the new share on the client hosts.
=== Data Copy End ===
 
The command
   '''server_cdms''' server_2 ''-info'' cvs_app_doc
 
provides feedback about the file and directory copy status. When the status changes to '''SUCCEED''', the data copy phase is successfully over and it's time to move on to the post migration configuration.
 
----
== Post Migration Configuration ==
 
Once the data copy is over, the file system type has to be changed from '''mgfs''' to the standard Celerra format. Connect to the Celerra Control Station using the '''nasadmin''' user account and issue the command:
   '''server_cdms''' server_2 ''-C'' cvs_app_doc
 
   server_2 : done
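To verify the conversion, the file system information can be inspected from the Control Station; a minimal check, assuming '''nas_fs''' ''-info'' reports a type field:
   '''nas_fs''' ''-info'' cvs_app_doc | '''grep''' type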
As a last step, we will ensure data security by updating the back-up configuration or by scheduling a checkpoint policy for the new file system. The following command schedules a daily checkpoint at 02:30 a.m. with a one week retention.
   '''nas_ckpt_schedule''' ''-create'' cvs_app_doc_daily ''-filesystem'' cvs_app_doc ''-recurrence'' daily ''-every'' 1 ''-runtimes'' 02:30 ''-keep'' 7
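The newly created schedule can be double-checked; assuming the ''-list'' switch of '''nas_ckpt_schedule''' is available on your DART release:
   '''nas_ckpt_schedule''' ''-list''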
 
----
Languages: '''English''' - [[celerra_data_migration_service | Italiano]]