How to Wipe and Reset an EqualLogic SAN

Sometimes you will need to wipe a storage solution – maybe you are selling it, maybe you are taking it down with no further plans for it, or something else entirely. Whatever the reason, if it is an EqualLogic SAN there is no built-in way for users to do it – or so it would seem!

To do a total wipe and factory reset of an EqualLogic SAN, there is a simple two-step process to follow. It does not take much effort, frighteningly enough, but it will take quite a few hours to finish, so find a big pot of coffee or something else to do in the meantime.

First Step – Wipe the Disks

Normally customers are not allowed to run the tech-support commands, but we will make an exception here and hope Dell Support will not come after me for revealing this.

Please do not run any other commands while in the tech-support shell, as you may permanently damage your array, according to Dell Support.

Caution: Once this command is run, ALL former meta-data on the disks will be completely overwritten!
(It will take 3-4 hours to complete.)

To completely wipe all data with no chance of restoring it:

  1. Log in to the array as “grpadmin”.
  2. Type “su ex sh” and press Enter. This takes you to the tech-support commands, which are normally restricted to Tech Support personnel.
  3. Type “diskzero” and press Enter.
  4. The array will ask you to confirm with “Y” or “N”. Confirm by typing “Y” and pressing Enter. (A sketch of the whole session follows below.)
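
Put together, the whole session looks roughly like this – note that the prompts and the exact wording of the confirmation question are illustrative sketches of mine, as they vary between firmware versions:

  login: grpadmin
  Password:
  > su ex sh
  # diskzero
  Zero all disks and destroy all data? (Y/N) Y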

Second Step – Reset to Factory Defaults

If an array is a member of a multi-member group, it is highly recommended that you remove the member from the group, which automatically runs the reset command. This will move any volume data residing on the array to the remaining group members if possible.
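
For reference, removing a member is done from the Group Manager CLI with the “member delete” command – the syntax below is from memory and the member name is a placeholder, so verify it against the CLI reference for your firmware before running it:

  > member delete MEMBERNAME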

Caution: This command eliminates ALL group, member, and volume configuration information and volume data on the array. The array will not be able to connect to a group again until you add it to one.

If the array is the only remaining member of a group, run the reset command. (You cannot remove the last member of a group.)

  1. Log in to the array as “grpadmin”.
  2. Type “reset” and press Enter.
  3. Type “DeleteAllMyDataNow” and press Enter to confirm. (Again, a sketch of the session follows below.)
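
Roughly what that session looks like – the prompt and warning text are illustrative, but the confirmation string must be typed exactly as in step 3:

  login: grpadmin
  Password:
  > reset
  Reset will delete all configuration information and data on this array.
  To confirm, type DeleteAllMyDataNow: DeleteAllMyDataNow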

Once you confirm that you want to reset the array, the array closes all network connections.

How to See Bottleneck(s) on Veeam Restores

Veeam’s bottleneck feature is a great tool for pointing to where a potential problem lies if you are not getting the performance you expect during a backup.
However, when doing a restore Veeam sadly does not show any bottleneck statistics – but it does create them!

Seeing the bottleneck statistics for a restore is actually quite easy, but it does require that you know where to look and how to read them.

Where to Look

Simply go to “C:\ProgramData\Veeam\Backup” and find the folder named after the virtual machine that you are restoring or have restored.
There you will find several log files, but the one we are interested in here is called “Vm.VMNAME.Restore”, where VMNAME is the virtual machine name.
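
If you would rather not click through the folders, a few lines of Python will list the candidate log files. This is just a convenience sketch of mine – the wildcard pattern is deliberately loose because the exact file name and extension may vary:

  import glob

  # Path from the article; each VM has its own folder under Backup
  for path in glob.glob(r"C:\ProgramData\Veeam\Backup\*\Vm.*.Restore*"):
      print(path)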

What to Look for and How to Read It

The restore log contains a lot of information, but what we are interested in is a line that looks like this (just search for “pex”):

[24.04.2015 15:10:49] <11> Info           [AP] (a82b) output: --pex:0;262144;0;0;262144;1562;79;29;69;79;15;8;130743546495880000

Let us break it down a bit and just look at what is interesting:

  • The number after the sixth semicolon is Source Read Busy at the source storage
  • The number after the seventh semicolon is Source Processing Busy at the source proxy
  • The number after the eighth semicolon is Source Write Busy at the source network
  • The number after the ninth semicolon is Target Read Busy at the target network
  • The number after the tenth semicolon is Target Processing Busy at the target proxy
  • The number after the eleventh semicolon is Target Write Busy at the target storage

These numbers are all percentages, so taking the example above we get the following bottleneck statistics for the restore:

  • 79 % source storage
  • 29 % source proxy
  • 69 % source network
  • 79 % target network
  • 15 % target proxy
  • 8 % target storage

Now bear in mind that the statistics might change as the job goes along, so it is a good idea to go through the numbers from each “pex” line generated during the run. (The percentage processed is the first number after “pex”.)
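
If you have many of these lines to go through, a small script can pull the figures out for you. The sketch below is mine, not Veeam’s: it assumes the field layout described above and the log path from earlier, with VMNAME as a placeholder for the actual folder and file name:

  import re

  # The six busy counters, in the order they appear after the sixth semicolon
  LABELS = ["source storage", "source proxy", "source network",
            "target network", "target proxy", "target storage"]

  def parse_pex(line):
      """Return (percent_processed, {label: busy_percent}) or None."""
      match = re.search(r"--pex:([\d;]+)", line)
      if not match:
          return None
      fields = match.group(1).split(";")
      # Fields 6-11 (zero-based) are the six busy percentages listed above
      busy = dict(zip(LABELS, (int(f) for f in fields[6:12])))
      return int(fields[0]), busy

  # VMNAME is a placeholder; use the actual file name from the folder
  with open(r"C:\ProgramData\Veeam\Backup\VMNAME\Vm.VMNAME.Restore") as log:
      for line in log:
          parsed = parse_pex(line)
          if parsed:
              done, busy = parsed
              print(f"{done}% processed: " +
                    ", ".join(f"{k} {v}%" for k, v in busy.items()))

Run against the example line above, it prints “0% processed: source storage 79%, source proxy 29%, source network 69%, target network 79%, target proxy 15%, target storage 8%” – one line per “pex” entry, so you can watch the bottleneck move as the restore progresses.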