The Resource “datastore” is in use (Unable to delete)

At one of my clients, I recently came across a datastore that we were unable to delete. We are in the process of migrating from an EMC VNX to a new EMC Unity SAN, and part of that process requires us to clean up the legacy datastores. Unfortunately, this isn’t a small environment: the datastores span roughly 30 hosts. Some quick things to check first are HA settings (such as heartbeat datastores) and even vDS configurations. In my case, I had reviewed everything, and the datastore contained absolutely nothing but the .sdd.sf folder, which stands for SCSI Device Description system file. That folder normally can’t be deleted and shouldn’t be an issue. Below, I’ll walk you through the method I used to delete the datastore. This method should be used as a LAST resort, as it involves deleting the partition table. You must verify that you have the correct datastore!


A screenshot of the datastore:


As you can see, there’s nothing on it. Now let’s prepare to delete it.

1. Locate the datastore device name. You can do this by right-clicking the datastore and selecting Properties. On the left side there’s a section called ‘Extent’ — copy the device name out to Notepad.

2. SSH into a host that has the datastore presented.

3. Now that we have our datastore device ID, we will verify that we have the correct datastore by running the following commands (remember to replace the naa. identifier with your own device ID):

# esxcfg-scsidevs -c | grep naa.60060160019e3000ee5e6dd0aac0e611

naa.60060160019e3000ee5e6dd0aac0e611  Direct-Access    /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611  6291456MB NMP     DGC Fibre Channel Disk (naa.60060160019e3000ee5e6dd0aac0e611)

# esxcfg-scsidevs -m | grep naa.60060160019e3000ee5e6dd0aac0e611

naa.60060160019e3000ee5e6dd0aac0e611:1                           /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611:1 5850242c-dbbeaa16-422e-0017a4770018  0  DATASTORE_NAME_HERE

# df -h | grep DATASTORE_NAME_HERE

VMFS-5       6.0T 1004.0M      6.0T   0% /vmfs/volumes/DATASTORE_NAME_HERE
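If you have several legacy datastores to verify, the name lookup can be scripted rather than eyeballed. A minimal sketch, using the sample mapping line captured above (the datastore name is simply the last whitespace-separated field of the `esxcfg-scsidevs -m` output):

```shell
# Sample mapping line from `esxcfg-scsidevs -m` (copied from the output above).
line='naa.60060160019e3000ee5e6dd0aac0e611:1  /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611:1 5850242c-dbbeaa16-422e-0017a4770018  0  DATASTORE_NAME_HERE'

# The datastore name is the last field of the mapping line.
name=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "$name"
```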

4. Everything checks out: the datastore name and device ID match. We will now gather details about the partition table and delete it, which resolves the ‘device is in use’ error.

5. The command below retrieves all available partitions. As you can see, we have one partition, formatted as VMFS.

# partedUtil getptbl /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611

802048 255 63 12884901888
1 2048 12884901854 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
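As a sanity check before destroying anything, the total-sector count on the geometry line should agree with the 6.0T that `df -h` reported earlier. Assuming 512-byte sectors:

```shell
# Geometry line from getptbl: cylinders, heads, sectors/track, total sectors.
total_sectors=12884901888

# With 512-byte sectors, this should come out to 6 TiB, matching `df -h`.
bytes=$((total_sectors * 512))
tib=$((bytes / 1024 / 1024 / 1024 / 1024))
echo "${tib} TiB"
```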

6. *Please proceed with caution: this command will destroy all data on the datastore!* The next command removes the only partition, after which we can right-click the datastore and delete it without an error.

# partedUtil delete /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611 1
# partedUtil getptbl /vmfs/devices/disks/naa.60060160019e3000ee5e6dd0aac0e611

802048 255 63 12884901888
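To confirm programmatically that the partition is gone, note that `getptbl` prints one geometry line followed by one line per remaining partition, so counting the lines after the first tells you how many partitions are left. A small sketch using the outputs captured above:

```shell
# getptbl output before the delete (geometry line + one VMFS partition).
before='802048 255 63 12884901888
1 2048 12884901854 AA31E02A400F11DB9590000C2911D1B8 vmfs 0'

# getptbl output after the delete (geometry line only).
after='802048 255 63 12884901888'

# Partition count = total lines minus the geometry line.
count_parts() { printf '%s\n' "$1" | awk 'NR > 1' | wc -l; }

echo "before: $(count_parts "$before") partition(s)"
echo "after:  $(count_parts "$after") partition(s)"
```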

7. After deleting the partition table and removing the datastore from vCenter, we will now detach the device from each host. This method is preferred when troubleshooting a “datastore is in use” removal: detaching the device from each host before unpresenting the LUN guards against “paths down” or other storage-related events that could cause hosts to become unstable.

8. With an SSH connection to a host, run the following command to retrieve the LUN ID of the device:

# esxcfg-mpath -L | grep naa.60060160019e3000ee5e6dd0aac0e611
vmhba2:C0:T1:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba2 0 1 65 NMP active san fc.20000000c9bf8d87:10000000c9bf8d87 fc.50060160c720092a:500601694720092a
vmhba1:C0:T0:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba1 0 0 65 NMP active san fc.20000000c9bf8d86:10000000c9bf8d86 fc.50060160c720092a:500601634720092a
vmhba2:C0:T2:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba2 0 2 65 NMP active san fc.20000000c9bf8d87:10000000c9bf8d87 fc.50060160c720092a:500601624720092a
vmhba1:C0:T1:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba1 0 1 65 NMP active san fc.20000000c9bf8d86:10000000c9bf8d86 fc.50060160c720092a:500601684720092a
vmhba2:C0:T3:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba2 0 3 65 NMP active san fc.20000000c9bf8d87:10000000c9bf8d87 fc.50060160c720092a:500601604720092a
vmhba1:C0:T2:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba1 0 2 65 NMP active san fc.20000000c9bf8d86:10000000c9bf8d86 fc.50060160c720092a:5006016a4720092a
vmhba1:C0:T3:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba1 0 3 65 NMP active san fc.20000000c9bf8d86:10000000c9bf8d86 fc.50060160c720092a:500601614720092a
vmhba2:C0:T0:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba2 0 0 65 NMP active san fc.20000000c9bf8d87:10000000c9bf8d87 fc.50060160c720092a:5006016b4720092a

9. The command above retrieves all of the multipath information for the LUN, including the LUN ID, which in my case happens to be 65.
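The LUN ID can also be pulled out of the runtime name (`vmhba2:C0:T1:L65`) with a bit of shell, which helps if you are checking many devices. A sketch using the first path line from the output above:

```shell
# First path line from `esxcfg-mpath -L` (copied from the output above).
path_line='vmhba2:C0:T1:L65 state:active naa.60060160019e3000ee5e6dd0aac0e611 vmhba2 0 1 65 NMP active san fc.20000000c9bf8d87:10000000c9bf8d87 fc.50060160c720092a:500601694720092a'

# The runtime name is the first field; the LUN ID follows the "L" in its
# fourth colon-separated component (vmhba:channel:target:LUN).
lun=$(printf '%s\n' "$path_line" | awk '{print $1}' | awk -F: '{sub(/^L/, "", $4); print $4}')
echo "LUN ID: $lun"
```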

10. We will now perform a detach of LUN 65 using the vSphere Client.

11. Perform the detach on every host, and then you can safely remove the LUN on your SAN.
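If clicking through the vSphere Client on 30+ hosts is impractical, the detach can also be scripted over SSH. The sketch below only prints the commands so they can be reviewed before anything runs; the host names are placeholders, and `esxcli storage core device set --state=off` is the CLI equivalent of the client’s Detach action (verify it against your ESXi version before executing):

```shell
# Device to detach (the same NAA ID used throughout this walkthrough).
device='naa.60060160019e3000ee5e6dd0aac0e611'

# Placeholder host names; substitute the hosts the LUN is presented to.
hosts='esx01 esx02 esx03'

# Dry run: print one detach command per host for review instead of executing.
for host in $hosts; do
  echo "ssh root@${host} esxcli storage core device set --state=off -d ${device}"
done
```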
