It was a bit quiet here in January because of a new “private project” that has attracted some of my resources, and will pull in even more in the future. But that will not stop me from documenting useful stuff. This one is nothing new, but some customers ask it regularly: How do I get my storage capacity back after deleting VMs?!
The outlined steps are all done using esxcli. You only need to execute them on a single ESXi host, not on every host in the cluster.
Connect to one of your ESXi hosts using SSH. You can use this small PowerCLI command to enable SSH on a specific host.
Get-VMHost esx1.lab.local | Get-VMHostService | Where Key -EQ "TSM-SSH" | Start-VMHostService
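Once you are done, you may want to stop the SSH service again. A quick sketch, assuming the same host name as in the example above:

# Stop the SSH service again when you are finished
Get-VMHost esx1.lab.local | Get-VMHostService | Where Key -EQ "TSM-SSH" | Stop-VMHostService -Confirm:$false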
The first step is to identify the datastore(s) from which you want to reclaim storage.
[root@esx1:~] esxcli storage vmfs extent list
Volume Name    VMFS UUID                            Extent Number  Device Name                           Partition
-------------  -----------------------------------  -------------  ------------------------------------  ---------
VMDS01         55dc0522-c72eebec-3780-d89d672d7a3c              0  naa.60030d90eca17602ce5c5a54a083e31c          1
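If you prefer to gather this information with PowerCLI instead of the esxcli table above, a small sketch like this should work. The datastore name VMDS01 is taken from the output above, and the property paths assume a VMFS datastore:

# Get the VMFS UUID and the backing device of a datastore via PowerCLI
$ds = Get-Datastore VMDS01
$ds.ExtensionData.Info.Vmfs.Uuid                # VMFS UUID, needed later for the unmap command
$ds.ExtensionData.Info.Vmfs.Extent[0].DiskName  # device name (naa.*) of the first extent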
We will need the device name and, later on, the UUID. The next step is to check whether the device is detected as thin-provisioned and whether it is VAAI-capable. I’ve shortened the esxcli output to the relevant lines.
[root@esx1:~] esxcli storage core device list -d naa.60030d90eca17602ce5c5a54a083e31c
   Thin Provisioning Status: yes
   VAAI Status: supported
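If you don’t want to open an SSH session at all, the same check can also be done from PowerCLI through the Get-EsxCli cmdlet. This is just a sketch using the host and device names from the examples above:

# Run the same esxcli command remotely via the PowerCLI V2 interface
$esxcli = Get-EsxCli -VMHost esx1.lab.local -V2
$esxcli.storage.core.device.list.Invoke(@{device = 'naa.60030d90eca17602ce5c5a54a083e31c'}) | Format-List *

Look for the thin provisioning and VAAI status fields in the returned object.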
Now we have to verify whether all necessary VAAI primitives are supported.
[root@rzb-esx-1:~] esxcli storage core device vaai status get -d naa.60030d90eca17602ce5c5a54a083e31c
naa.60030d90eca17602ce5c5a54a083e31c
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported
The primitive that matters for us is “Delete”. If it is supported, we can use UNMAP to reclaim storage.
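The same VAAI status check works through the $esxcli object created above (again, just a sketch with the device name from this example):

# Equivalent of "esxcli storage core device vaai status get -d <device>"
$esxcli.storage.core.device.vaai.status.get.Invoke(@{device = 'naa.60030d90eca17602ce5c5a54a083e31c'})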
[root@rzb-esx-1:~] esxcli storage vmfs unmap -u 55dc0522-c72eebec-3780-d89d672d7a3c
This process will take some time, depending on the amount of storage that has to be reclaimed. It will also put some load on your storage, so you might want to run it outside business hours.
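The unmap command also accepts a reclaim unit via -n / --reclaim-unit, the number of VMFS blocks to unmap per iteration. Through Get-EsxCli it could look like the sketch below; the parameter names are the esxcli long options without the dashes, and 200 is just an example value:

# Equivalent of "esxcli storage vmfs unmap -u <uuid> -n 200"
$esxcli.storage.vmfs.unmap.Invoke(@{volumeuuid = '55dc0522-c72eebec-3780-d89d672d7a3c'; reclaimunit = 200})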