This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Docker volume cannot be deleted even though no containers are running. #1202

Closed
MattAtTTU opened this issue Apr 27, 2017 · 7 comments

MattAtTTU commented Apr 27, 2017

I've just installed the latest plugin and successfully created a volume using it. There are no containers running in the environment that are using the volume. When I try to remove it using the standard docker volume rm command, I receive the following error:

Error response from daemon: unable to remove volume: remove testing_domain@COMP_Ent_VMWare_DockerVols: VolumeDriver.Remove: Remove failure - volume is still mounted.  volume=testing_domain@COMP_Ent_VMWare_DockerVols, refcount=1

If I try to force the removal, the command returns the volume name indicating it ran successfully. However, the volume is still present.

root@devswarmmgr1:/app/swarm_scripts# docker volume rm testing_domain@COMP_Ent_VMWare_DockerVols --force
testing_domain@COMP_Ent_VMWare_DockerVols

root@devswarmmgr1:/app/swarm_scripts# docker volume ls
DRIVER              VOLUME NAME
vsphere:latest      testing_domain@COMP_Ent_VMWare_DockerVols

The plugin log contains the following output when I try to remove the volume:

2017-04-27 17:16:03.407427567 +0000 UTC [INFO] Removing volume name="testing_domain@COMP_Ent_VMWare_DockerVols" 
2017-04-27 17:16:03.407496051 +0000 UTC [ERROR] Remove failure - volume is still mounted.  volume=testing_domain@COMP_Ent_VMWare_DockerVols, refcount=1

Here is a volume inspect of this volume:

[
    {
        "Driver": "vsphere:latest",
        "Labels": null,
        "Mountpoint": "/mnt/vmdk/testing_domain@COMP_Ent_VMWare_DockerVols",
        "Name": "testing_domain@COMP_Ent_VMWare_DockerVols",
        "Options": {},
        "Scope": "global",
        "Status": {
            "access": "read-write",
            "attach-as": "independent_persistent",
            "attached to VM": "devswarmmgr1",
            "capacity": {
                "allocated": "483MB",
                "size": "20GB"
            },
            "clone-from": "None",
            "created": "Thu Apr 27 16:38:01 2017",
            "created by VM": "devswarmmgr1",
            "datastore": "COMP_Ent_VMWare_DockerVols",
            "diskformat": "thin",
            "fstype": "ext4",
            "status": "attached"
        }
    }
]
@pshahzeb
Contributor

Could you please share the plugin logs? It will help to debug the inconsistent ref counting.

Meanwhile, here is the workaround, with steps:

docker plugin disable -f vsphere
docker plugin enable vsphere
docker run -it --volume-driver=vsphere -v testing_domain@COMP_Ent_VMWare_DockerVols:/vol1 --name dummy_bb busybox
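
Once you exit the busybox shell, presumably something like the following should finish the cleanup (using the same container and volume names as above; these follow-up commands are a suggested continuation, not verified here):

# remove the dummy container after exiting the busybox shell
docker rm -f dummy_bb
# retry removing the volume once the plugin has released it
docker volume rm testing_domain@COMP_Ent_VMWare_DockerVols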

@MattAtTTU
Author

Here is the log file for the plugin.

docker-volume-vsphere.txt

The supplied workaround did not work. Please note that I am just trying to delete the volume. I'm not certain why mounting the volume in a busybox container is necessary, but it isn't working.

@pshahzeb
Contributor

Thank you for sharing this.

From the logs, here is what we can understand.

The volume is created:
2017-04-27 16:38:12.42456046 +0000 UTC [INFO] Volume and filesystem created fstype=ext4 name="testing_domain@COMP_Ent_VMWare_DockerVols"

Then we see two back-to-back mount requests for the same volume, indicating that Docker started two containers using the same volume (yes?).

2017-04-27 16:38:13.932476204 +0000 UTC [INFO] Mounting volume name="testing_domain@COMP_Ent_VMWare_DockerVols" 
2017-04-27 16:38:15.099006408 +0000 UTC [INFO] Mounting volume name="testing_domain@COMP_Ent_VMWare_DockerVols" 
2017-04-27 16:38:15.099059258 +0000 UTC [INFO] Already mounted, skipping mount. name="testing_domain@COMP_Ent_VMWare_DockerVols" refcount=2 

This is followed by Unmount-Mount-Unmount-Mount-Unmount, leaving the refcount at 1 in the end. That indicates there must be one container still running and using the volume, which is why the plugin didn't allow the remove operation. So either there is a container still using the volume, or we will have to investigate why Docker didn't send the final Unmount.
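
As a quick check (just a suggestion), the volume filter of docker ps should list any container on that node, running or stopped, that still references the volume:

docker ps -a --filter volume=testing_domain@COMP_Ent_VMWare_DockerVols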

Is it possible to share the Docker as well as the ESX logs? See How.
We are also available for a quick WebEx if you are online.

Did you try the busybox command after restarting the plugin? The workaround was meant to:

  1. Reset the ref count via the plugin restart
  2. Let the unmount from busybox detach the VMDK from the VM configuration (see the check below)
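
If the detach works, the Status section of docker volume inspect should presumably no longer report "status": "attached" (an assumption based on the inspect output shared earlier, not something verified in this thread):

docker volume inspect testing_domain@COMP_Ent_VMWare_DockerVols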

@MattAtTTU
Author

It was mounted consecutively, as you see in the logs. However, there is no container left running.

I'll work on retrieving logs.

If you'd like to WebEx, I'm available for the remainder of the day.

@pdhamdhere
Contributor

This seems like Docker issue 32907.

@pdhamdhere
Contributor

Docker has a fix out for review (#32909) for this issue.

@shuklanirdesh82 added this to the 0.15 milestone on May 3, 2017
@pdhamdhere
Contributor

Docker 17.06.0 will have the fix.
