This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Cannot create/delete volumes when using the CLI (Docker EE with UCP) #1950

Closed
ghost opened this issue Oct 26, 2017 · 31 comments

@ghost

ghost commented Oct 26, 2017

I am trying to create/delete a vSphere Docker volume using the docker CLI, as documented here:
https://docs.docker.com/datacenter/ucp/2.2/guides/user/access-ucp/cli-based-access/

from my Linux workstation (after sourcing the env.sh script downloaded as explained in the doc above).
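
For reference, the workstation setup looks roughly like this (a sketch; the bundle archive name and target directory are illustrative, the exported variables match the ones shown further down in this thread):

unzip ucp-bundle-admin.zip -d ~/certs.clh    # client bundle downloaded from the UCP web UI
cd ~/certs.clh
. env.sh                                     # exports DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY
export | grep DOCKER                         # confirm the CLI now points at the UCP control plane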

[root@clh-ansible ~]# docker --version
Docker version 17.06.1-ee-2, build 8e43158

validate connection with UCP control plane

[root@clh-ansible ~]# docker node ls
ID                          HOSTNAME                         STATUS  AVAILABILITY  MANAGER STATUS
03wve9xyds1sef5l4nrjb5cbv   clh-dtr02.am2.cloudra.local      Ready   Active
558iotb56vy6e8n8key94ry8p   clh-ucp01.am2.cloudra.local      Ready   Active        Reachable
gdkl73ad5wdqa5ezybt4advfq * clh-ucp03.am2.cloudra.local      Ready   Active        Leader
im5q776sr4qk8ljxc5ybzzskk   clh-worker02.am2.cloudra.local   Ready   Active
n51c8zv513vjqbcnkor0dtnr7   clh-dtr03.am2.cloudra.local      Ready   Active
nvw9stst1wet4havzsmsjolyf   clh-worker03.am2.cloudra.local   Ready   Active
psagy87us4fzm9vphvueecrmm   clh-worker01.am2.cloudra.local   Ready   Active
tkwu2qt6ptwiplmukut8thcvx   clh-dtr01.am2.cloudra.local      Ready   Active
us29daujkrefnm9uj685niol7   clh-ucp02.am2.cloudra.local      Ready   Active        Reachable

create a docker volume

[root@clh-ansible ~]# docker volume create -d vsphere issue01
issue01

verify that the volume was created (it was not)

[root@clh-ansible ~]# docker volume ls | grep issue
[root@clh-ansible ~]#

now try the same from a node in the swarm (clh-ucp01)

[root@clh-ucp01 ~]# docker --version
Docker version 17.06.1-ee-2, build 8e43158
[root@clh-ucp01 ~]#

docker volume ls | grep issue

We confirm the volume is not there, then try to create another one from the UCP node

[root@clh-ucp01 ~]# docker volume create -d vsphere issue02
issue02
[root@clh-ucp01 ~]# docker volume ls | grep issue
vsphere:latest issue02@Docker_CLH

it works!!

now exit clh-ucp01 and go back to the WS cli

[root@clh-ucp01 ~]#
[root@clh-ucp01 ~]#
[root@clh-ucp01 ~]# exit
logout
Connection to clh-ucp01 closed.

confirm we see the new volume

[root@clh-ansible ~]# docker volume ls | grep issue
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH
vsphere:latest issue02@Docker_CLH

The volume is there (there are 9 nodes in the swarm, hence the same volume is listed 9 times above).
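
Since UCP aggregates the volume list from every node, each node contributes one line. A quick way to collapse the duplicates (assuming the standard --format support in the Docker CLI):

docker volume ls --format '{{.Driver}} {{.Name}}' | sort -u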

Note that docker volume delete does not work from the WS either; it does work from a node in the swarm.

@govint
Contributor

govint commented Oct 26, 2017

@chris7444 sorry, can you say what WS is? The Docker volume driver for Linux works fine on Linux guest VMs running on an ESX host (as you are able to observe).

@govint
Contributor

govint commented Oct 26, 2017

@chris7444, could you also post the /var/log/docker-volume-vsphere.log file from the ws node.

@tusharnt tusharnt added the P0 label Oct 26, 2017
@tusharnt tusharnt added this to the Sprint - Thor milestone Oct 26, 2017
@tusharnt tusharnt assigned govint and unassigned shuklanirdesh82 Oct 26, 2017
@ghost
Author

ghost commented Oct 26, 2017

@govint WS is a Linux workstation (typically not one of the Docker nodes) from which you use the docker CLI as explained in the Docker documentation (see above, 2nd line of the issue description).

@govint
Contributor

govint commented Oct 27, 2017

@chris7444, thanks for clarifying, let me check this myself and get back to you.

@govint
Contributor

govint commented Oct 27, 2017

Set up UCP with a Docker swarm (3 nodes installed with Ubuntu) running 17.07.0-ce (using the CE rather than the EE version, but I am able to repro the issue).

The behavior is the same whether using the docker CLI or the UCP UI to create the volume.

Using the docker CLI from a Photon OS VM with the certificate bundle downloaded from UCP.

  1. Create a docker volume and it reports the volume as created
    root@photon-machine [ ~ ]# docker volume create -d vsphere newCEVol_1
    newCEVol_1

  2. Try accessing the volume and it reports the volume doesn't exist.
    root@photon-machine [ ~ ]# docker volume inspect newCEVol_1
    []
    Error: No such volume: newCEVol_1

The problem is that UCP issues the volume creation request to both worker nodes; one node succeeds in creating the volume while the other fails, because:

  1. Both nodes try to create the volume in parallel - the create step succeeds on both (the second node just finds the VMDK already exists).

One node, Worker2, successfully creates the volume:
10/27/17 11:56:32 11184359 [Thread-109241] [INFO ] db_mode='SingleNode (local DB exists)' cmd=create opts={'fstype': 'ext4'} vmgroup=_DEFAULT datastore_url=_VM_DS:// is allowed to execute
10/27/17 11:56:32 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] *** createVMDK: /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk opts={'fstype': 'ext4'} vm_name=worker2-VM2.0 vm_uuid=564da434-72f8-50b6-6ea7-6f729a6d28a5 tenant_uuid=11111111-1111-1111-1111-111111111111 datastore_url=/vmfs/volumes/59b0f5dc-cb967cbf-9d52-020042686308
10/27/17 11:56:32 11184359 [Thread-109242] [WARNING] Volume size not specified
10/27/17 11:56:32 11184359 [Thread-109242] [INFO ] db_mode='SingleNode (local DB exists)' cmd=create opts={'fstype': 'ext4'} vmgroup=_DEFAULT datastore_url=_VM_DS:// is allowed to execute
10/27/17 11:56:32 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [WARNING] Volume size not specified
10/27/17 11:56:32 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] executeRequest 'create' completed with ret=None

The other node, Worker1, finds the volume is already there:
10/27/17 11:56:32 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] *** createVMDK: /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk opts={'fstype': 'ext4'} vm_name=worker1-VM1.0 vm_uuid=564dd2d5-d43d-a934-4dd4-74e64d1885ab tenant_uuid=11111111-1111-1111-1111-111111111111 datastore_url=/vmfs/volumes/59b0f5dc-cb967cbf-9d52-020042686308
10/27/17 11:56:32 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [WARNING] File /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk already exists
10/27/17 11:56:32 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] executeRequest 'create' completed with ret=None

  2. When both nodes attach the volume to format it, one succeeds and formats the volume while the other gets an error (since a VMDK can be attached to only one VM at a time).
  3. The first VM formats the VMDK and detaches it; the second (the one that got the error in (2) above) executes an error path and removes the volume.

Worker2, which successfully created the volume, detaches it:
10/27/17 11:56:35 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] *** disk_detach: VMDK /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk to VM 'worker2-VM2.0' , bios uuid = 564da434-72f8-50b6-6ea7-6f729a6d28a5, VC uuid=52f4106e-89a5-3d37-cdd7-09866a9519ef)
10/27/17 11:56:35 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] Found vm name='worker2-VM2.0'
10/27/17 11:56:35 11184359 [MainThread] [INFO ] Started new thread : 609415354112 with target <function execRequestThread at 0x8de2535048> and args (12, 11983751, b'{"cmd":"remove","details":{"Name":"newCEVol_1"},"version":"2"}')
10/27/17 11:56:35 11184359 [worker2-VM2.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] Disk detached /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk

Meanwhile Worker1, which got an error when trying to format the volume, goes ahead and removes it:
10/27/17 11:56:36 11184359 [Thread-109246] [INFO ] db_mode='SingleNode (local DB exists)' cmd=remove opts={} vmgroup=_DEFAULT datastore_url=_VM_DS:// is allowed to execute
10/27/17 11:56:36 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] *** removeVMDK: /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk
10/27/17 11:56:36 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] *** cleanVMDK: /vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk
10/27/17 11:56:36 11184359 [Thread-109247] [INFO ] db_mode='SingleNode (local DB exists)' cmd=get opts={} vmgroup=_DEFAULT datastore_url=_VM_DS:// is allowed to execute
10/27/17 11:56:36 11184359 [worker1-VM1.0-sharedVmfs-0._DEFAULT.newCEVol_1] [INFO ] executeRequest 'remove' completed with ret=None

So,

  1. Why does UCP even send the create request to both worker nodes? Seems like a bug in UCP. Got a response on Slack that this is expected and the plugin must handle an existing volume gracefully!

  2. On the vSphere plugin side we could enhance the plugin to terminate early when a volume of the requested name already exists, rather than attempting to recreate it, failing, and then removing the volume. If two requests are made from two separate VMs (as in this case), one node must complete gracefully instead of failing (see the sketch below).
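
Conceptually, the guard in (2) is just an existence check ahead of the create path. A sketch only - the real change belongs in the plugin/ESX service code, not a shell script; the VMDK path is taken from the logs above:

VMDK=/vmfs/volumes/sharedVmfs-0/dockvols/_DEFAULT/newCEVol_1.vmdk
if [ -e "$VMDK" ]; then
    # another node already created this volume: report success instead of
    # re-creating it and later removing it on the attach/format error path
    exit 0
fi
# ...otherwise continue with the normal create/attach/format sequence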

I'll post a PR to fix this.

@govint
Contributor

govint commented Oct 30, 2017

Raised an issue on Docker UCP: moby/moby#35334

@govint
Contributor

govint commented Oct 30, 2017

Also related to issue moby/moby#34664. The fix in UCP will be in by 2.2.4 or 3.0.0.

I'd prefer not to merge the changes I have into the code until we have the fix in UCP. The changes address an issue that only shows up with UCP at this point, so there is no point submitting them right away.

@tusharnt
Contributor

tusharnt commented Nov 1, 2017

@govint to follow up with Docker folks about the timeline of the availability of fixes for moby/moby#34664 and moby/moby#35334

@ghost
Author

ghost commented Nov 3, 2017

Please note that I don't have this issue with the vieux/sshfs Docker volume plugin (see below), nor with the HPE 3PAR Docker volume plugin (not shown here).

I consider this problem critical: the only way to create vSphere Docker volumes is to have the end user connect to a worker node, where there is no role-based access control.

Regards
Chris

[root@clh-ansible certs.clh]# . env.sh
[root@clh-ansible certs.clh]# export | grep DOCKER
declare -x DOCKER_CERT_PATH="/root/certs.clh"
declare -x DOCKER_HOST="tcp://clh-ucp.am2.cloudra.local:443"
declare -x DOCKER_TLS_VERIFY="1"

[root@clh-ansible certs.clh]#
[root@clh-ansible certs.clh]#
[root@clh-ansible certs.clh]# docker volume create -d vieux/sshfs --name myvol2 -o sshcmd=root@clh-ansible:/remote -o password=******
myvol2
[root@clh-ansible certs.clh]# docker volume ls | grep myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2
vieux/sshfs:latest myvol2

@ghost

ghost commented Nov 3, 2017

I have reproduced the issue from the Docker EE user interface as well. Basically, you cannot create a volume using the vSphere driver at all - which makes the plug-in completely unusable.

@govint
Contributor

govint commented Nov 3, 2017

@chris7444, @kochavara, the fix for the plugin is ready, but it can't be verified or merged until the UCP issue below is resolved. That issue was identified when testing the fix with UCP.

moby/moby#35334

@govint
Contributor

govint commented Nov 3, 2017

As for moby/moby#35334, the UCP issue will be fixed in the 2.2.4/3.0.0 timeframe.

@ghost

ghost commented Nov 3, 2017

@govint Based on feedback from Docker management, 2.2.4 GA'd yesterday.

Was the fix merged into this version?

@ghost
Author

ghost commented Nov 4, 2017

@govint, I just tested the whole thing with UCP 2.2.4. The issue is still there. FYI, I am using version 0.13 of the plugin (the version from the Docker Store).

@govint
Contributor

govint commented Nov 4, 2017 via email

@ghost
Author

ghost commented Nov 6, 2017

@govint

I have UCP 2.2.4, VIB v0.18, plugin v0.18, docker, and I still see the same issue. At some point I successfully created ONE volume, but that was the only successful attempt.

Where can I find the bits which are supposed to work? I am downloading the bits from the repos documented here:
http://vmware.github.io/docker-volume-vsphere/documentation/install.html


Note: clh06novoa is the name of the volume; I use a unique name each time, and I still see the same behaviour you explained a few days ago.

Nov 6 10:33:46 065a5baa1a5f dtr-notary-signer-eb7c7f80862f[10165] 2017/11/06 10:33:46 transport: http2Server.HandleStreams failed to receive the preface from client: EOF
Nov 6 03:33:46 clh-ucp02 dockerd: time="2017-11-06T03:33:46.417292318-07:00" level=error msg="Handler for POST /v1.30/volumes/create returned error: create clh06novoa: VolumeDriver.Create: Failed to create volume: Cannot complete the operation because the file or folder [Docker_CLH] dockvols/11111111-1111-1111-1111-111111111111/clh06novoa.vmdk already exists"
Nov 6 03:33:46 clh-ucp01 dockerd: time="2017-11-06T03:33:46.421340007-07:00" level=error msg="Handler for POST /v1.30/volumes/create returned error: create clh06novoa: VolumeDriver.Create: Failed to create volume: Cannot complete the operation because the file or folder [Docker_CLH] dockvols/11111111-1111-1111-1111-111111111111/clh06novoa.vmdk already exists"
Nov 6 03:33:46 clh-dtr03 kernel: vmw_pvscsi: msg type: 0x0 - MSG RING: 1/0 (5)
Nov 6 03:33:46 clh-dtr03 kernel: vmw_pvscsi: msg: device added at scsi0:2:0
Nov 6 03:33:46 clh-dtr03 kernel: scsi 0:0:2:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: Attached scsi generic sg3 type 0
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: [sdc] 204800 512-byte logical blocks: (104 MB/100 MiB)
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: [sdc] Write Protect is off
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: [sdc] Cache data unavailable
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: [sdc] Assuming drive cache: write through
Nov 6 03:33:46 clh-dtr03 kernel: sd 0:0:2:0: [sdc] Attached SCSI disk
Nov 6 03:33:49 clh-ucp02 dockerd: time="2017-11-06T03:33:49.244099023-07:00" level=warning msg="memberlist: Failed fallback ping: read tcp 10.10.174.113:44458->10.10.174.116:7946: i/o timeout"
Nov 6 03:33:49 clh-ucp02 dockerd: time="2017-11-06T03:33:49.244139166-07:00" level=info msg="memberlist: Suspect clh-dtr02.am2.cloudra.local-5635b771e7b9 has failed, no acks received"
Nov 6 03:33:50 clh-dtr03 kernel: vmw_pvscsi: msg type: 0x1 - MSG RING: 2/1 (5)
Nov 6 03:33:50 clh-dtr03 kernel: vmw_pvscsi: msg: device removed at scsi0:2:0
Nov 6 03:33:50 clh-worker03 dockerd: time="2017-11-06T03:33:50.841963774-07:00" level=error msg="Handler for POST /v1.30/volumes/create returned error: create clh06novoa: VolumeDriver.Create: Failed to add disk 'scsi0:2'. disk /vmfs/volumes/Docker_CLH/dockvols/_DEFAULT/clh06novoa.vmdk already attached to VM=clh-dtr03"

@govint
Contributor

govint commented Nov 6, 2017

@chris7444, the changes aren't merged yet. I'll get those in this week once it's reviewed. Can you also confirm that with the HPE 3PAR and vieux/sshfs plugins you are able to create/inspect/delete volumes via UCP?

@ghost
Author

ghost commented Nov 6, 2017

I don't have access to a 3PAR anymore.

The vieux/sshfs plugin seems to work for create, inspect, and delete with:
UCP 2.4.0,
Docker version 17.06.2-ee-5, build 508bb92

[root@clh-ansible certs.clh]# export | grep DOCKER
declare -x DOCKER_CERT_PATH="/root/certs.clh"
declare -x DOCKER_HOST="tcp://clh-ucp.am2.cloudra.local:443"
declare -x DOCKER_TLS_VERIFY="1"

[root@clh-ansible certs.clh]# docker volume create -d vieux/sshfs --name mysshvol01 -o sshcmd=root@clh-ansible:/remote -o password=
mysshvol01
[root@clh-ansible certs.clh]# docker volume ls | grep mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01
vieux/sshfs:latest mysshvol01

[root@clh-ansible ~]# docker inspect mysshvol01
[
{
"Driver": "vieux/sshfs:latest",
"Labels": {
"com.docker.swarm.whitelists": "["node==clh-worker01.am2.cloudra.local|clh-ucp03.am2.cloudra.local|clh-ucp02.am2.cloudra.local|clh-dtr02.am2.cloudra.local|clh-dtr03.am2.cloudra.local|clh-dtr01.am2.cloudra.local|clh-ucp01.am2.cloudra.local|clh-worker03.am2.cloudra.local|clh-worker02.am2.cloudra.local"]",
"com.docker.ucp.access.label": "/",
"com.docker.ucp.collection": "swarm",
"com.docker.ucp.collection.root": "true",
"com.docker.ucp.collection.swarm": "true"
},
"Mountpoint": "/var/lib/docker/plugins/7470eda13bc7c11277e442cc949de446b24e7feed66a18ea55e1282019015512/rootfs/mnt/volumes/ee4f332735b4bc757b800a9999268d2b",
"Name": "mysshvol01",
"Options": {
"password": "",
"sshcmd": "root@clh-ansible:/remote"
},
"Scope": "local"
}
]

[root@clh-ansible ~]# docker volume rm mysshvol01
mysshvol01
[root@clh-ansible ~]# docker volume ls | grep myssh
[root@clh-ansible ~]#

@govint
Contributor

govint commented Nov 6, 2017

@chris7444, thanks for confirming. I'm following up with the Docker community on the fixes in UCP for the reported issue and will keep this thread updated.

@ghost

ghost commented Nov 6, 2017

Thanks!

@ghost
Author

ghost commented Nov 13, 2017

@govint Any news here? I am not sure why UCP needs a fix when the sshfs volume plugin works with UCP.

@govint
Contributor

govint commented Nov 13, 2017

@chris7444, please see the updates at moby/moby#35334 (comment); it seems UCP 2.2.4 does have an issue with Get() and Remove() for non-local volumes. I don't have a date yet for when UCP 2.2.4+ will be available, though.

@ghost
Author

ghost commented Nov 14, 2017

@govint, my understanding is that in moby/moby#35334 you are reporting a problem with delete or inspect, but it seems you can create volumes. At this point in time, if people use vDVS 0.18 (or 0.17) they cannot create volumes when sourcing the client bundle. If we could get the changes you made to be able to create volumes, that would help, because with the released bits the vSphere Docker volume plugin is unusable, as stated by @kochavara.

@govint
Contributor

govint commented Nov 14, 2017

@chris7444, I have PR #1985 out for review that will fix the plugin to ensure there are no issues in creating volumes. This change should be in the next release of the plugin. Besides that, inspect isn't supported via UCP, and remove needs the labels on all nodes in the swarm cluster to be exactly the same in order for it to work.
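
For anyone checking the label requirement, a rough way to compare the labels across nodes (a sketch assuming SSH access to every swarm node; the hostnames are from this thread and myvol is a placeholder - use the volume name exactly as shown by docker volume ls on the node):

for h in clh-ucp01 clh-ucp02 clh-ucp03 clh-worker01 clh-worker02 clh-worker03 clh-dtr01 clh-dtr02 clh-dtr03; do
  ssh "$h" "docker volume inspect --format '{{json .Labels}}' myvol"
done | sort -u
# a single line of output means every node reports the same labels for the volume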

@ghost

ghost commented Nov 14, 2017

Thanks @govint
Do we have an ETA on when the next version of the plugin will be released?

@tusharnt
Contributor

@kochavara the next release is (tentatively) planned for the first week of December.

@ghost
Author

ghost commented Nov 14, 2017

@govint Thank you!

@govint
Contributor

govint commented Nov 16, 2017

Verified the fixes with a five-node swarm cluster with UCP:

docker volume rm photonvol-7
photonvol-7
root@(none):/vol/photon-ee# docker version
Client:
Version: 17.07.0-ce
API version: 1.30 (downgraded from 1.31)
Go version: go1.8.3
Git commit: 8784753
Built: Tue Aug 29 17:43:06 2017
OS/Arch: linux/amd64

Server:
Version: ucp/2.2.4
API version: 1.30 (minimum version 1.20)
Go version: go1.8.3
Git commit: 168ec746e
Built: Thu Nov 2 17:15:16 UTC 2017
OS/Arch: linux/amd64
Experimental: false

  1. Each of the nodes is running Docker CE (not Docker EE as recommended), as Photon OS installs the version below:
    Client:
    Version: 17.06.0-ce
    API version: 1.30
    Go version: go1.8.1
    Git commit: 02c1d87
    Built: Tue Aug 15 18:54:23 2017
    OS/Arch: linux/amd64

Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.1
Git commit: 02c1d87
Built: Tue Aug 15 18:55:36 2017
OS/Arch: linux/amd64
Experimental: false

  2. Create/inspect/rm of a volume works fine, although inspect and rm fail at times.
    docker volume create -d vsphere photonvol-7
    photonvol-7

docker volume inspect photonvol-7
[
{
"Driver": "vsphere",
"Labels": {
"com.docker.swarm.whitelists": "["node==photon-machine5|photon-machine2|photon-machine1|photon-machine3|photon-machine4"]",
"com.docker.ucp.access.label": "/",
"com.docker.ucp.collection": "swarm",
"com.docker.ucp.collection.root": "true",
"com.docker.ucp.collection.swarm": "true"
},
"Mountpoint": "/mnt/vmdk/photonvol-7/",
"Name": "photonvol-7",
"Options": {},
"Scope": "global",
"Status": {
"access": "read-write",
"attach-as": "independent_persistent",
"capacity": {
"allocated": "13MB",
"size": "100MB"
},
"clone-from": "None",
"created": "Thu Nov 16 10:35:00 2017",
"created by VM": "Photon-2.0-11",
"datastore": "sharedVmfs-0",
"diskformat": "thin",
"fstype": "ext4",
"status": "detached"
}
}
]

docker volume rm photonvol-7
photonvol-7

docker volume create -d vsphere photonvol-6
photonvol-6

docker volume inspect photonvol-6
[
{
"Driver": "vsphere",
"Labels": {
"com.docker.swarm.whitelists": "["node==photon-machine2|photon-machine1|photon-machine3|photon-machine4|photon-machine5"]",
"com.docker.ucp.access.label": "/",
"com.docker.ucp.collection": "swarm",
"com.docker.ucp.collection.root": "true",
"com.docker.ucp.collection.swarm": "true"
},
"Mountpoint": "/mnt/vmdk/photonvol-6/",
"Name": "photonvol-6",
"Options": {},
"Scope": "global",
"Status": {
"access": "read-write",
"attach-as": "independent_persistent",
"capacity": {
"allocated": "13MB",
"size": "100MB"
},
"clone-from": "None",
"created": "Thu Nov 16 10:34:08 2017",
"created by VM": "Photon-2.0-11",
"datastore": "sharedVmfs-0",
"diskformat": "thin",
"fstype": "ext4",
"status": "detached"
}
}
]

docker volume rm photonvol-6
photonvol-6

@govint
Contributor

govint commented Nov 16, 2017

Changes are merged and closing.

@govint govint closed this as completed Nov 16, 2017
@shuklanirdesh82
Contributor

in response to #1950 (comment)

The vDVS 0.19 release is out; please try out the deliverables from https://github.com/vmware/docker-volume-vsphere/releases/tag/0.19

@govint
Contributor

govint commented Dec 6, 2017

Please note that docker volume inspect isn't supported with Docker EE but actually works with Docker CE - inconsistent behavior (although Docker CE isn't supposed to be used with UCP). Also, after creating Docker volumes with UCP (UI or CLI), log into the ESX hosts, do docker volume inspect there, and ensure that all hosts show the same set of labels and the same values for those. This is absolutely necessary to be able to remove the volumes via Docker UCP (UI or CLI).
