
Suggested changes to ID and key requirements for rbd CSI plugin #270

Closed
ShyamsundarR opened this issue Mar 20, 2019 · 7 comments · Fixed by #395

Comments

@ShyamsundarR
Contributor

Currently the CSI rbd plugin requires 2 IDs and their respective keys in secrets. Each ID serves the following purpose:

  • adminID: used for create/delete/snapshot operations (IOW, Controller Service operations)
  • userID: used for mounting/unmounting the rbd image on nodes (IOW, Node Service operations)

There are a few issues with this scheme. Some are related to the lack of documentation, as stated in #80, but others need consideration, as follows:

  1. Do we need separate IDs for the Controller Service and the Node Service? Or can these be folded into a single ID/key instead of requiring 2?

I can see the following hierarchy of secrets (with some further granularity possible) that may be used, and the question is whether we need to separate the admin from the mounter, as in the following list:

  • Cluster-wide admin
  • Per-pool admin
  • Per-pool user/mounter
  • Per-image user/mounter
  2. Either way, the ID and the key can both move into the secret, rather than mentioning the ID in the StorageClass and the key as a value in the secret. This makes it easier for a user to configure, rather than having to do the same in 2 places. IOW, make this similar to a secret in the CephFS plugin, where the secret carries both the ID and the key (see the sketch after this list).

  3. Further, make the secret a single ID/key pair, such that, if required, the same secret can be used as both the provisioner and the publisher secret, rather than repeating the ID/key pair within the secret and also repeating the same in the StorageClass.

  4. Ideally, if we need to separate the provisioner secret from the node secret, these should not be placed in the same secret file, as otherwise the nodes get both secrets, which rather defeats the purpose. I would suggest that the examples pass the secrets in different files instead of a single file (and if we choose to do (3), this would be handled automatically).
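
As a rough sketch of what (2) and (3) could look like, assuming a hypothetical Ceph user client.kubernetes and hypothetical field names (userID/userKey) -- the plugin's eventual parameter names may differ -- both the ID and the key live in a single secret, which the StorageClass can then reference for both the provisioner and publisher roles:

# fetch the key of the (hypothetical) Ceph user, then store it
# together with the user ID in a single Kubernetes secret
KEY=$(ceph auth get-key client.kubernetes)
kubectl create secret generic csi-rbd-secret \
    --from-literal=userID=kubernetes \
    --from-literal=userKey="${KEY}"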

@dillaman

dillaman commented Apr 12, 2019

From the point-of-view of the ceph-rbd CSI, I agree that the current usage of admin vs user doesn't make much sense. The CSI would only need a user w/ caps similar to the following:

ceph auth get-or-create client.{ID} mon 'profile rbd' osd 'profile rbd [pool={pool-name} [namespace={namespace-name}]][, profile ...]'

There isn't currently any need for full cluster "admin"-level permissions (i.e. client.admin) for any RBD operations at the CSI level. A higher-level orchestration tool like Rook would need more permissions to create/delete pools.

There also aren't any plans in place for per-image permissions. Instead, starting w/ the Ceph Nautilus release, RBD supports pool namespaces. This allows a storage admin to grant R/W permissions to a user for only a specific namespace within a pool. If a k8s admin wanted to provide a user (project) with access to only a specific namespace, they could create a new StorageClass that points to a different credential secret that is locked down to a specific namespace. Of course, the ceph-rbd CSI would also need to add support for a pool_namespace attribute in the StorageClass and add --namespace XYZ CLI arguments to the rbd/rados commands.
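
A hedged sketch of the Ceph-side setup this describes (the pool, namespace, and user names are illustrative):

# create a namespace inside an existing pool (Nautilus onwards)
rbd namespace create kubernetes/tenant-a
# grant a user rbd-profile access restricted to that namespace
ceph auth get-or-create client.tenant-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=kubernetes namespace=tenant-a'
# image specs then carry the namespace, e.g.
rbd create --size 1G kubernetes/tenant-a/image-0001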

@ShyamsundarR
Contributor Author

@dillaman if we use namespaces, can we still maintain pool-level RADOS omaps and keys? Assuming we are removing the csi.volumes and csi.snaps omaps, we still need to create the other omaps and keys; do those need any privileges that would be missed in this scheme?

@dillaman

@ShyamsundarR Technically you could grant additional access to the default ('') namespace on specific objects like csi.volumes and csi.snaps. It does bring up a good design question, though: would that violate the expected isolation between pool namespaces, and should the namespace name therefore be encoded in the volume/snap ID as well?
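
For illustration, such a cap could look roughly like the following. The osd cap grammar allows combining pool, namespace, and object_prefix restrictions, though the exact cap string below is an assumption, not a tested example:

# namespace-scoped rbd access, plus rw on the CSI bookkeeping objects
# that live in the pool's default ('') namespace
ceph auth caps client.tenant-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=kubernetes namespace=tenant-a, allow rw pool=kubernetes namespace="" object_prefix csi.volumes, allow rw pool=kubernetes namespace="" object_prefix csi.snaps'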

@ShyamsundarR
Contributor Author

@dillaman anything added to the StorageClass that specializes where the volume (image) will land will need encoding into the volume/snap ID. So, if we decide in the future to add namespace support to the StorageClass, then this would need to be added to the returned ID.

Is it not feasible to query and determine which namespace a user has access to in a pool, and use the appropriate namespace in the CLI arguments? That would avoid specifying the same in the StorageClass itself.

Further, I assume the namespace "enhancement" is a logical next step after addressing the dual-user cleanup, right?

@dillaman

> @dillaman anything added to the StorageClass that specializes where the volume (image) will land will need encoding into the volume/snap ID. So, if we decide in the future to add namespace support to the StorageClass, then this would need to be added to the returned ID.

Agreed (given the assumption that the CSI driver cannot pull the StorageClass or additional Secret metadata from k8s).

> Is it not feasible to query and determine which namespace a user has access to in a pool, and use the appropriate namespace in the CLI arguments? That would avoid specifying the same in the StorageClass itself.

Negative, it's not realistically feasible. You would need to pull the user's caps from the MONs (which would require high privileges) and then parse the caps to determine which ones apply to which pool and namespace (possibly multiple).

> Further, I assume the namespace "enhancement" is a logical next step after addressing the dual-user cleanup, right?

Right, it doesn't stall the current stateless PR -- but the stateless PR should be able to "gracefully" handle v2 of the encoded vol/snap ID (i.e. not crash upon seeing a v2).

@ShyamsundarR
Contributor Author

@dillaman I require some clarifications/confirmation as I start part of the implementation here, as follows:

  • We are stating that we do NOT need 2 IDs (adminID and userID), but just a single ID (calling it adminID) for RBD

  • We will move the ID/key configuration into the secret (like the CephFS configuration), where the secret holds both the ID and the key, instead of the current ID in the StorageClass and key in the secret

@dillaman

> We are stating that we do NOT need 2 IDs (adminID and userID), but just a single ID (calling it adminID) for RBD

Correct -- the "admin" vs "user" split doesn't currently make sense (at least for RBD). I'd actually vote for calling it userID, since you don't need any administrator-level permissions to create/read/update/delete RBD images.

> We will move the ID/key configuration into the secret (like the CephFS configuration), where the secret holds both the ID and the key, instead of the current ID in the StorageClass and key in the secret

Makes sense to me.
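
Tying the two points together, a minimal sketch of the single user this lands on, instantiating the cap template from the earlier comment (the client name and pool are illustrative):

# one non-admin user is sufficient for both the Controller and Node
# services; its key is what goes into the single ID/key secret
ceph auth get-or-create client.csi-rbd \
    mon 'profile rbd' osd 'profile rbd pool=kubernetes'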

ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue Jun 1, 2019
… secret

The RBD plugin needs only a single ID to manage images and operations against a
pool mentioned in the storage class. The current scheme of 2 IDs is hence not
needed, and is removed in this commit.

Further, unlike the CephFS plugin, the RBD plugin splits the user ID and the key
between the storage class and the secret, respectively. Also, the parameter name
for the key in the secret is noted in the storage class, making it a variant and
hampering usability/comprehension. This is also fixed by moving the ID and the key
into the secret and not retaining them in the storage class, as in CephFS.

Fixes ceph#270

Testing done:
- Basic PVC creation and mounting

Signed-off-by: ShyamsundarR <srangana@redhat.com>
ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue Jun 11, 2019
… secret

ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue Jun 11, 2019
… secret

ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue Jun 14, 2019
… secret

ShyamsundarR added a commit to ShyamsundarR/ceph-csi that referenced this issue Jun 24, 2019
… secret
mergify bot closed this as completed in #395 Jun 24, 2019
mergify bot pushed a commit that referenced this issue Jun 24, 2019
… secret
wilmardo pushed a commit to wilmardo/ceph-csi that referenced this issue Jul 29, 2019
… secret
nixpanic pushed a commit to nixpanic/ceph-csi that referenced this issue Mar 27, 2024
rebase: bump github.com/go-jose/go-jose/v3 from 3.0.1 to 3.0.3
nixpanic added a commit to nixpanic/ceph-csi that referenced this issue Mar 27, 2024
Syncing red-hat-storage/ceph-csi:devel up to commit 28bc4d1.

Pull-Request ceph#270 introduced a conflict that the resync automation job
could not address. This manual merge should make it possible for the
automation to continue again.

Closes: ceph#279 ceph#280
Signed-off-by: Niels de Vos <ndevos@ibm.com>