doc: adminid and userid not documented #80
Comments
Nope, looks like I didn't populate my secret correctly. The external-provisioner is sent binary data for the secrets and then complains (these are disposable secrets, so I can log them as they are):
I got provisioning working by storing the string from the keyring (for example,
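If it helps later readers, the working approach above can be sketched like this; the user name `client.kube` and the secret name `csi-rbd-secret` are placeholders, not values from this thread:

```shell
# Fetch the key as a plain printable string (not the binary keyring file);
# "client.kube" is a placeholder user name:
KEY="$(ceph auth get-key client.kube)"

# kubectl base64-encodes --from-literal values into the Secret's "data"
# field itself, so the plain key string is the right thing to pass here:
kubectl create secret generic csi-rbd-secret \
  --from-literal=admin="$KEY"
```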
Another pitfall was that I got that resolved by removing the
Is that the point where I somehow have to create /etc/ceph with a suitable keyring? But where (host or container) and how (if it's inside the container, where the CSI driver runs)?
Can you check if the id
btw, k8s 1.11 has a feature gate for the driver probe, while 1.12 turns the feature on by default.
According to my understanding of http://docs.ceph.com/docs/mimic/rbd/rados-rbd-cmds/#create-a-block-device-user, that should grant read/write access to the pool, right? So
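For reference, the user-creation command from that Ceph docs page looks along these lines; the pool name `kube` and user `client.kube` here are examples, not the exact names from this thread:

```shell
# Create a Ceph user restricted to RBD operations on a single pool,
# per the linked rados-rbd-cmds documentation (names are examples):
ceph auth get-or-create client.kube \
  mon 'profile rbd' \
  osd 'profile rbd pool=kube'
```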
@pohly Is that a typo on your
@dillaman that's copied straight from the
Here's the command I used for creating the user:
@pohly Your RBD pool is really named "rdb" and not "rbd"?
@dillaman Oops! You are right, of course. Thanks for spotting this.
I don't want to abuse this issue here for support questions, but I am again seeing something related to keys. I promise to make up for all the answers here by creating a doc PR ;-} I have configured the E2E test in Kubernetes to run against ceph-csi on a local cluster. It gets to the point where an rbd volume was provisioned, but then formatting the volume seems to fail after attaching it to
This I find a bit strange: the controller server must have been able to create the volume, so why does deleting it now fail?
@mkimuram has a k8s e2e PR using the rbd CSI driver.
Please see kubernetes/kubernetes#67088 |
WOW. You saved me a lot of time. We needed to base64-encode our Ceph client user key one more time.
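To make the "one more time" explicit: the key that `ceph auth get-key` prints already looks like base64, but a Secret's `data` field needs its own base64 layer on top. A round trip with a made-up sample key shows the single layer Kubernetes decodes:

```shell
# A Ceph key is already a base64-looking printable string (sample value):
CEPH_KEY="AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg=="

# The value placed in a Secret's "data" field is base64 of that string:
SECRET_VALUE="$(printf '%s' "$CEPH_KEY" | base64 -w0)"

# Kubernetes decodes exactly one layer, handing the CSI driver
# the original key string back:
[ "$(printf '%s' "$SECRET_VALUE" | base64 -d)" = "$CEPH_KEY" ] && echo ok
```

(`base64 -w0` is GNU coreutils syntax; BSD/macOS base64 needs no wrap flag.)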
We got this working, but have a question. I don't see anywhere in the code where it actually needs a full-blown cluster-admin credential, as opposed to a credential that only has access to the pool the provisioner will be creating images in. We'd much rather give it the more restricted account if possible, as we have multiple clusters targeting the same Ceph cluster. Is this safe? This is similar to the config other systems like OpenStack Cinder use.
So would you use the same account for both user and admin, or would the user account be further restricted?
rbd only allows one user and one admin per storage class. The user id is not dynamically generated (as it is in cephfs). One way (not meant to be best practice) is to partition your Ceph storage into multiple pools, one for each of your Kubernetes clusters or namespaces. You can grant the admin id privileged osd caps per pool, while limiting those caps for the user id in that pool.
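A sketch of that partitioning, with hypothetical pool and user names (one pool per Kubernetes cluster, both credentials scoped to that pool):

```shell
# One pool per Kubernetes cluster (names are hypothetical):
ceph osd pool create kube-cluster-a 64
rbd pool init kube-cluster-a

# Admin credential for the provisioner: may create/delete images,
# but only in this pool:
ceph auth get-or-create client.admin-cluster-a \
  mon 'profile rbd' osd 'profile rbd pool=kube-cluster-a'

# User credential for the node plugin: maps/uses images in the same pool:
ceph auth get-or-create client.user-cluster-a \
  mon 'profile rbd' osd 'profile rbd pool=kube-cluster-a'
```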
Huamin Chen <notifications@github.com> writes:

> rbd only allows one user and admin per storage class. The user id is
> not dynamically generated (as in cephfs).

Note that the storage class can use templates for the secret name and namespace. The actual secrets can then be supplied by the user who creates the PVC:
https://kubernetes-csi.github.io/docs/Usage.html#csi-provisioner-parameters
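The templating mentioned above looks roughly like this in a StorageClass; the parameter names follow the linked csi-provisioner docs of that era, and the provisioner and secret names are examples, not values from this thread:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-rbd
provisioner: csi-rbdplugin
parameters:
  # Resolved per PVC by the external-provisioner, so each user can point
  # at a secret in their own namespace (names here are examples):
  csiProvisionerSecretName: ${pvc.name}-secret
  csiProvisionerSecretNamespace: ${pvc.namespace}
```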
Regarding the need to encode the keys in base64: that's more of a quirk in Kubernetes, where users creating Secret objects manually with
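For what it's worth, that quirk only bites when writing the `data` field by hand; the `stringData` field accepts the plain keyring string and Kubernetes performs the single base64 encoding that `data` requires. The secret name, key name, and key value below are made up:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
type: Opaque
stringData:
  # stringData takes the plain keyring string; Kubernetes base64-encodes
  # it into "data" on your behalf (the key below is a made-up sample):
  admin: AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==
```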
Closing this one, as we moved from using base64-encoded userID and password to plain text in the secret.
sync downstream devel with upstream devel
examples/rbd/storageclass.yaml contains these settings:

Those are not mentioned under https://github.com/ceph/ceph-csi/blob/master/docs/deploy-rbd.md#configuration. Are both users needed?

I found the entire "Required secrets" section a bit confusing:

"the value is its password" - is that really called `password` in Ceph? I think this refers to the base64-encoded `key` that is stored in Ceph keyrings, right? Linking to http://docs.ceph.com/docs/mimic/rbd/rados-rbd-cmds/#create-a-block-device-user might be useful here, with a few words on how to use the result.

"CSI RBD expects admin keyring and Ceph config file in /etc/ceph" - how can that be achieved when deploying in Kubernetes? Is it really necessary?

I have set up a test cluster, but haven't actually tried provisioning. I guess I'll find out soon whether I interpreted the instructions correctly ;-}
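For later readers, the two settings this issue asks about are StorageClass parameters along these lines; the values below are placeholders, and the linked storageclass.yaml is the authoritative source:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-rbd
provisioner: csi-rbdplugin
parameters:
  pool: rbd
  # The two undocumented parameters: the privileged Ceph user used for
  # provisioning, and the restricted user used for mapping volumes:
  adminid: admin
  userid: kube
```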