
Fix image size #238

Merged · gman0 merged 1 commit into ceph:csi-v1.0 from Madhu-1:fix-image-size on Mar 5, 2019

Conversation

Madhu-1 (Collaborator) commented Mar 1, 2019

Fixes: #214

@rootfs @gman0 PTAL, let me know if the changes look good; I will test it out.

pkg/rbd/rbd_util.go — outdated review thread (resolved)
Madhu-1 force-pushed the fix-image-size branch 2 times, most recently from cff46bc to 0678e1b on March 1, 2019 at 12:36.
rootfs (Member) commented Mar 1, 2019

looks good, is it still DNM?

Madhu-1 (Collaborator, Author) commented Mar 1, 2019

@rootfs we need to make a decision on the minimum volume size for cephfs, see #214 (comment).

I still need to test this PR.

rootfs (Member) commented Mar 1, 2019

sure, I'll let @gman0 make the final decision

gman0 (Contributor) commented Mar 1, 2019

@Madhu-1 @rootfs for cephfs:

  1. the correct solution would be to handle LimitBytes, because we're not pre-allocating volume space but merely specifying a soft limit for maximum bytes in a directory. Unfortunately, external-provisioner handles only RequiredBytes at the moment: https://github.com/kubernetes-csi/external-provisioner/blob/c42566f2229722b0184e45781be91f1e40b8c86c/pkg/controller/controller.go#L497-L504
  2. keep cephfs handling the size as it was before
  3. a compromise: round up to 1MiB as is done in this PR, but still accept 0 as size

I'd go with option (2) because it's the closest to reality (with regard to ceph and the csi spec) that we can get with the current state of external-provisioner.
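For illustration, here is a minimal Go sketch of the rounding behaviour described in option (3): round CapacityRange.RequiredBytes up to the next MiB but still accept 0. The helper name roundUpVolumeSize and the sample values are hypothetical and not the ceph-csi code; only the csi.CapacityRange type comes from the CSI spec Go package.

// Hypothetical sketch, not the ceph-csi implementation: derive a volume
// size from the CSI CapacityRange along the lines of option (3) above.
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

const oneMiB = int64(1 << 20)

// roundUpVolumeSize returns 0 when no size was requested, otherwise the
// requested size rounded up to the next 1 MiB boundary.
func roundUpVolumeSize(capRange *csi.CapacityRange) int64 {
	if capRange == nil || capRange.GetRequiredBytes() == 0 {
		return 0 // zero-sized request: leave the size/quota unset
	}
	req := capRange.GetRequiredBytes()
	return ((req + oneMiB - 1) / oneMiB) * oneMiB
}

func main() {
	fmt.Println(roundUpVolumeSize(&csi.CapacityRange{RequiredBytes: 1}))       // 1048576
	fmt.Println(roundUpVolumeSize(&csi.CapacityRange{RequiredBytes: 3 << 20})) // 3145728
	fmt.Println(roundUpVolumeSize(nil))                                        // 0
}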

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Madhu-1 changed the title from "[DNM] [WIP] Fix image size" to "Fix image size" on Mar 5, 2019
Madhu-1 (Collaborator, Author) commented Mar 5, 2019

@gman0 I reverted the changes for cephfs, PTAL.

Tested with rbd and it is working fine.

gman0 (Contributor) left a review comment

@Madhu-1 cool, thanks!

	if err := setVolumeAttribute(volRootCreating, "ceph.quota.max_bytes", fmt.Sprintf("%d", bytesQuota)); err != nil {
		return err
	}
gman0 (Contributor) commented on this hunk:

I'm not so sure about this. Both CephFS and the CSI spec allow zero-sized requirements - why forbid 0?

Another thing is that cephfs doesn't really have "volume sizes" per se, only quotas. So maybe any CapacityRange.RequiredBytes should be an error and CapacityRange.LimitBytes should be used instead? I don't know whether we can pass LimitBytes through the storage class yet, though.
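For context on why a "volume size" is only a soft cap here: CephFS quotas are plain extended attributes on a directory, and setting ceph.quota.max_bytes to 0 removes the quota, which is why a zero-sized request is not obviously wrong. The sketch below is illustrative only, not the ceph-csi setVolumeAttribute helper from the hunk above; the path, the 1 GiB value, and the setBytesQuota name are made-up examples, and it assumes the directory lives on an already-mounted CephFS.

// Illustrative sketch: set a CephFS byte quota directly via the
// ceph.quota.max_bytes extended attribute on a directory.
package main

import (
	"fmt"
	"strconv"

	"golang.org/x/sys/unix"
)

// setBytesQuota writes ceph.quota.max_bytes on dir; a value of 0 removes
// the quota entirely.
func setBytesQuota(dir string, bytesQuota int64) error {
	val := []byte(strconv.FormatInt(bytesQuota, 10))
	return unix.Setxattr(dir, "ceph.quota.max_bytes", val, 0)
}

func main() {
	// Hypothetical path on a mounted CephFS; cap the directory at 1 GiB.
	if err := setBytesQuota("/mnt/cephfs/volumes/csi-vol-example", 1<<30); err != nil {
		fmt.Println("setting quota failed:", err)
	}
}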

Madhu-1 (Collaborator, Author) replied:

Yeah, agreed, but with the current setup, if a user creates a PVC they see a mismatch between the size shown in the PVC describe output and the actual cephfs volume.

Madhu-1 (Collaborator, Author) commented Mar 5, 2019

@gman0 Mergify is failing; can we merge this one manually?

gman0 merged commit b072117 into ceph:csi-v1.0 on Mar 5, 2019
wilmardo pushed a commit to wilmardo/ceph-csi that referenced this pull request on Jul 29, 2019.
nixpanic pushed a commit to nixpanic/ceph-csi that referenced this pull request on Mar 4, 2024: "Syncing latest changes from upstream devel for ceph-csi".