add resource quotas #384
Conversation
internal/app/kube_helpers.go
spec:
  hard:
`
if quotas.Pods != "" {
Why not just marshal the object, like it's done in the setLimits method, instead of setting each field?
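The suggestion can be sketched as follows. The quotaSpec struct, the renderQuota helper, and the field set are hypothetical stand-ins, not Helmsman's actual code; in practice the struct would carry yaml tags and be marshaled with a YAML library such as gopkg.in/yaml.v2, while encoding/json is used here only as a stdlib stand-in with the same field-tag mechanics:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical shape of the quota spec; the real struct would use
// yaml tags and yaml.Marshal instead of json tags and json.Marshal.
type quotaSpec struct {
	Pods           string `json:"pods,omitempty"`
	RequestsCPU    string `json:"requests.cpu,omitempty"`
	RequestsMemory string `json:"requests.memory,omitempty"`
}

// renderQuota marshals the whole object at once instead of checking
// and appending each field by hand; unset fields drop out via omitempty.
func renderQuota(q quotaSpec) (string, error) {
	b, err := json.Marshal(q)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, err := renderQuota(quotaSpec{Pods: "25", RequestsCPU: "10"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The point of the pattern is that adding a new quota field means adding one struct field with a tag, rather than another `if quotas.X != ""` branch.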
I just copy-pasted what I had from the old PR, before Helm 3; will change.
Looking at it, the marshalling won't work: it isn't the exact same object format, because of the customResources option.
What matters is the format we use in the Helmsman file and the struct format we use to unmarshal; we can make them match, like it's done in the setLimits function.
No, we can't, because of the customResources field: it can be anything, like GPUs or something custom, so the struct would need to be dynamic.
requests.nvidia.com/gpu I guess is a standard one, although not very common, so I didn't include it; but you can have resource quotas on completely custom resources such as count/widgets.example.com or wheels/custom.com.
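For illustration, a ResourceQuota manifest mixing standard keys, an extended-resource key, and an object-count key of the kind mentioned above could look like this (the quota names are the examples from this thread; the manifest name and namespace are made up):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: custom-quota
  namespace: staging
spec:
  hard:
    pods: "25"
    requests.nvidia.com/gpu: "4"
    count/widgets.example.com: "10"
```

Since the keys under `hard` are arbitrary strings like these, a fixed Go struct cannot enumerate them all, which is the crux of the discussion.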
Sure, but all the other parts can be marshaled, and this one appended at the end like you are doing now.
So, two questions:

1. How would I tell it to marshal everything but the customResources?
2. This is what the actual YAML looks like for k8s:

status:
  hard:
    limits.cpu: "10"
    limits.memory: 8Gi
    pods: "25"
    requests.cpu: "10"
    requests.memory: 8Gi

So I would have to change the structure of the Helmsman YAML. We can do that, but I think it'd be a little weird to have the dot notation, and I would need the answer to question 1 to be able to do that.

If I'm understanding correctly, you'd want me to marshal Limits and Requests like this:

hard:
  limits:
    cpu: "10"
    memory: "8Gi"
  pods: "25"
  requests:
    cpu: "10"
    memory: 8Gi

but the actual k8s YAML doesn't look like that.
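A minimal sketch of the compromise being discussed: render the well-known quota fields from a fixed set (in the real code, a struct with yaml tags) and append the free-form custom resources afterwards. All names here (renderHard, customQuota) are hypothetical, not Helmsman's actual API:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// customQuota is a hypothetical free-form entry that a fixed struct
// cannot represent, e.g. count/widgets.example.com.
type customQuota struct {
	Name  string
	Value string
}

// renderHard emits the "hard:" block: known fields first (sorted, and
// skipping unset ones), then the custom resources appended verbatim.
func renderHard(known map[string]string, custom []customQuota) string {
	var b strings.Builder
	b.WriteString("hard:\n")
	keys := make([]string, 0, len(known))
	for k, v := range known {
		if v != "" { // skip unset fields
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(&b, "  %s: %q\n", k, known[k])
	}
	for _, c := range custom { // appended as-is at the end
		fmt.Fprintf(&b, "  %s: %q\n", c.Name, c.Value)
	}
	return b.String()
}

func main() {
	fmt.Print(renderHard(
		map[string]string{"pods": "25", "limits.cpu": "10", "limits.memory": ""},
		[]customQuota{{Name: "count/widgets.example.com", Value: "10"}},
	))
}
```

This keeps the flat dot-notation keys that Kubernetes actually uses, while still avoiding a per-field `if` for every known quota.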
@luisdavim hey, this should be resolved now; sorry, my day job has been very busy this last month. Let me know if the way I handled not marshaling the custom quotas is okay.
Looks good overall, but I have a couple of small requests.
Can you come up with at least 1-2 test cases? We'd like to be sure the resulting YAML for quotas is fine, and also that we can handle at least one example of improperly defined quotas. :-)
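As a sketch of what such test cases might cover: one well-formed quota value and one malformed value. The validQuantity helper and its regex are hypothetical simplifications of Kubernetes quantity parsing (the real parser lives in k8s.io/apimachinery's resource.Quantity), not Helmsman code:

```go
package main

import (
	"fmt"
	"regexp"
)

// Simplified pattern for a Kubernetes quantity: an integer or decimal,
// optionally followed by a suffix such as m, Ki, Mi, Gi. This is a
// deliberately rough approximation for test purposes.
var quantityRe = regexp.MustCompile(`^[0-9]+(\.[0-9]+)?(m|k|Ki|Mi|Gi|Ti|K|M|G|T)?$`)

func validQuantity(v string) bool {
	return quantityRe.MatchString(v)
}

func main() {
	cases := []struct {
		value string
		want  bool
	}{
		{"8Gi", true},   // well-formed memory quota
		{"10", true},    // bare integer, e.g. a pod count
		{"8Gis", false}, // improperly defined value
	}
	for _, c := range cases {
		fmt.Printf("%s valid=%v (want %v)\n", c.value, validQuantity(c.value), c.want)
	}
}
```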
internal/app/kube_helpers.go
log.Fatal("ERROR: failed to create ResourceQuota in namespace [ " + ns + " ]: " + result.errors)
}

deleteFile("temp-ResourceQuota.yaml")
I'd rather put this line before we run log.Fatal for result.code != 0, so the file always gets deleted, even when kubectl apply fails.
Also, you might want to use this catalog https://github.com/Praqma/helmsman/blob/master/internal/app/main.go#L10 for such files.
docs/how_to/namespaces/quotas.md
@@ -0,0 +1,44 @@
---
version: ???
You can set it to 3.2.0; I hope this will go out with the next minor release.
Hi @ivelichkovich, would you be able to address @mkubaczyk's comments?
Yes, working on it now; it should be an hour or two.
Updated per the requests, and I added quotas into examples/example.toml (which is a file used in testing).
Just change the version in the docs and we can merge it :-)
This is a refactored version of #340; we're going to use Helm 3 and Helmsman 3, so we don't need the Tiller resources, just the resource quotas.