🐛 Fix Machine adoption for KCP/MachineSet-owned Machines #7591
Conversation
// which were created before the `cluster.x-k8s.io/control-plane-name` label was introduced.
// NOTE: Changes will be applied to the Machines in reconcileControlPlaneConditions.
// NOTE: cluster.x-k8s.io/control-plane is already set at this stage (it is used when reading controlPlane.Machines).
// TODO(sbueringer): Drop the following code with v1.4 after all existing Machines are guaranteed to have the new label.
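To make the adoption behavior these comments describe concrete, here is a minimal, self-contained Go sketch. It is not the actual KCP code: the machine type and backfillControlPlaneNameLabel function are illustrative stand-ins, and the real controller applies the change via a patch in reconcileControlPlaneConditions rather than by mutating objects in place.

```go
package main

import "fmt"

// Stand-in for the real label key introduced in cluster-api.
const machineControlPlaneNameLabel = "cluster.x-k8s.io/control-plane-name"

// machine is a reduced, illustrative stand-in for clusterv1.Machine.
type machine struct {
	Name   string
	Labels map[string]string
}

// backfillControlPlaneNameLabel sets the control-plane-name label on every
// Machine: Machines created before the label existed get it added, and a
// label a user removed or changed is restored on the next reconcile - which
// is why the TODO to drop this code with v1.4 was itself dropped.
func backfillControlPlaneNameLabel(machines []*machine, kcpName string) {
	for _, m := range machines {
		if m.Labels == nil {
			m.Labels = map[string]string{}
		}
		m.Labels[machineControlPlaneNameLabel] = kcpName
	}
}

func main() {
	machines := []*machine{
		// Created before the label existed.
		{Name: "machine-pre-label"},
		// Label changed by a user; gets restored.
		{Name: "machine-tampered", Labels: map[string]string{machineControlPlaneNameLabel: "wrong"}},
	}
	backfillControlPlaneNameLabel(machines, "my-kcp")
	for _, m := range machines {
		fmt.Printf("%s: %s\n", m.Name, m.Labels[machineControlPlaneNameLabel])
	}
}
```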
Will open a follow-up issue to track this for the v1.4 milestone
Note: Turns out we can't just drop it with v1.4, as we have no guarantee that everyone upgrades v1.2 => v1.3 => v1.4.
I would like to get rid of the logic at some point though, just to keep the complexity of our already complex code as low as possible. Maybe we can drop it 1-2 years from now? (with some disclaimer in the corresponding .0 release notes)
I think this isn't only for adding the label on upgrade, but also for ensuring the label gets re-applied if someone removes it or changes its value.
So I will drop the TODO.
I'm fine with keeping this, although the complexity we already have in the MD controller is really concerning.
I wonder whether the approach of asking "what if a user deletes this label / ownerRef / field / ..." is something we can sustain. There is some data we simply have to be able to rely on; otherwise all bets are off. For example, what if a user:
- removes the cluster label, or
- removes the cluster.x-k8s.io/control-plane label? While we are now safe against users removing the cluster.x-k8s.io/control-plane-name label, I think we have no chance in that case.
To be clear, I'm fine with keeping this logic; I'm just wondering how sustainable it is on top of the already concerning complexity of handling all the edge cases where users manually remove or sabotage our data.
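A hedged sketch of the asymmetry described above, reusing the stand-in machine type from the earlier sketch: selection of controlPlane.Machines goes through the cluster.x-k8s.io/control-plane label, so a Machine whose label was removed never enters the set the controller could repair.

```go
// Stand-in for the label used when reading controlPlane.Machines.
const machineControlPlaneLabel = "cluster.x-k8s.io/control-plane"

// selectControlPlaneMachines illustrates why removing the
// cluster.x-k8s.io/control-plane label is unrecoverable: a Machine missing
// it is filtered out here and never reaches the backfill logic that would
// restore cluster.x-k8s.io/control-plane-name.
func selectControlPlaneMachines(all []*machine) []*machine {
	var selected []*machine
	for _, m := range all {
		if _, ok := m.Labels[machineControlPlaneLabel]; ok {
			selected = append(selected, m)
		}
	}
	return selected
}
```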
(dropped the TODOs)
/test pull-cluster-api-e2e-informing-main
internal/controllers/machinedeployment/machinedeployment_controller.go (outdated review thread, resolved)
/test pull-cluster-api-e2e-full-main
Force-pushed from ba962a0 to 7a5b3e8
/test pull-cluster-api-e2e-full-main
internal/controllers/machinedeployment/machinedeployment_controller.go (outdated review thread, resolved)
Force-pushed from 5e513d5 to 10f016c
/test pull-cluster-api-e2e-full-main
/cherry-pick release-1.2
@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.2 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cherry-pick release-1.3
@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.3 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
Force-pushed from 10f016c to 6be714a
/test pull-cluster-api-e2e-full-main
Did the following additional validation:
Co-authored-by: fabriziopandini <fpandini@vmware.com>
Signed-off-by: Stefan Büringer buringerst@vmware.com
Force-pushed from 6be714a to 4f536d8
/test pull-cluster-api-e2e-full-main
@fabriziopandini This should now be ready for merge. I think merging this PR would also unblock one of Killian's PRs.
/hold
There's a bug in #7606 which might be related to this PR, so I'd like to get to the bottom of that first.
/remove-hold
/lgtm
Thanks for tackling this!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: fabriziopandini
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@sbueringer: #7591 failed to apply on top of branch "release-1.2":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@sbueringer: new pull request created: #7637
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Signed-off-by: Stefan Büringer buringerst@vmware.com
What this PR does / why we need it:
Kudos to @fabriziopandini for the initial 80% of the PR :)
The expected behavior after this PR is roughly:
(all also manually verified)
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #7529