🐛 Allow machine rollout if cert reconcile fails #8711
Conversation
/cherry-pick release-1.4
@killianmuldoon: once the present PR merges, I will cherry-pick it on top of release-1.4 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cherry-pick release-1.3
/retest
/area provider/control-plane-kubeadm
/cherry-pick release-1.3
@killianmuldoon: once the present PR merges, I will cherry-pick it on top of release-1.3 in a new PR and assign it to you. In response to this:
/retest
@@ -445,6 +440,12 @@ func (r *KubeadmControlPlaneReconciler) reconcile(ctx context.Context, cluster *
		return ctrl.Result{}, errors.Wrap(err, "failed to update CoreDNS deployment")
	}

	// Reconcile certificate expiry for machines that don't have the expiry annotation on KubeadmConfig yet.
	// Note: This should be at the end of the reconcile so it doesn't block remediation if some machines are unhealthy. Ref: https://github.com/kubernetes-sigs/cluster-api/issues/8691
I'm ok with moving this check here, but remediation was already handled before L374 (where this check was previously located). Probably it was simply not triggered by MHC.
With this change, we are allowing rollout to happen if a KCP machine is not reporting the certificate expiry, which is good, but unrelated to remediation.
The issue with this check is that it blocked additional rollouts, not MHC remediation. In this case, changing KCP to an invalid config caused a new non-working machine to be rolled out. Changing it back to a valid config didn't fix KCP, as this function returned an error.
Moving this check down here means that KCP is now able to do the additional rollouts and fix the control plane without MachineHealthChecks.
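The ordering principle discussed here can be sketched with a minimal, hypothetical reconcile loop (not the actual KCP code; the types and function names below are illustrative only): critical recovery steps run first, and the non-critical certificate-expiry step runs last, so a failure there surfaces as an error without having blocked the rollout.

```go
package main

import (
	"errors"
	"fmt"
)

// controlPlane is a stand-in for the reconciled state; illustrative only.
type controlPlane struct {
	rolledOut     bool
	certExpirySet bool
}

// reconcile sketches the ordering: rollout/remediation happens before the
// optional certificate-expiry reconciliation, so an error in the latter
// cannot prevent the former from making progress in this pass.
func reconcile(cp *controlPlane, certReconcileFails bool) error {
	// Critical: roll out machines / recover the control plane first.
	cp.rolledOut = true

	// Non-critical: reconcile certificate expiry annotations last
	// (ref: kubernetes-sigs/cluster-api issue #8691).
	if certReconcileFails {
		return errors.New("failed to reconcile certificate expiries")
	}
	cp.certExpirySet = true
	return nil
}

func main() {
	cp := &controlPlane{}
	err := reconcile(cp, true)
	// Rollout succeeded even though the cert step failed.
	fmt.Println(cp.rolledOut, cp.certExpirySet, err != nil)
}
```

With the old ordering, the equivalent of the error return would have happened before the rollout step, leaving the control plane stuck on a bad config.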
Maybe we can add another point to say we're moving it to the end of the reconcile so it doesn't block anything else, as it's not important to run this func before anything else in the reconcile.
@sbueringer could you suggest wording for this comment - I'm not clear on what you're looking for.
Sure. I would suggest:
// Reconcile certificate expiry for Machines that don't have the expiry annotation on KubeadmConfig yet.
// Note: This requires that all control plane machines are working. We moved this to the end of the reconcile
// as nothing in the same reconcile depends on it and to ensure it doesn't block anything else,
// especially MHC remediation and rollout of changes to recover the control plane.
Hope that makes sense
Thanks!
Signed-off-by: killianmuldoon <kmuldoon@vmware.com>
/lgtm
LGTM label has been added. Git tree hash: a437683fef6c5783d42ed682401c303be2124401
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: sbueringer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
@killianmuldoon: new pull request created: #8737 In response to this:
@killianmuldoon: new pull request created: #8738 In response to this:
This fixes the bug described in #8691 where KCP would fail to re-reconcile to a good state when the kube-apiserver was misconfigured and failing.
Fixes #8691