Self Healing: Allow AWS Controller to Detect and Fix AWS Resource Changes on Interval #2800
Comments
Experienced the same behaviour: I assumed that when I deleted the LB via the AWS Console, the ALB Controller would automatically recreate it; however, it did not.
I'm experiencing the same issue. It only recovers when the number of replicas behind the service changes. I expected it to be recovered every 200s, according to aws-load-balancer-controller/pkg/deploy/elbv2/target_group_binding_manager.go lines 21 to 26 at ec34185.
Experienced the same by accident: I removed the wrong ALB from the AWS console, and the lb-controller only recreates the ALB the moment you restart the lb-controller deployment.
Yes, unfortunately that's the only fix possible at the moment. I think in an older version (before the renaming) it was recreated automatically. Can this please be fixed?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage (https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
In previous versions, it did automatically modify or recreate resources when it detected that AWS resources were not correct. While running a previous version, we had an incident where multiple load balancers were accidentally deleted, and by the time we were notified there was an issue, the controller had already re-created them.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its usual rules.
/lifecycle rotten
Hi, we have shipped the fix in v2.5.4; please check the details in our release notes: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.5.4.
@oliviassss Thank you!
Is your feature request related to a problem?
If a load balancer is deleted through the AWS Console, the AWS Load Balancer Controller does not notice the deletion or re-create the load balancer.
The controller must be restarted, and only then is the missing load balancer recreated.
Describe the solution you'd like
An argument that could be passed to the controller telling it to do a full scan of AWS on a configurable interval, in an attempt to detect and fix drift between the actual AWS state and the expected state.
This would essentially emulate the behavior the AWS Load Balancer Controller already exhibits when it starts up.
For large deployments, a segment-size argument might also be needed,
i.e.
every 5 minutes scan AWS for 100 ingresses, then in the next 5 minutes the next 100 ingresses, and so on.
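The segmented-scan idea above can be sketched as a small cursor that pages through the full ingress list, one batch per tick. This is a minimal illustration, not the controller's real API: `driftScanner`, `nextSegment`, and the reconcile loop are hypothetical names, and a real implementation would call out to AWS to compare and fix each ingress in the batch.

```go
package main

import (
	"fmt"
	"time"
)

// driftScanner pages through the full set of ingress keys, handing out
// segmentSize keys per tick so a large cluster is never scanned all at once.
// Hypothetical sketch; not taken from the aws-load-balancer-controller code.
type driftScanner struct {
	segmentSize int
	cursor      int
}

// nextSegment returns the next batch of keys, wrapping to the start
// once every key has been visited.
func (s *driftScanner) nextSegment(keys []string) []string {
	if len(keys) == 0 {
		return nil
	}
	if s.cursor >= len(keys) {
		s.cursor = 0 // completed a full pass; start over
	}
	end := s.cursor + s.segmentSize
	if end > len(keys) {
		end = len(keys)
	}
	batch := keys[s.cursor:end]
	s.cursor = end
	return batch
}

func main() {
	keys := []string{"ing-a", "ing-b", "ing-c", "ing-d", "ing-e"}
	s := &driftScanner{segmentSize: 2}

	// The proposal suggests a 5-minute interval; shortened here for the demo.
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		// A real controller would reconcile each key in the batch against AWS here.
		fmt.Println(s.nextSegment(keys))
	}
	// Prints: [ing-a ing-b], then [ing-c ing-d], then [ing-e]
}
```

With segmentSize 100 and a 5-minute tick, a cluster with 500 ingresses would be fully re-verified every 25 minutes, bounding the AWS API load per interval.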
Describe alternatives you've considered
I have tried all existing arguments, such as the sync period, but none of them cause the deleted load balancer to be re-created.