Secret deleted by the garbage collector with delay #1599
The retries are already done with an exponential backoff, but since the retry limit is just 5 they happen too fast.
Each retry doubles the previous wait time, but it is not until the 9th retry that the wait exceeds 1 second, and by the 15th retry it already exceeds 1 minute. A quick solution would be to simply increase the maximum number of retries (currently 5). What do you think? cc @agarcia-oss
Yes, I think that we should increase the default number of max retries to 15.
Which component:
sealed-secrets-controller:0.27.1
Describe the bug
Argo CD is replacing the SealedSecret: it deletes the SealedSecret and recreates it within a few milliseconds.
The sealed-secrets controller cannot unseal the new SealedSecret because the target Secret already exists.
Less than 5 seconds later, the garbage collector sees that the Secret's ownerReference still points to the obsolete SealedSecret UID, and deletes it.
Since the controller has already given up unsealing the SealedSecret after 5 attempts, we no longer have the Secret at all.
To Reproduce
It is not easily reproducible: it did not happen on every cluster where we ran this scenario.
Expected behavior
We expect the sealed-secrets controller to retry unsealing the secret with an exponential backoff, instead of making all its attempts within a few milliseconds.
It is not unusual for the garbage collector to lag in performing its actions.
Version of Kubernetes:
1.28 & 1.29
kubectl version