[k3s-upgrade] k3s service failed to start after upgrade #5345
Comments
It looks like the …
I haven't changed the config file. Not sure if it got modified by the update process?
@brandond
No, if you were not manually configuring the token, and all nodes with a copy of the token file have been lost, there is no way to recover the value with only a copy of the datastore.
Is it also stored in …
The bootstrap data (cluster CA certificates and such) are stored in the datastore, encrypted with the token as the key-generation passphrase. The token value cannot be extracted from the datastore; that would render the encryption meaningless.
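The passphrase-keyed design can be illustrated with a small shell sketch. This uses openssl as a stand-in; k3s's real implementation differs in its exact cipher and key-derivation details, and the token value below is a placeholder:

```shell
# Stand-in demonstration: encrypt data with a key derived from a token,
# as k3s does for bootstrap data. openssl is only an illustration here;
# k3s's actual cipher/KDF details differ. TOKEN is a placeholder value.
TOKEN="example-node-token"
workdir="$(mktemp -d)"
echo "cluster CA material" > "$workdir/bootstrap.txt"

# Encrypt using the token as the key-derivation passphrase (PBKDF2).
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$TOKEN" \
  -in "$workdir/bootstrap.txt" -out "$workdir/bootstrap.enc"

# The ciphertext alone reveals nothing; only the same token decrypts it.
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$TOKEN" \
  -in "$workdir/bootstrap.enc" -out "$workdir/roundtrip.txt"

if cmp -s "$workdir/bootstrap.txt" "$workdir/roundtrip.txt"; then
  RESULT="decrypted with correct token"
else
  RESULT="decryption failed"
fi
echo "$RESULT"
rm -rf "$workdir"
```

The point of the sketch: the encrypted blob is all the datastore holds, so losing every copy of the token file means losing the only key material that can unlock it.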
I deleted the …
If you delete that file but the token is not specified elsewhere (in the config or on the CLI), then a new one will be generated on startup. This is most likely fine on single-server clusters, but it will cause problems when using etcd or an external SQL datastore.
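The startup decision described above can be sketched in shell. This is a simplification of the maintainer's explanation, not the actual k3s source; the temp file stands in for the default server token path:

```shell
# Sketch of the behavior described above (simplified; not k3s source code).
# If the token file is missing or empty and no token is supplied via config
# or CLI, the server generates a new one on startup.
TOKEN_FILE="$(mktemp)"   # stand-in for /var/lib/rancher/k3s/server/token
: > "$TOKEN_FILE"        # simulate the empty token file some reporters saw
if [ -s "$TOKEN_FILE" ]; then
  ACTION="reuse existing token"
else
  ACTION="generate new token"   # problematic beyond single-server setups
fi
echo "$ACTION"
rm -f "$TOKEN_FILE"
```

On a multi-server or external-datastore cluster, hitting the generate branch would leave the node with a token that no longer matches the passphrase the bootstrap data was encrypted with.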
I am indeed running a single-server cluster. Thanks for your explanation!
What about multi-node clusters? I ran into this issue while trying to upgrade an agent node from 1.22.6+k3s1 to the latest. Can I just grab the token from another node and force inject it during the upgrade? The weirdest part is that it's communicating with the cluster just fine.
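On the token-injection part of that question: a token taken from a server node can in principle be pinned in the agent's config so an upgrade cannot change it. A minimal sketch, assuming the default /etc/rancher/k3s/config.yaml path; the server URL and token value are placeholders:

```yaml
# /etc/rancher/k3s/config.yaml on the agent (sketch; values are placeholders;
# the token would be copied from /var/lib/rancher/k3s/server/token on a
# server node)
server: https://my-server.example:6443
token: <token copied from a server node>
```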
@bramnet this issue has wandered a bit; I may need to lock it so that folks can open their own issues describing their individual problems. What is the exact message you're getting?
I was just trying again to reproduce it, and suddenly it's saying the node is up to date… not sure what happened here.
I'm having the same issue on a single node cluster. I noticed that …
Same here using single-master mode, version …
I'm not aware of any paths in the k3s code that would cause it to write an empty token file. If anyone else runs into this, and can confirm that they are not using any automation or scripting to manage the content of that file, please open a new issue with steps that can help us reproduce this.
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Describe the bug:
I tried to upgrade the k3s version of my cluster (master node and worker nodes) by following this: k3s-upgrade
Steps To Reproduce:
my plans:
server.yml
agent.yml
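The reporter's actual server.yml and agent.yml were not posted. For context, a hedged sketch of what a server-node Plan for the system-upgrade-controller typically looks like; the names, namespace, and node selector here are illustrative:

```yaml
# Illustrative server-node upgrade Plan (not the reporter's actual file)
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.23.4+k3s1
```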
Expected behavior:
All nodes upgrade successfully to k3s version 1.23.4+k3s1.
Actual behavior:
On the master node, the upgrade replaced the k3s binary on the machine, but the k3s service failed to start.
Additional context / logs: