AL2023 - PrivateDNSName regression #1711
I use Karpenter to launch nodes. Is there a way to patch the user data with a blend of bash and the new NodeConfig?
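For context: AL2023's nodeadm reads its NodeConfig from a MIME multipart user data document, so bash and NodeConfig can live side by side in the same document. A rough sketch is below; the cluster name, endpoint, CA, and boundary string are hypothetical placeholders, and Karpenter normally generates the NodeConfig part itself and merges in any user-supplied parts rather than you writing the whole document by hand:

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster                                  # hypothetical placeholder
    apiServerEndpoint: https://example.eks.amazonaws.com  # hypothetical placeholder
    certificateAuthority: <base64-encoded-CA>
    cidr: 10.100.0.0/16

--BOUNDARY
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Custom bash runs as its own part alongside the NodeConfig part
echo "hello from user data"

--BOUNDARY--
```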
Sorry about this. We were intending to change the naming convention for nodes in AL2023 from the beginning, to use instance IDs instead of the PrivateDnsName. This had some downstream effects and ultimately didn't make the cut (though we intend to make it opt-in soon). I'll get a PR up to address this.
Now that the fix for this issue has been merged, how long before we can expect to see it released? We're itching to get AL2023 nodes running in our EKS cluster.
By the way, I ran into this same issue. If you set the hostname like this in your user data, you should be able to get your instance working in the meantime so you can test before the patch is out:

```bash
TOKEN=$(curl --request PUT "http://169.254.169.254/latest/api/token" --header "X-aws-ec2-metadata-token-ttl-seconds: 10")
REGION=$(curl http://169.254.169.254/latest/meta-data/placement/region --header "X-aws-ec2-metadata-token: $TOKEN")
IP_BASED_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname --header "X-aws-ec2-metadata-token: $TOKEN" | cut -f1 -d".")
hostnamectl set-hostname --static "$IP_BASED_NAME.$REGION.compute.internal"
```
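The string handling in that workaround just keeps the `ip-a-b-c-d` label from the DHCP-supplied hostname and re-appends the default AWS suffix. A minimal sketch with hypothetical stand-in values (no IMDS call, so it can be tried anywhere):

```shell
# Stand-in values; on a real instance these come from IMDS (hypothetical examples)
FULL_HOSTNAME="ip-10-0-1-23.custom.example.internal"
REGION="us-west-2"

# Keep only the first dot-separated label, then re-append the default suffix
IP_BASED_NAME=$(echo "$FULL_HOSTNAME" | cut -f1 -d".")
echo "${IP_BASED_NAME}.${REGION}.compute.internal"
# prints ip-10-0-1-23.us-west-2.compute.internal
```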
The fix will be released in the next AMI: https://github.com/awslabs/amazon-eks-ami/releases/tag/v20240315
Confirmed this fix is working with the following image:
What happened:
With the new AL2023 NodeConfig system, custom private DNS names again cause problems for new nodes (previously reported in #1263 and fixed in #1264). Our VPC uses a DHCP options set that specifies a custom domain name, which prevents nodes from joining the cluster.
What you expected to happen:
Nodes can join the cluster successfully after launch
How to reproduce it (as minimally and precisely as possible):
#1263 gives great reproduction steps
For me, just launching new AL2023 nodes in a VPC whose DHCP options set a custom domain name produces these logs from kubelet:
Anything else we need to know?:
I erroneously reported this here first: aws/karpenter-provider-aws#5793
Other similar issues:
#1376
#1457
Environment:
- Platform version (`aws eks describe-cluster --name <name> --query cluster.platformVersion`): eks.1
- Kubernetes version (`aws eks describe-cluster --name <name> --query cluster.version`): 1.29
- Kernel (`uname -a`): 6.1.77-99.164.amzn2023.x86_64
- Release information (`cat /etc/eks/release` on a node):