Support node CIDR mask config #488
We are trying to deploy an EKS Anywhere setup with more than 500 nodes and are hitting this issue too. We'd appreciate any workarounds or suggestions to move ahead.
We are going to go ahead and look into this one and see if we can get something out quickly. We'll plan on having it in our late June release (0.10.0), but we could produce a dev build if you are interested in testing it earlier once we have something in place.
@jaxesn Please let us know when you have the fix, happy to give it a shot! Just FYI, here is the command we used to set the CIDR on the nodes, after which the Cilium pods came up as expected. Before this, we had only 254 Cilium pods coming up because a /24 mask was used.
When you initially created the cluster with 500 nodes, what were some of the values of the cilium annotation before you changed it? This information may still exist as …
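For anyone following along, one way to inspect what each node currently carries is a custom-columns query; this assumes the io.cilium.network.ipv4-pod-cidr annotation key, with its dots escaped as custom-columns requires:

```sh
# Show each node's Kubernetes-assigned pod CIDR next to the Cilium annotation.
kubectl get nodes -o custom-columns='NAME:.metadata.name,PODCIDR:.spec.podCIDR,CILIUM:.metadata.annotations.io\.cilium\.network\.ipv4-pod-cidr'
```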
@CharudathGopal Would you mind giving me a few more details on your network setup and what CIDR range and masks you would like to set?
Here is the snapshot of the Cilium config.
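The snapshot itself was lost in this copy. For context, the relevant keys in Cilium's cilium-config ConfigMap usually look something like the following; the values here are assumptions for illustration, not the poster's actual settings:

```yaml
# Sketch only -- assumed values, not the original snapshot. With
# ipam: kubernetes, each node's pod CIDR comes from the
# kube-controller-manager, whose default node mask size is /24.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  ipam: kubernetes
  k8s-require-ipv4-pod-cidr: "true"
  enable-ipv4: "true"
```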
With this configuration, Cilium pods were failing to come up after reaching 255 nodes, so I added a few more params. This did not make much difference; finally, after setting the annotations on the nodes using this command, the Cilium pods came up.
After that annotation change, are pods running on all nodes? Could you send the results of "k get pods -A -owide"? Setting the pod CIDR range to the entire /16 block on all nodes seems like it shouldn't work, since all the nodes could potentially be trying to assign pods the same IPs as other nodes. I think exposing the node CIDR mask makes a lot of sense, and @mitalipaygude is actively looking at what it will take to do that, but I want to make sure that would actually solve the problem in your environment. Are you thinking of leaving the pod CIDR the same, 192.168.0.0/16, and then changing the node CIDR mask to something like 28 to increase the number of available ranges for nodes but limit the number of pods on each node? Or were you thinking of opening up your CIDR range to something like 10.0.0.0/8 to have more total IPs?
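To make that trade-off concrete, here is the arithmetic behind the question, assuming the 192.168.0.0/16 pod CIDR from this thread:

```sh
# Number of per-node ranges = 2^(node-mask-size - pod-CIDR-prefix)
# /16 pod CIDR, /24 node mask: 2^(24-16) = 256 ranges   -> caps out near 255 nodes
# /16 pod CIDR, /28 node mask: 2^(28-16) = 4096 ranges  -> room for ~4096 nodes,
#                                                          but only 16 IPs per node
# /8 pod CIDR,  /24 node mask: 2^(24-8)  = 65536 ranges -> 254 usable pod IPs each
```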
Right now, the kube-controller-manager is using the default for --node-cidr-mask-size (24 for IPv4 and 64 for IPv6). Add the ability to configure this through the eks-a cluster config CRD.
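For illustration only, here is one shape such a knob could take in the eks-a Cluster spec. The pods and services blocks follow the existing clusterNetwork layout; the nodes.cidrMaskSize field is the proposal being requested here, not a field that existed at the time of this issue:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster          # placeholder name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
    # Proposed knob: would be passed through to the
    # kube-controller-manager as --node-cidr-mask-size.
    nodes:
      cidrMaskSize: 28
```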