
eksa 0.8.1 cluster upgrade error #1819

Closed
dborysenko opened this issue Apr 12, 2022 · 3 comments
Labels
external An issue, bug or feature request filed from outside the AWS org

Comments

@dborysenko

What happened:
We upgraded the eksa CLI from 0.7.2 to 0.8.1 and tried to upgrade a running cluster with 0.8.1, with no luck. We get the error below:

[skipped]
2022-04-11T16:48:24.440-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker exec -i eksa_1649713698917347000 kubectl get --namespace eksa-system releases.distro.eks.amazonaws.com kubernetes-1-21-eks-9 -o json --kubeconfig eksa-mgmt-cl01/eksa-mgmt-cl01-eks-a-cluster.kubeconfig"}
error: the server doesn't have a resource type "releases"
Error: failed to display upgrade plan: failed fetching EKS-D release for cluster: error getting releases.distro.eks.amazonaws.com with kubectl: exit status 1

What you expected to happen:
We expected to see the upgrade plan and a subsequent cluster upgrade.

How to reproduce it (as minimally and precisely as possible):
Build a management cluster using eksa 0.7.2:

bash-3.2$ eksctl anywhere version
v0.7.2
bash-3.2$ eksctl anywhere download artifacts -f eksa-mgmt-cl01.yaml -r
bash-3.2$ eksctl anywhere import-images -f eksa-mgmt-cl01.yaml -v 9
bash-3.2$ eksctl anywhere create cluster -f eksa-mgmt-cl01.yaml -v 9 --bundles-override=eks-anywhere-downloads/manifest.yaml

Wait for the cluster to be created.
Upgrade eksa to 0.8.1, download artifacts, import images, and try to upgrade:

bash-3.2$ eksctl anywhere version
v0.8.1
bash-3.2$ rm -rf eks-anywhere-downloads*
bash-3.2$ eksctl anywhere download artifacts -f eksa-mgmt-cl01.yaml -r
bash-3.2$ eksctl anywhere import-images -f eksa-mgmt-cl01.yaml -v 9
2022-04-11T15:46:15.757-0500	V4	Logger init completed	{"vlevel": 9}
2022-04-11T15:46:15.760-0500	V4	Reading releases manifest	{"url": "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml"}
2022-04-11T15:46:16.383-0500	V4	Reading bundles manifest	{"url": "https://anywhere-assets.eks.amazonaws.com/releases/bundles/8/manifest.yaml"}

[skipped]

2022-04-11T15:51:31.652-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker exec -i -e HELM_EXPERIMENTAL_OCI=1 eksa_1649709977420326000 helm push cilium-1.9.13-eksa.2.tgz oci://my-registry.local/cilium-chart --insecure-skip-tls-verify"}
2022-04-11T15:51:34.185-0500	V3	Cleaning up long running container	{"name": "eksa_1649709977420326000"}
2022-04-11T15:51:34.186-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker rm -f -v eksa_1649709977420326000"}

Import and tag the bottlerocket-vmware-k8s-1.22-x86_64-v1.6.2 OVA template.

Try to generate an upgrade plan:

bash-3.2$ eksctl anywhere upgrade plan cluster -f eksa-mgmt-cl01.yaml -v 9
2022-04-11T16:59:17.164-0500	V4	Logger init completed	{"vlevel": 9}
2022-04-11T16:59:17.165-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker version --format {{.Client.Version}}"}
2022-04-11T16:59:17.730-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker info --format '{{json .MemTotal}}'"}
2022-04-11T16:59:18.368-0500	V4	Reading releases manifest	{"url": "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml"}
2022-04-11T16:59:19.085-0500	V4	Reading bundles manifest	{"url": "https://anywhere-assets.eks.amazonaws.com/releases/bundles/8/manifest.yaml"}
2022-04-11T16:59:19.674-0500	V2	Pulling docker image	{"image": "my-registry.local:443/eks-anywhere/cli-tools:v0.7.2-eks-a-8"}
2022-04-11T16:59:19.674-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker pull my-registry.local:443/eks-anywhere/cli-tools:v0.7.2-eks-a-8"}
2022-04-11T16:59:21.632-0500	V3	Initializing long running container	{"name": "eksa_1649714359674697000", "image": "my-registry.local:443/eks-anywhere/cli-tools:v0.7.2-eks-a-8"}
2022-04-11T16:59:21.632-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker run -d --name eksa_1649714359674697000 --network host -w /Users/dborysenko/git/eksa-lab -v /var/run/docker.sock:/var/run/docker.sock -v /Users/dborysenko/git/eksa-lab:/Users/dborysenko/git/eksa-lab --entrypoint sleep my-registry.local:443/eks-anywhere/cli-tools:v0.7.2-eks-a-8 infinity"}
2022-04-11T16:59:22.144-0500	V0	Checking new release availability...
2022-04-11T16:59:22.144-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker exec -i eksa_1649714359674697000 kubectl get clusters.anywhere.eks.amazonaws.com -A -o jsonpath={.items[0]} --kubeconfig eksa-mgmt-cl01/eksa-mgmt-cl01-eks-a-cluster.kubeconfig --field-selector=metadata.name=eksa-mgmt-cl01"}
2022-04-11T16:59:24.005-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker exec -i eksa_1649714359674697000 kubectl get bundles.anywhere.eks.amazonaws.com eksa-mgmt-cl01 -o json --kubeconfig eksa-mgmt-cl01/eksa-mgmt-cl01-eks-a-cluster.kubeconfig --namespace default"}
2022-04-11T16:59:25.478-0500	V6	Executing command	{"cmd": "/usr/local/bin/docker exec -i eksa_1649714359674697000 kubectl get --namespace eksa-system releases.distro.eks.amazonaws.com kubernetes-1-21-eks-9 -o json --kubeconfig eksa-mgmt-cl01/eksa-mgmt-cl01-eks-a-cluster.kubeconfig"}
error: the server doesn't have a resource type "releases"
Error: failed to display upgrade plan: failed fetching EKS-D release for cluster: error getting releases.distro.eks.amazonaws.com with kubectl: exit status 1
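The failure suggests the `releases.distro.eks.amazonaws.com` custom resource type is not registered on the management cluster when the 0.8.1 CLI queries it. A quick diagnostic (a sketch, not part of the original report; the kubeconfig path is taken from the commands above) is to check for the CRD directly before requesting an upgrade plan:

```shell
# Diagnostic sketch (assumption, not from the original report): check whether
# the EKS-D Release CRD is registered on the management cluster. The
# kubeconfig path matches the one used in the reproduction steps above.
KUBECONFIG_PATH=eksa-mgmt-cl01/eksa-mgmt-cl01-eks-a-cluster.kubeconfig

if kubectl get crd releases.distro.eks.amazonaws.com \
    --kubeconfig "$KUBECONFIG_PATH" >/dev/null 2>&1; then
  MSG="EKS-D Release CRD present"
else
  MSG="EKS-D Release CRD missing - 'upgrade plan' will fail with the error above"
fi
echo "$MSG"
```

If the CRD is missing, `kubectl get releases.distro.eks.amazonaws.com` will always fail with "the server doesn't have a resource type", regardless of namespace.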

Anything else we need to know?:
eksa-mgmt-cl01.yaml:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: eksa-mgmt-cl01
spec:
  clusterNetwork:
    cni: cilium
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 2
    endpoint:
      host: "10.27.159.200"
    machineGroupRef:
      kind: VSphereMachineConfig
      name: eksa-mgmt-cl01-cp
  datacenterRef:
    kind: VSphereDatacenterConfig
    name: eksa-mgmt-cl01
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: VSphereMachineConfig
      name: eksa-mgmt-cl01-etcd
  kubernetesVersion: "1.22"
  managementCluster:
    name: eksa-mgmt-cl01
  workerNodeGroupConfigurations:
  - count: 3
    machineGroupRef:
      kind: VSphereMachineConfig
      name: eksa-mgmt-cl01
    name: md-0
  registryMirrorConfiguration:
    endpoint: my-registry.local
    port: 443

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
  name: eksa-mgmt-cl01
spec:
  datacenter: "MSDC"
  insecure: false
  network: "/MSDC/network/MY-VLAN-DHCP"
  server: "my-vsphere.local"
  thumbprint: "29:F6:..."

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: eksa-mgmt-cl01-cp
spec:
  datastore: "vsanDatastore-workload"
  diskGiB: 25
  folder: "/CDVR-K8-EKS"
  memoryMiB: 8192
  numCPUs: 2
  osFamily: bottlerocket
  resourcePool: "eksa-mgmt-cl01"
  template: "/MSDC/vm/Templates/eksa/bottlerocket-vmware-k8s-1.22-x86_64-v1.6.2"
  users:
    - name: "ec2-user"
      sshAuthorizedKeys:
      - "ssh-rsa AAAA..."

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: eksa-mgmt-cl01
spec:
  datastore: "vsanDatastore-workload"
  diskGiB: 25
  folder: "/CDVR-K8-EKS"
  memoryMiB: 8192
  numCPUs: 2
  osFamily: bottlerocket
  resourcePool: "eksa-mgmt-cl01"
  template: "/MSDC/vm/Templates/eksa/bottlerocket-vmware-k8s-1.22-x86_64-v1.6.2"
  users:
    - name: "ec2-user"
      sshAuthorizedKeys:
      - "ssh-rsa AAAA..."

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: eksa-mgmt-cl01-etcd
spec:
  datastore: "vsanDatastore-workload"
  diskGiB: 25
  folder: "/CDVR-K8-EKS"
  memoryMiB: 8192
  numCPUs: 2
  osFamily: bottlerocket
  resourcePool: "eksa-mgmt-cl01"
  template: "/MSDC/vm/Templates/eksa/bottlerocket-vmware-k8s-1.22-x86_64-v1.6.2"
  users:
    - name: "ec2-user"
      sshAuthorizedKeys:
      - "ssh-rsa AAAA..."

Environment:

  • EKS Anywhere Release: 0.7.2; 0.8.1
  • EKS Distro Release:
@mrajashree mrajashree added the external An issue, bug or feature request filed from outside the AWS org label Apr 12, 2022
@mrajashree (Contributor)

Thanks for opening the issue @dborysenko, we are taking a look.

@taneyland (Member)

taneyland commented Apr 14, 2022

Hi @dborysenko,

This issue has been fixed in the latest EKS Anywhere release, v0.8.2. Please give it a try! Thanks.

@dborysenko (Author)

Hi @taneyland
0.8.2 worked just fine. Thanks for the quick fix!
