
Deploy k8s cluster fails with Pharos::PhaseManager::Error : Phase failed on 1 host: #1559

Closed
raghavanv90 opened this issue Sep 23, 2020 · 3 comments

@raghavanv90

When I try to deploy the k8s cluster, it fails with the error below.
Environment:
VMs - tried on both versions - CentOS Linux release 7.8.2003 (Core)
pharos 3.2.1, chpharos 0.6.2
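For context, a minimal cluster.yml of the kind such a deploy uses (a hypothetical reconstruction; the actual config was not posted, and the address/user values below are placeholders):

hosts:
  - address: 10.0.0.10   # placeholder standing in for vm-1
    user: centos         # placeholder SSH user
    role: master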

ERROR: Pharos::PhaseManager::Error : Phase failed on 1 host:

[vm-1] Retrying after 2 seconds (#3) ...
[vm-1] Removing old certificates ...
[vm-1] Configuring etcd certs ...
[vm-1] Configuring etcd ...
[vm-1] Retrying after 2 seconds (#4) ...
[vm-1] Removing old certificates ...
[vm-1] Configuring etcd certs ...
[vm-1] Configuring etcd ...
[vm-1] Retried 5 times, increasing verbosity
[vm-1] Error: exec failed with code 1: ensure-kubelet.sh
[vm-1] + sudo env -i - http_proxy= https_proxy= no_proxy= HTTP_PROXY= HTTPS_PROXY= NO_PROXY= FTP_PROXY= PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin ARCH=amd64 KUBE_VERSION=1.18.6 CNI_VERSION=0.8.6 'KUBELET_ARGS=--rotate-server-certificates --fail-swap-on=false --pod-manifest-path=/etc/kubernetes/manifests/ --address=127.0.0.1 --cgroup-driver=cgroupfs' IMAGE_REPO=docker.io/kontenapharos bash --norc --noprofile -x -s
[vm-1] + . /usr/local/share/pharos/util.sh
[vm-1] + . /usr/local/share/pharos/el7.sh
[vm-1] + set -e
[vm-1] + systemctl is-active --quiet kubelet
[vm-1] + mkdir -p /etc/systemd/system/kubelet.service.d
[vm-1] + cat
[vm-1] + yum_install_with_lock kubernetes-cni 0.8.6
[vm-1] + versionlock=/etc/yum/pluginconf.d/versionlock.list
[vm-1] + package=kubernetes-cni
[vm-1] + version=0.8.6
[vm-1] + linefromfile '^0:kubernetes-cni-' /etc/yum/pluginconf.d/versionlock.list
[vm-1] + '[' 2 -lt 2 ']'
[vm-1] + match='^0:kubernetes-cni-'
[vm-1] + shift
[vm-1] + for file in '"$@"'
[vm-1] + file_exists /etc/yum/pluginconf.d/versionlock.list
[vm-1] + '[' -f /etc/yum/pluginconf.d/versionlock.list ']'
[vm-1] + return 0
[vm-1] + sed -i '/^0:kubernetes-cni-/d' /etc/yum/pluginconf.d/versionlock.list
[vm-1] + unset match
[vm-1] + yum install -y kubernetes-cni-0.8.6
[vm-1] Loaded plugins: fastestmirror, versionlock
[vm-1] Loading mirror speeds from cached hostfile
[vm-1] * base: mirror.keystealth.org
[vm-1] * extras: mirror.shastacoe.net
[vm-1] * updates: mirror.shastacoe.net
[vm-1] Excluding 1 update due to versionlock (use "yum versionlock status" to show it)
[vm-1] Package matching kubernetes-cni-0.8.6-0.x86_64 already installed. Checking for update.
[vm-1] Nothing to do
[vm-1] + rpm -qi kubernetes-cni-0.8.6
[vm-1] + yum downgrade -y kubernetes-cni-0.8.6
[vm-1] Loaded plugins: fastestmirror, versionlock
[vm-1] Loading mirror speeds from cached hostfile
[vm-1] * base: mirror.keystealth.org
[vm-1] * extras: mirror.shastacoe.net
[vm-1] * updates: mirror.shastacoe.net
[vm-1] Excluding 1 update due to versionlock (use "yum versionlock status" to show it)
[vm-1] Resolving Dependencies
[vm-1] --> Running transaction check
[vm-1] ---> Package kubernetes-cni.x86_64 0:0.8.6-0 will be a downgrade
[vm-1] ---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be erased
[vm-1] --> Finished Dependency Resolution
[vm-1] Error: Package: kubelet-1.19.2-0.x86_64 (@kubernetes)
[vm-1] Requires: kubernetes-cni >= 0.8.7
[vm-1] Removing: kubernetes-cni-0.8.7-0.x86_64 (@kubernetes)
[vm-1] kubernetes-cni = 0.8.7-0
[vm-1] Downgraded By: kubernetes-cni-0.8.6-0.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.8.6-0
[vm-1] Available: kubernetes-cni-0.3.0.1-0.07a8a2.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.3.0.1-0.07a8a2
[vm-1] Available: kubernetes-cni-0.5.1-0.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.5.1-0
[vm-1] Available: kubernetes-cni-0.5.1-1.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.5.1-1
[vm-1] Available: kubernetes-cni-0.6.0-0.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.6.0-0
[vm-1] Available: kubernetes-cni-0.7.5-0.x86_64 (kubernetes)
[vm-1] kubernetes-cni = 0.7.5-0
[vm-1] You could try using --skip-broken to work around the problem
[vm-1] You could try running: rpm -Va --nofiles --nodigest
[vm-1] Retrying after 2 seconds (#5) ...
[vm-1] Removing old certificates ...
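The trace reduces to a plain yum dependency conflict: the host already has kubelet-1.19.2 installed from the kubernetes repo, which requires kubernetes-cni >= 0.8.7, while pharos 3.2.1 (KUBE_VERSION=1.18.6, CNI_VERSION=0.8.6) tries to downgrade kubernetes-cni to 0.8.6. A minimal manual reproduction, assuming the same package versions as in the log:

# hypothetical reproduction on a CentOS 7 host with the kubernetes repo enabled
sudo yum install -y kubelet-1.19.2           # drags in kubernetes-cni >= 0.8.7
sudo yum downgrade -y kubernetes-cni-0.8.6   # fails: kubelet-1.19.2 "Requires: kubernetes-cni >= 0.8.7"

From the `set -x` trace above, the failing helper behaves roughly like this (a hedged reconstruction from the trace, not the actual Pharos source; linefromfile is the util.sh helper seen in the log):

yum_install_with_lock() {
  local package="$1" version="$2"
  local versionlock=/etc/yum/pluginconf.d/versionlock.list
  # drop any stale versionlock entry for the package
  linefromfile "^0:${package}-" "$versionlock"
  yum install -y "${package}-${version}"
  # if the pinned version is not the one actually installed, force a downgrade;
  # this is the step that explodes here, because the installed kubelet pins cni >= 0.8.7
  rpm -qi "${package}-${version}" > /dev/null || yum downgrade -y "${package}-${version}"
}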

@brentavery

Same problem here

@jakolehm jakolehm added the bug Something isn't working label Sep 26, 2020
@jakolehm jakolehm added this to the 3.2.2 milestone Sep 26, 2020
@jakolehm (Contributor)

Working on a fix here: #1556

@jakolehm (Contributor)

Fixed in #1556
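For anyone hitting this: once the 3.2.2 milestone release is out, picking up the fix should be a matter of switching versions with chpharos and re-running the deploy (a sketch; it assumes the fix from #1556 ships in 3.2.2, per the milestone above):

chpharos install 3.2.2
chpharos use 3.2.2
pharos up -c cluster.yml   # re-run the deploy against the same cluster.yml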
