
Openstack cloud provider causing apiserver container to fail #588

Closed
bputt opened this issue Dec 6, 2017 · 4 comments · Fixed by kubernetes/kubernetes#57561

bputt commented Dec 6, 2017

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT

Versions

kubeadm version (use kubeadm version): 1.8.4

Environment:

  • Kubernetes version (use kubectl version): 1.8.4
  • Cloud provider or hardware configuration: openstack icehouse
  • OS (e.g. from /etc/os-release): CentOS 7.4
  • Kernel (e.g. uname -a): Linux 3.10
  • Others: Docker 17.09.0-ce

What happened?

When adding --cloud-provider=openstack and --cloud-config=/etc/kubernetes/cloud.conf to /etc/kubernetes/manifests/kube-apiserver.yaml, I get the following error when the container starts:

[mount_linux.go:142] Mount failed: exit status 1
Mounting arguments: -t vfat -o ro /tmp/configdrive071298910
Output: mount: can't read '/etc/fstab': No such file or directory
Error mounting configdrive: mount failed

What you expected to happen?

For the container to find the config-2 drive and stay alive

How to reproduce it (as minimally and precisely as possible)?

/etc/kubernetes/manifests/kube-apiserver.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml share the following:

spec:
  containers:
  - command:
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf
    ...

volumeMounts:
- mountPath: /etc/kubernetes/cloud.conf
  name: cloud-config
  readOnly: true
...

volumes:
- hostPath:
    path: /etc/kubernetes/cloud.conf
    type: File
  name: cloud-config
...

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

/etc/kubernetes/cloud.conf

[Global]
auth-url=...
tenant-id=...
username=...
password=...
region=...
ca-file=...

Anything else we need to know?

The OpenStack metadata URL is not available over HTTP, so unless HTTPS support is added, I need the config drive to work as expected.

Referred here from: kubernetes/kubernetes#47392

@xiaosuiba

After digging a little into the source code, I found a workaround. You just have to map the /dev/sr0 device into your container and set the container as privileged.
Here is a working example:

...
volumeMounts:
- mountPath: /dev/disk/by-label/config-2
  name: sr0
...
volumes:
- hostPath:
    path: /dev/sr0
    type: BlockDevice
  name: sr0
securityContext:
  privileged: true
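
Note that `privileged: true` is a container-level securityContext field, so it needs to sit under the kube-apiserver container entry rather than at the pod level. A minimal sketch of how the pieces above fit together (image and remaining flags elided; this is a reconstruction, not a verbatim manifest):

spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf
    securityContext:
      privileged: true                       # allow mount/blkid inside the container
    volumeMounts:
    - mountPath: /dev/disk/by-label/config-2
      name: sr0
  volumes:
  - hostPath:
      path: /dev/sr0                         # config drive device on the host
      type: BlockDevice
    name: sr0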


luxas commented Dec 22, 2017

Can you check whether this is fixed in v1.9.0?
cc @dims


dims commented Dec 22, 2017

/assign @dims


dims commented Dec 22, 2017

@xiaosuiba there is no need to mount /dev/sr0; just adding privileges is enough (at least with 1.9.0+). I have proposed a PR to enable that for OpenStack-based kubeadm (see kubernetes/kubernetes#57561).

Thanks a lot for all your research, it helped quite a bit in fixing this issue.

-- Dims
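
For reference, with 1.9.0+ the fix reduces to marking the static pods as privileged, with no device mount needed. A minimal sketch of the relevant fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (the same applies to kube-controller-manager.yaml), assuming the manifest layout from the report above:

spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf
    securityContext:
      privileged: true   # lets the OpenStack provider probe the config drive

The kubelet must also be started with --allow-privileged=true (already present in the drop-in above) for privileged static pods to run.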

k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jan 18, 2018
…piserver-and-controller

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Enable privileged containers for apiserver and controller

**What this PR does / why we need it**:

In an OpenStack environment, when there is no metadata service, we look at the config drive to figure out the metadata. Since we need to run commands like blkid, we need to ensure that the api server and kube controller are running in privileged mode.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #47392
Fixes kubernetes/kubeadm#588

**Special notes for your reviewer**:

**Release note**:

```release-note
Fix issue when using OpenStack config drive for node metadata
```