
Rename labels from sigs.k8s.io to machine.openshift.io #116

Merged: 1 commit, merged into openshift:master on Feb 27, 2019
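
For context, the PR moves the machine API label keys from the upstream sigs.k8s.io prefix to the OpenShift-owned machine.openshift.io prefix. A minimal sketch of that kind of rename in Go (the constant and function names below are hypothetical illustrations, not the actual diff):

// Hypothetical sketch of the label-key rename; the real constants
// and their call sites live in the PR diff itself.
package labels

const (
	// Before: upstream cluster-api label prefix.
	oldMachineSetLabel = "sigs.k8s.io/cluster-api-machineset"
	// After: OpenShift-owned label prefix.
	newMachineSetLabel = "machine.openshift.io/cluster-api-machineset"
)

// MachineSetSelector builds the label selector a controller could use
// to list Machines that belong to a MachineSet under the new prefix.
func MachineSetSelector(machineSetName string) string {
	return newMachineSetLabel + "=" + machineSetName
}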

Conversation

@spangenberg (Contributor)

/cc bison

@openshift-ci-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) on Feb 18, 2019
@spangenberg (Contributor, Author)

/test e2e

@spangenberg (Contributor, Author)

/retest

@frobware (Contributor)

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Feb 25, 2019
@ingvagabund (Member)

/retest

@ingvagabund (Member)

/approve

@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ingvagabund

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Feb 25, 2019
@ingvagabund (Member)

/retest

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

6 similar comments

@ingvagabund (Member)

/retest

1 similar comment

@bison (Contributor) commented Feb 26, 2019

Looks like we failed to send the image to the remote machine. I guess it didn't exist locally?

I0226 12:07:58.381930    2629 framework.go:358] Uploading "registry.svc.ci.openshift.org/openshift/origin-v4.0-2019-02-26-043319@sha256:d3b4c0908122e1645c1b6905728f5ffd338991a985ba4f2927e932e129712a28" to the master machine under "192.168.122.51"
I0226 12:07:58.622430    2629 framework.go:369] Error response from daemon: reference does not exist
open /var/lib/docker/tmp/docker-import-247578409/repositories: no such file or directory
STEP: Deleting machine API controllers
I0226 12:08:03.634426    2629 framework.go:292] del.err: <nil>
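
The upload step streams an image from the local Docker daemon to the master machine, so a reference missing from the local daemon only surfaces mid-upload as "reference does not exist". A minimal sketch of a fail-fast guard (a hypothetical helper, not the framework's actual code, assuming the docker CLI is on PATH):

// checkLocalImage returns an error when the image reference is not
// present in the local Docker daemon, letting the e2e run fail fast
// instead of erroring mid-upload on the remote machine.
// Hypothetical helper; not part of the actual test framework.
package main

import (
	"fmt"
	"os/exec"
)

func checkLocalImage(ref string) error {
	// `docker image inspect` exits non-zero for an unknown reference.
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
		return fmt.Errorf("image %q not present locally: %v", ref, err)
	}
	return nil
}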

@ingvagabund (Member)

/retest

@ingvagabund (Member) commented Feb 26, 2019

> Looks like we failed to send the image to the remote machine. I guess it didn't exist locally?

Yeah, I accidentally re-rendered changes in the JJ without updating my local template. I am fixing the job now; it just takes time, as it needs to spin up a new Packet instance every time I make a change.

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

3 similar comments

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

5 similar comments

@ingvagabund (Member)

/retest

@ingvagabund (Member)

Interesting:

Name:               clusterapi-controllers-7f87dbb645-lsdrh
Namespace:          namespace-9189e2ef-3a81-11e9-9b17-0cc47ab214f0
Priority:           0
PriorityClassName:  <none>
Node:               192.168.122.51/192.168.122.51
Start Time:         Wed, 27 Feb 2019 11:24:42 +0000
Labels:             api=clusterapi
                    pod-template-hash=3943866201
Annotations:        <none>
Status:             Running
IP:                 192.168.0.4
Controlled By:      ReplicaSet/clusterapi-controllers-7f87dbb645
Containers:
  machine-controller:
    Container ID:  docker://2cf7e5609e0ec94cadcd646a5a41b2231609bff8ecd6632510d41f594a615f17
    Image:         gcr.io/k8s-cluster-api/libvirt-machine-controller:0.0.1
    Image ID:      docker://sha256:cb3e8fcd1f6d84a4dc105c85fed13198c5afd53bce0809597f51eb430a6eb398
    Port:          <none>
    Host Port:     <none>
    Command:
      ./machine-controller-manager
    Args:
      --logtostderr=true
      --v=3
    State:          Running
      Started:      Wed, 27 Feb 2019 11:24:45 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:     100m
      memory:  20Mi
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /etc/kubernetes from config (rw)
      /etc/ssl/certs from certs (rw)
      /root/.ssh/actuator.pem from libvirt-private-key (ro)
      /usr/bin/kubeadm from kubeadm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qswwd (ro)
  nodelink-controller:
    Container ID:  docker://a7b9f9e8cbb1fd108af0b83f5fedfc643d39b017b11f4c06e213f9b75585271c
    Image:         gcr.io/k8s-cluster-api/machine-api-operator:0.0.1
    Image ID:      docker://sha256:da9d5c20d16f094673cb6aa50f647f8781819c068145104c6dd230870a79a5e5
    Port:          <none>
    Host Port:     <none>
    Command:
      ./nodelink-controller
    State:          Running
      Started:      Wed, 27 Feb 2019 11:24:45 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:        100m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /etc/kubernetes from config (rw)
      /etc/ssl/certs from certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qswwd (ro)
  manager:
    Container ID:  docker://5651f638b852f10845385cde4bd7623d8534f9bd60c7031d738545bfec56914d
    Image:         gcr.io/k8s-cluster-api/machine-api-operator:0.0.1
    Image ID:      docker://sha256:da9d5c20d16f094673cb6aa50f647f8781819c068145104c6dd230870a79a5e5
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    ContainerCannotRun
      Message:   oci runtime error: container_linux.go:247: starting container process caused "exec: \"/manager\": stat /manager: no such file or directory"

      Exit Code:    127
      Started:      Wed, 27 Feb 2019 11:26:18 +0000
      Finished:     Wed, 27 Feb 2019 11:26:18 +0000
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:        100m
      memory:     30Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qswwd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes
    HostPathType:  
  certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  
  kubeadm:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/bin/kubeadm
    HostPathType:  
  libvirt-private-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  libvirt-private-key
    Optional:    false
  default-token-qswwd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qswwd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.alpha.kubernetes.io/notReady:NoExecute
                 node.alpha.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age              From                     Message
  ----     ------     ----             ----                     -------
  Normal   Scheduled  2m               default-scheduler        Successfully assigned namespace-9189e2ef-3a81-11e9-9b17-0cc47ab214f0/clusterapi-controllers-7f87dbb645-lsdrh to 192.168.122.51
  Normal   Pulled     2m               kubelet, 192.168.122.51  Container image "gcr.io/k8s-cluster-api/libvirt-machine-controller:0.0.1" already present on machine
  Normal   Created    2m               kubelet, 192.168.122.51  Created container
  Normal   Started    2m               kubelet, 192.168.122.51  Started container
  Normal   Pulled     2m               kubelet, 192.168.122.51  Container image "gcr.io/k8s-cluster-api/machine-api-operator:0.0.1" already present on machine
  Normal   Created    2m               kubelet, 192.168.122.51  Created container
  Normal   Started    2m               kubelet, 192.168.122.51  Started container
  Normal   Created    2m (x4 over 2m)  kubelet, 192.168.122.51  Created container
  Warning  Failed     2m (x4 over 2m)  kubelet, 192.168.122.51  Error: failed to start container "manager": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/manager\": stat /manager: no such file or directory"
  Warning  BackOff    1m (x6 over 2m)  kubelet, 192.168.122.51  Back-off restarting failed container
  Normal   Pulled     1m (x5 over 2m)  kubelet, 192.168.122.51  Container image "gcr.io/k8s-cluster-api/machine-api-operator:0.0.1" already present on machine
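
The interesting part is the manager container: it is started from gcr.io/k8s-cluster-api/machine-api-operator:0.0.1 with Command /manager, but the image evidently does not ship a /manager binary, so the runtime fails with "stat /manager: no such file or directory" (exit code 127, ContainerCannotRun) and the pod sits in CrashLoopBackOff. A sketch of the failing container spec, reconstructed in Go from the describe output above (a hypothetical reconstruction, not the actual manifest source):

package main

import corev1 "k8s.io/api/core/v1"

// manager is the failing container spec, reconstructed from the
// `kubectl describe` output above; hypothetical, for illustration only.
var manager = corev1.Container{
	Name:  "manager",
	Image: "gcr.io/k8s-cluster-api/machine-api-operator:0.0.1",
	// The image must ship a /manager binary at this path; here it does
	// not, so the runtime fails with ContainerCannotRun (exit code 127).
	Command: []string{"/manager"},
}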

@ingvagabund (Member)

/retest

1 similar comment

@openshift-merge-robot merged commit 5c8dd38 into openshift:master on Feb 27, 2019