
Metrics missing, such as container_cpu_usage_seconds_total #1958

Closed
ArvinDevel opened this issue Jun 10, 2018 · 7 comments

Comments

@ArvinDevel

I used kube-prometheus to monitor my Kubernetes cluster, but I found that the container-level metrics are restricted to pods in the kube-system, monitoring, and kubeflow-seldon namespaces; pods in other namespaces, such as default, are not shown. I'm curious about the reason.
Someone else happened to have the same problem, and the kube-state-metrics members thought the cause might lie in cAdvisor.
The official docs say that cAdvisor is integrated into Kubernetes, so I wanted to use the log command to investigate it, but I found that the cluster has no daemonset named cadvisor. Can you give me some suggestions on how to debug cAdvisor in Kubernetes?

@ArvinDevel
Author

Up to now, I have found that the metrics generated by cAdvisor (except those that can be displayed by Prometheus) have lots of null labels, as shown in the picture. The output is from one node's kubelet at http://localhost:8001/api/v1/nodes/MYNODE/proxy/metrics/cadvisor after running kubectl proxy.
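A minimal sketch of how that output can be fetched, with MYNODE standing in for an actual node name:

# Start a local proxy to the Kubernetes API server (listens on localhost:8001 by default)
kubectl proxy &

# Pull the cAdvisor metrics that the node's kubelet exposes through the API server proxy
curl -s http://localhost:8001/api/v1/nodes/MYNODE/proxy/metrics/cadvisor | head -n 40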
Output of

kubectl version

is

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:27:55Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

I think this is related to #1704. Is my Kubernetes cluster's version too old, and is that why I am seeing this problem?

@ArvinDevel
Author

[Screenshot: cadvisor-metrics — cAdvisor output with many empty label values]

@dashpole
Collaborator

cAdvisor logs are found in the kubelet logs.
I don't see any null labels; it looks like they are all filled in with empty strings when the value is not found. #1704 results in a different set of metrics each time: sometimes it shows only containers, sometimes only cgroups. Your issue sounds different.
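A minimal sketch of how to look at those logs, assuming the kubelet runs as a systemd service on the node (other setups keep the logs elsewhere):

# cAdvisor runs embedded in the kubelet, so its log lines show up in the kubelet logs
journalctl -u kubelet -f

# Filter recent kubelet log entries for cAdvisor-related messages
journalctl -u kubelet --since "1 hour ago" | grep -i cadvisor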

@ArvinDevel
Author

@dashpole thanks.
Yeah, not null labels, just empty strings.
What does the id field of the metric labels mean? Is there any documentation that explains this?

@ArvinDevel
Author

I solved my problem using this method; I forgot to clarify that some of my kubelets don't grant kube-state-metrics access to them. Now the metrics I want have appeared, but there are still lots of metrics with empty labels. I think these extra metrics may be related to the system.
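For anyone hitting the same thing, one quick sanity check is whether the scraper's service account is authorized against the kubelet endpoints; the service account name below is only an example, not taken from this thread:

# Check whether the scraping service account may read the kubelet metrics/proxy subresources
kubectl auth can-i get nodes/metrics --as=system:serviceaccount:monitoring:prometheus-k8s
kubectl auth can-i get nodes/proxy --as=system:serviceaccount:monitoring:prometheus-k8s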

@dashpole
Collaborator

cAdvisor produces metrics for each cgroup. It just happens to attach extra information to cgroups that it identifies as containers.
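A rough way to see this in the raw output, reusing the proxy endpoint from above; the sample series in the comments are illustrative, not copied from a real cluster:

# Every cgroup gets a series; the id label is the cgroup path, and the container/pod
# labels stay empty for cgroups that are not containers
curl -s http://localhost:8001/api/v1/nodes/MYNODE/proxy/metrics/cadvisor \
  | grep '^container_cpu_usage_seconds_total' | head -n 5
# e.g. container_cpu_usage_seconds_total{container_name="",id="/system.slice/docker.service",image="",...} 12.3
#      container_cpu_usage_seconds_total{container_name="nginx",id="/kubepods/...",pod_name="nginx-...",...} 45.6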

@ArvinDevel
Author

Since the problem has been solved, I'm closing this now. Thanks, @dashpole.
