From a6b1227249578becfd47bcacad31769330d2d1ba Mon Sep 17 00:00:00 2001
From: Sonny Rajagopalan
Date: Thu, 26 Jun 2025 11:16:52 -0400
Subject: [PATCH] Update private-registry.md

I added some clarifying information on _which_ node's containerd logs to check, as maintainers frequently request this information on the Issues page of this repo.

Signed-off-by: Sonny Rajagopalan
---
 docs/installation/private-registry.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/docs/installation/private-registry.md b/docs/installation/private-registry.md
index 828ff936ec..873c88b64f 100644
--- a/docs/installation/private-registry.md
+++ b/docs/installation/private-registry.md
@@ -255,7 +255,14 @@
 In order for the registry changes to take effect, you need to restart K3s on each node.
 
 When Kubernetes experiences problems pulling an image, the error displayed by the kubelet may only reflect the terminal error returned by the pull attempt made against the default endpoint, making it appear that the configured endpoints are not being used.
 
-Check the containerd log on the node at `/var/lib/rancher/k3s/agent/containerd/containerd.log` for detailed information on the root cause of the failure.
+Check the containerd log on the node at `/var/lib/rancher/k3s/agent/containerd/containerd.log` for detailed information on the root cause of the failure. On a multi-node cluster (the common case), first find the node on which the image pull was attempted by running `kubectl describe pod <pod-name>` and reading the `Node:` field, then check the containerd log on _that_ node, as shown below.
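+
+For example, with `my-app` as a placeholder pod name:
+
+```bash
+# Print the node the pod was scheduled to (replace my-app with your pod's name)
+kubectl describe pod my-app | grep '^Node:'
+```
 
 ## Adding Images to the Private Registry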