
[Release-1.28] - Bundle or embed a registry mirror #9189

Closed
brandond opened this issue Jan 9, 2024 · 5 comments
Comments

@brandond
Member

brandond commented Jan 9, 2024

Backport fix for Bundle or embed a registry mirror

@aganesh-suse

aganesh-suse commented Jan 25, 2024

Validated on release-1.28 branch with rc build: v1.28.6-rc1+k3s1

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"

$ uname -m
x86_64

Cluster Configuration:

HA: 3 servers / 1 agent

Config.yaml:

$ cat /etc/rancher/k3s/config.yaml

token: secret
write-kubeconfig-mode: "0644"
embedded-registry: true
disable-default-registry-endpoint: true
node-external-ip: 1.1.1.1
node-label:
- k3s-upgrade=server

registries.yaml:

 $ cat /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
  registry.k8s.io:
  gcr.io:
  quay.io:
  ghcr.io:

or for a private registry setting:

mirrors:
  test.compute.amazonaws.com:
    endpoint:
      - https://test.compute.amazonaws.com
configs:
  test.compute.amazonaws.com:
    auth:
      username: testuser
      password: password
    tls:
      ca_file: /home/ubuntu/ca.pem

test.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: pvt-reg-test
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvt-reg-test
  namespace: pvt-reg-test
spec:
  selector:
    matchLabels:
      k8s-app: nginx-app-clusterip
  replicas: 2
  template:
    metadata:
      labels:
        k8s-app: nginx-app-clusterip
    spec:
      containers:
      - name: nginx
        image: test.amazonaws.com/nginx:latest
        ports:
        - containerPort: 8080

Testing Steps

  1. Copy config.yaml into place:
$ sudo mkdir -p /etc/rancher/k3s && sudo cp config.yaml /etc/rancher/k3s

     Copy registries.yaml to /etc/rancher/k3s/registries.yaml, and copy ca.pem to the user home directory (matching the path given in registries.yaml).

  2. Install k3s
curl -sfL https://get.k3s.io | sudo INSTALL_K3S_VERSION='v1.28.6-rc1+k3s1' sh -s - server
  3. Verify cluster status:
kubectl get nodes -o wide
kubectl get pods -A
  4. Verify the metrics and look for spegel:
kubectl get --raw /api/v1/nodes/<NODENAME>/proxy/metrics | grep -F 'spegel'
  5. Deploy the test image (kubectl apply -f test.yaml) that was tagged and pushed to the private registry.
     Verify the pods come up successfully.
     Verify from the agent's journal logs that a 'received image event' was logged for this deployment.
    Ex:
$ sudo journalctl -xeu k3s-agent.service | grep 'spegel' | grep 'received image event'
level=info msg="spegel 2024/01/24 21:07:25 \"level\"=0 \"msg\"=\"received image event\" \"image\"=\"test.compute.amazonaws.com/nginx:latest@sha256:xxx
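When scripting this check, the image reference can be pulled out of the journal line with standard tools. A quick sketch (not part of the validation run; the sample line mirrors the truncated log output above, with the digest left as the placeholder sha256:xxx):

```shell
# Sketch: extract the image reference from a spegel 'received image event'
# journal line. The sample below mirrors the (truncated) log output above.
line='level=info msg="spegel 2024/01/24 21:07:25 \"level\"=0 \"msg\"=\"received image event\" \"image\"=\"test.compute.amazonaws.com/nginx:latest@sha256:xxx'
# Match a repo:tag@sha256:digest token; printf avoids echo's escape handling.
printf '%s\n' "$line" | grep -oE '[a-z0-9./-]+:[a-z0-9.-]+@sha256:[a-zA-Z0-9]+'
```

In practice the input would come from `journalctl -u k3s-agent.service` rather than a hard-coded variable.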

Validation Results:

  • k3s version used for validation:
k3s -v
k3s version v1.28.6-rc1+k3s1 (c236c9ff)
go version go1.20.13
kubectl get nodes
NAME               STATUS   ROLES                       AGE     VERSION
ip-172-31-25-70    Ready    control-plane,etcd,master   4m2s    v1.28.6-rc1+k3s1
ip-172-31-29-206   Ready    <none>                      89s     v1.28.6-rc1+k3s1
ip-172-31-30-115   Ready    control-plane,etcd,master   2m10s   v1.28.6-rc1+k3s1
ip-172-31-30-117   Ready    control-plane,etcd,master   3m13s   v1.28.6-rc1+k3s1
kubectl get pods -A
NAMESPACE      NAME                                      READY   STATUS      RESTARTS   AGE
kube-system    coredns-6799fbcd5-xj87f                   1/1     Running     0          4m1s
kube-system    helm-install-traefik-crd-2t9tb            0/1     Completed   1          4m1s
kube-system    helm-install-traefik-v68p8                0/1     Completed   0          4m1s
kube-system    local-path-provisioner-84db5d44d9-zjfhs   1/1     Running     0          4m1s
kube-system    metrics-server-67c658944b-gwk29           1/1     Running     0          4m1s
kube-system    svclb-traefik-1ecf3990-67477              2/2     Running     0          3m38s
kube-system    svclb-traefik-1ecf3990-r52ss              2/2     Running     0          2m19s
kube-system    svclb-traefik-1ecf3990-sx2vd              2/2     Running     0          3m38s
kube-system    svclb-traefik-1ecf3990-wrltk              2/2     Running     0          3m
kube-system    traefik-f4564c4f4-5jtx6                   1/1     Running     0          3m38s
pvt-reg-test   pvt-reg-test-78d66dbc7d-59hzc             1/1     Running     0          28s
pvt-reg-test   pvt-reg-test-78d66dbc7d-pj57s             1/1     Running     0          28s

See images used by the pods:

kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" | tr -s '[:space:]' '\n' | sort | uniq -c
      2 test.compute.amazonaws.com/nginx:latest
      2 rancher/klipper-helm:v0.8.2-build20230815
      8 rancher/klipper-lb:v0.4.5
      1 rancher/local-path-provisioner:v0.0.24
      1 rancher/mirrored-coredns-coredns:1.10.1
      1 rancher/mirrored-library-traefik:2.10.5
      1 rancher/mirrored-metrics-server:v0.6.3
Node: ip-172-31-25-70:
======= Execute START ========
 $ kubectl get --raw /api/v1/nodes/ip-172-31-25-70/proxy/metrics | grep -F 'spegel' 
libp2p_rcmgr_streams{dir="inbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 2
libp2p_rcmgr_streams{dir="outbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 3
======= Execute DONE ========
Node: ip-172-31-29-206:
======= Execute START ========
 $ kubectl get --raw /api/v1/nodes/ip-172-31-29-206/proxy/metrics | grep -F 'spegel' 
libp2p_rcmgr_streams{dir="inbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 2
libp2p_rcmgr_streams{dir="outbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 3
# HELP spegel_advertised_images Number of images advertised to be available.
# TYPE spegel_advertised_images gauge
spegel_advertised_images{registry="test.compute.amazonaws.com"} 2
# HELP spegel_mirror_requests_total Total number of mirror requests.
# TYPE spegel_mirror_requests_total counter
spegel_mirror_requests_total{cache="miss",registry="test.compute.amazonaws.com",source="internal"} 9
======= Execute DONE ========
Node: ip-172-31-30-115:
======= Execute START ========
 $ kubectl get --raw /api/v1/nodes/ip-172-31-30-115/proxy/metrics | grep -F 'spegel' 
libp2p_rcmgr_streams{dir="inbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 2
libp2p_rcmgr_streams{dir="outbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 3
# HELP spegel_advertised_images Number of images advertised to be available.
# TYPE spegel_advertised_images gauge
spegel_advertised_images{registry="test.compute.amazonaws.com"} 2
# HELP spegel_mirror_requests_total Total number of mirror requests.
# TYPE spegel_mirror_requests_total counter
spegel_mirror_requests_total{cache="miss",registry="test.compute.amazonaws.com",source="internal"} 9
======= Execute DONE ========
Node: ip-172-31-30-117:
======= Execute START ========
 $ kubectl get --raw /api/v1/nodes/ip-172-31-30-117/proxy/metrics | grep -F 'spegel' 
libp2p_rcmgr_streams{dir="inbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 3
libp2p_rcmgr_streams{dir="outbound",protocol="/spegel/kad/1.0.0",scope="protocol"} 3
======= Execute DONE ========
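To aggregate the advertised-image gauge from a metrics dump like the ones above, a small awk pipeline works. A sketch (the first sample line is taken from the output above; the docker.io line is a made-up second sample for illustration — in practice, pipe in the output of the kubectl get --raw .../proxy/metrics command):

```shell
# Sketch: sum the spegel_advertised_images gauge across metric lines.
metrics='spegel_advertised_images{registry="test.compute.amazonaws.com"} 2
spegel_advertised_images{registry="docker.io"} 0'
# $NF is the metric value at the end of each matching line.
printf '%s\n' "$metrics" | awk '/^spegel_advertised_images/ {sum += $NF} END {print sum}'
```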
We can see communication established on port 5001 for all nodes:
$ sudo lsof -i | grep :5001 
k3s-serve 2826            root   27u  IPv4  23550      0t0  TCP ip-172-31-25-70.us-east-2.compute.internal:5001 (LISTEN)
k3s-serve 2826            root  234u  IPv4  31157      0t0  TCP ip-172-31-25-70.us-east-2.compute.internal:5001->ip-172-31-30-117.us-east-2.compute.internal:5001 (ESTABLISHED)
k3s-serve 2826            root  293u  IPv4  38270      0t0  TCP ip-172-31-25-70.us-east-2.compute.internal:5001->ip-172-31-30-115.us-east-2.compute.internal:5001 (ESTABLISHED)
k3s-serve 2826            root  297u  IPv4  38816      0t0  TCP ip-172-31-25-70.us-east-2.compute.internal:5001->ip-172-31-29-206.us-east-2.compute.internal:5001 (ESTABLISHED)
$ sudo journalctl -xeu k3s-agent.service | grep 'spegel' | grep 'received image event'
level=info msg="spegel 2024/01/24 21:07:25 \"level\"=0 \"msg\"=\"received image event\" \"image\"=\"test.compute.amazonaws.com/nginx:latest@sha256:xxx

@ClashTheBunny

Is this meant to work on aarch64? I'm getting "-embedded-registry not a flag" when I pass --embedded-registry on the CLI, or when I set embedded-registry: true in the config. I'm on k3s version v1.28.6+k3s2 (c9f49a3b), go version go1.20.13, downloaded from the releases page. It works on amd64.

@brandond
Member Author

brandond commented Mar 3, 2024

It's a server flag; are you perhaps trying to set it on an agent? There's nothing arch-specific about it.
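Concretely, the setting goes only in the config.yaml on server nodes; agents pick up the behavior cluster-wide without any flag of their own. A minimal sketch:

```yaml
# /etc/rancher/k3s/config.yaml on each *server* node (sketch).
# Agents need no embedded-registry entry; the feature is enabled
# cluster-wide by the server configuration.
embedded-registry: true
```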

@ClashTheBunny

Ah, yes, it wasn't totally clear from the documentation that it doesn't work on the agents in the cluster. It makes sense given the sharing of the API port/cert/auth, now that I think about it.

@brandond
Member Author

brandond commented Mar 3, 2024

Agents also participate in sharing images, but it is enabled cluster-wide by a server config flag.
