
ListObjectsV2 search parameter metadata not implemented when using Cloudflare R2 #8924

Closed
brandond opened this issue Nov 21, 2023 · 5 comments

@brandond
Member

I am seeing the following error when doing a backup:

ERRO[0001] Error retrieving S3 snapshots for reconciliation: ListObjectsV2 search parameter metadata not implemented 

Looks like the search parameters used are not implemented by Cloudflare R2:
https://developers.cloudflare.com/r2/reference/changelog/#2022-07-01
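
For context, k3s lists bucket contents with the minio-go client and asks ListObjectsV2 to return each object's user metadata; that metadata search parameter is the part R2 does not implement. A minimal sketch of that kind of listing call (minio-go v7; the endpoint, bucket, and credentials below are placeholders, not the actual k3s code):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Hypothetical R2 endpoint and credentials, for illustration only.
	client, err := minio.New("account-id.r2.cloudflarestorage.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// WithMetadata asks ListObjectsV2 to return user metadata for each object.
	// R2 does not implement that search parameter, so the listing fails with:
	//   ListObjectsV2 search parameter metadata not implemented
	opts := minio.ListObjectsOptions{
		Prefix:       "on-demand-",
		Recursive:    true,
		WithMetadata: true,
	}
	for obj := range client.ListObjects(context.Background(), "k3s-etcd-testing", opts) {
		if obj.Err != nil {
			log.Fatalf("list failed: %v", obj.Err)
		}
		fmt.Println(obj.Key, obj.UserMetadata)
	}
}
```

Dropping `WithMetadata: true` (or fetching metadata with a separate `StatObject` call per key) would avoid the unimplemented parameter, at the cost of an extra request per snapshot.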

Backups have been working, though, both manually and via daily automated backups; this is the only issue I have encountered in the last few months.

Not sure whether this info is OK here, or whether it's a bug, or a feature request for Cloudflare R2 support.

Originally posted by @maggie44 in #8140 (comment)

@VestigeJ

## Environment Details
Reproduced using VERSION=v1.28.3+k3s2

Infrastructure

  • Cloud

Node(s) CPU architecture, OS, and version:

Linux 5.14.21-150500.53-default x86_64 GNU/Linux
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"

Cluster Configuration:

NAME   STATUS   ROLES                       AGE     VERSION
ip-1   Ready    control-plane,etcd,master   3m13s   v1.28.3+k3s2
ip-2   Ready    control-plane,etcd,master   7m35s   v1.28.3+k3s2
ip-3   Ready    <none>                      3m39s   v1.28.3+k3s2
ip-4   Ready    control-plane,etcd,master   3m27s   v1.28.3+k3s2

Config.yaml:

write-kubeconfig-mode: 644
debug: true
token: YOUR_TOKEN_HERE
profile: cis
selinux: true
node-external-ip: 1.2.2.2
protect-kernel-defaults: true
cluster-init: true

etcd-s3: true
etcd-s3-bucket: "k3s-etcd-testing"
etcd-s3-endpoint: "https://thingOnethingTwo.r2.cloudflarestorage.com"
etcd-s3-access-key: "thingOnethingTwoKey"
etcd-s3-secret-key: "thingOnethingTwoTwoTwoKeyKeyFinal"

Reproduction

$ curl https://get.k3s.io --output install-"k3s".sh
$ sudo chmod +x install-"k3s".sh
$ sudo groupadd --system etcd && sudo useradd -s /sbin/nologin --system -g etcd etcd
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr
$ sudo modprobe ip_vs_sh
$ sudo printf "vm.panic_on_oom=0 \nvm.overcommit_memory=1 \nkernel.panic=10 \nkernel.panic_on_oops=1 \n" > ~/90-kubelet.conf
$ sudo cp 90-kubelet.conf /etc/sysctl.d/
$ sudo systemctl restart systemd-sysctl
$ sudo INSTALL_K3S_VERSION=v1.28.3+k3s2 INSTALL_K3S_EXEC=server ./install-k3s.sh
$ kgn    # kubectl get nodes
$ sudo /usr/local/bin/k3s etcd-snapshot save    # view error output below

Results:

$ sudo /usr/local/bin/k3s etcd-snapshot save

WARN[0000] Unknown flag --write-kubeconfig-mode found in config.yaml, skipping
WARN[0000] Unknown flag --token found in config.yaml, skipping
WARN[0000] Unknown flag --profile found in config.yaml, skipping
WARN[0000] Unknown flag --selinux found in config.yaml, skipping
WARN[0000] Unknown flag --node-external-ip found in config.yaml, skipping
WARN[0000] Unknown flag --protect-kernel-defaults found in config.yaml, skipping
WARN[0000] Unknown flag --cluster-init found in config.yaml, skipping
DEBU[0000] Attempting to retrieve extra metadata from k3s-etcd-snapshot-extra-metadata ConfigMap
DEBU[0000] Error encountered attempting to retrieve extra metadata from k3s-etcd-snapshot-extra-metadata ConfigMap, error: configmaps "k3s-etcd-snapshot-extra-metadata" not found
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700678053
{"level":"info","ts":"2023-11-22T18:34:12.56826Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700678053.part"}
{"level":"info","ts":"2023-11-22T18:34:12.57052Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-11-22T18:34:12.570564Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-11-22T18:34:12.611249Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-11-22T18:34:12.627767Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"4.1 MB","took":"now"}
{"level":"info","ts":"2023-11-22T18:34:12.627976Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700678053"}
WARN[0000] Unable to initialize S3 client: Endpoint url cannot have fully qualified paths.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x41be1b9]

goroutine 1 [running]:
github.com/k3s-io/k3s/pkg/etcd.(*S3).snapshotRetention(0xc0010bdab9?, {0x6565e08?, 0xc000a4b540?})
	/go/src/github.com/k3s-io/k3s/pkg/etcd/s3.go:284 +0x59
github.com/k3s-io/k3s/pkg/etcd.(*ETCD).Snapshot(0xc000a4b590, {0x6565e08, 0xc000a4b540})
	/go/src/github.com/k3s-io/k3s/pkg/etcd/snapshot.go:375 +0x13ca
github.com/k3s-io/k3s/pkg/cli/etcdsnapshot.save(0xc000c6f8c0, 0xc000a67970?)
	/go/src/github.com/k3s-io/k3s/pkg/cli/etcdsnapshot/etcd_snapshot.go:121 +0x92
github.com/k3s-io/k3s/pkg/cli/etcdsnapshot.Save(0xc000c6f8c0?)
	/go/src/github.com/k3s-io/k3s/pkg/cli/etcdsnapshot/etcd_snapshot.go:104 +0x45
github.com/urfave/cli.HandleAction({0x4ec5ea0?, 0x5e149a0?}, 0x4?)
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/app.go:524 +0x50
github.com/urfave/cli.Command.Run({{0x597fc1c, 0x4}, {0x0, 0x0}, {0x0, 0x0, 0x0}, {0x5a2f781, 0x22}, {0x0, ...}, ...}, ...)
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/command.go:175 +0x67b
github.com/urfave/cli.(*App).RunAsSubcommand(0xc00092cfc0, 0xc000c6f600)
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/app.go:405 +0xe87
github.com/urfave/cli.Command.startApp({{0x59a2a52, 0xd}, {0x0, 0x0}, {0x0, 0x0, 0x0}, {0x0, 0x0}, {0x0, ...}, ...}, ...)
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/command.go:380 +0xb7f
github.com/urfave/cli.Command.Run({{0x59a2a52, 0xd}, {0x0, 0x0}, {0x0, 0x0, 0x0}, {0x0, 0x0}, {0x0, ...}, ...}, ...)
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/command.go:103 +0x845
github.com/urfave/cli.(*App).Run(0xc00092ce00, {0xc0009e58c0, 0x9, 0x9})
	/go/pkg/mod/github.com/urfave/cli@v1.22.14/app.go:277 +0xb87
main.main()
	/go/src/github.com/k3s-io/k3s/cmd/server/main.go:81 +0xc1e
	

@brandond
Member Author

brandond commented Nov 22, 2023

@VestigeJ this is actually a different root cause, but should be handled now as well.

Unable to initialize S3 client: Endpoint url cannot have fully qualified paths.

You need to take the https:// off the endpoint; it should just be a hostname. The insecure option determines whether http or https is used.
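
For reference, the working shape of the S3 section of the config.yaml above would look like this (same values as earlier in this issue, scheme removed; `etcd-s3-insecure` is only needed for plain-http endpoints):

```yaml
etcd-s3: true
etcd-s3-bucket: "k3s-etcd-testing"
# hostname only: no scheme, no path
etcd-s3-endpoint: "thingOnethingTwo.r2.cloudflarestorage.com"
etcd-s3-access-key: "thingOnethingTwoKey"
etcd-s3-secret-key: "thingOnethingTwoTwoTwoKeyKeyFinal"
# https is the default; uncomment only for plain-http endpoints
#etcd-s3-insecure: true
```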

@VestigeJ

@brandond Thanks, I just did that on the validation step. I saw the same initial issue at the top but hadn't removed the full path from the endpoint. I'm validating on COMMIT=3f237230350b5170eef4e54c7826d88433182efc and was still seeing the fully qualified path exception, but editing the config.yaml (no restarts needed) was an easy change.

@brandond
Member Author

brandond commented Nov 22, 2023

@VestigeJ There should be an error but no panic:

root@k3s-server-1:/# k3s --version
k3s version v1.28.4+k3s-3f237230 (3f237230)
go version go1.20.11

root@k3s-server-1:/# k3s etcd-snapshot save --etcd-s3 --etcd-s3-bucket=invalid --etcd-s3-endpoint=https://thingOnethingTwo.r2.cloudflarestorage.com --etcd-s3-access-key=invalid --etcd-s3-secret-key=invalid
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1700681330
{"level":"info","ts":"2023-11-22T19:28:50.375807Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1700681330.part"}
{"level":"info","ts":"2023-11-22T19:28:50.377651Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-11-22T19:28:50.377698Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-11-22T19:28:50.39231Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-11-22T19:28:50.401661Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"2.8 MB","took":"now"}
{"level":"info","ts":"2023-11-22T19:28:50.401731Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1700681330"}
WARN[0000] Unable to initialize S3 client: Endpoint url cannot have fully qualified paths.
INFO[0000] Reconciling ETCDSnapshotFile resources
WARN[0000] Unable to initialize S3 client: Endpoint url cannot have fully qualified paths.
INFO[0000] Reconciliation of ETCDSnapshotFile resources complete
FATA[0000] Endpoint url cannot have fully qualified paths.

This is all separate from the metadata parameter error tracked by this issue, though; the panic is tracked in #8918.
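
For anyone reading the trace above, it fits the common nil-receiver pattern: initialization of the S3 client fails and is only logged as a warning, but the retention step still dereferences the nil client. A minimal sketch of that pattern and of the guard that avoids it (illustrative names only, not the actual pkg/etcd/s3.go code):

```go
package main

import (
	"errors"
	"log"
)

// Illustrative stand-ins; the real types live in k3s pkg/etcd/s3.go.
type s3Client struct{ bucket string }

func newS3Client(endpoint string) (*s3Client, error) {
	// Stand-in for the "Endpoint url cannot have fully qualified paths." failure.
	return nil, errors.New("endpoint url cannot have fully qualified paths")
}

func (c *s3Client) snapshotRetention() error {
	// c.bucket dereferences the receiver; with a nil *s3Client this is the
	// SIGSEGV / nil pointer dereference seen in the trace above.
	log.Printf("applying retention policy in bucket %s", c.bucket)
	return nil
}

func main() {
	client, err := newS3Client("https://example.r2.cloudflarestorage.com")
	if err != nil {
		// Only a warning: execution continues with client == nil.
		log.Printf("WARN Unable to initialize S3 client: %v", err)
	}

	// The nil check below is what prevents the panic; calling
	// client.snapshotRetention() unconditionally would crash here.
	if client != nil {
		if err := client.snapshotRetention(); err != nil {
			log.Printf("retention failed: %v", err)
		}
	}
}
```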

@VestigeJ

## Environment Details
Validated using COMMIT=3f237230350b5170eef4e54c7826d88433182efc

Infrastructure

  • Cloud

Node(s) CPU architecture, OS, and version:

Linux 5.14.21-150500.53-default x86_64 GNU/Linux
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"

Cluster Configuration:

NAME   STATUS   ROLES                       AGE   VERSION
ip-1   Ready    control-plane,etcd,master   10m   v1.28.4+k3s-3f237230
ip-2   Ready    control-plane,etcd,master   13m   v1.28.4+k3s-3f237230
ip-3   Ready    <none>                      12m   v1.28.4+k3s-3f237230
ip-4   Ready    control-plane,etcd,master   12m   v1.28.4+k3s-3f237230

Config.yaml:

write-kubeconfig-mode: 644
debug: true
token: YOUR_TOKEN_HERE
profile: cis
selinux: true
node-external-ip: 1.2.2.2
protect-kernel-defaults: true
cluster-init: true

etcd-s3: true
etcd-s3-bucket: "k3s-etcd-testing"
etcd-s3-endpoint: "chonky.r2.cloudflarestorage.com"
etcd-s3-access-key: "wonky"
etcd-s3-secret-key: "tronky"

kubelet-arg:
  - max-pods=250
kube-controller-manager-arg:
  - node-cidr-mask-size=22

Validation

$ curl https://get.k3s.io --output install-"k3s".sh
$ sudo chmod +x install-"k3s".sh
$ sudo groupadd --system etcd && sudo useradd -s /sbin/nologin --system -g etcd etcd
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr
$ sudo modprobe ip_vs_sh
$ sudo printf "vm.panic_on_oom=0 \nvm.overcommit_memory=1 \nkernel.panic=10 \nkernel.panic_on_oops=1 \n" > ~/90-kubelet.conf
$ sudo cp 90-kubelet.conf /etc/sysctl.d/
$ sudo systemctl restart systemd-sysctl
$ sudo INSTALL_K3S_COMMIT=3f237230350b5170eef4e54c7826d88433182efc INSTALL_K3S_EXEC=server ./install-k3s.sh
$ kgp -A
$ kgn
$ w2 kg no,po -A
$ sudo /usr/local/bin/k3s etcd-snapshot save

$ sudo /usr/local/bin/k3s etcd-snapshot save

WARN[0000] Unknown flag --write-kubeconfig-mode found in config.yaml, skipping
WARN[0000] Unknown flag --token found in config.yaml, skipping
WARN[0000] Unknown flag --profile found in config.yaml, skipping
WARN[0000] Unknown flag --selinux found in config.yaml, skipping
WARN[0000] Unknown flag --node-external-ip found in config.yaml, skipping
WARN[0000] Unknown flag --protect-kernel-defaults found in config.yaml, skipping
WARN[0000] Unknown flag --cluster-init found in config.yaml, skipping
WARN[0000] Unknown flag --kubelet-arg found in config.yaml, skipping
WARN[0000] Unknown flag --kube-controller-manager-arg found in config.yaml, skipping
DEBU[0000] Attempting to retrieve extra metadata from k3s-etcd-snapshot-extra-metadata ConfigMap
DEBU[0000] Error encountered attempting to retrieve extra metadata from k3s-etcd-snapshot-extra-metadata ConfigMap, error: configmaps "k3s-etcd-snapshot-extra-metadata" not found
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700680838
{"level":"info","ts":"2023-11-22T19:20:37.905006Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700680838.part"}
{"level":"info","ts":"2023-11-22T19:20:37.909274Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-11-22T19:20:37.909331Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-11-22T19:20:37.967884Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-11-22T19:20:37.986108Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"5.7 MB","took":"now"}
{"level":"info","ts":"2023-11-22T19:20:37.986214Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700680838"}
INFO[0000] Checking if S3 bucket k3s-etcd-testing exists
INFO[0000] S3 bucket k3s-etcd-testing exists
INFO[0000] Saving etcd snapshot on-demand-ip-1-1-2-4-1700680838 to S3
INFO[0000] Uploading snapshot to s3://k3s-etcd-testing//var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-1-1-2-4-1700680838
INFO[0001] Uploaded snapshot metadata s3://k3s-etcd-testing/.metadata/on-demand-ip-1-1-2-4-1700680838
INFO[0001] S3 upload complete for on-demand-ip-1-1-2-4-1700680838
INFO[0001] Reconciling ETCDSnapshotFile resources
DEBU[0001] Found snapshotFile for on-demand-ip-1-1-2-4-1700680768 with key local-on-demand-ip-1-1-2-4-1700680768
DEBU[0001] Found snapshotFile for on-demand-ip-1-1-2-4-1700680838 with key local-on-demand-ip-1-1-2-4-1700680838
DEBU[0001] Found snapshotFile for on-demand-ip-1-1-2-4-1700680838 with key s3-on-demand-ip-1-1-2-4-1700680838
DEBU[0001] Found ETCDSnapshotFile for on-demand-ip-1-1-2-4-1700680768 with key local-on-demand-ip-1-1-2-4-1700680768
DEBU[0001] Found ETCDSnapshotFile for on-demand-ip-1-1-2-4-1700680838 with key local-on-demand-ip-1-1-2-4-1700680838
DEBU[0001] Found ETCDSnapshotFile for on-demand-ip-1-1-2-4-1700680768 with key s3-on-demand-ip-1-1-2-4-1700680768
DEBU[0001] Found ETCDSnapshotFile for on-demand-ip-1-1-2-4-1700680838 with key s3-on-demand-ip-1-1-2-4-1700680838
INFO[0001] Reconciliation of ETCDSnapshotFile resources complete

[Screenshot 2023-11-22 at 11:21:07 AM]
