Upgrading to Argo CD 2.10.1 drops reconciliation/sync to nearly zero. #17257

Closed
daftping opened this issue Feb 20, 2024 · 36 comments
Labels
bug Something isn't working

Comments

@daftping
Contributor

daftping commented Feb 20, 2024

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug
After upgrading Argo CD from 2.9.2 to 2.10.1, it is unable to reconcile almost anything. Most Applications are stuck in the Progressing or OutOfSync state, and metrics drop close to zero.

Sharding is not used. (1 replica in StatefulSet with default configuration)

To Reproduce
We were unable to reproduce this in a similar dev environment with a handful of apps and only a few clusters connected.
See the repro steps below: #17257 (comment)

Expected behavior
Argo CD should operate as usual after the upgrade.

Screenshots
At ~9:40 Argo CD was upgraded to 2.10.1; at ~10:45 it was rolled back to 2.9.2.
Application Controller
[screenshots omitted]

Repo Server
[screenshot omitted]

Redis
[screenshot omitted]

Config

kubectl get cm argocd-cmd-params-cm -o json | jq .data
{
  "application.namespaces": "",
  "applicationsetcontroller.enable.leader.election": "true",
  "applicationsetcontroller.enable.new.git.file.globbing": "true",
  "applicationsetcontroller.enable.progressive.syncs": "true",
  "applicationsetcontroller.log.format": "text",
  "applicationsetcontroller.log.level": "debug",
  "applicationsetcontroller.policy": "sync",
  "controller.log.format": "text",
  "controller.log.level": "debug",
  "controller.operation.processors": "10",
  "controller.repo.server.timeout.seconds": "60",
  "controller.self.heal.timeout.seconds": "5",
  "controller.status.processors": "20",
  "otlp.address": "",
  "redis.server": "redis-ha-haproxy:6379",
  "repo.server": "argo-cd-argocd-repo-server:8081",
  "reposerver.log.format": "text",
  "reposerver.log.level": "debug",
  "reposerver.parallelism.limit": "0",
  "server.basehref": "/",
  "server.dex.server": "https://argo-cd-argocd-dex-server:5556",
  "server.dex.server.strict.tls": "false",
  "server.disable.auth": "false",
  "server.enable.gzip": "true",
  "server.insecure": "false",
  "server.log.format": "text",
  "server.log.level": "debug",
  "server.repo.server.strict.tls": "false",
  "server.rootpath": "",
  "server.staticassets": "/shared/app",
  "server.x.frame.options": "sameorigin"
}

kubectl get cm argocd-cm -o json | jq .data
{
  "admin.enabled": "false",
  "application.instanceLabelKey": "argocd.argoproj.io/instance",
  "dex.config": "<redacted>",
  "exec.enabled": "false",
  "help.chatText": "<redacted>",
  "help.chatUrl": "<redacted>",
  "resource.customizations": "<redacted>",
  "resource.links": "<redacted>",
  "server.rbac.log.enforce.enable": "false",
  "statusbadge.enabled": "true",
  "timeout.hard.reconciliation": "0s",
  "timeout.reconciliation": "180s",
  "ui.bannercontent": "<redacted>",
  "ui.bannerpermanent": "true",
  "url": "<redacted>"
}

Version

argocd: v2.10.1+a79e0ea
  BuildDate: 2024-02-14T17:56:39Z
  GitCommit: a79e0eaca415461dc36615470cecc25d6d38cefb
  GitTreeState: clean
  GoVersion: go1.21.7
  Compiler: gc
  Platform: linux/amd64
argocd-server: v2.10.1+a79e0ea
  BuildDate: 2024-02-14T17:37:43Z
  GitCommit: a79e0eaca415461dc36615470cecc25d6d38cefb
  GitTreeState: clean
  GoVersion: go1.21.3
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.2.1 2023-10-19T20:13:51Z
  Helm Version: v3.14.0+g3fc9f4b
  Kubectl Version: v0.26.11
  Jsonnet Version: v0.20.0

Logs

Lots of messages
[screenshot omitted]

Between 20,000 and 40,000 of the message below per cluster per 5 minutes:

"Checking if cluster <cluster> with clusterShard 0 should be processed by shard 0"
daftping added the bug label Feb 20, 2024
@selfuryon

Yeah, the same thing

@crenshaw-dev
Member

crenshaw-dev commented Feb 22, 2024

This was meant to fix it: #17167

Can you try 2.10.0 and see if you hit the same issue?

@prune998
Contributor

I'm on 2.10.1 and everything is fine for me... but I only have a few clusters and a few Apps.

@Hariharasuthan99
Contributor

Hariharasuthan99 commented Feb 23, 2024

We are facing a similar issue, but only when we enable dynamic cluster sharding and run the application-controller as a Deployment with 2 replicas (we need more than 1). I scaled the application-controller StatefulSet to 0 replicas and also tried deleting the StatefulSet itself to ensure the Deployment runs on its own. The env var 'ARGOCD_ENABLE_DYNAMIC_CLUSTER_DISTRIBUTION' is set to true on the Deployment. But syncs stopped working with the error below.

[screenshot of the error omitted]

Attaching the application-controller Deployment pod logs:
argocd-application-controller-6bdfc8586c-5vpcp-argocd-application-controller.log

However, when we run the application-controller as a StatefulSet, everything works as expected. We are interested in the dynamic cluster sharding feature, hence we tried running it as a Deployment.

@daftping
Contributor Author

This was meant to fix it: #17167

Can you try 2.10.0 and see if you hit the same issue?

Same behaviour on 2.10.0.
I am highly suspicious of the newly introduced rate-limiting feature:
https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/#rate-limiting-application-reconciliations

When I create 100 Applications, reconciliation drops to zero and only recovers after a while.
[screenshot omitted]

Any mass operation causes this; in bigger environments it is a permanent state.

I am trying to reproduce it on a vanilla Argo CD deployed on Kind so we can have steps to reproduce.

@daftping
Contributor Author

I forgot to mention that restarting the application-controller helps for a short period: it processes a bunch of operations and then gets stuck again.

@daftping
Contributor Author

daftping commented Feb 23, 2024

I have repro steps:

Create a cluster and install Argo CD:

kind create cluster -n argocd
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Create an AppProject with orphanedResources enabled:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default-orphaned-resources
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  destinations:
  - namespace: '*'
    server: '*'
  sourceRepos:
  - '*'
  orphanedResources:
    warn: false

Create 225 Apps (15 prefixes × 15 numbers) with the following ApplicationSet:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: argocd-loadtest
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - matrix:
        generators:
          - list:
              elements:
                - prefix: a
                - prefix: b
                - prefix: c
                - prefix: d
                - prefix: e
                - prefix: f
                - prefix: g
                - prefix: h
                - prefix: i
                - prefix: j
                - prefix: k
                - prefix: l
                - prefix: m
                - prefix: n
                - prefix: o
          - list:
              elements:
                - number: 10
                - number: 11
                - number: 12
                - number: 13
                - number: 14
                - number: 15
                - number: 16
                - number: 17
                - number: 18
                - number: 19
                - number: 20
                - number: 21
                - number: 22
                - number: 23
                - number: 24

  template:
    metadata:
      name: "argocd-loadtest-{{.prefix}}-{{.number}}"
      finalizers:
      - resources-finalizer.argocd.argoproj.io
    spec:
      project: default-orphaned-resources
      sources:
      - repoURL: "https://github.com/argoproj/argocd-example-apps.git"
        path: helm-guestbook
        helm:
          valuesObject:
            replicaCount: 0
      destination:
        name: in-cluster
        namespace: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Go to the UI:
kubectl port-forward svc/argocd-server -n argocd 8080:443

Argo CD gets stuck at around 80 apps and processes the rest very slowly afterwards.
[screenshot omitted]

argocd_app_reconcile_count is barely changing:
kubectl port-forward pod/argocd-application-controller-0 8082:8082

while true
do 
    sleep 1; 
    curl -s http://localhost:8082/metrics | grep argocd_app_reconcile_count | cut -d' ' -f2
done
25905
25905
25906
25906
25906
25906
25906
25906
25906
25907
25907
25907
25907
25908
25909
25909
25909
25909
25909
25909
25909
25909
25909
25909
25909
25909
25909
25910
25911
25912

orphanedResources is probably not directly related; it just helps generate the load.

@crenshaw-dev could you please advise what else I can check to help find the root cause?

@crenshaw-dev
Member

Thanks for the analysis, @daftping! @gdsoumya can you take a look at the rate limiting code and see if it's a viable explanation for this issue?

@gdsoumya
Member

Just reading the issue, I am not sure this behavior is expected from the rate limiter. We added 2 rate limiters: 1) a global bucket limiter and 2) a per-item exponential backoff limiter. By default the exponential limiter is disabled, and I don't see it being turned on here, so the only limiter that can be active is the bucket limiter, whose defaults are as follows:

WORKQUEUE_BUCKET_SIZE - The number of items that can be queued in a single burst. Defaults to 500.
WORKQUEUE_BUCKET_QPS - The number of items that can be queued per second. Defaults to 50.

So it should allow at least 500 items into the queue in a burst, and then 50 items per second once the burst is used up. In that case 225 apps should easily be handled within the burst capacity. @crenshaw-dev correct me if I am wrong, but I believe the workqueue dedups items, so the same item shouldn't be enqueued multiple times until it's processed; therefore I am not sure the issue is due to the rate limiter. But I will dig more and get back if I find anything suspicious with the limiter.
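
For illustration, here is a minimal sketch (not the actual Argo CD code) of how a client-go workqueue behaves behind a token-bucket limiter with the defaults described above; the item names are placeholders:

package main

import (
    "fmt"

    "golang.org/x/time/rate"
    "k8s.io/client-go/util/workqueue"
)

func main() {
    // Global token-bucket limiter: refill of 50 items/second with a burst of 500,
    // mirroring the WORKQUEUE_BUCKET_QPS / WORKQUEUE_BUCKET_SIZE defaults above.
    bucket := &workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(50), 500)}
    q := workqueue.NewRateLimitingQueue(bucket)
    defer q.ShutDown()

    // The first ~500 AddRateLimited calls are admitted immediately; once the
    // burst is spent, each further item is delayed based on the 50/s refill rate.
    for i := 0; i < 600; i++ {
        q.AddRateLimited(fmt.Sprintf("app-%d", i))
    }
    fmt.Println("delay for the next item:", bucket.When("app-600"))
}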

@jessesuen
Member

Whenever reconciliation drops to zero, controller deadlocks may be the culprit. @daftping - when this happens, can we get a stack trace dump of the controller to see if this might be the case?

the workqueue dedups the items so it shouldn't be putting in the same item multiple times

This is correct. Workqueues will dedup Add() calls by name.
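
A small illustration of that dedup behaviour against the client-go workqueue API (a sketch, not Argo CD code; the key name is arbitrary):

package main

import (
    "fmt"

    "k8s.io/client-go/util/workqueue"
)

func main() {
    q := workqueue.New()
    defer q.ShutDown()

    // Adding the same key twice before it is processed leaves a single queued item.
    q.Add("argocd/my-app")
    q.Add("argocd/my-app")
    fmt.Println(q.Len()) // 1

    // After a worker Get()s and Done()s the item, adding it again re-queues it.
    item, _ := q.Get()
    q.Done(item)
    q.Add("argocd/my-app")
    fmt.Println(q.Len()) // 1
}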

@gdsoumya
Member

That is possible; we had some deadlock issues before that caused the controller to freeze.

@daftping
Contributor Author

daftping commented Feb 23, 2024

Just reading the issue, I am not sure this behavior is expected from the rate limiter. We added 2 rate limiters: 1) a global bucket limiter and 2) a per-item exponential backoff limiter. By default the exponential limiter is disabled, and I don't see it being turned on here, so the only limiter that can be active is the bucket limiter, whose defaults are as follows:

WORKQUEUE_BUCKET_SIZE - The number of items that can be queued in a single burst. Defaults to 500.
WORKQUEUE_BUCKET_QPS - The number of items that can be queued per second. Defaults to 50.

So it should allow at least 500 items into the queue in a burst, and then 50 items per second once the burst is used up. In that case 225 apps should easily be handled within the burst capacity. @crenshaw-dev correct me if I am wrong, but I believe the workqueue dedups items, so the same item shouldn't be enqueued multiple times until it's processed; therefore I am not sure the issue is due to the rate limiter. But I will dig more and get back if I find anything suspicious with the limiter.

@gdsoumya setting bigger values for the global limiter improves reconciliation performance for the setup reported above, so it is somehow related:

 - name: WORKQUEUE_BUCKET_SIZE
   value: 5000 
 - name: WORKQUEUE_BUCKET_QPS
   value: 500

Is it possible to disable all rate limiters altogether to validate this assumption?

@daftping
Contributor Author

daftping commented Feb 23, 2024

Whenever reconciliation drops to zero, controller deadlocks may be the culprit. @daftping - when this happens, can we get a stack trace dump of the controller to see if this might be the case?

the workqueue dedups the items so it shouldn't be putting in the same item multiple times

This is correct. Workqueues will dedup Add() calls by name.

@jessesuen
I have limited Go knowledge. How can I do that?
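
For reference, a goroutine stack dump can be pulled from any Go process that exposes the standard net/http/pprof handlers; a generic sketch follows. Whether (and on which port) the Argo CD application controller exposes these handlers depends on the version and configuration, so the address and endpoint here are assumptions:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
    // Any Go process wired like this serves full goroutine stack traces at:
    //   curl "http://localhost:6060/debug/pprof/goroutine?debug=2"
    // The port (6060) is a placeholder; whether the Argo CD application
    // controller exposes these handlers at all depends on its version/config.
    log.Println(http.ListenAndServe("localhost:6060", nil))
}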

@gdsoumya
Member

gdsoumya commented Feb 23, 2024

Is it possible to disable all rate limiters altogether to validate this assumption?

@daftping Currently it's not possible to completely disable the limiter; the simplest way to simulate a disabled bucket limiter is to set a very large value for both QPS and bucket size, as you have done. I would set an even higher limit, something like 10000000000, for both.

@daftping
Contributor Author

daftping commented Feb 23, 2024

Is it possible to disable all rate limiters altogether to validate this assumption?

@daftping Currently it's not possible to completely disable the limiter; the simplest way to simulate a disabled bucket limiter is to set a very large value for both QPS and bucket size, as you have done. I would set an even higher limit, something like 10000000000, for both.

I've set them both to 10000000000 as you suggested and all 225 apps were created and fully in sync in about 50 seconds. No issues or halts whatsoever.

- name: WORKQUEUE_BUCKET_SIZE
  value: "10000000000"
- name: WORKQUEUE_BUCKET_QPS
  value: "10000000000"

Value of argocd_app_reconcile_count every second:

20
20
42
79
84
110
152
165
193
206
227
246
265
295
321
342
359
378
445
591
721
831
918
1052
1139
1248
1357
1466
1557
1692
1772
1896
1978
2041
2135
2236
2300
2391
2486
2590
2758
2817
2836
2857
2876
2896
2897
2897

Thank you for the suggestion @gdsoumya, at least we have a workaround now.

@daftping
Contributor Author

I did a few more tests with a higher load (520 Apps)

On v2.9.6 I see constant reconciliation progress: CPU is loaded close to 100% and the controller tries to reconcile everything as fast as possible.

On v2.10.1, even with "10000000000" for WORKQUEUE_BUCKET_SIZE and WORKQUEUE_BUCKET_QPS, I see multiple stalls where reconciliation almost stops and CPU load drops to its average minimum.

@gdsoumya
Member

Thanks @daftping for the tests, I will check why this is happening. I didn't expect the bucket limiter to behave like this with such a small number of apps.

@alex-souslik-hs
Contributor

@daftping I set them both to 9223372036854775807 (max int64), which appears to be the maximum possible value; with 750 apps it works like it did in v2.9.6.

@gdsoumya
Member

gdsoumya commented Feb 26, 2024

@daftping here are the tests I ran:

  1. I was able to execute the same appset without issues when I turned off orphaned resources. I checked the number of reconciliations, and it stopped at 1806 (starting from 0) when all apps became healthy. It barely took a few seconds, maybe a minute, to get to that stable state.
  2. I also enabled orphaned resources and deployed the appset, but made sure to deploy each app to a different namespace instead of the same one as in your example. All 225 apps were created and in a healthy state after 2-3 minutes max.
  3. I also ran your exact setup with orphaned resources and the same target namespace, and this time I was eventually able to get to a healthy state, but it took much longer than the first 2 tests, as you said.
  4. I also increased the app count to 750 to see if 1 & 2 still work okay with larger numbers, and as I expected it performed the same; the first sync took somewhat longer than with 225 apps, but nothing as long as test 3.

From this I conclude that the rate limiter is working as expected. In your case you ran with a heavier load by creating an appset where each generated app overwrites the same resources in the same namespace, which was possibly creating an avalanche of requeues larger than the default bucket limit, causing the behaviour we saw.

The example appset used is probably not a realistic scenario, though I can see that there might be valid apps with such large requeue volumes too, in which case we have only 2 options:

  1. have the user update the limits for the workqueue to something large that works for them (quick and easy, no code changes needed)
  2. disable the limiter by default (needs code changes and a release)

cc: @crenshaw-dev @jessesuen

@crenshaw-dev
Member

crenshaw-dev commented Feb 26, 2024

Thanks for the investigation, @gdsoumya! The number of thumbs up on the original issue indicates to me that either this far edge case impacts a surprisingly high number of people or that others are experiencing a completely different issue with similar symptoms.

Do we need an option number 3, increase our default workqueue limits? Or will that cause disproportionate performance degradation for the large majority of users?

@daftping
Contributor Author

To add a little more context: the steps to reproduce are just a random, weird setup I came up with. In real production we have ~500 Applications in 150 namespaces on 16 clusters, and most of the Projects have orphaned resources monitoring enabled. Upgrading to 2.10.1 causes a complete halt in this environment, and it never recovers.
I'd argue the setup above is not an outlier in terms of load or configuration. Having defaults that support it out of the box would be beneficial.

@gdsoumya
Member

gdsoumya commented Feb 26, 2024

@crenshaw-dev given @daftping's point, maybe it's somehow specific to orphaned resources, because in my test with 750 apps (much more than 500) I did not see any halting. I don't have an in-depth understanding of what orphaned resource monitoring does in terms of requeues; we might want to investigate that a bit.

Do we need an option number 3, increase our default workqueue limits? Or will that cause disproportionate performance degradation for the large majority of users?

We can surely increase the numbers; as @alex-souslik-hs pointed out, setting the max value is almost equivalent to disabling the limiter. We can do this either by modifying the code or by just setting the values in the manifests, whichever is the better option. I don't think it would affect other users, as for them it should behave the same.

@Enclavet
Contributor

Enclavet commented Feb 26, 2024

Scalability SIG here: if required, we have a testing environment that can be used to determine the best default settings or to test any code changes that might be needed.

I was running some scalability testing with 2.10.1 and saw performance degradation compared to 2.8.x. Running a sync test with 4k apps (2 KB ConfigMaps), the first sync test runs as expected, but any subsequent sync test takes significantly longer. I assume this is expected with the rate limiting, as 4k apps is much more than the 500-item bucket limit.

You can see that the ops queue is able to fill up for the first test, but in every subsequent test it cannot fill up to 500 items because of rate limiting. Of course my setup is not typical, as I'm syncing all 4k apps in one shot.

[screenshot omitted]

@snuggie12

snuggie12 commented Feb 26, 2024

What's the definition of an "item" in terms of rate limiting? Is it whole Applications, or individual resources in an app?

I have one environment with ~100 apps and no issues. However, after seeing previous posts where you all are spinning up apps, I decided to do an argocd app list -o name | xargs -P 16 -n1 argocd app get --hard-refresh, which successfully dropped reconciliations to near 0 for several minutes (5-10) and caused 5xx's on some of the requests.

In another environment the apps are closer to 500 and syncs take hours before working. I'm guessing this is a slow build-up that fills the (presumably rate-limited) queues until everything comes to a near halt.

Similar to @daftping I can restart the single application controller I have and work gets done for a while.

I don't think I have many orphaned resources, but I do have a ton of dependent resources from kyverno (sometimes a 1:1 ratio). I've yet to figure out a way to keep them from showing up in the UI, so I presume they aren't ignored.

@gdsoumya
Member

gdsoumya commented Feb 26, 2024

@snuggie12 by item I meant apps; we don't queue resources, but we do requeue the parent apps when dependent resources are modified or change state. So it might happen that, given a small number of apps with a large number of managed child resources that frequently change state (like a Deployment), the number of times an app gets queued could be high.

You can see that the ops queue is able to fill up for the first test, but in every subsequent test it cannot fill up to 500 items because of rate limiting. Of course my setup is not typical, as I'm syncing all 4k apps in one shot.

@Enclavet The 500 burst limit is the maximum size of the bucket, so at most 500 items can be admitted in a burst; once it's exhausted, any new add() calls to the queue are delayed, and the delay is calculated from the QPS, which is 50 by default. So it might happen that new items are requested to be added to the queue, but because the bucket was already drained the items get delayed for x seconds, and this delay might be what's causing the drop to 0 in all our observations. A restart of the pod initially clears out any such delay, so we see a sudden burst (max 500), but once the bucket is exhausted again the delay kicks in.

To add more context on the approach taken for the rate limiter implementation: initially the plan was to just use the default rate limiter provided by k8s client-go (which can be seen here), but we later decided to implement a custom one, as the exponential limiter didn't work for us; we kept the bucket limiter as-is from the default limiter. The default limiter has a similar setup to what we use: one item-based exponential limiter and a bucket limiter with an even smaller bucket size and QPS (100 and 10 respectively).
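
For reference, client-go's default controller rate limiter being referred to here is composed roughly like this (a sketch showing the 10 QPS / 100-burst bucket combined with the per-item exponential backoff):

package main

import (
    "time"

    "golang.org/x/time/rate"
    "k8s.io/client-go/util/workqueue"
)

// defaultControllerRateLimiter mirrors the shape of client-go's default
// controller rate limiter: a per-item exponential backoff (5ms base, 1000s cap)
// combined via MaxOf with an overall token bucket of 10 QPS / 100 burst.
func defaultControllerRateLimiter() workqueue.RateLimiter {
    return workqueue.NewMaxOfRateLimiter(
        workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second),
        &workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(10), 100)},
    )
}

func main() {
    q := workqueue.NewRateLimitingQueue(defaultControllerRateLimiter())
    defer q.ShutDown()
    q.AddRateLimited("example-app")
}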

@snuggie12

@snuggie12 by item I meant apps; we don't queue resources, but we do requeue the parent apps when dependent resources are modified or change state. So it might happen that, given a small number of apps with a large number of managed child resources that frequently change state (like a Deployment), the number of times an app gets queued could be high.

@gdsoumya Thanks for the quick reply. Does that mean unmanaged dependent resources of managed resources (e.g. a ReplicaSet is unmanaged but is a dependent of a Deployment, which is managed) do not count?

Additionally, our setup cascades from a root app called argocd, which itself contains the Application CR called argocd. argocd holds something called meta, which has several sub-meta applications, which in turn hold the applications. That means when a resource changes on a leaf it eventually refreshes argocd. Could that be the issue?

Are there metrics for any of these new rate limiting features? I can't seem to find any containing words like "rate", "limit", or "shard", though I only run 1:1 cluster-to-controller, so maybe I wouldn't see shard-based metrics. It seems like it would be helpful to know when rate limiting is occurring.

@gdsoumya
Member

@snuggie12 we do not have any metrics for rate limiting yet; that's a good point, we should see if we can add some to make it more visible to users.

As far as I understand, the Deployment itself is the only child resource of the app, but because any change to the child resources of the Deployment eventually leads to a change in state for the Deployment itself, the app would refresh in those cases too.

I am not sure we can call it an issue, but as you would expect, any change to dependent resources eventually moves up the tree to the root, causing a refresh (if not a sync) for each app along the way. The problem here might be the depth of the tree: if there are a lot of apps that need a refresh due to a change in a leaf resource, that could cause a significant number of items to be queued, which would then be rate limited according to the limits set.

@woehrl01
Contributor

woehrl01 commented Feb 27, 2024

Just an addition from my side, as I also experienced the same issue with 8,000 apps and lots of clusters.

We have orphaned resources disabled, but we make use of sync waves and we are also using KEDA scalers and app-of-apps.

Bringing this together with the limiter, having frequently updating children likely explains the root cause.

I know that KEDA scalers trigger updates of resources quite heavily and caused high CPU load in the past (this was fixed by adding the ignoreupdates feature).

But having sync waves or (Helm) hooks somewhere in your resource tree could deadlock your dependency graph if intermediate status updates (CronJobs, Deployments, KEDA, etc.) chime in.

I can give you an update as soon as I have time to try increasing the ratelimit values.

@abhipsnl

abhipsnl commented Feb 28, 2024

Even with 200+ apps I am facing the same issue: refreshes and syncs both get stuck, sometimes for hours. With a lower app count it works as expected.

As mentioned above, I applied the workaround on the controller and it looks good now:

controller:
  env:
    - name: WORKQUEUE_BUCKET_SIZE
      value: "9223372036854775807"
    - name: WORKQUEUE_BUCKET_QPS
      value: "9223372036854775807"

@csantanapr
Member

csantanapr commented Feb 28, 2024

@gdsoumya
What are the next steps on this issue?

  • The Scalability SIG (@Enclavet, @csantanapr) met today and discussed this issue (SIG notes doc).
  • The SIG thinks it is best to revert the change and make the rate limit feature optional, not enabled by default.

Reading the PR comment, the assumption was that this was disabled by default, according to @jessesuen:

This is disabled by default (WORKQUEUE_FAILURE_COOLDOWN_NS=0) and unless enabled, will behave exactly the same as before.

@gdsoumya
Member

gdsoumya commented Feb 29, 2024

@csantanapr I shall raise a PR to disable the bucket limiter by default too; we can cherry-pick it back into 2.10.

In the original PR only the per-item limiter was disabled, since it was expected to interfere with a normal setup; the bucket limiter wasn't expected to behave like this with the default limits, and it did not behave this way in any of the tests I conducted. Though, as we've seen, specific configurations might be spiking the workqueue higher than expected.

@jdomag

jdomag commented Mar 11, 2024

Is this fixed in 2.10.2? I've installed 2.10.2 from the newest Helm chart (6.6.0) and am still seeing the issue: the controller first logs "The cluster xyz has no assigned shard" as described here, and then stops processing apps. Restarting the controller fixes the issue for a few seconds, then it stops again.
I use a StatefulSet with 1 replica.

@rumstead
Member

https://github.com/argoproj/argo-cd/commits/v2.10.2/

Looks like it needs a new tag.

@gdsoumya
Member

I don't think this has made it into a release yet; it should be available when 2.10.3 is released.

@jdomag

jdomag commented Mar 19, 2024

I've upgraded to 2.10.4 and it works fine.

@daftping
Contributor Author

Thank you folks for the quick turnaround!
