# Add Scaler that Reads Metrics From Current Custom Metrics Adapter #5810
## Comments
Maybe it's better to allow keeping the existing metrics spec when reusing an existing HPA? For example, add:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    scaledobject.keda.sh/keep-existing-hpa-metrics-spec: "true"
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      name: test
```

Then KEDA would only append its extra external metric specs to the HPA metrics list and keep the existing entries in place, as sketched below.
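With such an annotation, the merged HPA metrics list could end up looking roughly like this (a sketch; the pre-existing `Pods` metric is borrowed from the examples later in this thread, and the external metric name is a placeholder, not KEDA's actual generated name):

```yaml
# Sketch of the merged metrics list under the proposed annotation: the
# pre-existing Pods entry is preserved, and KEDA appends its External entry.
metrics:
  - type: Pods                        # kept from the original HPA
    pods:
      metric:
        name: k8s_pod_rate_cpu_core_used_limit
      target:
        type: AverageValue
        averageValue: "80"
  - type: External                    # appended by KEDA from its trigger
    external:
      metric:
        name: s0-some-trigger-metric  # placeholder, not KEDA's exact naming
      target:
        type: AverageValue
        averageValue: "10"            # illustrative target only
```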
This is not friendly to GitOps. For example, when ArgoCD manages the YAMLs and they include both an HPA and a ScaledObject that reuses that HPA, KEDA will change the existing HPA's spec; ArgoCD then detects that the HPA has drifted and reverts it to its original definition.
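As a side note, today this tug-of-war can be damped on the ArgoCD side by ignoring the field KEDA rewrites; a minimal sketch, assuming a hypothetical Application (the app name, repo URL, and namespaces below are illustrative):

```yaml
# Sketch: tell ArgoCD not to treat KEDA's edits to HPA metrics as drift.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                                # hypothetical
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/repo.git     # hypothetical
    path: manifests
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: test
  ignoreDifferences:
    - group: autoscaling
      kind: HorizontalPodAutoscaler
      jsonPointers:
        - /spec/metrics                       # the field KEDA manages
```

This is only a workaround, though; it silences the drift rather than resolving who owns the HPA spec.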
I think I found the best solution: add a field to the ScaledObject:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  pollingInterval: 15
  minReplicaCount: 1
  maxReplicaCount: 100
  advanced:
    horizontalPodAutoscalerConfig:
      metrics:
        - pods:
            metric:
              name: k8s_pod_rate_cpu_core_used_limit
            target:
              averageValue: "80"
              type: AverageValue
          type: Pods
        - pods:
            metric:
              name: k8s_pod_rate_mem_usage_limit
            target:
              averageValue: "80"
              type: AverageValue
          type: Pods
        - pods:
            metric:
              name: k8s_pod_rate_gpu_used_request
            target:
              averageValue: "60"
              type: AverageValue
          type: Pods
  triggers:
    - type: cron
      metadata:
        timezone: Asia/Shanghai
        start: 30 9 * * *
        end: 30 10 * * *
        desiredReplicas: "10"
```

KEDA would then populate these metric specs into the HPA it manages, as sketched below. This should be a small but very useful change.
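Under this proposal, the HPA that KEDA generates for the ScaledObject above might look roughly like the following (a sketch; the name follows KEDA's `keda-hpa-<ScaledObject name>` convention, and the external metric name and target are placeholders rather than KEDA's exact generated values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-test          # KEDA's keda-hpa-<ScaledObject name> convention
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 100
  metrics:
    # Copied verbatim from spec.advanced.horizontalPodAutoscalerConfig.metrics
    - type: Pods
      pods:
        metric:
          name: k8s_pod_rate_cpu_core_used_limit
        target:
          type: AverageValue
          averageValue: "80"
    - type: Pods
      pods:
        metric:
          name: k8s_pod_rate_mem_usage_limit
        target:
          type: AverageValue
          averageValue: "80"
    - type: Pods
      pods:
        metric:
          name: k8s_pod_rate_gpu_used_request
        target:
          type: AverageValue
          averageValue: "60"
    # Generated by KEDA from the cron trigger, as it does today
    - type: External
      external:
        metric:
          name: s0-cron-metric          # placeholder, not KEDA's exact naming
        target:
          type: AverageValue
          averageValue: "1"             # illustrative target only
```

Compared with the annotation idea above, this keeps the entire desired state inside the ScaledObject, which also plays more nicely with GitOps.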
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

Sorry for the slow response, my life's been chaos :(

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
## Metric Type and Kubernetes Metric API

There are several metric types when defining an HPA: `Resource`, `Pods`, `Object`, and `External`:

- `Resource` type uses the Resource Metrics API (`v1beta1.metrics.k8s.io`).
- `Pods` type and `Object` type use the Custom Metrics API (`v1beta1.custom.metrics.k8s.io`).
- `External` type uses the External Metrics API (`v1beta1.external.metrics.k8s.io`).

KEDA occupies the External Metrics API and provides `cpu` and `memory` triggers, which read metrics from the current Resource Metrics API adapter, but no trigger reads metrics from the current Custom Metrics API adapter.

## Scenario: Migrate HPA from `Pods` or `Object` metric type

Some cloud vendors provide rich metrics for HPA by default. For example, Tencent TKE provides many HPA metrics for users: https://www.tencentcloud.com/document/product/457/34025
These metrics are based on the Custom Metrics API, which means the cluster already ships a default Custom Metrics API adapter, and users can define an HPA like this:
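(A minimal sketch; the workload name is hypothetical, and the metric is one of the TKE pod metrics used elsewhere in this thread.)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test              # hypothetical workload
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 100
  metrics:
    - type: Pods          # served by the vendor's Custom Metrics API adapter
      pods:
        metric:
          name: k8s_pod_rate_cpu_core_used_limit
        target:
          type: AverageValue
          averageValue: "80"
```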
But if users want to use KEDA to add some triggers to the same workload, they need to delete the previously defined HPA, because KEDA and a hand-written HPA cannot be used together, and KEDA does not provide a trigger that can read metrics from the current Custom Metrics API. This prevents users from migrating to KEDA.

## Proposal: Add Scaler that Reads Metrics From the Current Custom Metrics API Adapter
The `Pods` and `Object` type metrics have multiple levels of definitions, while a trigger's metadata is a flat `map[string]string`. It is not possible to directly move existing HPA `pods` and `object` metric definitions into metadata, so we need to consider how to design this.
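To make the difficulty concrete, here is a hypothetical flattened trigger (the scaler type and every metadata key below are invented for illustration; no such KEDA scaler exists) that tries to encode a nested `pods` definition as strings:

```yaml
# Hypothetical trigger shape (invented for illustration, not an existing KEDA
# API): the nested PodsMetricSource fields are flattened into string metadata.
triggers:
  - type: kubernetes-custom-metrics        # hypothetical scaler name
    metadata:
      metricType: Pods                     # Pods or Object
      metricName: k8s_pod_rate_cpu_core_used_limit
      targetType: AverageValue
      targetAverageValue: "80"
```

Even this simple case is awkward, and label selectors or the `Object` type's `describedObject` reference add further nesting that a flat `map[string]string` cannot express cleanly.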