
KEDA Support for configurable scaling behavior in HPA v2beta2 #802

Closed
tbickford opened this issue Apr 30, 2020 · 2 comments
@tbickford
Contributor

It would be ideal to control how scaling works for the HPA that the KEDA Operator provisions. Kubernetes 1.18 offers this functionality in HPA v2beta2, where scaling policies can be set on an individual HPA through the behavior field: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior

Use-Case

If the metrics the HPA consumes are volatile over short intervals (close to the HPA's default 5-minute scale-down window), the replica count can end up continuously flapping. This could be mitigated by controlling the scale-down (or scale-up) window of the HPA that KEDA creates.
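For reference, this is the upstream mechanism the proposal builds on: a sketch of a plain HPA v2beta2 manifest (names and values here are illustrative, not from this issue) that lengthens the scale-down stabilization window to damp flapping:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                   # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment         # hypothetical target
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 600 # require 10 minutes of steady metrics before scaling down
      policies:
      - type: Pods
        value: 1                      # remove at most 1 pod per minute
        periodSeconds: 60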

Specification

The ScaledObject could be extended to carry the additional HPA configuration:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
    containerName: {container-name}  # Optional. Default: deployment.spec.template.spec.containers[0]
  pollingInterval: 30  # Optional. Default: 30 seconds
  cooldownPeriod:  300 # Optional. Default: 300 seconds
  minReplicaCount: 0   # Optional. Default: 0
  maxReplicaCount: 100 # Optional. Default: 100
  triggers:
  # {list of triggers to activate the deployment}
  behavior:
    scaleDown:
      policies:
      - type: Pods
        value: 4
        periodSeconds: 60
      - type: Percent
        value: 10
        periodSeconds: 60
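Beyond policies, the v2beta2 behavior stanza also supports stabilizationWindowSeconds and selectPolicy, so a fuller sketch of the proposed field (illustrative values, not part of the specification above) could look like:

  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300 # upstream default for scale down
      selectPolicy: Min               # apply the policy permitting the smallest change
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0   # upstream default for scale up
      policies:
      - type: Pods
        value: 4
        periodSeconds: 15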
@tbickford tbickford added the feature-request and needs-discussion labels Apr 30, 2020
@zroubalik zroubalik added this to the v2.0 milestone May 1, 2020
@zroubalik zroubalik self-assigned this May 1, 2020
@zroubalik
Member

zroubalik commented May 1, 2020

This is definitely something that we should support, and it should be easy to implement. The only minor problem is that we will need to bump the Kubernetes version of the deps and libraries used by the Operator and Metrics server to v1.18.2 (they are currently on v1.17.4).
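For context, a minimal sketch of the kind of go.mod bump this implies, assuming KEDA uses the standard k8s.io Go modules (where v0.x.y tracks Kubernetes 1.x.y):

require (
    k8s.io/api v0.18.2          // bumped from v0.17.4
    k8s.io/apimachinery v0.18.2 // bumped from v0.17.4
    k8s.io/client-go v0.18.2    // bumped from v0.17.4
)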

@zroubalik
Member

Fixed by #805
