This repository has been archived by the owner on Dec 1, 2021. It is now read-only.

Pods don't scale in to zero #7

Open
TsuyoshiUshio opened this issue Jan 24, 2020 · 3 comments
Labels
bug (Something isn't working), priority-low

Comments

@TsuyoshiUshio
Collaborator

The current version of the KEDA Durable Functions scaler has a limitation: pods running durable functions never scale in to zero. The reason is that for this scaler to bring a durable functions pod back up, something must enqueue work on the control/work-item queue, which requires an additional HTTP- or queue-triggered function that invokes DurableOrchestrationClient.

That is why the current Azure Functions Core Tools (the func command) generates the following YAML for durable functions. It includes AzureFunctionsJobHost__functions__0, which filters which functions each host instance loads. However, it doesn't seem to work, and I can observe this issue.

apiVersion: v1
kind: Secret
metadata:
  name: durable-keda
  namespace: default
data:
  AzureWebJobsStorage: <YOUR_STORAGE_ACCOUNT_BASE64>
  FUNCTIONS_WORKER_RUNTIME: ZG90bmV0
---
apiVersion: v1
kind: Service
metadata:
  name: durable-keda-http
  namespace: default
  annotations:
    osiris.deislabs.io/enabled: "true"
    osiris.deislabs.io/deployment: durable-keda-http
spec:
  selector:
    app: durable-keda-http
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: durable-keda-http
  namespace: default
  labels:
    app: durable-keda-http
  annotations:
    osiris.deislabs.io/enabled: "true"
    osiris.deislabs.io/minReplicas: "1"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: durable-keda-http
  template:
    metadata:
      labels:
        app: durable-keda-http
    spec:
      containers:
      - name: durable-keda-http
        image: tsuyoshiushio/durable-keda
        ports:
        - containerPort: 80
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: LoadOrchestration_HttpStart
        envFrom:
        - secretRef:
            name: durable-keda
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: durable-keda
  namespace: default
  labels:
    app: durable-keda
spec:
  selector:
    matchLabels:
      app: durable-keda
  template:
    metadata:
      labels:
        app: durable-keda
    spec:
      containers:
      - name: durable-keda
        image: tsuyoshiushio/durable-keda
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: LoadOrchestration
        - name: AzureFunctionsJobHost__functions__1
          value: LoadOrchestration_Hello
        envFrom:
        - secretRef:
            name: durable-keda
---
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: durable-keda
  namespace: default
  labels:
    deploymentName: durable-keda
spec:
  scaleTargetRef:
    deploymentName: durable-keda
  triggers:
  - type: orchestrationtrigger
    metadata:
      type: orchestrationTrigger
      name: context
  - type: activitytrigger
    metadata:
      type: activityTrigger
      name: name
---
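The AzureFunctionsJobHost__functions__N entries above use the Functions host's double-underscore, zero-indexed convention for binding an array of allowed function names through environment variables. A minimal sketch of how a filter list maps onto those variables (the function_filter_env helper is hypothetical, for illustration only):

```python
def function_filter_env(function_names):
    """Map a list of function names to the indexed
    AzureFunctionsJobHost__functions__N environment variables that
    restrict which functions a host instance loads."""
    return {
        f"AzureFunctionsJobHost__functions__{i}": name
        for i, name in enumerate(function_names)
    }

# Worker deployment from the manifest above: orchestrator + activity only.
env = function_filter_env(["LoadOrchestration", "LoadOrchestration_Hello"])
print(env)
```

Splitting the HTTP starter and the orchestrator/activity workers into separate deployments this way is what lets each deployment scale independently.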

Currently, I use a single deployment with a minimum replica count of 1. We need to wait until the issue is solved, or if there is another way to solve it, I'll take your advice.

---
apiVersion: v1
kind: Service
metadata:
  name: durable-keda
  namespace: default
spec:
  selector:
    app: durable-keda
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: durable-keda
  namespace: default
  labels:
    app: durable-keda
spec:
  replicas: 1
  selector:
    matchLabels:
      app: durable-keda
  template:
    metadata:
      labels:
        app: durable-keda
    spec:
      containers:
      - name: durable-keda
        image: tsuyoshiushio/durable-keda:latest
        ports:
        - containerPort: 80
        envFrom:
        - secretRef:
            name: durable-keda
---
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: durable-keda
  namespace: default
  labels:
    deploymentName: durable-keda
spec:
  scaleTargetRef:
    deploymentName: durable-keda
  triggers:
  - type: external
    metadata:
      scalerAddress: durable-external-scaler-service.keda.svc.cluster.local:5000
---
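Both manifests pull their app settings from the durable-keda Secret, whose data values must be base64-encoded. A quick sanity check of the values used above (Python; the storage connection string is a placeholder, not a real key):

```python
import base64

# FUNCTIONS_WORKER_RUNTIME from the Secret decodes to the runtime name.
runtime = base64.b64decode("ZG90bmV0").decode()
print(runtime)  # dotnet

# Encoding a placeholder storage connection string for AzureWebJobsStorage.
conn = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
encoded = base64.b64encode(conn.encode()).decode()
assert base64.b64decode(encoded).decode() == conn
```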

@TsuyoshiUshio added the bug (Something isn't working) and priority-low labels on Jan 24, 2020
@jainshikha

Hi,
I created a blob-triggered Azure Function with KEDA and am facing an issue where it always runs with 1 pod; ideally it should scale back to zero.
Has this scale-to-zero issue been resolved?

@TsuyoshiUshio
Collaborator Author

Hi @jainshikha,

No. However, this issue is about Durable Functions, which is different from the blob trigger one.
You can submit an issue here. :)
https://github.com/kedacore/keda/issues

@tomkerkhove
Member

We already have kedacore/keda#809, so no need for that.
