[Azure Function] Remove jobs and related pods when work is done #1737
Replies: 5 comments
-
This sounds like you're looking for more of a functions-as-a-service tool that is hosted in k8s? Not sure KEDA is the right tool here. KEDA doesn't know anything about task routing or other data-level configuration, just "I should have N jobs" (or N replicas), and it's up to your code to implement a worker model that matches.
-
@coderanger sorry, what are you saying? The same behavior works perfectly in the case of
-
KEDA doesn't know anything about "when the job is done". It only sees the number of messages in your Service Bus queue, and based on that and the `queueLength` parameter it will spawn some number of job pods. When the number of messages goes down, it will reduce the replica count and Kubernetes will eventually remove those pods. Similarly, if the pod's command exits cleanly, Kubernetes will consider that job a success and not restart it, so the pod will be destroyed.
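To make the "one job per message" pattern above concrete, here is a minimal `ScaledJob` manifest sketch. The resource names, image, queue name, and environment variable are placeholders, not values from this thread; the `azure-servicebus` trigger fields (`queueName`, `messageCount`, `connectionFromEnv`) are the ones KEDA documents for this scaler:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: worker-scaledjob            # hypothetical name
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: myregistry/worker:latest   # placeholder image
        # Never restart: a clean exit marks the Job complete,
        # so Kubernetes can clean up the pod.
        restartPolicy: Never
  pollingInterval: 10
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: my-queue                   # placeholder
        messageCount: "1"                     # one job per message
        connectionFromEnv: SERVICEBUS_CONNECTION
```

The key point is that KEDA only creates Jobs from this template; whether a Job ever finishes depends entirely on whether the container's process exits.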
-
Also note that your parallelism is
-
@andrekiba, the Azure Functions host runtime does not shut down even when there is no message in the queue; it just waits for the next message to process. So while the Functions host may not be processing a message (i.e., executing a function), the host process itself is still running, which means the job is never considered complete and is therefore not removed. If you want to use `ScaledJob`, you should instead write the code in a plain Python file and execute that file as the job.
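A minimal sketch of such a worker script, assuming the `azure-servicebus` Python SDK (v7) and hypothetical `SERVICEBUS_CONNECTION` / `QUEUE_NAME` environment variables supplied to the pod. Unlike the Functions host, this process exits after handling one message, so Kubernetes marks the Job complete:

```python
import os

try:
    # pip install azure-servicebus; guarded so the processing logic
    # below can be exercised without the SDK installed.
    from azure.servicebus import ServiceBusClient
except ImportError:
    ServiceBusClient = None


def handle(body: str) -> str:
    """Business logic for one message; kept pure so it is easy to test.
    (Placeholder: real 5-10 minute processing would go here.)"""
    return body.strip().upper()


def main() -> None:
    conn = os.environ["SERVICEBUS_CONNECTION"]  # hypothetical variable names
    queue = os.environ["QUEUE_NAME"]
    with ServiceBusClient.from_connection_string(conn) as client:
        with client.get_queue_receiver(queue_name=queue) as receiver:
            # Take at most one message, then fall through and exit
            # cleanly so the Job completes and the pod is removed.
            for msg in receiver.receive_messages(max_message_count=1,
                                                 max_wait_time=30):
                handle(str(msg))
                receiver.complete_message(msg)


if __name__ == "__main__":
    main()
```

With `messageCount: "1"` in the trigger, each queued message gets its own short-lived Job running this script rather than a long-lived Functions host.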
-
Report
I have an Azure Function that processes one message at a time.
The processing takes 5-10 minutes (maybe more), so we decided to use the `ScaledJob` approach.
This is the configuration of the `ScaledJob`:
The idea of `messageCount=1` in the trigger is to create one job (and pod) for each new message.
The scale-up is fine; the problem is the scale-down.
Is it something related to the concept of job "completion"?
Expected Behavior
Remove the pod/s and the job when completed.
Actual Behavior
Pods related to the jobs remain in the Running state for no apparent reason.
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.2.0
Kubernetes Version
1.18
Platform
Microsoft Azure
Scaler Details
Azure Service Bus
Anything else?
No response