KEDA 2.3 kafka_scaler error "invalid offset found for topic" #1900
-
@grassiale WDYT?
-
Hello! Do you have a low message rate? I'm asking because, as far as I know, you only lose committed offsets when a consumer has not connected to Kafka for a certain amount of time, one week by default. I have never seen offsets go missing for only some partitions rather than all of them.
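If you want to check whether that is what's happening, you can list the committed offsets per partition for the scaler's consumer group with the stock Kafka CLI (a sketch; the broker address is a placeholder for your environment):

```shell
# Describe the consumer group used by the kafka_scaler.
# Partitions whose CURRENT-OFFSET column shows "-" have no committed
# offset, which is exactly the condition the scaler is logging about.
kafka-consumer-groups.sh \
  --bootstrap-server kafka:9092 \
  --describe \
  --group our_Group
```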
-
There is some issue/bug in the kafka_scaler that makes it lose track of the committed offset for some Kafka topic partitions. Once this happens, an error (one for each affected partition) is shown in the keda-operator logs:
```
INFO kafka_scaler invalid offset found for topic our_topic in group our_Group and partition 71, probably no offset is committed yet
```
Once we reach this situation, an affected partition will no longer trigger anything, regardless of how many new messages arrive on that specific partition; if we send them to a non-affected partition instead, KEDA keeps working fine. The situation is unpredictable, at least for us, and we do not know when a partition will start being affected in this way. When we destroy the Kafka topic and recreate it from scratch, all partitions are unaffected again, but as time goes by, the number of affected partitions keeps growing. Losing the event history when recreating the topic is harmful for us, so that is not a real option.
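If the root cause turns out to be expired committed offsets, one possible way to get an affected partition working again without recreating the topic might be to commit fresh offsets for the group (a sketch using the stock Kafka CLI; this is an assumption on our side, and the group must have no active members while the reset runs):

```shell
# Commit an explicit offset for every partition of the topic so the
# scaler has a baseline to compute lag from. --to-latest skips the
# existing backlog, matching a 'latest' reset policy; swap --execute
# for --dry-run first to preview the offsets that would be committed.
kafka-consumer-groups.sh \
  --bootstrap-server kafka:9092 \
  --group our_Group \
  --topic our_topic \
  --reset-offsets --to-latest \
  --execute
```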
Regarding the KEDA configuration, our ScaledJobs use a Kafka trigger along the lines of the sketch below.
Due to our business logic, we cannot use the offset reset policy 'earliest', since we need to respond quickly to new events without going through all the previous ones.
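For reference, a minimal sketch of such a ScaledJob (every name, image, and address below is an illustrative placeholder rather than the exact configuration; the trigger metadata fields are the relevant part):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: our-consumer-scaledjob          # placeholder name
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: example.registry/worker:latest   # placeholder image
        restartPolicy: Never
  pollingInterval: 30
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092    # placeholder broker address
        consumerGroup: our_Group
        topic: our_topic
        lagThreshold: "5"
        # 'latest' because we must not replay the whole event history
        offsetResetPolicy: latest
```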