5 create sqs wrapper and services for polling messages #6
base: main
Conversation
…h-between-local-and-aws Updated to be able to run things locally in docker
),
backoff_limit=3,
# keep jobs around for a day before deleting
ttl_seconds_after_finished=86400
For debugging purposes, I'm guessing? Or does this help with performance? I don't think Jobs reuse containers, right?
Also, I'm guessing this is just a POC for now and will be rolled out to the other two processes as well?
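For context on the setting being questioned above, here is a minimal sketch of the two Job options from the diff, using a plain dict whose keys mirror the keyword arguments shown (the field names assume the snake_case convention of the official Kubernetes Python client; the dict itself is purely illustrative):

```python
# Illustrative stand-in for the Job spec kwargs shown in the diff.
job_spec = {
    # retry a failed pod up to 3 times before marking the Job failed
    "backoff_limit": 3,
    # auto-delete the finished Job (and its pods) after this many seconds
    "ttl_seconds_after_finished": 86400,
}

# 86400 seconds is exactly one day, matching the inline comment above.
print(job_spec["ttl_seconds_after_finished"] == 24 * 60 * 60)
```

The TTL does not affect runtime performance; it only controls how long the completed Job object lingers for inspection before the TTL controller garbage-collects it.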
# slice is start-index inclusive, so [0:max_errors] returns max_errors errors
# (indices 0 to max_errors - 1) if max_errors is 1000000 and more than that exist.
# Adding +1 actually returns max_errors + 1, one more than the intended cap.
Right, although maybe it doesn't matter under the new paradigm. The intent was to make sure the downstream special case triggers, i.e. adding that extra line in the CSV file: at exactly max_errors it doesn't trigger, but max_errors + 1 does trigger the line to be added.
Closes #5
Please read all the comments in the associated story, I've documented everything there.