Lock isn't deleted if task killed by hard-time limit #40

Open
Strawl opened this issue Oct 5, 2021 · 1 comment
Comments

Strawl commented Oct 5, 2021

If I set a hard time limit and the task exceeds that limit, the task gets killed. But Celery Singleton doesn't delete the lock, so it prevents new tasks from being created.

Proof:
Hard time limit exceeded:

[2021-10-01 07:33:44,026: ERROR/MainProcess] Hard time limit (300s) exceeded for tasks.<some_task>[04c8d60c-c7c7-43f0-ad8c-0d478da732a0]
[2021-10-01 07:33:44,138: ERROR/MainProcess] Process 'ForkPoolWorker-3' pid:3944596 exited with 'signal 9 (SIGKILL)'

Then Celery says this task still exists:

Traceback (most recent call last):
File "/home/build/dev/ansible-integration/project_overview/env/lib/python3.6/site-packages/celery/app/trace.py", line 515, in trace_task
priority=task_priority
File "/home/build/dev/ansible-integration/project_overview/env/lib/python3.6/site-packages/celery/canvas.py", line 219, in apply_async
return _apply(args, kwargs, **options)
File "/home/build/dev/ansible-integration/project_overview/env/lib/python3.6/site-packages/celery_singleton/singleton.py", line 116, in apply_async
return self.on_duplicate(existing_task_id)
File "/home/build/dev/ansible-integration/project_overview/env/lib/python3.6/site-packages/celery_singleton/singleton.py", line 141, in on_duplicate
task_id=existing_task_id,
celery_singleton.exceptions.CelerySingletonException: Attempted to queue a duplicate of task ID 04c8d60c-c7c7-43f0-ad8c-0d478da732a0

For now I'll try using soft time limits, which should solve my problem altogether; that way my tasks can finish up normally (so the lock is deleted from the celery-singleton storage). I don't think this is really that big of an issue, but please document it somewhere so people know.
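
Something like this is what I have in mind (untested sketch; the task name, limits, and the loop body are placeholders, not my real setup). With soft_time_limit Celery raises SoftTimeLimitExceeded inside the task instead of SIGKILLing the worker process, so the task exits normally and celery-singleton can release its lock:

import time

from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded
from celery_singleton import Singleton

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task(base=Singleton, soft_time_limit=300, time_limit=360)
def some_task():
    try:
        # Placeholder for the real long-running work
        for _ in range(10_000):
            time.sleep(1)
    except SoftTimeLimitExceeded:
        # Returning here lets the task finish normally, so the
        # singleton lock is released by the usual post-run cleanup.
        pass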

steinitzu (Owner) commented

I see; the hard time limit force-terminates the worker process and doesn't allow for any cleanup.

I think you could set lock_expiry to match the time limit: https://github.com/steinitzu/celery-singleton#lock_expiry
The key gets an expiry time in Redis, so it's always deleted eventually.
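
Something like this (untested sketch; the broker URL and task body are placeholders, lock_expiry as described in the README section linked above):

from celery import Celery
from celery_singleton import Singleton

app = Celery("tasks", broker="redis://localhost:6379/0")

# lock_expiry matches the hard time limit, so even if the worker is
# SIGKILLed the Redis key expires on its own and new runs can be queued.
@app.task(base=Singleton, time_limit=300, lock_expiry=300)
def some_task():
    ...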

Maybe celery-singleton could also detect the presence of a hard timeout and do this automatically.
