
Add inspect-ready-q and inspect-scheduled-q HTTP API endpoints #231

Open
edgarsendernet opened this issue Jul 1, 2024 · 6 comments
Labels: enhancement (New feature or request), Needs Funding (This feature is available for sponsorship)

@edgarsendernet

Having the ability to quickly see a queue's configuration and contents would be very helpful when managing and tuning KumoMTA.
Parameters:

  • queue name
  • limit (the number of message spool IDs to return, default 50)

Response would contain:

  • configuration values (queue config or egress_path config)
  • next delivery time (for scheduled queues)
  • array of message spool IDs in the queue. These could be used to fetch message data from the inspect-message endpoint. (A sketch of a possible response follows below.)
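
To make the proposal concrete, here is a purely hypothetical sketch of what a response body might look like. None of these field names exist in KumoMTA today; they simply mirror the list above:

```json
{
  "queue_config": {
    "note": "values from the queue config or egress_path config would go here"
  },
  "next_delivery_time": "2024-07-01T12:34:56Z",
  "message_spool_ids": [
    "spool-id-1",
    "spool-id-2"
  ]
}
```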
@wez
Collaborator

wez commented Jul 2, 2024

The system is not optimized for that kind of per-message observation. Enabling it would require data structure changes that would harm the maximum throughput of the system, which is undesirable.

The closest approximation would be to schedule some work to de-queue some number of messages from the matching queue(s), apply some function to them, and then re-insert them into the queues afterwards.

You could sort of do this for yourself today using the rebind API with trigger_rebind_event = true to pass every message in the queue to a Lua function. It's not an ideal match today because there isn't really a way to share state, limit the number of matches, or return scoped output from that event.
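
As a rough sketch of that workaround (hypothetical: the rebind_message event name and the msg accessors below are assumptions that should be verified against the KumoMTA docs before use):

```lua
-- Hypothetical sketch of the rebind workaround. The event name and
-- msg accessors are assumptions; check https://docs.kumomta.com/
-- before relying on any of them.
kumo.on('rebind_message', function(msg, data)
  -- Fires once per matched message when the rebind HTTP API is
  -- called with trigger_rebind_event = true; `data` carries the
  -- table supplied in the API request.
  -- There is no supported way to share state across invocations,
  -- cap the number of messages visited, or return scoped output,
  -- so logging is about all you can do here.
  print('inspecting', msg:id(), msg:queue_name())
end)
```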

Querying aggregate metadata about the queues is more easily doable without imposing a performance cost on the rest of the system.

@edgarsendernet
Author

I was thinking of implementing this without changing the underlying timeq queues, since it's only needed when investigating something.
The basic idea would be to clone the requested queue, .skip() it forward by some number of ms (maybe the max message age) so that all the entries become due, and then return entries up to the specified limit, roughly like the toy sketch below.
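
In toy form (plain Lua tables standing in for the real timeq timer wheel; purely illustrative, not its actual API):

```lua
-- Toy model of the clone-then-drain idea. `entries` holds records
-- shaped like { due = <epoch seconds>, id = <spool id> }.
local function inspect_queue(entries, skip_secs, limit)
  -- Clone so the live queue is never mutated.
  local clone = {}
  for i, e in ipairs(entries) do
    clone[i] = { due = e.due, id = e.id }
  end
  -- "Skip" the clock forward so every entry becomes due.
  local now = os.time() + skip_secs
  -- Drain due entries, stopping at the limit.
  local ids = {}
  for _, e in ipairs(clone) do
    if e.due <= now and #ids < limit then
      ids[#ids + 1] = e.id
    end
  end
  return ids
end
```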
Not sure if that makes sense?

@wez added the enhancement (New feature or request) and Needs Funding (This feature is available for sponsorship) labels on Jul 11, 2024
@MHillyer
Collaborator

Can you provide more detail around the use cases for this?

What would happen to cause you to run these commands, and what would you do with the information provided? What is the work to be done that needs these endpoints?

@edgarsendernet
Author

This would help a lot when troubleshooting deferrals caused by message content. For example, Gmail may delay emails with "421-4.7.0 Our system has detected that this message is suspicious due to the nature of the content and/or the links within. To best protect our users from spam, the message has been blocked.".

Without the ability to look into the content of the message, the only workarounds would be to log the entire message body somewhere else (unnecessarily duplicating data) or to do what Wez mentioned above with Lua, which is a very hacky approach.

@MHillyer
Collaborator

MHillyer commented Aug 2, 2024

The log for that event will have the message ID, which can be used with https://docs.kumomta.com/reference/http/api_inject_v1/ to review the content of the message, so that use case is already covered.

@edgarsendernet
Author

Not really: in this case you may wish to suspend or bounce the entire queue, not just a single message.

Knowing what's inside the queue that is going to be administratively bounced or suspended is important IMO.
