Discussion: admission control done in Kernel? #7347
Comments
I checked the MRD; there are three requirements:
But I feel that none of these features should be built into the Kernel.
Thanks for raising this question. I suggest we do some research on how mature products in the industry handle resource isolation and management. To my knowledge, every system deployed on-prem requires careful resource arrangement so that some jobs can be prioritized over others, but such a strategy is never perfect. In my experience, the users (who have to be aware of these resource issues) have to set the correct expectation (SLA) for their jobs, and it requires top-down alignment across the whole company to know which departments are the high priority.
This issue has been open for 60 days with no activity. Could you please update the status? Feel free to continue the discussion or close as not planned.
Solved by #18383.
When a compute node is already under pressure (high CPU and memory usage, because the workload is intensive or too many queries are already running concurrently), it makes little sense to allow new queries to be scheduled on it. We could either return a warning, or disallow scheduling but leave a backdoor to schedule jobs onto the node anyway (e.g., when the user insists, or for stress testing).
One option: when a user sends `create materialized view` to RisingWave, it returns a notice (see the sketch below). I think for Cloud, this is eventually a must-do.
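A minimal sketch of how such a check might look, assuming a hypothetical `NodeLoad` struct, made-up thresholds, and a `force` flag for the backdoor (none of this is RisingWave's actual API):

```rust
// Hypothetical admission check: reject or warn when the node is under
// pressure, with a backdoor override for users who insist.
struct NodeLoad {
    cpu_usage: f64,    // fraction in [0, 1]
    memory_usage: f64, // fraction in [0, 1]
}

enum AdmissionDecision {
    Admit,
    AdmitWithNotice(String),
    Reject(String),
}

fn admit_query(load: &NodeLoad, force: bool) -> AdmissionDecision {
    // Thresholds are made up for illustration.
    const WARN_AT: f64 = 0.80;
    const REJECT_AT: f64 = 0.95;

    let pressure = load.cpu_usage.max(load.memory_usage);
    if pressure >= REJECT_AT && !force {
        AdmissionDecision::Reject(format!(
            "node under pressure ({:.0}% used); retry later or force-schedule",
            pressure * 100.0
        ))
    } else if pressure >= WARN_AT {
        AdmissionDecision::AdmitWithNotice(format!(
            "node is busy ({:.0}% used); the query may run slowly",
            pressure * 100.0
        ))
    } else {
        AdmissionDecision::Admit
    }
}

fn main() {
    let load = NodeLoad { cpu_usage: 0.97, memory_usage: 0.60 };
    match admit_query(&load, false) {
        AdmissionDecision::Admit => println!("query admitted"),
        AdmissionDecision::AdmitWithNotice(n) => println!("NOTICE: {n}"),
        AdmissionDecision::Reject(e) => println!("ERROR: {e}"),
    }
}
```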
But it is an open question what kind of input admission control needs from the kernel to work well.
Also, the nicer way, I suppose, is to alert users in advance and be proactive, instead of returning a warning/error message only after they try to schedule a new query.
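For illustration, a background watcher along these lines could surface the alert proactively; `sample_load`, the threshold, and the 30-second interval are all assumptions, not actual RisingWave code:

```rust
use std::{thread, time::Duration};

// Placeholder: a real implementation would read CPU/memory metrics
// from the compute node's monitoring subsystem.
fn sample_load() -> f64 {
    0.9
}

fn main() {
    let threshold = 0.8; // hypothetical alerting threshold
    let watcher = thread::spawn(move || {
        for _ in 0..3 {
            let pressure = sample_load();
            if pressure >= threshold {
                // In practice this could be a cloud-console alert or a
                // SQL notice attached to the next session command.
                eprintln!(
                    "warning: compute node at {:.0}% capacity; \
                     new queries may be rejected",
                    pressure * 100.0
                );
            }
            thread::sleep(Duration::from_secs(30));
        }
    });
    watcher.join().unwrap();
}
```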