After adding the abstractions that facilitate working with queues in multi-threaded scenarios, it looks like the only way to actually consume descriptor chains right now is via the iter method, called on QueueState after locking, which returns an iterator. Moreover, the iterator is now the only object with the go_to_previous_position method, because it's guaranteed that only one consumer can hold an iterator at any given time, so there's no race condition. That creates some limitations:
- Holding the iterator (so we can also call go_to_previous_position when necessary) automatically implies a very large critical section, because no one else gets to pull descriptor chains out of the queue while the iterator is held.
- Multithreading concerns aside, there are a number of use cases where, even with a simple QueueState, an iterator is unwieldy because it keeps &mut self borrowed for as long as it lives, which leads to all sorts of problems with the borrow checker (see the sketch below).
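To make the borrow/critical section issue concrete, here is a minimal sketch of the current pattern, with placeholder types standing in for the real QueueState and iterator (names and bodies are purely illustrative, not the actual crate code):

```rust
// Placeholder types; the real ones carry actual queue state.
struct DescriptorChain;
struct QueueState;

struct AvailIter<'a> {
    state: &'a mut QueueState,
}

impl<'a> Iterator for AvailIter<'a> {
    type Item = DescriptorChain;
    fn next(&mut self) -> Option<DescriptorChain> {
        None // placeholder: the real iterator walks the available ring
    }
}

impl<'a> AvailIter<'a> {
    // Only the iterator exposes this, since a single live iterator rules out
    // concurrent consumers and therefore races on the next-avail position.
    fn go_to_previous_position(&mut self) {
        let _ = &self.state; // placeholder: the real method rewinds next_avail
    }
}

impl QueueState {
    fn iter(&mut self) -> AvailIter<'_> {
        AvailIter { state: self }
    }
}

fn process(state: &mut QueueState) {
    let mut it = state.iter(); // `state` is mutably borrowed here...
    while let Some(_chain) = it.next() {
        // ...and stays borrowed for the whole loop, so `state` cannot be used
        // directly in this body; in the multi-threaded case, the lock guard
        // has to be kept alive just as long.
    }
}
```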
On the descriptor chain consumption side, one simple way of addressing both of the above concerns is to introduce some sort of pop_descriptor_chain method. It requires a comparatively much smaller critical section (so multiple parallel consumers can make progress simultaneously), and it produces a descriptor chain without tying it to a &mut self borrow for its entire lifetime. Adding this method looks perfectly fine, as there are no potential race conditions.
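A rough sketch of what such a method and its usage could look like, reusing placeholder types; the exact signature is just a guess at this point:

```rust
use std::sync::{Arc, Mutex};

// Placeholder types; `pop_descriptor_chain` below is a hypothetical shape
// for the proposed method, not an existing API.
struct DescriptorChain;
#[derive(Default)]
struct QueueState;

impl QueueState {
    // Hypothetical: advance the next-avail position and hand back one chain,
    // without keeping `self` borrowed past the call.
    fn pop_descriptor_chain(&mut self) -> Option<DescriptorChain> {
        None // placeholder
    }
}

fn worker(queue: Arc<Mutex<QueueState>>) {
    loop {
        // The lock is held only while popping a single chain.
        let maybe_chain = queue.lock().unwrap().pop_descriptor_chain();
        match maybe_chain {
            // The chain can be processed without holding the lock, and other
            // consumers can pop further chains in the meantime.
            Some(_chain) => { /* process the chain here */ }
            None => break,
        }
    }
}
```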
The trickier part is how to handle the equivalent of go_to_previous_position. We have seen several use cases where we first need to pop a descriptor chain from the queue, and then, while processing it, realize there's a blocker somewhere (for example, the backend cannot currently execute an operation for some reason), so we cannot mark the chain as used. This means we either need to return it to the queue or store it somewhere else. The former is complicated by race conditions in multi-threaded scenarios, while the latter generates unwanted extra complexity. One straightforward way to address this is to add a companion trait to QueueStateT (named along the lines of QueueStateOwnedT: QueueStateT, or something like that) which stands for an object that's owned by a single thread and can thus safely and directly implement operations such as go_to_previous_position and iter (a rough sketch follows below). Will post an example patch later, and looking forward to any other thoughts and perspectives here.
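For illustration only, the trait split could look roughly like this; the method sets and signatures are assumptions, not a final design:

```rust
// Placeholder type again.
struct DescriptorChain;

// Operations that remain safe when the state is shared behind a lock
// (assuming pop_descriptor_chain lands here, as discussed above).
trait QueueStateT {
    fn pop_descriptor_chain(&mut self) -> Option<DescriptorChain>;
}

// Companion trait for a state owned by a single thread, which is what makes
// rewinding and handing out an iterator safe to expose directly.
trait QueueStateOwnedT: QueueStateT {
    fn go_to_previous_position(&mut self);
    fn iter(&mut self) -> Box<dyn Iterator<Item = DescriptorChain> + '_>;
}
```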