So far, there are just a few async features implemented: UDP sockets and ZTimer are supported.
More features on the wishlist (picking from #63) are:
Should be easy:
- TCP
- UART
- mbox
- CAN (because it is mbox-based)
- client-side gcoap
Looks doable but a bit weird:
GPIO interrupts. The RIOT way of doing GPIO interrupts (set up at init time) disagrees with embedded-hal-async, which expects the edge mode to be set on every await (and also expects level modes, but those should be emulatable). Worse, because the callback cannot be updated later without re-initializing the GPIO, we'll need a static place where the callback lives, so that when the pin is moved between tasks, the next interrupt can be dispatched to the right task.
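The "static place where the callback lives" can be sketched in plain Rust, independent of RIOT. All names here (`PIN_WAKER`, `irq_callback`, `WaitForEdge`) are hypothetical: the interrupt callback registered at init time never changes, and each poll re-registers the current task's waker in the static slot, so whichever task currently owns the pin gets woken.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Static slot for the waker of whichever task currently awaits the pin.
// The IRQ callback below is installed once and never replaced; moving
// the pin between tasks just overwrites this slot.
static PIN_WAKER: Mutex<Option<Waker>> = Mutex::new(None);
static PIN_FIRED: AtomicBool = AtomicBool::new(false);

// Stands in for the callback RIOT would run in interrupt context.
fn irq_callback() {
    PIN_FIRED.store(true, Ordering::SeqCst);
    if let Some(w) = PIN_WAKER.lock().unwrap().take() {
        w.wake();
    }
}

struct WaitForEdge;

impl Future for WaitForEdge {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if PIN_FIRED.swap(false, Ordering::SeqCst) {
            Poll::Ready(())
        } else {
            // Re-register on every poll: when the pin has moved to a new
            // task, its waker simply replaces the previous owner's.
            *PIN_WAKER.lock().unwrap() = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Helper for the standalone demo only: a waker that does nothing.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}
```

In real code the mutex around the slot would have to be interrupt-safe (e.g. a critical section) rather than `std::sync::Mutex`; the slot-overwrite pattern is the point of the sketch.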
Mutex and message (including msg_bus, which is currently the only way to be notified of IP address changes) are tricky: for neither can you get a callback in the originating context. As callbacks are how wakers are triggered, implementing them will need cooperation from the executor. (This is in contrast to what is implemented now, where embassy-executor-riot knows nothing of what is implemented on top.) The fundamental issues are:
A thread can only wait for one mutex at a time. An executor may have different tasks that wait for different mutexes. (Similarly, an application may find itself in a situation where it needs to wait for either of two mutexes).
As long as mutexes are only used like lightweight critical sections (preventing simultaneous access to a given data structure, e.g. while inserting into a linked list), that's all not too bad: for those, even an async environment may just block on them. (Maybe the Rust lint for holding a mutex across await points is even user-configurable and could apply to our Mutex locks.) However, our use of mutexes in RIOT is mixed -- for example, SUIT has a worker lock that is held for a full firmware update.
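One hedged way to keep the "lightweight critical section" case honest is a scoped-lock helper (a sketch; `with_lock` is a hypothetical name, and `std::sync::Mutex` stands in for RIOT's mutex): because the guard never escapes the closure, the lock structurally cannot be held across an await point, so briefly blocking the executor thread stays acceptable.

```rust
use std::sync::Mutex;

// Expose a mutex to async code only through a scoped closure.
// Blocking here is fine by contract: the section is short, and the
// guard cannot outlive the closure, so it is never held across .await.
fn with_lock<T, R>(m: &Mutex<T>, f: impl FnOnce(&mut T) -> R) -> R {
    let mut guard = m.lock().expect("lock poisoned"); // brief block, by contract
    f(&mut guard)
}
```

This does nothing for the SUIT-style long-held locks, which is exactly why those need executor cooperation instead.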
Messages are queued, or even delivered only on wait. Even if a task could make the executor wait for its requested message while the task is idle, a task -- unlike a thread -- cannot guarantee that its executor is idle whenever it is, so tasks will practically need queues more often than not. But queue entries are accessible only in the order they come in (except for responses to sent messages: sending a message and awaiting the response might be straightforward to implement asynchronously). So when one task is awaiting messages of type A, and task B has generally ordered messages of type B but is currently doing some work in between (like working off the last message over the network), then when a new message of B's type arrives, followed by one for A, the executor either needs to drop the one for B (in the model where B has no queue) or needs to delay delivering the one for A.
We might introduce some per-task queues where the executor dispatches messages as soon as they arrive (and that can probably be done without changes to RIOT), but this feels a lot like just reimplementing mboxes. So maybe just shift some stuff to mboxes in RIOT?
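To make the per-task-queue idea concrete, here is a RIOT-independent sketch (all names are hypothetical: `Msg` mimics RIOT's type-tagged `msg_t`, and std channels stand in for the per-task queues): an executor-side dispatcher forwards each incoming message to the task that registered for its type, so one task's backlog never blocks or drops another's -- which is, as noted, essentially one mbox per task.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical message with a type tag, loosely modeled on RIOT's msg_t.
#[derive(Debug, Clone, PartialEq)]
struct Msg {
    type_: u16,
    value: u32,
}

// Per-task queues: the dispatcher routes each message by its type tag
// to whichever task registered for it, preserving per-task ordering.
struct Dispatcher {
    routes: HashMap<u16, Sender<Msg>>,
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher { routes: HashMap::new() }
    }

    // A task registers interest in one message type and gets its own queue.
    fn register(&mut self, type_: u16) -> Receiver<Msg> {
        let (tx, rx) = channel();
        self.routes.insert(type_, tx);
        rx
    }

    // Run by the executor as messages arrive; unroutable messages are
    // handed back instead of silently dropped.
    fn dispatch(&self, msg: Msg) -> Result<(), Msg> {
        match self.routes.get(&msg.type_) {
            Some(tx) => tx.send(msg).map_err(|e| e.0),
            None => Err(msg),
        }
    }
}
```

With this shape, a message for the busy task B waits in B's own queue while A's message is delivered immediately -- the scenario above no longer forces a drop-or-delay choice.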
Support for netreg should be relatively straightforward -- it'd just depend on MODULE_GNRC_NETAPI_MBOX or MODULE_GNRC_NETAPI_CALLBACKS to be active, which is a reasonable requirement.