OpenAMP Application Services Subgroup Meeting Notes 2020
Agenda:
- App-services architecture and functionality overview (Dan Milea, 15 min)
- MMIO and MSI discussion and demo (Josh Pincus, 30 min)
- Call for input on open source reference creation and on the meeting format going forward (all, 15 min)
Attendees:
- Dan Milea (WR)
- Etsam Anjum
- Joshua Pincus
- Maarten Koning
- Stefano Stabellini
- Tomas Evensen
- Nathalie Chan King Choy
Action items:
- Dan & Josh to send slides
- Stefano to start a thread w/ the folks who are working on shared memory & VirtIO
- Recording link (I am not sure how long before the recordings expire or I will hit my storage limit, so if you need to catch up by watching the recording, please download it in the next couple weeks)
Notes:
- Dan
- Slides
- Recap: Why we're interested in doing VirtIO
- Enables IP reuse
- Ideally solution would scale from homogeneous CPU clusters (e.g. hypervisor based deployments) to heterogeneous clusters (e.g. OpenAMP deployments)
- Acknowledge that adding SW increases code size
- VirtIO solution is meant to enhance or work alongside RPMsg
- No hypervisor w/ OpenAMP
- Hypervisor-less VirtIO
- PMM = physical machine monitor, used instead of a VMM (virtual machine monitor)
- The VirtIO spec (likely version 1.2) includes shared memory region definitions that allow devices & drivers to share a memory region: prepared & shared by the device, used by the driver
- The POC has virtqueues and buffers, which map onto regular OpenAMP shared memory configurations (see the sketch after this section)
- Hypervisor-less VirtIO POC
- LKVM small code base
- LKVM-based notifications could be replaced by other notification mechanisms on an OpenAMP platform
- There's work to do to update to the VirtIO 1.1 MMIO spec
- Virtual devices only: No device pass-through yet
- There can be several shared memory regions, but for now, there's one that holds everything that needs to be shared, which maps really well to OpenAMP configurations
- Next steps
- Goal is to map closely to OpenAMP deployment
- VMM based notifications -> HW notifications
- Homogeneous CPU setup -> Heterogeneous CPU setup
- Performance: related to notifications.
- Works like regular VirtIO
- Nothing spectacularly different, so will pass on to Josh for VirtIO MSI investigation ;)
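A minimal sketch of the single pre-shared region described above, under assumed values (the base address, offsets, and struct names here are illustrative, not from the POC): one carve-out holds the virtio-mmio register window, the virtqueues, and the data buffers, which is what makes it map so well onto OpenAMP shared-memory configurations.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical carve-out agreed on by both sides (e.g. via device tree). */
#define SHMEM_BASE      0x90000000UL
#define SHMEM_SIZE      0x00100000UL   /* 1 MiB shared between the sides */

#define VIRTIO_MMIO_OFF 0x00000000UL   /* device register window          */
#define VRINGS_OFF      0x00001000UL   /* virtqueue descriptors & rings   */
#define BUFFERS_OFF     0x00010000UL   /* payload buffers                 */

struct shmem_layout {
    uintptr_t mmio;     /* virtio-mmio config space, backed by the PMM    */
    uintptr_t vrings;   /* split virtqueues live here                     */
    uintptr_t buffers;  /* descriptors must point only into this window   */
    size_t    size;
};

static const struct shmem_layout layout = {
    .mmio    = SHMEM_BASE + VIRTIO_MMIO_OFF,
    .vrings  = SHMEM_BASE + VRINGS_OFF,
    .buffers = SHMEM_BASE + BUFFERS_OFF,
    .size    = SHMEM_SIZE,
};
```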
- Joshua
- Slides
- Goals: Migrate to emulation-free environment
- Dan's POC statically defines some of the configuration details
- Joshua focused on step 2: improving the interrupt latency of the MMIO transport
- Lower performance than its PCI counterpart, 3 main reasons why (see the sketch after this section):
  - Only 1 wired interrupt for the whole device
  - Largest problem: ACK-ing an interrupt requires 2 emulated operations (expensive)
  - Notification of events also requires emulated operations
- VirtIO on x86 can use PCI or MMIO. Most people use PCI b/c of better performance: it can ACK w/ the APIC directly.
- VirtIO on Arm mostly relies on MMIO, so it is stuck w/ the legacy problems above.
- Intel developed MMIO + MSI prototype
- Intel provided patches to Linux VirtIO MMIO driver & VirtIO, but not to QEMU or LKVM
- Demo: Went with 1:1 relationship between vectors & queues
- Major improvement was from receiving an MSI and handling it
- Test: Netperf TCP_RR (request/response) shows huge improvement
- Using VirtIO networking device w/ PCI
- VirtIO MMIO w/ MSI on par with PCI: Only the notify requires emulation. -> Ties in to Dan's work
- Without MSI, the ~30 s test generates ~1.2 M traps
- Demo #1: without MSI support
- Linux guest running on top of LKVM, Yocto distribution
- `cat /proc/interrupts`: #5 is the networking device
- Getting ~11K round-trip transfers, ~663,000 interrupts
- Demo #2: MMIO transport w/ our changes
- `cat /proc/interrupts` shows the MSIs on #26-28
- Get roughly 80-90% improvement. ~19K round-trip transfers
- Interaction w/ guest & emulation was just for configuration
- Next: Do HW notify so that guest driver can indicate updated queues to device via HW mechanism, so can get out of the emulator (emulator is expensive)
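As a rough illustration of the cost Josh described, here is a hedged sketch contrasting the legacy single-interrupt ACK path (two emulated register accesses per interrupt) with an MSI-per-queue path that needs none. Only the two register offsets come from the virtio-mmio spec; the function and variable names are hypothetical, not from the prototype.

```c
#include <stdint.h>

#define VIRTIO_MMIO_INTERRUPT_STATUS 0x060  /* read: which events fired  */
#define VIRTIO_MMIO_INTERRUPT_ACK    0x064  /* write: acknowledge events */

static volatile uint32_t *mmio;             /* assumed mapped elsewhere  */

/* Legacy path: one wired interrupt for the whole device. Each register
 * access below traps out to the emulator (the expensive part). */
void legacy_virtio_mmio_irq(void)
{
    uint32_t status = mmio[VIRTIO_MMIO_INTERRUPT_STATUS / 4]; /* trap #1 */
    mmio[VIRTIO_MMIO_INTERRUPT_ACK / 4] = status;             /* trap #2 */
    /* ...then scan all virtqueues to find out what actually changed... */
}

/* MSI path with the demo's 1:1 vector-to-queue mapping: the vector itself
 * identifies the queue and no emulated ACK is required. */
void msi_virtio_irq(unsigned int queue)
{
    /* process only this queue's used ring; no register traps at all */
    (void)queue;
}
```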
- Maarten: We are setting up for LKVM doing shared memory horizontally
- LKVM is not using KVM or virtualization. With MSI, we can achieve the same performance as the PCI transport horizontally between runtimes.
- Tomas: This is x86-based -> when would you move over to Arm, to see how things would improve?
- Joshua:
- Haven't done experiment yet on Arm (in next few weeks). Want to show VxWorks as a guest.
- I think we'll show parity b/c Intel's motivation was Arm
- We're showing the improvement on x86 that they got on Arm
- Tomas: How would this fit into OpenAMP? Would this be a parallel universe w/ VirtIO, or an integration?
- Dan:
- Simplest use case: 1:1 mapping with OpenAMP
- Either go w/ the performance penalty of a single notification source & not use what Josh showed
- Or use VirtIO with OpenAMP with new notification sources, at the cost of increased code size
- Logical layout is pretty similar
- Could even keep the virtio-based RPMsg exchange & have another virtio device for enhanced IPC
- Or could remove either of those, or keep just one
- Maarten: Would be like low-level APIs. When you get to the application level, ___
- By spending that extra SW infrastructure & memory, it's OK b/c you have apps written using standard interfaces that are sitting on top
- Tomas: Do you have a sense of footprint (e.g. socket interface or TCP/IP on top)?
- Maarten: Depends on runtime. IP stack in Zephyr < Linux or VxWorks
- Tomas: Agree you can pay a certain amt of penalty for this use case, but not Linux-level penalty
- Maarten:
- Zephyr-like magnitude
- Higher-level standard APIs instead of lower-level "fake" mappings that require you to engineer every invocation. Low level = syntax compatibility; higher level -> semantic compatibility
- Stefano: Shared memory is not spec-compliant, so recommend to engage w/ VirtIO community
- Maarten: There's the spec for 1.1, but then if you compile in GitHub, you get more spec & it's in there. Wonder if that will become 1.2. https://github.com/oasis-tcs/virtio-spec
- Stefano: Not aware of those changes. Aware that there isn't a way to choose a window for memory at the front-end side to be used for VirtIO that is spec-compliant.
- Stefano: MSI vs interrupts - great that you have #s b/c earlier discussions were very hand-wavy
- Maarten: Implementation is smaller than using all the PCI emulation infrastructure
- Stefano: VirtIO MMIO is not really going away. Just a question of whether we need MSI for performance or not. Was on a Xen deployment & it wasn't obvious that we need MSI. Would love to share these #s.
- Tomas: SystemDT work on defining shared areas. Would be great if aligned.
- Maarten: This is a use case that would benefit
- Tomas: SystemDT should have a common way to describe this for everyone; each SoC vendor can do it their way at the low level & just have a vendor-specific back-end for Lopper
- Maarten: If we consider the VirtIO spec that's on GitHub instead of the PDF, then we're on a path to shared memory configuration
- Tomas: When could we share on OpenAMP GitHub?
- Maarten:
- Have to decide how we want to try to influence LKVM as a PMM, for hypervisor-less configurations. For hypervisor-less you wouldn't even have to launch KVM.
- Could do in separate branch or repo
- The maintainers are at Arm. We haven't reached out to them yet.
- Maybe we should do that & circle back with the group.
Attendees:
- Arnaud Pouliquen
- Bill Mills (Linaro)
- Dan Milea (WR)
- Ed Mooring
- Joshua Pincus
- Loic Pallardy
- Maarten Koning
- Mingkai Hu
- Poonam
- Tomas Evensen
- Stefano Stabellini
Notes:
- Mandate of app-services working group
- Find easy ways for us to combine active elements to use resources that are local or remote in an OpenAMP-enabled, heterogeneous system
- What the group wanted to pursue:
- Console sharing
- File sharing
- IPC
- Wind River demonstrated Isled daemon running on Linux at Linaro Connect SAN19
- Able to serve up resources to a more resource constrained execution environment (e.g. file r/w, console redirection to Linux)
- WR was planning to open source that so it could be used as the basis for a daemon representing a resource-constrained execution environment & presenting resources to it in an easy way, so that apps could focus on higher-level APIs & the implementation of the underlying SW serving up those resources could be remote from the resource-constrained environment
- Today, Dan from WR presented slides about the history & current thinking & opportunity for alignment with other activities going on in Linaro
- Isled (2019): sharing resources from rich OS (e.g. Linux) to resource-constrained (e.g. RTOS)
- Building blocks: Shared memory, OpenAMP, RPMsg
- Services trying to offer: serial console, file system access, port forwarding (for TCF debugging infra)
- On top of that: OpenAMP/RPMsg
- On top of RPMsg: Custom extensible RPC mechanism
- Decided that wanted to align more w/ standards based approach
- Isled evolution (2020)
- Can replace custom RPC framework w/ something standard: e.g. VirtIO
- Now Linux daemon can be based on something like LKVM
- LKVM: Small tool that allows users to create & control KVM guests. Runs on Intel, Arm, MIPS, etc. Really small.
- Suggest to OpenAMP App services group to use VirtIO for framework
- Standard
- Allows IP reuse
- Exploration activities
- Hypervisor-less VirtIO
- VirtIO MMIO MSI Interrupts
- Linaro Stratos project aligns w/ some work we're trying to do
- Overlaps:
- STR-14: Unprivileged VirtIO device back-ends, pre-shared memory
- STR-9: VirtIO performance optimizations
- STR-11: safety/security use cases
- Attendee overlap between Stratos meetings & OpenAMP meetings
- Major differences
- Stratos still deals w/ hypervisors & tries to improve the infra on top of hypervisors. We're interested in pure VirtIO without a hypervisor, and instead of using RPMsg as the unifying API, using VirtIO
- Stefano:
- Attending Stratos meetings & agree on overlap w/ STR-9 & 14.
- Context: If you try Linux on the other side, it will not work b/c there is no easy way to have Linux use pre-shared memory for the front-end. Missing some plumbing. Qualcomm submitted a patch mechanism, but there was some resistance upstream. Need to get an understanding of all the virtIO use cases - a goal of Stratos.
- STR-14 addresses what you're interested in for OpenAMP
- STR-9 translates, but more complicated. About dynamically shared memory in secure way. Applicable to OpenAMP, but will require additional interfaces to share the memory.
- Dan:
- LKVM straightforwardness allows easy experimentation (order of magnitude easier than QEMU)
- Stefano:
- LKVM is a good start. Expecting there won't be any tie to any specific VMM.
- Maarten:
- This is a pivot b/c w/ this approach, if you were an OS company or OSS project that wanted to run beside Linux & provide higher-level app services, then you would enable VirtIO in your runtime, put everything on top of VirtIO (console, file system, etc.) & send it over the MMIO transport to a "pseudo-VMM" running on a remote OS like Linux.
- Tomas:
- Virtualization & AMP are very similar, esp in Embedded.
- Sensitive to code size, so I'm all for going this direction, especially using DT as a way of configuring.
- Has to be scalable.
- Maarten:
- Also in larger systems where you want to have API compatibility, don't think that should be entangled w/ OpenAMP implementation at low level
- This is more about enabling the services to have higher degree of middleware & app compatibility w/ SW
- Think as long as we don't entangle VirtIO layer in OpenAMP, will meet scalability requirement
- Bill:
- Thought we should do more w/ direct use of VirtIO
- Small RTOS + Small RTOS is a use case
- Beefing up VirtIO between 2 processors w/o a hypervisor between them
- Uncomfortable w/ how you're presenting it as a big shift. What I think we should be doing is opening up multiple use cases.
- Beefing up VirtIO doesn't mean we can't continue to use RPMsg & potentially enhancing some of these so they can also run on top of VirtIO or RPMsg, depending on need of the use case
- Distinction you're drawing of DT vs. resource table: Both have advantages in different use cases
- Not criticizing what you're trying to do, just how you're presenting it. Think we should not say we're throwing away the previous work, but beefing up VirtIO & will have advantage to multiple use cases, incl. traditional ones.
- Maarten:
- That makes sense.
- Pivot is more around WR open sourcing Isled, as opposed to leveraging LKVM as a pseudo-VMM to VirtIO. Maybe that's more of an internal pivot than for OpenAMP group.
- Goals for what group is trying to achieve stays the same
- Bill
- Makes sense
- And maybe we enhance LKVM to understand RPMsg
- Dan
- User space -> linking to OpenAMP should be trivial
- VirtIO approach as superset
- Definitely a way to include RPMsg in this direction
- Maarten:
- LKVM code base is really just there for VirtIO termination. Like idea of enabling LKVM to also be RPMsg termination.
- Dan:
- LKVM is a command line tool w/ basic building blocks of a VMM.
- Bill:
- If you were writing a custom app that wanted to talk to a custom component in your RTOS, this wouldn't be applicable unless you wanted to create a pipe?
- Maarten:
- VirtIO vsock could create a socket in Linux to connect to the virtIO vsock in your runtime, like port forwarding
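For illustration, connecting to a vsock service from the Linux side looks like ordinary socket code; the CID and port below are made-up values for a hypothetical peer runtime, not from any demo discussed here.

```c
#include <stdio.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_vm addr = { 0 };
    addr.svm_family = AF_VSOCK;
    addr.svm_cid    = 3;     /* assumed CID of the peer runtime */
    addr.svm_port   = 5000;  /* assumed service port            */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    write(fd, "ping\n", 5);  /* application data flows end to end */
    close(fd);
    return 0;
}
```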
- Bill:
- Trying to understand the scope of what you're suggesting. Do we also need library that applications would use?
- Stefano:
- Typically for app-specific data exchange, there needs to be some common transport (e.g. RPMsg, vsock)
- Highlights that this is not a replacement for RPMsg. Work in Stratos born out of different concerns, like security. It just happened to align extremely well w/ OpenAMP b/c it's so similar to Embedded hypervisor case.
- RPMsg still has a place & for application-specific data exchanges, could still be the way to do it.
- Bill:
- Or, direct use of vsock to VirtIO, leaving LKVM out of it?
- Stefano:
- Typically need at least 1 point of contact for VirtIO MMIO emulation, so you need LKVM involved in setup of communication channel
- Bill:
- In RPMsg, we're circling around to where applications can create their own endpoints & do comm. Just trying to correlate those 2 points.
- Think LKVM approach is good for providing generalized OS services to a remote core. Just trying to understand what else it is/isn't
- Stefano:
- There are ways to work around limitation
- VirtIO MMIO is a simpler version of VirtIO PCI. But there is still a config space that looks a bit like PCI config space, so you go through that to discover all the VirtIO protocols that are available. It usually covers 1-2 pages, so usually there is just 1 VirtIO back-end provider per LKVM instance. You could have 2 LKVM instances running if you provide 2 different VirtIO MMIO regions. Or you could have 1 with VirtIO net & 1 with VirtIO block, each w/ a different instance & different config space - theoretically possible.
- The single point of concentration of these services is the VirtIO MMIO configuration
- Joshua:
- You could use the DTB or something like it to provide glue. e.g. 2 RTOSes & 1 endpoint. DTB communicated between them conveys the memory, type of device & all the rest. Then when guest starts talking to memory, it doesn't care who it's talking to, as long as the DTB describes the memory in question.
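A sketch of Joshua's "DTB as glue" point, with hypothetical addresses: both sides agree on a table of virtio-mmio windows (in a real system, device tree nodes compatible with "virtio,mmio"), and the driver side never needs to know who implements the device behind the memory. Shown here as a plain C table for brevity.

```c
#include <stdint.h>
#include <stddef.h>

struct virtio_mmio_dev {
    const char *name;   /* DT would use the node name + compatible string */
    uintptr_t   base;   /* config-space window for this device            */
    size_t      size;
    unsigned    irq;    /* wired interrupt or base MSI vector             */
};

/* Two separate config spaces, e.g. one back-end serving virtio-net and
 * another serving virtio-blk, as in the discussion above. */
static const struct virtio_mmio_dev devices[] = {
    { "virtio-net", 0x90100000UL, 0x200, 26 },
    { "virtio-blk", 0x90101000UL, 0x200, 27 },
};
```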
- Ed:
- Concern about how far down this scales
- Was talking to someone doing OpenAMP on a core talking to Linux that has 64K of RAM total, and it barely fits right now
- Maarten:
- The APIs & goals we set for app-services were for enabling those higher-level pieces of SW that want to use sockets & want to do higher-level functions. Those higher level systems have a larger code base.
- Don't think the app-services goals displace our low level API capabilities. As Tomas said, we don't want to be entangled. When you want the higher level functions, you add them.
- Joshua
- Freedom to swap out the transport: use RPMsg under the hood b/c it's lightweight & small, or vsock, or something else; you should be able to swap it out & keep the higher-level functionality more or less the same. You can choose to use 1 RPMsg channel to convey the info instead of multiple queues & interrupts.
- Loic
- For VM, it's good direction
- But with a remote processor that is limited in terms of processes, memory, etc., RPMsg was a good solution for multiplexing services
- Know there is lot of discussion on VirtIO by Stratos project. Would like to see where we are going. Not sure that co-processor use case is well understood by everybody.
- All this development will be difficult to apply to platforms where very limited HW resources
- Stefano:
- Is there any very small VirtIO front-end impl somewhere that we know of?
- Loic:
- Not sure. I implemented a VirtIO console on our processors a few yrs ago. Need to duplicate memory areas. Need to add a new protocol for multiplexing. RPMsg was 1 port, 1 set of processes & mailboxes; on top of it was dynamic discovery of the services. That was good b/c when changing the firmware loaded on the co-processor, we were able to adapt the services dynamically w/o changing the Linux kernel
- We were able to de-couple updates of the co-processor & Linux. That was important in set-top boxes using RPMsg for video co-processors: when you want to add a new decode standard, you are able to deploy changes asynchronously on the different sides.
- Need to understand how this could work w/ VirtIO. Seems to be static definition of VirtIO link & services on top.
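The dynamic discovery Loic describes is RPMsg's name-service announcement: when the co-processor firmware creates an endpoint, it broadcasts the service name and Linux binds the matching driver, with no kernel change when the firmware changes. This sketch follows the on-wire layout used by the Linux and OpenAMP implementations.

```c
#include <stdint.h>

enum rpmsg_ns_flags {
    RPMSG_NS_CREATE  = 0,  /* service appeared  */
    RPMSG_NS_DESTROY = 1,  /* service went away */
};

/* Name-service announcement, sent on the reserved NS endpoint (addr 53). */
struct rpmsg_ns_msg {
    char     name[32];  /* service name, e.g. "rpmsg-tty"        */
    uint32_t addr;      /* source endpoint address on the remote */
    uint32_t flags;     /* enum rpmsg_ns_flags                   */
} __attribute__((packed));
```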
- Maarten: Next steps:
- Thinking about requirements for app services framework that might leverage VirtIO & enumerate, so we can drive towards agreement on where to focus
- What use cases & what not
- Let's think about & discuss at next call
- Bill: Do you have any code running like this today?
- Maarten: We can demo 9p file system and console running over VirtIO with LKVM at next call
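For reference, a 9p-over-virtio mount on the guest side is a single syscall; the share tag and mount point below are assumptions for illustration, not details of the planned demo.

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* equivalent to: mount -t 9p -o trans=virtio hostshare /mnt */
    if (mount("hostshare", "/mnt", "9p", 0, "trans=virtio") != 0) {
        perror("mount 9p");
        return 1;
    }
    return 0;
}
```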
- Bill: Is the code on the consumer side open source, or tied in to VxWorks?
- Maarten:
- Potentially portable to different runtimes.
- Leveraging BSD code
- Bill: Is that side currently open source licensed?
- Maarten: BSD licensed
Attendees: Arnaud Pouliquen (ST), Clement Leger (Kalray), Ed Mooring (Xilinx), Etsam Anjum (MGC), Grant Likely (Arm), Ioannis Glaropoulos, Mark Grosen (TI), Nathalie Chan King Choy (Xilinx), Sebastien Le Duc, Tomas Evensen (Xilinx), Xiang Xiao (Xiaomi), Maarten Koning (WR), Dan Milea (WR)
Agenda:
- Design options: native, RPMsg, virtIO drivers
- Update on isled - in the process of being open-sourced
- BUD20
Biggest app services needs:
- Remote file access
- Remote console
- Proxy ports (e.g. debug)
- Messaging APIs
Arnaud P:
- The number of virtIO devices supported by the h/w would be a limiting factor
- RPMsg-based protocol family - AF_RPMSG was a previous effort (circa 2016)
- resource constrained systems would benefit from using a proxy implementation instead of virtIO;
- the memory footprint would be reduced
- virtIO introduces a lot of complexity in systems which lack a hypervisor infrastructure
- virtIO may take a lot of bandwidth on resource constrained h/w
- proxy makes sense; can use the virtIO protocol as a starting point
- what about systems without shared memory interconnect?
- It's possible to have an RPMsg implementation on top of a serial link
- RPMsg is a shim layer which relies heavily on virtIO which is based on shared memory
- replacing the shared memory transport would require significant effort
- if RPMsg is used as protocol, it's impossible to detect when/if the other side is restarted
- instead of virtIO we could create an ethernet proxy for isled (see the sketch after this list)
- isled would instantiate a tun/tap driver on Linux and push packets over RPMsg
- the remote would need a custom driver to process the ethernet packets
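A minimal sketch of that proxy idea, assuming an rpmsg character device (e.g. /dev/rpmsg0, via the Linux rpmsg_char driver) is already bound to the remote endpoint. The interface name and buffer size are illustrative; a real proxy would also fragment frames to the RPMsg payload limit and forward traffic in both directions.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
    /* TAP interface that Linux sees as a normal ethernet NIC */
    int tap = open("/dev/net/tun", O_RDWR);
    if (tap < 0) { perror("tun"); return 1; }

    struct ifreq ifr = { 0 };
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
    strncpy(ifr.ifr_name, "isled0", IFNAMSIZ - 1);  /* hypothetical name */
    if (ioctl(tap, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    /* endpoint to the remote core, created beforehand via rpmsg_char */
    int ept = open("/dev/rpmsg0", O_RDWR);
    if (ept < 0) { perror("rpmsg"); return 1; }

    /* host -> remote only, for brevity; frames larger than the RPMsg
     * payload limit (~500 bytes) would need fragmentation */
    char frame[1600];
    for (;;) {
        ssize_t n = read(tap, frame, sizeof(frame));
        if (n <= 0) break;
        if (write(ept, frame, (size_t)n) != n) break;
    }
    return 0;
}
```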
BUD20: there will be an App services working group at Linaro BUD20 on Wed, March 25, 2020. Schedule to be defined.