
gralloc: Add support for IMPLEMENTATION_DEFINED #44

Open · wants to merge 1 commit into base: lineage-18.1

Conversation

@rothn commented Nov 20, 2022

Currently, gralloc lacks support for
HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED, which other implementations treat as NV12.

Change gralloc to support HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED as NV12 in line with these other implementations. Validated with the libcamera Android HAL.

@jmondi left a comment

I'm not reviewing, just commenting that I'm stuck at this point as well...

As I understand it, IMPLEMENTATION_DEFINED is a wild-card format that each platform has to define, usually to a format the graphics device can display. The gralloc module does the translation to a DRM_FORMAT when it receives an allocation request, and the camera HAL maps it to a compatible format (likely the same format), so that higher layers can use IMPLEMENTATION_DEFINED as a wild card and the platform-specific layers do the translation (of course, they should agree on what to translate it to).

I'm now struggling to understand how Waydroid, being platform-agnostic, handles this platform-specific mapping, and I'm a bit stuck staring at https://github.com/waydroid/android_external_minigbm trying to find out what backend it uses and how I should infer the correct mapping...

Maybe @aleasto could help clarify this ?

@aleasto (Member) commented Dec 7, 2022

We use hardware/waydroid/gralloc by default on DRI platforms. We also build the gbm_mesa_driver of https://github.com/waydroid/android_external_minigbm which is similar but exposes some more modern features used in upstream mesa. We will probably switch to the latter by default in the future.

@jmondi commented Dec 7, 2022

thanks @aleasto, but then I'm missing where gbm_bo_create() is implemented, if not in external-minigbm (which, confusingly, contains the gbm_mesa_driver as well as other backends).

I'm still trying to figure out what backend is used for my device (PinePhone Pro) and why I get a failure at gbm_bo_create() after I modified gralloc_gbm to map IMPLEMENTATION_DEFINED to DRM_FORMAT_NV12.

@aleasto (Member) commented Dec 7, 2022

Oh that's directly from mesa: https://github.com/waydroid/android_external_mesa3d/tree/lineage-18.1/src/gbm

@aleasto (Member) commented Dec 7, 2022

And yes, minigbm is terribly confusing. It started as an implementation of the gbm interface with different backends for different gpu vendors, and now one of the backends calls into mesa's libgbm instead.

@jmondi commented Dec 7, 2022

Oh that's directly from mesa: https://github.com/waydroid/android_external_mesa3d/tree/lineage-18.1/src/gbm

yeah, but if I'm not mistaken that's a bare clone of mesa meant to be built externally and not integrated into the AOSP build?

@aleasto (Member) commented Dec 7, 2022

No, mainline mesa nowadays has the capability of building inline in AOSP. That's the mesa we're using.

@jmondi commented Dec 7, 2022

No mainline mesa nowadays has the capability of building inline in AOSP. That's the mesa we're using.

that's my understanding as well: it has to be built with the NDK and manually deployed. How can it be in use if I haven't manually deployed it on my device? :)

edit: wrong quote

@aleasto (Member) commented Dec 7, 2022

No, it can build both against the NDK and inside AOSP.
You can find mesa in the current waydroid images at /vendor/lib64/dri/libgallium_dri.so and /vendor/lib64/libgbm_mesa.so.
If you're running waydroid on a DRI platform that's what you're currently using.

@jmondi commented Dec 7, 2022

btw: cross-referencing https://bugs.libcamera.org/show_bug.cgi?id=172

@jmondi commented Dec 12, 2022

@nroth waydroid/android_external_mesa3d#4, as I don't seem to be able to mention you there.

@pinchartl

And yes, minigbm is terribly confusing. It started as an implementation of the gbm interface with different backends for different gpu vendors, and now one of the backends calls into mesa's libgbm instead.

Is that used at all? I had a look at Waydroid's gralloc implementation with Jacopo, and at least on the PinePhone Pro, the implementation being loaded at runtime is gbm_gralloc, which links directly to mesa's libgbm. When is minigbm's gralloc module used?

@aleasto (Member) commented Dec 14, 2022

And yes, minigbm is terribly confusing. It started as an implementation of the gbm interface with different backends for different gpu vendors, and now one of the backends calls into mesa's libgbm instead.

Is that used at all? I had a look at Waydroid's gralloc implementation with Jacopo, and at least on the PinePhone Pro, the implementation being loaded at runtime is gbm_gralloc, which links directly to mesa's libgbm. When is minigbm's gralloc module used?

It's not used at the moment.
You could still manually select it by setting ro.hardware.gralloc=minigbm_gbm_mesa.

@pinchartl

OK, thanks for the confirmation. What was the reason for adding a mesa libgbm backend to minigbm (by @rsglobal in waydroid/android_external_minigbm@4957355)? Was it envisioned that minigbm would be used at that point, with a later change of plans? What is the advantage of using that backend compared to using gbm_gralloc directly, which also uses mesa's libgbm with fewer levels of indirection?

@rothn (Author) commented Dec 14, 2022 via email

@rsglobal (Contributor)

What was the reason for adding a mesa libgbm backend to minigbm

  1. gbm_gralloc is very primitive and not enough to cover all the cases for cameras and codecs. Also, Rob Herring will likely never accept new features there, only critical fixes.
  2. Minigbm still requires manual per-device format/constraint hardcoding for cameras and codecs, but at least minigbm already has all the necessary helpers for this.
  3. Minigbm has superior APIs (gralloc4, CrosAPI).

@pinchartl

What was the reason for adding a mesa libgbm backend to minigbm

1. gbm_gralloc is very primitive and not enough to cover all the cases for cameras and codecs. Also, Rob Herring will likely never accept new features there, only critical fixes.

Agreed, gbm_gralloc is limited and is likely not a good path forward for cameras and codecs. Part of the reason is that it relies on libgbm from mesa, where gbm stands for graphics buffer allocator, not generic buffer allocation. Mesa's libgbm doesn't support YUV formats. While that could be added, it will probably encounter pushback from upstream, given libgbm's nature (we would have to design YUV support from a graphics point of view, and prove that it is useful for GPUs and display controllers without codecs and cameras).

2. Minigbm still requires manual per-device format/constraint hardcoding for cameras and codecs, but at least minigbm already has all the necessary helpers for this.

3. Minigbm has superior APIs (gralloc4, CrosAPI).

No disagreement that minigbm is the way to go, but I don't think a mesa libgbm backend will help much, as it doesn't support YUV formats.

@rothn (Author) commented Dec 18, 2022

In the long term, it seems like we want to go with minigbm to handle Android gralloc because it's more full-featured. For now, do folks mind continuing to use the thin gralloc_gbm implementation while we get the libcamera pipeline working?

If we use libcamera's HAL, the requirements are simple for us: support MJPEG and NV12. For now, if we use libcamera's HAL, we will HAVE to map IMPLEMENTATION_DEFINED to NV12:

from src/android/camera_capabilities.cpp

		/*
		 * \todo Translate IMPLEMENTATION_DEFINED inspecting the gralloc
		 * usage flag. For now, copy the YCbCr_420 configuration.
		 */
		HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED, {
			{ formats::NV12, formats::NV21 },
			true,
			"IMPLEMENTATION_DEFINED"
		}

@pinchartl

In the long term, it seems like we want to go with minigbm to handle Android gralloc because it's more full-featured. For now, do folks mind continuing to use the thin gralloc_gbm implementation while we get the libcamera pipeline working?

I don't mind at all, but you will need to find a way to provide an NV12 buffer with two planes. Mesa's libgbm doesn't support YUV formats, and upstream is not likely to accept YUV support easily. It may be possible to hack Waydroid's version of mesa, but I don't know how much work that would require, or whether switching to minigbm wouldn't be simpler.

@jmondi commented Dec 19, 2022

Just as a note, last week I tried moving my setup to use minigbm. The minigbm rockchip backend requires out-of-tree kernel modifications to support a custom ioctl that allows buffer allocation on the Rockchip display driver:
https://github.com/intel/minigbm/blob/master/rockchip.c#L217

As I didn't want to backport support for that ioctl, I instead used the dumb_driver backend
https://github.com/waydroid/android_external_minigbm/blob/lineage-18.1/dumb_driver.c
which should work with generic buffers.

The result is that the container starts but I don't see any buffer being drawn to the screen, which makes me wonder: how does rendering work with Waydroid? I see the minigbm backend operating on /dev/dri/card0; does mesa instead operate on the GPU device at /dev/dri/renderD128?

@rothn (Author) commented Dec 19, 2022

@aleasto any thoughts?

@rothn (Author) commented Dec 20, 2022 via email

@jmondi commented Dec 20, 2022

I see several things in gralloc_gbm that make me wonder if YUV support is actually tested.

Look at get_gbm_format(): YCbCr420 formats are mapped to 0 (which afaict is not a valid GBM_FORMAT_ entry) and YV12 is mapped to an RGB format; I suspect the reason is that it's the closest format supported by mesa.

	case HAL_PIXEL_FORMAT_YV12:
		/* YV12 is planar, but must be a single buffer so ask for GR88 */
		fmt = GBM_FORMAT_GR88;

with "adjustments" to the buffer size to include the extra CbCr planes (not sure about the width halving, though)

	/* Adjust the width and height for a GBM GR88 buffer */
	if (handle->format == HAL_PIXEL_FORMAT_YV12) {
		data.width /= 2;
		data.height += handle->height / 2;
	}

and the fact that you see a greyscale image might mean that only the Y plane is rendered, as the extra CbCr plane doesn't exist for mesa at rendering time?

Anyway, it seems all too fragile to consider it a viable way forward

@pinchartl

Mesa's libgbm doesn't support YUV formats, and upstream is not likely to accept YUV support easily. It may be possible to hack Waydroid's version of mesa, but I don't know how much work that would require, or whether switching to minigbm wouldn't be simpler.

Correct me if I'm wrong, but it seems like the existing code for gbm_gralloc already supports YUV. For example, search for HAL_PIXEL_FORMAT_YCrCb_420_SP and HAL_PIXEL_FORMAT_YV12 in gralloc_gbm.cpp.

gbm_gralloc has limited YUV support implemented as a hack, but libgbm doesn't. Note how HAL_PIXEL_FORMAT_YCbCr_422_SP, HAL_PIXEL_FORMAT_YCrCb_420_SP and HAL_PIXEL_FORMAT_YCbCr_420_888 are all translated by get_gbm_format() to 0 (which, in this context, corresponds to GBM_BO_FORMAT_XRGB8888, further translated in the mesa libgbm implementation to GBM_FORMAT_XRGB8888 by format_canonicalize()). Translating YUV formats to RGB formats that provide the correct buffer size in bytes may work in some cases, but it has two important drawbacks:

  • As the allocator is implemented on the GPU side by libgbm, if the GPU uses tiled formats, using the allocated buffer for YUV may result in various problems depending on the platform.
  • There is no way to allocate multi-planar formats with one dmabuf per plane.

Furthermore, if you look at how HAL_PIXEL_FORMAT_YCbCr_422_SP and HAL_PIXEL_FORMAT_YCrCb_420_SP are handled in gralloc_gbm, they are both mapped to the same GBM format (GBM_BO_FORMAT_XRGB8888) and the same bpp value (1). The buffer will thus be twice as big as required for 422_SP, wasting memory (it's even worse for 420_SP). This is likely easy to fix, but it seems to indicate that YUV support in gralloc_gbm isn't, and has never been, a first-class citizen.

@pinchartl

I see several things in gralloc_gbm that make me wonder if YUV support is actually tested.

Look at get_gbm_format(): YCbCr420 formats are mapped to 0 (which afaict is not a valid GBM_FORMAT_ entry)

In mesa's libgbm implementation, 0 is considered to be GBM_BO_FORMAT_XRGB8888 and is mapped to GBM_FORMAT_XRGB8888 by format_canonicalize().

and YV12 is mapped to an RGB format; I suspect the reason is that it's the closest format supported by mesa.

I wouldn't even call it close :-)

	case HAL_PIXEL_FORMAT_YV12:
		/* YV12 is planar, but must be a single buffer so ask for GR88 */
		fmt = GBM_FORMAT_GR88;

with "adjustments" to the buffer size to include the extra CbCr planes (not sure about the width halving, though)

	/* Adjust the width and height for a GBM GR88 buffer */
	if (handle->format == HAL_PIXEL_FORMAT_YV12) {
		data.width /= 2;
		data.height += handle->height / 2;
	}

and the fact that you see a greyscale image might mean that only the Y plane is rendered, as the extra CbCr plane doesn't exist for mesa at rendering time?

I don't think that's related. Rendering is completely decoupled from capture.

Anyway, it seems all too fragile to consider it a viable way forward

That I agree with. It may be possible to get it working with further hacks on selected platforms and for selected use cases, but I think it's a dead end.

5 participants