How to render depth information? #1

Open
kvuong2711 opened this issue Aug 25, 2024 · 9 comments

Comments

@kvuong2711

Hi Lorenz,

Thanks for the cool work! I know "rendering depth" is on your to-do list, but I'm wondering if you have any quick pointers on how to implement this function inside your current code base (e.g., is it something relatively simple to do).

Thank you for your help!

@lolleko (Owner) commented Aug 25, 2024

Hey Khiem,

Unfortunately, it is not straightforward.

To get accurate and usable depth results from Unreal Engine, you need to use a custom shader to write the scene depth values into textures. I did this in a previous project; you can find the relevant code here: https://github.com/search?q=repo%3Aunrealgt%2Funrealgt%20depth&type=code
The custom post-process materials/shaders encode the perspective or planar depth values in an RGB image.
A Python script, which is also in the repo/search I linked, can then be used to extract millimeter depth information from the RGB image.
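Roughly, the packing amounts to something like this (just a sketch, assuming the 24-bit millimeter encoding described further down in this thread; the actual materials and extraction script in unrealgt may differ):

```python
import numpy as np

def encode_depth_to_rgb(depth_mm: np.ndarray) -> np.ndarray:
    """Pack integer millimeter depth into an HxWx3 uint8 image (24 bits, up to ~16.7 km)."""
    d = depth_mm.astype(np.uint32)
    r = (d >> 16) & 0xFF  # assumed: red holds the most significant byte
    g = (d >> 8) & 0xFF
    b = d & 0xFF          # blue holds the least significant byte
    return np.stack([r, g, b], axis=-1).astype(np.uint8)
```

In the real pipeline this packing happens on the GPU inside the post-process material; the Python side only has to invert it.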

In theory something similar should be possible in this tool as well. However,
this tool only generates panoramic images, which it accomplishes in Unreal Engine by rendering a cube map.
Cube map rendering does not allow injecting custom post-process shaders/materials, which means you won't be able to get proper depth values from the scene.

Adding a depth shader/material would be possible if you switch from rendering cube maps to rendering individual image slices and then render the slices using a normal scene capture instead of a cube map scene capture.

So here is a rough outline of what could be done:

Option 1:

  1. Modify UMTSamplerComponentBase to render n (e.g. 11) slices instead of one cube map
    a. Instead of creating one AMTSceneCaptureCube in https://github.com/lolleko/mesh-data-synthesizer/blob/main/MeshSynth/Source/MeshSynth/Sampler/MTSamplerComponentBase.cpp#L28, create n (e.g. 11) AMTSceneCapture instances. Each of these should be rotated so it captures a slice of the 360° panorama (see the small sketch after this outline).
    b. Add the post-process material from https://github.com/unrealgt/unrealgt to the AMTSceneCapture
    c. Trigger the capture of all n (e.g. 11) AMTSceneCaptures, for example by enqueueing them all here: https://github.com/lolleko/mesh-data-synthesizer/blob/main/MeshSynth/Source/MeshSynth/Sampler/MTSamplerComponentBase.cpp#L256. Note that this probably requires modifications in other parts of the code too.
  2. In theory the image writing doesn't need to be touched, as the tool is already capable of rendering panoramic and non-panoramic images and is also capable of rendering more than one image per sample location.

Option 2:

  1. Unreal Engine adds support for injecting post-process materials into cube map rendering (unlikely, but I can try to create a PR for https://github.com/EpicGames/UnrealEngine )
  2. Add the post-process material to the panorama render
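To illustrate the slice rotations in step 1a of Option 1 (just a sketch; the slice count and overlap margin are arbitrary choices):

```python
# Yaw and horizontal FOV for n scene captures covering a full 360-degree panorama.
# n = 11 matches the example above; a small overlap between neighbouring slices can help stitching.
n = 11
overlap_deg = 2.0                        # assumed overlap margin
slice_fov_deg = 360.0 / n + overlap_deg  # horizontal FOV per capture
yaw_degrees = [i * 360.0 / n for i in range(n)]

for i, yaw in enumerate(yaw_degrees):
    print(f"capture {i}: yaw = {yaw:.1f} deg, horizontal FOV = {slice_fov_deg:.1f} deg")
```

Each AMTSceneCapture would then be rotated by its yaw relative to the sampler component.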

@kvuong2711 (Author) commented Aug 25, 2024

Hi Lorenz @lolleko,

Thank you for your detailed response; I appreciate it. I admit that I'm not familiar with UE, so please bear with me :). The task I'm trying to do is a little simpler: for a given geo-referenced camera pose (e.g., lat/long/alt + rotation), I just want to render both perspective RGB and depth for that viewpoint. So I guess I don't have to deal with rendering a cube map/panorama like what you discussed.

I looked around online and found a few other options that can render depth with Unreal Engine, such as the Movie Render Queue Additional Render Passes plugin. This seems like a reasonable tool for rendering depth. In fact, I saw one or two projects that also used it to render depth from a Python script (e.g., https://github.com/PerceivingSystems/bedlam_render/tree/main/unreal/render). Have you had any experience with this tool, and is it similar to your custom depth shader? Since I don't have experience with writing custom shaders/plugins for UE, I'm hoping to find a simple solution that doesn't require writing those. Hopefully the question is not too out of scope!

Thank you!

@lolleko (Owner) commented Aug 26, 2024

I think I would be able to make the changes to make this possible, as this is indeed a lot simpler.
But you would still need to open the Unreal Engine editor, open the MeshSynth project, and do some minor level editing and configuring in the editor.
Would you be open to learning/using Unreal Engine to that degree?

Unfortunately, I don't think there is a solution that would offer what you need out of the box, neither with this project nor with other tools.

@kvuong2711 (Author) commented Aug 26, 2024

Hi Lorenz @lolleko,

Definitely! If you can draft up a quick sample that takes as input a location + rotation (maybe w.r.t. ECEF or local ENU, etc. -- up to you) and outputs the rendered RGB + depth, that would be awesome. From this starting point, I'm happy to make a PR to extend it to be more thorough if needed.
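For reference, by ECEF I mean the standard WGS84 Earth-centered frame; converting a lat/long/alt input to it is just the usual formula (a generic sketch, not tied to this repo, and the function name is my own):

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis in meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude (degrees) and altitude (meters) to ECEF X, Y, Z in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```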

Can I also ask whether you are compiling the project on Ubuntu or Windows? Opening the project with UE 5.4.3 on Ubuntu has been giving me problems.

Thanks!

@lolleko (Owner) commented Aug 29, 2024

Hey @kvuong2711 ,

I will try to code up a PoC that will give you the ability to generate depth images alongside regular images.
However, positioning the camera where the samples are captured is something you will need to do on your own.
This will most likely be similar to how the "ReSamplerComponent" works (https://github.com/lolleko/mesh-data-synthesizer/blob/main/MeshSynth/Source/MeshSynth/Sampler/MTReSamplerComponent.cpp), which I used to create synthetic versions of Pitts30 and Tokyo247.

I will post more detailed instructions once the PoC is ready.

@lolleko (Owner) commented Aug 29, 2024

I haven't tested on Linux yet; this has also been on my TODO list for a long time. Windows is fully supported, and Linux might require some fixes.

Also, sorry about all the bot comments here; not sure what's going on.

@kvuong2711 (Author)

Hi @lolleko,

Is there any update regarding this? 😃

@lolleko (Owner) commented Sep 28, 2024

Hey,

I took some hours to continue working on this, but I don't think I can invest more any time soon.
You can find an early working version here: https://github.com/lolleko/mesh-data-synthesizer/tree/depth-capture
This adds depth capture functionality to the MTReSamplerComponent. This component is used to replicate existing datasets, e.g. Tokyo247 or Pitts.
You can add your own input here: https://github.com/lolleko/mesh-data-synthesizer/blob/depth-capture/MeshSynth/Source/MeshSynth/Sampler/MTReSamplerComponent.cpp#L15. Just look at the existing datasets and change it according to your needs.

If you enable depth capture on the MTReSamplerComponent (disabled by default), it will generate pairs like the following:

[example pair: color image and corresponding depth image]

The depth images use a custom format: the depth is encoded in millimeters as a 24-bit integer inside the RGB channels. Have a look at https://colab.research.google.com/drive/1wgKrLEiQ4yHdqjLnJmFPXk_D8z3TRsYE?usp=sharing for example conversion code back to meters.
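The conversion back to meters looks roughly like this (a minimal sketch; the linked notebook is the authoritative reference, and the byte order, with red as the most significant byte, is an assumption here):

```python
import numpy as np
from PIL import Image

def load_depth_meters(path: str) -> np.ndarray:
    """Decode a depth PNG with millimeters packed as a 24-bit integer across R, G, B."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    depth_mm = (r << 16) | (g << 8) | b  # assumed byte order: R is the most significant byte
    return depth_mm.astype(np.float32) / 1000.0  # millimeters to meters
```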

Known Issues

  • Unreliable: sometimes what is visible in the depth image does not match the color image (parts of the 3D model are missing)
  • Generated depth PNGs seem buggy (can't open them in Windows, but they read fine via Python)

@colintle commented Nov 4, 2024

@kvuong2711 were you able to test this out?
