
How to edit the scene? #29

Open
Lazyangel opened this issue Jul 10, 2024 · 9 comments

Comments


Lazyangel commented Jul 10, 2024

Hello, I'm new to NeRF & 3DGS. I'd like to know how to edit the scene after training has finished. By modifying the annotation.json file I can change the objects already present in the scene, but I don't know how to add or remove objects.
For instance, I wanted to duplicate an object in the scene. Here are the steps I took:

  1. In aggregate_lidar/dynamic_objects, I duplicated the point cloud of the object and modified its name.
  2. I added the corresponding annotation in the annotation.json file.
  3. I added the corresponding attr in the xxx.ckpt file (I am unsure if this step is correct).

But while rendering I got an error: KeyError: 'gauss_params.means'. How can I solve this problem?

I would be very grateful if anyone can help me!
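The duplication attempted in step 3 can be sketched as follows. This is a toy example, not the project's confirmed schema: it assumes every per-object tensor in `ckpt['pipeline']` lives under a key containing `object_<gid>`, so duplicating an object means copying *all* of those tensors under a new gid (a missing copy is exactly what produces a KeyError like `'gauss_params.means'`).

```python
import torch

def duplicate_object(ckpt, src_gid, dst_gid):
    # Copy every per-object attribute so the new object has a complete
    # set; leaving any out causes a KeyError at render time.
    src_key, dst_key = f"object_{src_gid}", f"object_{dst_gid}"
    pipeline = ckpt["pipeline"]
    for key in list(pipeline.keys()):
        if src_key in key:
            pipeline[key.replace(src_key, dst_key)] = pipeline[key].clone()

# Toy stand-in for a loaded checkpoint; a real one comes from torch.load(...).
ckpt = {"pipeline": {
    "_model.object_42.gauss_params.means": torch.zeros(10, 3),
    "_model.object_42.gauss_params.opacities": torch.zeros(10, 1),
}}
duplicate_object(ckpt, "42", "43")
print(sorted(ckpt["pipeline"].keys()))
# afterwards: torch.save(ckpt, 'step-000069999-edited.ckpt')
```

The object keys `object_42`/`object_43` and the attribute paths shown are hypothetical placeholders; the real key layout must be checked against the checkpoint's actual contents.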


m15310926778 commented Jul 11, 2024

@Lazyangel Hello, may I ask how to change the annotation.json file to alter the appearance of objects in the scene? The annotation.json is part of the input data. Does that mean that after updating annotation.json, the model needs to be retrained before rendering?


Lazyangel commented Jul 11, 2024

Hello, here is how I edit an object:

  1. Change the translation/size/rotation attributes of the object in annotation.json (for all timestamps).
  2. No need to retrain; just render again to get the new scene.

I'm not sure this is the best way to edit the scene, but I hope it helps. @m15310926778
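The edit in step 1 can be sketched as follows. The annotation.json schema assumed here (a list of frames, each with per-object entries carrying a `gid` and a `translation`) is a guess for illustration, not the project's confirmed format.

```python
# Shift one object sideways by the same offset at every timestamp (step 1),
# then re-render -- no retraining needed (step 2).

def shift_object(annotations, gid, dx=1.0):
    for frame in annotations:
        for obj in frame.get("objects", []):
            if obj["gid"] == gid:
                obj["translation"][0] += dx

annotations = [  # toy stand-in for json.load(open('annotation.json'))
    {"objects": [{"gid": "car_1", "translation": [5.0, 0.0, 0.0]}]},
    {"objects": [{"gid": "car_1", "translation": [6.0, 0.0, 0.0]}]},
]
shift_object(annotations, "car_1", dx=1.0)
print(annotations[0]["objects"][0]["translation"])  # [6.0, 0.0, 0.0]
```

The gid `car_1` and the field names are hypothetical; the real keys have to be read off the actual annotation file.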

@m15310926778

Thank you very much. I'll give it a try.

@m15310926778

By the way, I saw the author provide a script like this in other answers, but I didn't understand the logic. You might take a look; it may be helpful for future work.
```python
import os
import shutil
import torch


def change_object(ckpt1, ckpt2, gid1, gid2, save_path):
    # Overwrite object gid1's attributes in ckpt1 with object gid2's
    # attributes from ckpt2, then save the edited checkpoint.
    obj_key1 = f'object_{gid1}'
    obj_key2 = f'object_{gid2}'

    obj1_attr_list = [attr for attr in ckpt1['pipeline'].keys() if obj_key1 in attr]
    for attr in obj1_attr_list:
        attr2 = attr.replace(obj_key1, obj_key2)
        if attr2 not in ckpt2['pipeline'].keys():
            continue
        ckpt1['pipeline'][attr] = ckpt2['pipeline'][attr2]

    torch.save(ckpt1, save_path)


if __name__ == '__main__':
    root1 = '/path/to/scene1'
    root2 = '/path/to/scene2'

    ckpt1 = torch.load(f'{root1}/nerfstudio_models/step-000069999.ckpt')
    ckpt2 = torch.load(f'{root2}/nerfstudio_models/step-000069999.ckpt')
    gid1 = 'source_object_gid'
    gid2 = 'target_object_gid'
    save_folder = '/path/to/save/new/scene'
    if not os.path.exists(save_folder):
        os.mkdir(save_folder)
        os.mkdir(f'{save_folder}/nerfstudio_models')
    if not os.path.exists(f'{save_folder}/config.yml'):
        shutil.copy(f'{root1}/config.yml', save_folder)
    save_path = f'{save_folder}/nerfstudio_models/step-000069999.ckpt'
    change_object(ckpt1, ckpt2, gid1, gid2, save_path)
```

@Lazyangel
Copy link
Author

> By the way, I saw the author provide such a script in other answers […] (script quoted above)

Yes, I'm also not quite sure how to use this script.

@m15310926778

@Lazyangel Hello, why is it that after I changed the positions of some objects in annotation.json, the rendered video did not show any changes? It seems that the rendering command given in the project does not involve the annotation content at all. Do I need to specify some configuration in the rendering command?

This is my rendering command:
`bash scripts/shells/render.sh C:\Conda\street-gaussians-ns\output\street-gaussians-ns\street-gaussians-ns\2024-07-09_214744\config.yml 0`

I don't think annotation.json was used. Furthermore, it is unclear how to modify other render configurations. May I ask how you did it? Thanks!

@Lazyangel
Author

@m15310926778 Hello,

  1. I didn't add any extra rendering commands; I just modified the annotation.json file. The resulting rendered images are as follows: the black car is offset from its lane by about 1 m.
     [two rendered images]
  2. I'm not familiar with the nerfstudio framework. I guess certain configurations in config.yml cause nerfstudio to read the annotation.json file.

@m15310926778

Thank you for your response!

  1. Did you add this black car? Why hasn't the black car appeared in my rendered video?
     [screenshot 2024-07-18 110341]
     [video: rgb.mp4]
  2. Also, I would like to confirm: is this annotation.json file located in the downloaded input data?
  3. Finally, how do you determine which object an entry in the annotation refers to? For example, how did you identify this black car, or did you just make changes at random?

Thank you again!

@Lazyangel
Author

  1. Please check the frame_select option in sgn_dataparser.py. I set frame_select=[0, 85]; your frame_select range is likely [80, 160], which would make our rendered videos differ, since they include different frames.
  2. Yes.
  3. I agree that correlating the gid in the annotation file with objects in the images is challenging, so I just keep trying. This method is not efficient; if you have a better approach, please feel free to share.
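One way to make the trial-and-error in point 3 less blind is to list each gid alongside its label and first-seen translation, and match those positions against objects visible in the images. The annotation schema assumed below (frames with per-object `gid`, `label`, `translation` fields) is again an illustration, not the project's confirmed format.

```python
def summarize_objects(annotations):
    # Map each gid to (label, first-seen translation) so annotation
    # entries can be matched to objects seen in the rendered images.
    seen = {}
    for frame in annotations:
        for obj in frame.get("objects", []):
            seen.setdefault(obj["gid"],
                            (obj.get("label", "?"), obj.get("translation")))
    return seen

annotations = [  # toy stand-in for the real annotation.json contents
    {"objects": [{"gid": "a1", "label": "car", "translation": [3.0, 1.0, 0.0]},
                 {"gid": "b2", "label": "truck", "translation": [9.0, -2.0, 0.0]}]},
]
for gid, (label, trans) in summarize_objects(annotations).items():
    print(gid, label, trans)
```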
