
What's the difference from the "segment anything + inpainting" pipeline? #2

Open
longcw opened this issue Jan 12, 2024 · 3 comments

@longcw

longcw commented Jan 12, 2024

Thanks for sharing this cool demo. It looks similar to the pipeline of masking out the foreground and then inpainting the background, optionally with some IP-Adapter-like background control. Since no paper has been released, how does your approach differ from that pipeline? Is it possible to maintain a consistent ID while also allowing deformations such as automatic adjustments to shape, size, and viewing angle?
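The "mask the foreground, then inpaint the background" pipeline the question refers to can be sketched in miniature. The sketch below is a toy illustration only, not anyone's actual implementation: a hard-coded boolean mask stands in for a SAM prediction, and a background-mean fill stands in for a diffusion inpainting model; the function name `naive_inpaint` and the array shapes are made up for this example.

```python
import numpy as np

def naive_inpaint(image, mask):
    """Fill masked (foreground) pixels per channel with the mean of the
    unmasked (background) pixels -- a crude stand-in for a real
    inpainting model such as a diffusion inpainter."""
    out = image.astype(float).copy()
    for c in range(image.shape[2]):
        channel = out[..., c]
        channel[mask] = channel[~mask].mean()
    return out

# 4x4 RGB image: background value 10, with a 2x2 "foreground" patch of 200.
image = np.full((4, 4, 3), 10.0)
image[1:3, 1:3, :] = 200.0

# The boolean foreground mask a segmenter like SAM would produce.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

result = naive_inpaint(image, mask)
# Every pixel is now the background value 10.0.
```

A real version of this pipeline would swap the mean fill for an inpainting model conditioned on a background prompt (the "IP-Adapter-like background control" mentioned above), which is exactly where identity preservation becomes hard: the inpainter only sees the background region, so it cannot re-pose or rescale the preserved foreground.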

@chenbinghui1
Collaborator

@longcw
Thanks for your attention. SAM and inpainting are basic building blocks in our pipeline, on top of which we introduce many other improvements for generating the background, the human, human-object interaction, and so on.

Exactly maintaining an object's ID while automatically adjusting its shape, size, and viewing angle is theoretically a trade-off. Currently, ReplaceAnything v1.0 cannot automatically adjust shape, size, or viewing angle. We are developing v2.0 to support these changes.

@longcw
Author

longcw commented Jan 12, 2024

Thank you for the quick reply. Indeed, making the inpainting stable and high quality is already nontrivial. Looking forward to the paper and v2.0, and hopefully the code as well :)

@GallonDeng

Also looking forward to the paper and v2.0 — v1 is too slow, and the output size is not right.

3 participants