Thanks for sharing this cool demo. It looks similar to the pipeline of masking out the foreground and then inpainting the background, optionally with some IP-Adapter-like background control. Since no paper has been released, what are the differences from that approach? Is it possible to maintain a consistent ID while also allowing deformation, such as automatic adjustments to shape, size, and viewing angle?
@longcw
Thanks for your attention. SAM and inpainting are basic components of our pipeline, but we introduce many other improvements for generating the background, the human, human-object interaction, and so on.
Theoretically, exactly maintaining an object's ID and automatically adjusting its shape, size, and viewing angle are a trade-off. ReplaceAnything v1.0 currently cannot automatically adjust shape, size, or viewing angle. We are developing v2.0 to support changing them.
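The trade-off above follows directly from the baseline compositing step the question describes: if the subject's original pixels are pasted back verbatim over a regenerated background, identity is preserved exactly, but the subject's geometry is frozen. A minimal NumPy sketch of that compositing step (illustrative only; the function name and interface are my own, not from ReplaceAnything):

```python
import numpy as np

def composite_replaced_background(image, subject_mask, generated_bg):
    """Paste the original subject pixels over a newly generated background.

    Because subject pixels are copied verbatim, the subject's ID is kept
    exactly -- which is also why its shape, size, and viewing angle cannot
    change in this baseline.
    """
    mask = subject_mask.astype(bool)[..., None]  # HxW -> HxWx1 for broadcasting
    return np.where(mask, image, generated_bg)

# Toy 2x2 RGB example: one subject pixel, rest replaced by the new background.
image = np.full((2, 2, 3), 200, dtype=np.uint8)    # original photo
bg = np.zeros((2, 2, 3), dtype=np.uint8)           # inpainted/generated background
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)  # subject occupies top-left pixel
out = composite_replaced_background(image, mask, bg)
```

Allowing deformation (v2.0's goal) means the subject itself must be re-synthesized rather than copied, which is where exact ID preservation becomes hard.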
Thank you for the quick reply. Indeed, making the inpainting stable and high-quality is already nontrivial. Looking forward to the paper and v2.0, and hopefully the code as well :)