With recent advances in generative image outpainting, would it be possible to have the corners "filled in" instead of left black (or masked in the alpha channel)?
Of course, it would be even better if everything were handled inside VapourSynth. There are many open-source outpainting options on GitHub:
https://github.com/Udit9654/Outpainting-Images-and-Videos-using-GANs
https://github.com/basilevh/image-outpainting
https://github.com/nanjingxiaobawang/SieNet-Image-extrapolation
https://github.com/lkwq007/stablediffusion-infinity/
https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy
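As a rough illustration of how this could work (a hypothetical sketch, not tied to any of the repos above): first build a mask of the black corner region from each frame, then hand the frame plus mask to any inpainting/outpainting backend. The `corner_mask` helper and the brightness threshold below are my own assumptions, and frames are assumed to be RGB `numpy` arrays.

```python
# Hypothetical sketch: derive an outpainting mask from the black corners
# of a frame. The mask (white = region to fill) could then be passed to
# any inpainting/outpainting backend, e.g. one of the repos linked above.
import numpy as np
from PIL import Image

def corner_mask(frame: np.ndarray, thresh: int = 8) -> Image.Image:
    """Return an L-mode mask that is white wherever the frame is
    (near-)black, i.e. the corner region the generator should fill.
    `thresh` (assumed value) tolerates slight compression noise."""
    dark = (frame.max(axis=2) <= thresh).astype(np.uint8) * 255
    return Image.fromarray(dark, mode="L")

# The frame + mask pair would then feed an inpainting model, roughly:
#   pipe = SomeInpaintPipeline.from_pretrained(...)   # backend-specific
#   filled = pipe(image=Image.fromarray(frame), mask_image=mask)
```

The mask step is cheap and backend-agnostic; the expensive part is the generative fill itself, which is where the frame-to-frame consistency concern below comes in.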
I think the biggest drawback versus the current mirroring/blurring method is that consistency/movement between frames might not be great, as has usually been the case with generative video models so far.
Models conditioned on each video (or at least on a couple of neighboring frames) would perform much better, but that would be incredibly slow and complex.
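A much cheaper (and much weaker) mitigation than per-video training would be to temporally smooth the generated region after the fact, e.g. an exponential moving average applied only inside the filled area. This is my own assumption of a possible stopgap, not something any of the linked projects do; it damps flicker at the cost of smearing genuine motion in the corners.

```python
# Naive flicker mitigation (hypothetical): exponentially average the
# outpainted region across frames, leaving the real image data untouched.
import numpy as np

def temporally_smooth(outpainted_frames, mask, alpha=0.6):
    """Yield frames where pixels under `mask` (boolean HxW array, True =
    generated region) are blended with the previous smoothed frame.
    `alpha` (assumed value) weights the current frame."""
    prev = None
    for f in outpainted_frames:
        f = f.astype(np.float32)
        if prev is not None:
            f = np.where(mask[..., None], alpha * f + (1 - alpha) * prev, f)
        prev = f
        yield f.astype(np.uint8)
```

This obviously cannot invent consistent content the way a temporally-aware model would; it only trades flicker for lag.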