[Do not merge] Add yield callback to prior pipeline #1

Open · wants to merge 1 commit into base: wuerstchen-v3
Conversation

apolinario (Collaborator)
No description provided.

@apolinario apolinario requested a review from dome272 February 13, 2024 08:53
@apolinario apolinario changed the title Add yield callback to prior pipeline [Do not merge] Add yield callback to prior pipeline Feb 13, 2024
@FurkanGozukara

When I turn off preview images it gives an error. How do I fix it? @apolinario

The error happens here: image_embeddings=prior_output.image_embeddings,

def generate(
    prompt: str,
    negative_prompt: str = "",
    seed: int = 0,
    width: int = 1024,
    height: int = 1024,
    prior_num_inference_steps: int = 30,
    # prior_timesteps: List[float] = None,
    prior_guidance_scale: float = 4.0,
    decoder_num_inference_steps: int = 12,
    # decoder_timesteps: List[float] = None,
    decoder_guidance_scale: float = 0.0,
    num_images_per_prompt: int = 2,
) -> PIL.Image.Image:
    #prior_pipeline.to(device)
    #decoder_pipeline.to(device)
    #previewer.eval().requires_grad_(False).to(device).to(dtype)
    generator = torch.Generator().manual_seed(seed)
    prior_output = prior_pipeline(
        prompt=prompt,
        height=height,
        width=width,
        num_inference_steps=prior_num_inference_steps,
        timesteps=DEFAULT_STAGE_C_TIMESTEPS,
        negative_prompt=negative_prompt,
        guidance_scale=prior_guidance_scale,
        num_images_per_prompt=num_images_per_prompt,
        generator=generator,
        callback=callback_prior,
        callback_steps=callback_steps
    )

    if PREVIEW_IMAGES:
        for _ in range(len(DEFAULT_STAGE_C_TIMESTEPS)):
            r = next(prior_output)
            if isinstance(r, list):
                yield r[0]
        prior_output = r

    decoder_output = decoder_pipeline(
        image_embeddings=prior_output.image_embeddings,
        prompt=prompt,
        num_inference_steps=decoder_num_inference_steps,
        # timesteps=decoder_timesteps,
        guidance_scale=decoder_guidance_scale,
        negative_prompt=negative_prompt,
        generator=generator,
        output_type="pil",
    ).images

    #Save images
    output_folder = 'outputs'
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    for image in decoder_output:
        # Generate timestamped filename
        timestamp = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S_%f')
        image_filename = f"outputs/{timestamp}.png"
        image.save(image_filename)


    yield decoder_output[0]
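A likely cause, sketched under the assumption that with the yield callback wired in, prior_pipeline returns a generator whether or not previews are requested: when PREVIEW_IMAGES is False the generator is never exhausted, so prior_output is still a generator and has no image_embeddings attribute. The names below (fake_prior_pipeline, drain) are hypothetical stand-ins used to illustrate the pattern, not part of the demo:

```python
# Minimal sketch of the suspected bug: a stand-in generator plays the role of
# prior_pipeline, yielding preview lists for intermediate steps and, last, an
# output object carrying image_embeddings. The fix is to drain the generator
# unconditionally and only surface previews when they are enabled.
from types import SimpleNamespace

def fake_prior_pipeline():
    for step in range(3):
        yield [f"preview_{step}"]  # intermediate previews (lists)
    yield SimpleNamespace(image_embeddings="embeddings")  # final output

def drain(prior_output, preview_images):
    """Exhaust the generator regardless of PREVIEW_IMAGES; collect previews only if requested."""
    previews = []
    r = None
    for r in prior_output:
        if preview_images and isinstance(r, list):
            previews.append(r[0])
    return r, previews  # r is now the real pipeline output

final, previews = drain(fake_prior_pipeline(), preview_images=False)
print(final.image_embeddings)  # accessible even with previews off
```

With this shape, image_embeddings=prior_output.image_embeddings works in both modes, because prior_output is always replaced by the final yielded object.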

FurkanGozukara commented Feb 13, 2024

@apolinario, another thing:

    prior_pipeline.enable_model_cpu_offload()
    decoder_pipeline.enable_model_cpu_offload()

they use exactly the same VRAM. I just tested it.

Edit: I found the mistake in the demo code. Amazing, VRAM usage dropped to 9 GB.

FurkanGozukara commented Feb 13, 2024

Another error is:

/opt/conda/lib/python3.10/site-packages/diffusers/utils/pil_utils.py:43: RuntimeWarning: invalid value encountered in cast
  images = (images * 255).round().astype("uint8")

It raises this warning on Kaggle, since we have to use FP16.
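For context, that warning fires when the decoded float image array contains NaNs: casting NaN to uint8 is undefined, so NumPy reports "invalid value encountered in cast". A minimal reproduction plus a common stopgap is sketched below; whether FP16 overflow in the decoder is the actual source of the NaNs here is an assumption:

```python
import warnings
import numpy as np

# NaNs in the float image array (e.g. from FP16 overflow) make the
# uint8 cast emit "RuntimeWarning: invalid value encountered in cast".
images = np.array([[0.5, np.nan]])

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    raw = (images * 255).round().astype("uint8")  # NaN becomes garbage here

# Common stopgap: replace non-finite values and clip before casting.
safe = (np.nan_to_num(images, nan=0.0).clip(0, 1) * 255).round().astype("uint8")
print(safe.tolist())  # [[128, 0]]
```

The real fix is upstream (keeping intermediate activations in a dtype that does not overflow), but sanitizing before the cast at least produces valid pixels instead of garbage.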

Labels: None yet
Projects: None yet
2 participants