
For each model tester #23

Open
barepixels opened this issue Feb 17, 2024 · 1 comment

Comments

@barepixels

Trying it for the first time, I got this error:

Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.00 seconds
Total time: 42.06 seconds
use_experimental_async_task_batch: True
enable_test_loras_mode: False
enable_test_base_model_mode: True
enable_test_refiner_model_mode: False
Traceback (most recent call last):
File "E:\Fooocus-MindOfMatter-Edition\Fooocus-MindOfMatter-Edition\modules\exp_async_worker.py", line 921, in worker
handler(task)
File "E:\Fooocus-MindOfMatter-Edition\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus-MindOfMatter-Edition\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus-MindOfMatter-Edition\Fooocus-MindOfMatter-Edition\modules\exp_async_worker.py", line 163, in handler
loras = [[str(args.pop()), float(args.pop()), bool(args.pop())] for _ in range(modules.config.default_loras_max_number)]
AttributeError: module 'modules.config' has no attribute 'default_loras_max_number'
Total time: 0.03 seconds
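The traceback shows that line 163 of exp_async_worker.py reads modules.config.default_loras_max_number, but that attribute does not exist in this fork's config module. As a minimal sketch of a possible workaround (not the project's actual fix), the lookup could fall back to a default via getattr; the fallback value of 5 here is an assumption taken from the five LoRA slots visible in the "Request to load LoRAs" log line below.

```python
import types

# Stand-in for modules.config; in the real app this would be the imported
# config module, which is missing the attribute per the traceback.
config = types.SimpleNamespace()

# Hypothetical defensive lookup: use the configured value if present,
# otherwise fall back to 5 slots (assumed from the log output).
loras_max = getattr(config, "default_loras_max_number", 5)
print(loras_max)  # prints 5
```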

@barepixels

barepixels commented Feb 17, 2024

Tried the dev version; same error:

E:\Fooocus-MindOfMatter-Edition-DEV>.\python_embeded\python.exe -s .\Fooocus\entry_with_update.py --preset $presetrnpause
Already up-to-date
Update succeeded.
[System ARGV] ['.\Fooocus\entry_with_update.py', '--preset', '$presetrnpause']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Load preset [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\presets$presetrnpause.json] failed

Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 24575 MB, total RAM 32705 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.73 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
use_experimental_async_task_batch: True
enable_test_loras_mode: False
enable_test_base_model_mode: True
enable_test_refiner_model_mode: False
Traceback (most recent call last):
File "E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\modules\exp_async_worker.py", line 921, in worker
handler(task)
File "E:\Fooocus-MindOfMatter-Edition-DEV\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus-MindOfMatter-Edition-DEV\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\modules\exp_async_worker.py", line 163, in handler
loras = [[str(args.pop()), float(args.pop()), bool(args.pop())] for _ in range(modules.config.default_loras_max_number)]
AttributeError: module 'modules.config' has no attribute 'default_loras_max_number'
Total time: 0.03 seconds
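Since the dev build fails with the identical AttributeError, the setting may simply exist under a different name in this fork's config module. A quick hedged diagnostic (the renamed attribute below is purely illustrative, not confirmed from the source) is to list every config attribute containing "lora" and compare against what the worker expects:

```python
import types

# Stand-in config module with an assumed, illustrative attribute name;
# the real modules.config may use something else entirely.
config = types.SimpleNamespace(default_max_lora_number=5)

# List attribute names mentioning "lora" to see what actually exists.
candidates = [name for name in dir(config) if "lora" in name.lower()]
print(candidates)  # prints ['default_max_lora_number']
```

If the names differ between the worker and the config module, that mismatch would explain the AttributeError on both branches.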
