Sorry for the noob question. I have been researching this for hours but have been unable to make much progress.
I am trying to load this checkpoint merge from CivitAI as a Hugging Face model: https://civitai.com/models/989221/illustration-juaner-ghibli-style-2d-illustration-model-flux
I then want to load that model in a Space and generate an image.
Can you please give me some pointers on how to do this?
I tried the following steps but received errors:
Command: python scripts/convert_flux_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --output_path "/Diffusers_IllustrationJuanerGhibli_v20" --transformer
Error:
Killed: 9
(I believe the process was killed by the OS for running out of memory while loading the 11.9 GB checkpoint.)
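For reproducibility: I fetch the checkpoint locally with huggingface_hub before running the conversion scripts. A minimal sketch (the repo and filename are taken from the Hugging Face upload referenced in my code further below):

from huggingface_hub import hf_hub_download

# Download the single-file checkpoint into the local Hub cache and print its path.
ckpt_path = hf_hub_download(
    repo_id="soulfulmachine/Ghibli",
    filename="IllustrationJuanerGhibli_v20.safetensors",
)
print(ckpt_path)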
Command: python scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --dump_path "/Diffusers_IllustrationJuanerGhibli_v20" --from_safetensors --device cuda
Error:
Traceback (most recent call last):
File "/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py", line 160, in <module>
pipe = download_from_original_stable_diffusion_ckpt(
File "/diffusers/src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1471, in download_from_original_stable_diffusion_ckpt
converted_unet_checkpoint = convert_ldm_unet_checkpoint(
File "/diffusers/src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 431, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
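If it helps with diagnosis: the KeyError seems to say the checkpoint does not use the classic SD/LDM UNet layout the script expects (there is no time_embed.0.weight key). Listing the state-dict keys directly should confirm what layout the file actually uses; here is a minimal sketch with the safetensors library, reusing the ckpt_path from the download snippet above:

from safetensors import safe_open

# Print the first few tensor names without loading any weights.
# A classic SD/LDM checkpoint has keys like "model.diffusion_model.time_embed.0.weight";
# a Flux-style checkpoint has keys like "double_blocks.0.img_attn.qkv.weight" instead.
with safe_open(ckpt_path, framework="pt", device="cpu") as f:
    for key in sorted(f.keys())[:20]:
        print(key)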
I also tried using the StableDiffusionXLPipeline.from_single_file function, as per the code below, but ran into the error that follows it.
Code:
import spaces
import os
import gradio as gr
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL, DDIMScheduler

# Access token for HF if needed
HF_TOKEN = os.getenv("HF_TOKEN")
print(HF_TOKEN)

# --------------------------------------------------------------------
# 1) MODEL + PIPELINE SETUP
# --------------------------------------------------------------------
base_model = "https://huggingface.co/soulfulmachine/Ghibli/blob/main/IllustrationJuanerGhibli_v20.safetensors"
# vae_model_path = "./models/sdxl_vae.safetensors"
device = "cuda"  # or "cpu" if you don't have a GPU

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)

# Load VAE from a single .safetensors file
# vae = AutoencoderKL.from_single_file(vae_model_path).to(dtype=torch.float16)

# Load the main SDXL model (UNet, text encoders, etc.) from a single .safetensors file
pipe = StableDiffusionXLPipeline.from_single_file(
    base_model,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    # vae=vae,
    token=HF_TOKEN,  # use_auth_token is deprecated in recent diffusers
    use_safetensors=True,
    add_watermarker=False,
    load_safety_checker=False,
).to(device)

# --------------------------------------------------------------------
# 2) INFERENCE FUNCTION
# --------------------------------------------------------------------
@spaces.GPU
def generate_image(prompt, negative_prompt, steps, scale, width, height):
    """
    Generates images using the loaded SDXL pipeline.
    Returns a list of 2 images by default.
    """
    images = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_images_per_prompt=2,
        guidance_scale=scale,
        width=width,
        height=height,
        num_inference_steps=steps,
    ).images
    return images  # Gradio can display multiple images if returned as a list

# --------------------------------------------------------------------
# 3) BUILD GRADIO INTERFACE
# --------------------------------------------------------------------
<Gradio code is truncated>
Error:
IllustrationJuanerGhibli_v20.safetensors: 100%|█████████▉| 11.9G/11.9G [00:08<00:00, 1.39GB/s]
model_index.json: 100%|██████████| 536/536 [00:00<00:00, 3.34MB/s]
scheduler/scheduler_config.json: 100%|██████████| 273/273 [00:00<00:00, 1.78MB/s]
text_encoder/config.json: 100%|██████████| 613/613 [00:00<00:00, 1.97MB/s]
text_encoder_2/config.json: 100%|██████████| 782/782 [00:00<00:00, 5.12MB/s]
(…)t_encoder_2/model.safetensors.index.json: 100%|██████████| 19.9k/19.9k [00:00<00:00, 77.5MB/s]
tokenizer/merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 44.5MB/s]
tokenizer/special_tokens_map.json: 100%|██████████| 588/588 [00:00<00:00, 2.30MB/s]
tokenizer/tokenizer_config.json: 100%|██████████| 705/705 [00:00<00:00, 2.79MB/s]
tokenizer/vocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 23.1MB/s]
tokenizer_2/special_tokens_map.json: 100%|██████████| 2.54k/2.54k [00:00<00:00, 15.9MB/s]
spiece.model: 100%|██████████| 792k/792k [00:00<00:00, 186MB/s]
tokenizer_2/tokenizer.json: 100%|██████████| 2.42M/2.42M [00:00<00:00, 29.5MB/s]
tokenizer_2/tokenizer_config.json: 100%|██████████| 20.8k/20.8k [00:00<00:00, 66.7MB/s]
transformer/config.json: 100%|██████████| 378/378 [00:00<00:00, 1.94MB/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|██████████| 121k/121k [00:00<00:00, 90.0MB/s]
vae/config.json: 100%|██████████| 820/820 [00:00<00:00, 4.90MB/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]
Loading pipeline components...: 17%|█▋ | 1/6 [00:00<00:00, 3211.57it/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 495, in from_single_file
loaded_sub_model = load_single_file_sub_model(
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 168, in load_single_file_sub_model
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.
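One thing I noticed: the files from_single_file pulled down include a transformer/ config, spiece.model, and a T5-style text_encoder_2, which looks like a Flux pipeline config, and the CivitAI page also labels the model as Flux. So presumably an SDXL pipeline can never find the CLIPTextModel weights it is asking for in this checkpoint. If it really is a Flux model, would something like the following be the right direction? This is just a sketch based on the diffusers single-file loading docs, assuming a recent diffusers version and access to the gated black-forest-labs/FLUX.1-dev repo for the non-transformer components:

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load only the transformer weights from the single-file checkpoint...
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/soulfulmachine/Ghibli/blob/main/IllustrationJuanerGhibli_v20.safetensors",
    torch_dtype=torch.bfloat16,
)

# ...and take the text encoders, tokenizers, scheduler, and VAE from the base repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
    token=HF_TOKEN,  # FLUX.1-dev is gated, so a token with access is needed
)
pipe.to("cuda")

image = pipe("a ghibli-style illustration of a quiet seaside town").images[0]

Is that the recommended way to run a single-file Flux checkpoint in a Space, or should I still convert it to the diffusers folder layout first?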