I tried to add AnimateDiff, but when I launched the UI afterwards it said there has to be at least one motion model connected,
so I searched around
and dropped in a file called mm_sd_v15_v2.ckpt.
That part seems resolved, but now this error shows up again and the run stops.
Please help...
I'm a complete beginner ㅠㅠ
*For reference, whenever I run Stable Diffusion web UI, this error pops up and no image comes out at the final generation step.
NVIDIA driver was found. Automatically changed backend to 'cuda'. You can manually select which backend will be used through '--backend' argument.
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.6.1
Commit hash: 03eec1791be011e087985ae93c1f66315d5a250e
Launching Web UI with arguments: --medvram --medvram-sdxl --xformers --api --autolaunch --skip-python-version-check --no-half
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
Python 3.10.11 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.3s (prepare environment: 0.7s, import torch: 5.5s, import gradio: 2.0s, setup paths: 2.4s, initialize shared: 0.2s, other imports: 1.9s, setup codeformer: 0.1s, load scripts: 0.8s, create ui: 0.4s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 2.4s (load weights from disk: 0.5s, create model: 0.4s, apply weights to model: 1.4s).
100%|##########| 20/20 [03:28<00:00, 10.43s/it]
*** Error completing request
*** Arguments: ('task(6ycnc8z2mi0e12x)', 'cute cat', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001EE0C884CA0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
processed = processing.process_images(p)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\processing.py", line 733, in process_images
res = process_images_inner(p)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\processing.py", line 879, in process_images_inner
x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\processing.py", line 594, in decode_latent_batch
sample = decode_first_stage(model, batch[i:i + 1])[0]
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 76, in decode_first_stage
return samples_to_images_tensor(x, approx_index, model)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 58, in samples_to_images_tensor
x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\modules\lowvram.py", line 71, in first_stage_model_decode_wrap
return first_stage_model_decode(z)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 90, in decode
dec = self.decoder(z)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 631, in forward
h = self.mid.attn_1(h)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
return _memory_efficient_attention(
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
op = _dispatch_fw(inp)
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 94, in _dispatch_fw
return _run_priority_list(
File "C:\Users\yejupa\AppData\Roaming\StabilityMatrix\Packages\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 69, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float32)
key : shape=(1, 4096, 1, 512) (torch.float32)
value : shape=(1, 4096, 1, 512) (torch.float32)
attn_bias : <class 'NoneType'>
p : 0.0
`cutlassF` is not supported because:
device=cpu (supported: {'cuda'})
Operator wasn't built - see `python -m xformers.info` for more info
`flshattF` is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
max(query.shape[-1] != value.shape[-1]) > 128
Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
max(query.shape[-1] != value.shape[-1]) > 128
Operator wasn't built - see `python -m xformers.info` for more info
triton is not available
`smallkF` is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
Operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 512
---
I have no idea what the problem is ㅠㅠ
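Note on the log above: the xFormers warning says it was built for "PyTorch 2.0.1+cu118", while the environment actually has "2.0.1+cpu". A "+cpu" build suffix means no CUDA kernels are compiled in, so every xFormers attention op the dispatcher tries (cutlassF, flshattF, ...) is rejected with "device=cpu (supported: {'cuda'})". A minimal sketch of that suffix check (the helper function is hypothetical, written just to illustrate the mismatch; the version strings are taken from the log):

```python
def has_cuda_build(torch_version: str) -> bool:
    """Return True if a torch version string advertises a CUDA build.

    CUDA builds carry a local suffix like "+cu118"; CPU-only builds
    carry "+cpu" (or no suffix at all).
    """
    _, _, local = torch_version.partition("+")
    # "cu" followed by digits, e.g. "cu118"; rejects "cpu" and "".
    return local.startswith("cu") and local[2:].isdigit()

print(has_cuda_build("2.0.1+cpu"))    # installed build -> False
print(has_cuda_build("2.0.1+cu118"))  # what xFormers expects -> True
```

Running `python -m xformers.info` inside the webui's venv, as the error message itself suggests, prints the same build information.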