Thank you for your work on this project.
The WebUI, which used to launch fine, can no longer start. Sorry for the trouble, but I would appreciate it if you could look into this.
What happened
When I clicked "Launch WebUI", the following error started appearing:
Python 3.10.0 | packaged by conda-forge | (default, Nov 10 2021, 13:20:59) [MSC v.1916 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Launching Web UI with arguments: --ckpt-dir C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\models\Stable-diffusion --embeddings-dir C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\embeddings --hypernetwork-dir C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\models\hypernetworks --xformers --port 60031
False
===================================BUG REPORT===================================
C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
warn(msg)
C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
==============================================================================
The following directories listed in your path were found to be non-existent: {WindowsPath('C')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\transformers\utils\import_utils.py", line 1086, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1050, in _gcd_import
File "", line 1027, in _find_and_load
File "", line 1006, in _find_and_load_unlocked
File "", line 688, in _load_unlocked
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\transformers\modeling_utils.py", line 85, in
from accelerate import version as accelerate_version
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\accelerate\__init__.py", line 3, in
from .accelerator import Accelerator
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\accelerate\accelerator.py", line 35, in
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\accelerate\checkpointing.py", line 24, in
from .utils import (
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\accelerate\utils\__init__.py", line 131, in
from .bnb import has_4bit_bnb_layers, load_and_quantize_model
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\accelerate\utils\bnb.py", line 42, in
import bitsandbytes as bnb
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\__init__.py", line 6, in
from . import cuda_setup, utils, research
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in
from . import nn
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in
from .modules import LinearFP8Mixed, LinearFP8Global
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in
from bitsandbytes.optim import GlobalOptimManager
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\bitsandbytes\cextension.py", line 20, in
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\launch.py", line 48, in
main()
File "C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\launch.py", line 44, in main
start()
File "C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\modules\launch_utils.py", line 432, in start
import webui
File "C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\webui.py", line 13, in
initialize.imports()
File "C:\Users\harum\AppData\Roaming\flat\features\stable-diffusion-webui\repository\modules\initialize.py", line 16, in imports
import pytorch_lightning # noqa: F401
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\pytorch_lightning\__init__.py", line 35, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 14, in
from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\pytorch_lightning\callbacks\batch_size_finder.py", line 24, in
from pytorch_lightning.callbacks.callback import Callback
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\pytorch_lightning\callbacks\callback.py", line 25, in
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\pytorch_lightning\utilities\types.py", line 27, in
from torchmetrics import Metric
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\torchmetrics\__init__.py", line 14, in
from torchmetrics import functional # noqa: E402
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\torchmetrics\functional\__init__.py", line 82, in
from torchmetrics.functional.text.bleu import bleu_score
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\torchmetrics\functional\text\__init__.py", line 30, in
from torchmetrics.functional.text.bert import bert_score # noqa: F401
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\torchmetrics\functional\text\bert.py", line 24, in
from torchmetrics.functional.text.helper_embedding_metric import (
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\torchmetrics\functional\text\helper_embedding_metric.py", line 26, in
from transformers import AutoModelForMaskedLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizerBase
File "", line 1075, in _handle_fromlist
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\transformers\utils\import_utils.py", line 1076, in __getattr__
module = self._get_module(self._class_to_module[name])
File "C:\Users\harum\AppData\Roaming\flat\miniconda\envs\stable-diffusion-webui\lib\site-packages\transformers\utils\import_utils.py", line 1088, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
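Note that the remediation steps printed by bitsandbytes above (find, LD_LIBRARY_PATH, ~/.bashrc) assume Linux, while this environment is Windows, where the CUDA runtime ships as cudart64_*.dll rather than libcudart.so. As a rough sketch only (the function name and the exact DLL pattern are illustrative assumptions, not part of bitsandbytes), one could check whether the CUDA runtime DLL is visible on PATH like this:

```python
import glob
import os

def find_cudart_dlls():
    """Search every directory on PATH for CUDA runtime DLLs (cudart64_*.dll).

    Returns a list of matching paths; an empty list suggests the CUDA
    runtime is not visible to processes launched from this environment.
    """
    hits = []
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        if not entry or not os.path.isdir(entry):
            continue  # skip empty or non-existent PATH entries
        hits.extend(glob.glob(os.path.join(entry, "cudart64_*.dll")))
    return hits

if __name__ == "__main__":
    found = find_cudart_dlls()
    if found:
        print("CUDA runtime found at:", *found, sep="\n  ")
    else:
        print("No cudart64_*.dll found on PATH")
```

If this prints nothing, adding the CUDA Toolkit's bin directory to PATH (the Windows counterpart of LD_LIBRARY_PATH) would be the analogue of the log's Solution 1.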
Circumstances of the bug
Around noon on 2023/09/01, while I was in the middle of trying to install a LoRA I made myself, the WebUI suddenly stopped launching from flat.
Before that, it had launched without any problems.
It can still be launched from "webui-user.bat", however.
I tried rebooting several times, but that did not resolve the issue.
Do I need to install this thing called CUDA?
I'm not very familiar with programming or PC environments, so I apologize that this is more of a question than a bug report.
Thank you in advance.