
Issue: object of type 'NoneType' has no len() #8

Open
tholiite opened this issue Oct 21, 2024 · 6 comments

@tholiite

ComfyUI Error Report

Error Details

  • Node Type: JoyCaption2_simple
  • Exception Type: TypeError
  • Exception Message: object of type 'NoneType' has no len()

Stack Trace

  File "G:\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "G:\ComfyUI-aki-v1.4\execution.py", line 228, in get_output_data
    output = merge_result_data(results, obj)

  File "G:\ComfyUI-aki-v1.4\execution.py", line 175, in merge_result_data
    output_is_list = [False] * len(results[0])
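For context, the TypeError at the bottom of that trace is just Python refusing `len(None)`: `merge_result_data` received `None` where it expected the node's output tuple. A minimal sketch of the failure mode (simplified for illustration, not ComfyUI's actual code; the guard and its message are my own):

```python
# Minimal sketch (not ComfyUI's actual code) of what execution.py does in
# merge_result_data, plus a guard for the None case behind this report.

def merge_result_data(results):
    # results is a list of per-item output tuples from the node. If the
    # node's function returned None (e.g. the model never loaded),
    # results[0] is None and len(None) raises the reported TypeError.
    if results and results[0] is None:
        raise RuntimeError("node produced no output (did the model load?)")
    return [False] * len(results[0])

# The raw error, exactly as in the stack trace:
try:
    [False] * len(None)
except TypeError as e:
    print(e)  # object of type 'NoneType' has no len()
```

So every fix suggested below (peft version, model files, device forcing) is really about making the node return a non-None result.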

System Information

  • ComfyUI Version: v0.2.3
  • Arguments: main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.4.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 10736762880
    • VRAM Free: 5725170576
    • Torch VRAM Total: 7516192768
    • Torch VRAM Free: 3936897936
@TTPlanetPig (Owner)

This issue is most likely due to the version of peft.
Try updating to the required 0.12.0.
For the ComfyUI Windows version, run this under the python_embeded folder:
./python.exe -m pip install peft==0.12.0
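To confirm the update took, a quick hypothetical check (the function name and exact-match comparison are my own, not part of the node) using only the standard library:

```python
# Hypothetical helper: verify the installed peft version matches the
# 0.12.0 requirement mentioned above. Uses only the standard library.
from importlib import metadata

def peft_version_ok(required="0.12.0"):
    try:
        return metadata.version("peft") == required
    except metadata.PackageNotFoundError:
        # peft is not installed at all in this Python environment.
        return False

print("peft ok:", peft_version_ok())
```

Run it with the same python.exe from the python_embeded folder; a different interpreter may report a different (or missing) peft.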

@TTPlanetPig (Owner)

Also, please check that you have correctly placed the Joy Caption model in the folder.
This one will not be auto-downloaded!
- Joy Caption LoRA: https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two — download all files and place them in ComfyUI\models\Joy_caption\cgrkzexw-599808. I suggest using huggingface-cli to avoid mistakes in the filenames.
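To rule out missing files, a small hypothetical helper (mine, not part of the node) can report what is absent from the model folder. Which individual files are required is an assumption on your side, so pass your own list; the subfolder name comes from the instructions above:

```python
# Hypothetical sanity check for the model folder layout described above.
# Pass the paths (relative to base_dir) that you expect after downloading.
import os

def find_missing(base_dir, expected_paths):
    """Return the relative paths from expected_paths that do not exist."""
    return [p for p in expected_paths
            if not os.path.exists(os.path.join(base_dir, p))]
```

For example, `find_missing(r"ComfyUI\models\Joy_caption", ["cgrkzexw-599808"])` should return an empty list if the subfolder is in place.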

@KannManMachen

KannManMachen commented Oct 24, 2024

Hey @TTPlanetPig – first of all THX a lot for this great custom node :) ! I am currently testing different vision models against each other. JoyCaption is really strong!

I had the same error: object of type 'NoneType' has no len()

I found the instructions for installing the JoyCaption lora (is it really a “lora”?) somewhat misleading.

If you visit the URL https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two, you get to a Hugging Face Space (an app) and not to the overview of the stored files. That would be this link: https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two/tree/main.

And then I was confused by the subfolder “cgrkzexw-599808” inside the folder structure on the HF page, because you had written that the corresponding data in ComfyUI should be copied to a subfolder “cgrkzexw-599808” (...\models\Joy_caption\cgrkzexw-599808). At first I didn't know whether I could ignore the files on the first level and only needed the files in the “cgrkzexw-599808” folder. Answer: you really need all files from https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two/tree/main, including the “cgrkzexw-599808” folder.

However, this only seemed to matter for the manual download option. In the end, I was able to download the complete set of files with the command git clone https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two C:\test.

Cheers

@AssassinsLament

I have had this issue before, and after checking to make sure all the models are in the correct place, I realized that it wasn't loading the LLM model into memory. Because of how I set my clip and vae onto specific cpu/gpu devices, somehow it couldn't load the unsloth model into cuda:0... once I disabled the clip and vae forcing, it worked fine.

@leoleelxh

> I have had this issue before, and after checking to make sure all the models are in the correct place, I realized that it wasn't loading the LLM model into memory. Because of how I set my clip and vae onto specific cpu/gpu devices, somehow it couldn't load the unsloth model into cuda:0... once I disabled the clip and vae forcing, it worked fine.

Same issue! How do I disable the clip and vae forcing?

@MarisKay

MarisKay commented Nov 18, 2024

Unbelievable: no matter what I try, it still gives me the same error mentioned above. Linux, Python 3.12. Everything was done according to the instructions, and the files were placed according to the provided directory structure. No luck; something is terribly wrong with either the testing or the installation instructions. Please fix! Thank you.

6 participants