Hi, and thanks for your hard work. I'm trying to generate a prompt, but I get the following error regardless of which LLM I use.
Full stack trace:
```
2024-06-19T13:05:41.382058852Z Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-06-19T13:05:41.382520912Z Seed is too large. Truncating to 32-bit: 3011316651
2024-06-19T13:05:41.382541762Z 2024-06-19 13:05:41,382 - ComfyUI_omost.omost_nodes - WARNING - Seed is too large. Truncating to 32-bit: 3011316651
2024-06-19T13:05:41.387806127Z The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
2024-06-19T13:05:41.387828417Z Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
2024-06-19T13:05:41.393552811Z !!! Exception during processing!!! 'NoneType' object has no attribute 'cdequantize_blockwise_fp32'
2024-06-19T13:05:41.394116180Z Traceback (most recent call last):
2024-06-19T13:05:41.394125810Z   File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
2024-06-19T13:05:41.394144320Z     output_data, output_ui = get_output_data(obj, input_data_all)
2024-06-19T13:05:41.394146820Z   File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
2024-06-19T13:05:41.394149030Z     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
2024-06-19T13:05:41.394152070Z   File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
2024-06-19T13:05:41.394154470Z     results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
2024-06-19T13:05:41.394156520Z   File "/workspace/ComfyUI/custom_nodes/ComfyUI_omost/omost_nodes.py", line 279, in run_llm
2024-06-19T13:05:41.394158670Z     generated_text = self.run_local_llm(
2024-06-19T13:05:41.394160780Z   File "/workspace/ComfyUI/custom_nodes/ComfyUI_omost/omost_nodes.py", line 243, in run_local_llm
2024-06-19T13:05:41.394163650Z     output_ids: torch.Tensor = llm_model.generate(
2024-06-19T13:05:41.394166290Z   File "/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-06-19T13:05:41.394168220Z     return func(*args, **kwargs)
2024-06-19T13:05:41.394170850Z   File "/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1758, in generate
2024-06-19T13:05:41.394173030Z     result = self._sample(
2024-06-19T13:05:41.394175040Z   File "/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2397, in _sample
2024-06-19T13:05:41.394176990Z     outputs = self(
2024-06-19T13:05:41.394179450Z   File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2024-06-19T13:05:41.394181710Z     return forward_call(*args, **kwargs)
2024-06-19T13:05:41.394184090Z   File "/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
2024-06-19T13:05:41.394186280Z     output = module._old_forward(*args, **kwargs)
2024-06-19T13:05:41.394188250Z   File "/venv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1164, in forward
2024-06-19T13:05:41.394190370Z     outputs = self.model(
2024-06-19T13:05:41.394192430Z   File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2024-06-19T13:05:41.394194630Z     return forward_call(*args, **kwargs)
2024-06-19T13:05:41.394196650Z   File "/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
2024-06-19T13:05:41.394198700Z     output = module._old_forward(*args, **kwargs)
```
Running on Runpod with a 4090.
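For what it's worth, the `'NoneType' object has no attribute 'cdequantize_blockwise_fp32'` failure usually means bitsandbytes could not load its compiled CUDA library, so the native-library handle it calls into is `None`, which often comes down to a CUDA-version mismatch or a CPU-only wheel in the container. A minimal check, assuming the bitsandbytes 0.43-era layout where that handle is exposed as `bitsandbytes.cextension.lib` (other versions lay this out differently):

```python
# Sketch: check whether bitsandbytes found its CUDA binary. Assumes the
# 0.43-era layout where the loaded native library is
# bitsandbytes.cextension.lib; other versions differ.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

import bitsandbytes as bnb
from bitsandbytes import cextension

print("bitsandbytes:", bnb.__version__)
print("native lib handle:", cextension.lib)  # None here reproduces the error
print("has cdequantize_blockwise_fp32:",
      hasattr(cextension.lib, "cdequantize_blockwise_fp32"))
```

Running `python -m bitsandbytes` should also print the library's own CUDA-setup diagnostics and report which binary, if any, it picked up.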
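Side note: the `Seed is too large. Truncating to 32-bit` warning near the top of the log looks unrelated to the crash; it just clamps the seed to what a 32-bit RNG accepts, roughly like the sketch below (the actual ComfyUI_omost code may differ):

```python
# Illustrative only: how an oversized seed is typically clamped to 32 bits.
# The real ComfyUI_omost implementation may differ.
MAX_SEED = 2**32 - 1  # largest value a 32-bit RNG seed can hold

def truncate_seed(seed: int) -> int:
    if seed > MAX_SEED:
        truncated = seed & MAX_SEED  # keep only the low 32 bits
        print(f"Seed is too large. Truncating to 32-bit: {truncated}")
        return truncated
    return seed
```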
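The `attention_mask`/`pad_token_id` warnings are standard transformers noise for open-ended generation and are probably also unrelated; they go away when both are passed to `generate()` explicitly. Illustrative sketch only; the model name and variable names here are hypothetical, not the omost node's actual code:

```python
# Hedged sketch: silencing the attention_mask / pad_token_id warnings by
# passing both to generate(). Model name and variables are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Meta-Llama-3-8B-Instruct"  # hypothetical choice
tokenizer = AutoTokenizer.from_pretrained(name)
llm_model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

inputs = tokenizer("a cozy cabin in the woods", return_tensors="pt").to(llm_model.device)
output_ids = llm_model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # silences the first warning
    pad_token_id=tokenizer.eos_token_id,      # silences the second warning
    max_new_tokens=64,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```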