# Token indices sequence length #856
I have the same issue as well, especially when working with Ollama Llama models.
For big websites you should use OpenAI.
## [1.35.0](v1.34.2...v1.35.0) (2025-01-06)

### Features

* ⏰ added graph timeout and fixed model_tokens param ([#810](#810), [#856](#856), [#853](#853)) ([01a331a](01a331a))
* ⛏️ enhanced contribution and precommit added ([fcbfe78](fcbfe78))
* add codequality workflow ([4380afb](4380afb))
* add timeout and retry_limit in loader_kwargs ([#865](#865), [#831](#831)) ([21147c4](21147c4))
* serper api search ([1c0141f](1c0141f))

### Bug Fixes

* browserbase integration ([752a885](752a885))
* local html handling ([2a15581](2a15581))

### CI

* **release:** 1.34.2-beta.1 [skip ci] ([f383e72](f383e72)), closes [#861](#861)
* **release:** 1.34.2-beta.2 [skip ci] ([93fd9d2](93fd9d2))
* **release:** 1.34.3-beta.1 [skip ci] ([013a196](013a196)), closes [#861](#861)
* **release:** 1.35.0-beta.1 [skip ci] ([c5630ce](c5630ce)), closes [#865](#865) [#831](#831)
* **release:** 1.35.0-beta.2 [skip ci] ([f21c586](f21c586))
* **release:** 1.35.0-beta.3 [skip ci] ([cb54d5b](cb54d5b))
* **release:** 1.35.0-beta.4 [skip ci] ([6e375f5](6e375f5)), closes [#810](#810) [#856](#856) [#853](#853)
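A hedged sketch of how the new options from this release might be wired into a graph config. The key names (`timeout`, `loader_kwargs`, `retry_limit`, `model_tokens`) are taken only from the changelog entries above and may differ in your installed version:

```python
# Sketch based solely on the 1.35.0 changelog; key names are assumed
# from the release notes, not verified against the source.
graph_config = {
    "llm": {"model": "ollama/llama3"},
    "timeout": 120,  # graph-level timeout added in this release
    "loader_kwargs": {
        "timeout": 60,      # per-page load timeout
        "retry_limit": 3,   # retries for failed page loads
    },
}
```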
@Qunlexie @saboor2632 there is indeed an issue with the method used to calculate chunks for Ollama models in tokenizer_ollama.py. We are using …
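The comment above is truncated, but for context, here is a minimal sketch, not the actual tokenizer_ollama.py code, of how chunking with a Hugging Face tokenizer whose `model_max_length` is 1024 (e.g. GPT-2) reproduces exactly the warning reported in this issue. The `chunk_text` helper is hypothetical:

```python
# Hypothetical sketch, not the real tokenizer_ollama.py implementation.
from transformers import AutoTokenizer

def chunk_text(text: str, model_tokens: int = 1024) -> list[str]:
    # GPT-2 is used as a stand-in tokenizer; its 1024-token
    # model_max_length is what triggers the warning in this issue.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    # This call emits "Token indices sequence length is longer than the
    # specified maximum sequence length..." when len(token_ids) > 1024.
    token_ids = tokenizer.encode(text)
    # Split the token ids into windows that fit the target context size,
    # then decode each window back into text for the LLM.
    return [
        tokenizer.decode(token_ids[i : i + model_tokens])
        for i in range(0, len(token_ids), model_tokens)
    ]
```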
Is this a bug that should be raised with LangChain? I believe the token limit is the cause of the issue, where only part of the web page is retrieved rather than all of it.
How would this work? Do you have a practical example of how this would work with ScrapeGraph? I really believe that getting Ollama to work properly is key for open source. Happy to get your thoughts.
I'm facing the same problem; hoping the author can fix it.
I am facing this issue whenever I run my code for scraping BBC or any other site.

Error:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (5102 > 1024). Running this sequence through the model will result in indexing errors
```

It does not give me complete results.
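One possible mitigation, sketched under the assumption that the `model_tokens` parameter from the 1.35.0 changelog above applies to your installed version, is to raise the token budget in the `llm` config so chunking matches the local model's real context window:

```python
# Hedged sketch: not an official fix, just the model_tokens knob from
# the 1.35.0 changelog applied to a SmartScraperGraph config.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/llama3",
        "base_url": "http://localhost:11434",  # default local Ollama endpoint
        "model_tokens": 8192,  # set to your model's actual context window
    },
}

scraper = SmartScraperGraph(
    prompt="List all the article headlines on the page",
    source="https://www.bbc.com",
    config=graph_config,
)
print(scraper.run())
```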