To the best of my knowledge, the LlamaParse constructor's split_by_page parameter forces the parser to output either one Document per page (if the value is true) or one Document for the entire PDF (if the value is false).
This may be too coarse-grained for some applications. It would be great to let the user set a chunk size and a chunking strategy, similar to what Unstructured does.
This would help especially when the pages to be extracted are dense: the resulting chunks are very large, and the embedding vector becomes too generic.
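A minimal sketch of the desired behavior, re-chunking a dense page's text into smaller pieces. The splitter here is a plain overlapping character window written for illustration, not LlamaParse's API or Unstructured's actual chunking strategy; the function name and parameters are hypothetical:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping character windows.

    A stand-in for a real chunking strategy (e.g. Unstructured-style
    chunking); parameter names are illustrative, not a proposed API.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Pretend this is the text of one dense PDF page: instead of embedding
# it as a single oversized Document, it yields several focused chunks.
page_text = "word " * 100
chunks = chunk_text(page_text, chunk_size=120, overlap=10)
```

Until a parameter like this exists, the same effect can be achieved by post-processing the per-page Documents that split_by_page=True already produces.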