[Question]: BAAI/bge-small-en-v1.5 Fails to Load for HuggingFaceEmbedding in LlamaIndex (Stuck Without Progress) #17297
Comments
Are you uploading a lot of data? It can be very slow, especially without a GPU.
I use the data from the sample notebook docs/docs/examples/cookbooks/GraphRAG_v2.ipynb. I adjusted the code to use HuggingFace LLM and embedding models instead of OpenAI.
Hey there, @DylanNguyen2111! I'm here to help you with any bugs, questions, or contributions 🤖 There are a few steps you can take to troubleshoot the BAAI/bge-small-en-v1.5 model stalling during initialization or embedding generation in LlamaIndex.

If these steps do not resolve the issue, consider testing with a different model to determine whether the problem is specific to BAAI/bge-small-en-v1.5 or a broader issue with the setup. To continue talking to Dosu, mention @dosu.
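For example, a minimal standalone check that isolates the embedding model from the index build might look like the following sketch; only the model name comes from this issue, everything else is illustrative:

```python
import time

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Load the embedding model on its own, outside of any index construction.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Time a single embedding call; even on CPU this should finish quickly.
start = time.time()
vector = embed_model.get_text_embedding("A short sanity-check sentence.")
print(f"dim={len(vector)}, took {time.time() - start:.2f}s")
```

If this completes quickly, the stall is more likely in the index-building stage (LLM calls, extraction) than in the embedding model itself.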
I mean the data from the sample notebook in this LlamaIndex GitHub repo. I don't think it should take up to 10 minutes without any progress with this amount of data, and my laptop is a MacBook M2 Pro. I'm running this notebook on CPU.

```python
news = pd.read_csv(...)  # argument truncated in the paste
news.head()
```
Can you give some minimal code to reproduce? It works fine for me tbh (also on an M2). It also works in Google Colab 🤔
Here is the entire code I used to run on my MacBook in VS Code:

```python
# -*- coding: utf-8 -*-
"""GraphRAG_v2_with_Neo4j.ipynb

Automatically generated by Colab.
"""
```

## GraphRAG Implementation with LlamaIndex - V2

GraphRAG (Graphs + Retrieval Augmented Generation) combines the strengths of Retrieval Augmented Generation (RAG) and Query-Focused Summarization (QFS) to effectively handle complex queries over large text datasets. While RAG excels at fetching precise information, it struggles with broader queries that require thematic understanding, a challenge that QFS addresses but cannot scale well. GraphRAG integrates these approaches to offer responsive and thorough querying capabilities across extensive, diverse text corpora.

This notebook provides guidance on constructing the GraphRAG pipeline using the LlamaIndex PropertyGraph abstractions with Neo4j. It updates the GraphRAG pipeline to v2; if you haven't checked v1 yet, you can find it here. The following are the updates to the existing implementation:
## Installation
```python
!pip install llama-index llama-index-graph-stores-neo4j graspologic numpy==1.24.4 scipy==1.12.0 future
```

## Load Data

We will use a sample news article dataset retrieved from Diffbot, which Tomaz has conveniently made available on GitHub for easy access. The dataset contains 2,500 samples; for ease of experimentation, we will use 50 of these samples.

```python
import pandas as pd

news = pd.read_csv(...)  # argument truncated in the paste
news.head()
```

Prepare documents as required by LlamaIndex:

```python
documents = [...]  # list contents truncated in the paste
```
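The documents list was cut off above; in the cookbook this step typically combines each article's title and text into one `Document`. A hypothetical reconstruction, assuming the dataframe exposes `title` and `text` columns:

```python
from llama_index.core import Document

# Hypothetical reconstruction of the truncated list: one Document per news row,
# combining the article title and body text.
documents = [
    Document(text=f"{row['title']}: {row['text']}")
    for _, row in news.iterrows()
]
```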
## Setup API Key and LLM

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-.."

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4")

!pip install llama-index
!pip install llama-index-llms-huggingface
!pip install llama-index-embeddings-huggingface
!pip install llama-index-embeddings-huggingface-api

from llama_index.llms.huggingface import HuggingFaceLLM

hf_token = ""

# Initialize the Hugging Face LLM
llm = HuggingFaceLLM(...)  # arguments truncated in the paste
```

## GraphRAGExtractor

The GraphRAGExtractor class is designed to extract triples (subject-relation-object) from text and enrich them by adding descriptions for entities and relationships to their properties using an LLM. Here's a breakdown of its functionality:

Key Components:

Main Methods:
Extraction Process: For each input node (chunk of text):
NOTE: In the current implementation, we are using only relationship descriptions. In the next implementation, we will utilize entity descriptions during the retrieval stage.

```python
import asyncio

import nest_asyncio

nest_asyncio.apply()

from typing import Any, List, Callable, Optional, Union, Dict

from llama_index.core.async_utils import run_jobs
from llama_index.core.schema import TransformComponent


class GraphRAGExtractor(TransformComponent):
    ...  # class body truncated in the paste
```
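For context, an extractor like this ultimately turns each parsed triple into LlamaIndex graph objects. A minimal sketch of that step with illustrative values (the real class body was truncated above):

```python
from llama_index.core.graph_stores.types import EntityNode, Relation

# Illustrative triple: (subject, relation, object) plus LLM-generated descriptions.
subject = EntityNode(
    name="Neo4j",
    label="ORGANIZATION",
    properties={"entity_description": "Graph database vendor."},
)
obj = EntityNode(
    name="GraphRAG",
    label="CONCEPT",
    properties={"entity_description": "RAG variant built on a property graph."},
)
relation = Relation(
    label="SUPPORTS",
    source_id=subject.id,
    target_id=obj.id,
    properties={"relationship_description": "Neo4j stores the GraphRAG property graph."},
)
```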
"""## GraphRAGStore The The class uses community detection algorithms to group related nodes in the graph and then it generates summaries for each community using an LLM. Key Methods:
```python
import re

from llama_index.core.llms import ChatMessage


class GraphRAGStore(Neo4jPropertyGraphStore):
    ...  # class body truncated in the paste
```
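The community detection relies on graspologic (installed above). A sketch of how hierarchical Leiden is typically applied to a NetworkX graph; the toy graph and `max_cluster_size` value are illustrative:

```python
import networkx as nx
from graspologic.partition import hierarchical_leiden

# Toy graph standing in for the entity/relation graph built by the extractor.
g = nx.Graph()
g.add_edges_from([("Neo4j", "GraphRAG"), ("GraphRAG", "RAG"), ("RAG", "QFS")])

# Partition nodes into hierarchical communities; the size cap is illustrative.
clusters = hierarchical_leiden(g, max_cluster_size=5)
for item in clusters:
    print(item.node, item.cluster, item.level)
```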
"""## GraphRAGQueryEngine The GraphRAGQueryEngine class is a custom query engine designed to process queries using the GraphRAG approach. It leverages the community summaries generated by the GraphRAGStore to answer user queries. Here's a breakdown of its functionality: Main Components:
Key Methods:
Query Processing Flow:
Example usage:
""" from llama_index.core.query_engine import CustomQueryEngine import re class GraphRAGQueryEngine(CustomQueryEngine):
"""## Build End to End GraphRAG Pipeline Now that we have defined all the necessary components, let’s construct the GraphRAG pipeline:
Create nodes/ chunks from the text.""" from llama_index.core.node_parser import SentenceSplitter splitter = SentenceSplitter( len(nodes) """### Build ProperGraphIndex using KG_TRIPLET_EXTRACT_TMPL = """ -Steps-
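A sketch of how this step typically looks in the cookbook, with illustrative chunking parameters standing in for the truncated ones:

```python
from llama_index.core.node_parser import SentenceSplitter

# Illustrative parameters; the actual values were cut off in the paste.
splitter = SentenceSplitter(chunk_size=1024, chunk_overlap=20)
nodes = splitter.get_nodes_from_documents(documents)
print(len(nodes))
```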
### Build PropertyGraphIndex

```python
KG_TRIPLET_EXTRACT_TMPL = """
-Steps-
...  (steps truncated in the paste)

Format each relationship as ("relationship"$$$$<source_entity>$$$$<target_entity>$$$$$$$$<relationship_description>)

-Real Data-
...  (remainder of the template truncated in the paste)
"""

entity_pattern = r'("entity"$$$$"(.+?)"$$$$"(.+?)"$$$$"(.+?)")'


def parse_fn(response_str: str) -> Any:
    ...  # function body truncated in the paste


kg_extractor = GraphRAGExtractor(...)  # arguments truncated in the paste
```

## Docker Setup and Neo4j Setup

To launch Neo4j locally, first ensure you have Docker installed. Then, you can launch the database with the following Docker command (the command itself was cut off in the paste; a representative sketch follows).
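An assumed, minimal invocation that exposes Neo4j's default HTTP and Bolt ports (7474 and 7687); this is not necessarily the exact command from the notebook:

```bash
docker run \
    -p 7474:7474 -p 7687:7687 \
    --name neo4j \
    neo4j:latest
```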
From here, you can open the DB at http://localhost:7474/. On this page, you will be asked to sign in. Use the default username/password of neo4j and neo4j. Once you log in for the first time, you will be asked to change the password.

```python
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore
```

Note: used to be …
Do you need any further information from me?
Question Validation
Question
I am trying to use the BAAI/bge-small-en-v1.5 model as the embedding model with LlamaIndex's HuggingFaceEmbedding integration. The model loads successfully via Hugging Face when used independently, but when I integrate it into LlamaIndex, the process stalls indefinitely during the initialization or embedding generation phase.
Here is the code snippet I used:
```python
import torch

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Load BAAI/bge-small-en-v1.5 as the embedding model
embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    device="cuda" if torch.cuda.is_available() else "cpu",
)
```
```python
from llama_index.core import PropertyGraphIndex

index = PropertyGraphIndex(
    nodes=nodes,
    kg_extractors=[kg_extractor],
    property_graph_store=graph_store,
    embed_model=embed_model,
    show_progress=True,
)
```