Replies: 7 comments
-
Looking at the API Reference:

from haystack.nodes import FARMReader
from haystack.schema import Document

reader = FARMReader(model_name_or_path="/model_path", use_gpu=True)
context = "passage of data to extract the answer"
question_list = ["Question 1", "Question 2", ...]
result = reader.predict_batch(queries=question_list, documents=[Document(content=context)])

Does it work?
-
Hi @anakin87
-
Even when using predict_batch and run_batch, the processing time increases.
-
@julian-risch any ideas?
-
@anakin87, @julian-risch any update?
-
@anakin87 any update?
-
Is it possible in Haystack version >2.0? |
-
Hello, I want to know whether there is any way to run predictions for a whole batch of questions at once, rather than one question at a time.

I am using:

from haystack.nodes import FARMReader

reader = FARMReader(model_name_or_path="/model_path", use_gpu=True)
context = "passage of data to extract the answer"
question_list = "list of 50 questions I have trained my model on"
for i in question_list:
    answer = reader.predict_on_texts(i, [context])

I want to process the 50 questions in one call, not by iterating over them one by one in a loop. I have tried predict_batch and run_batch, but the processing time is the same or higher. How can I run multiple queries against a single document in less than a minute or so?
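For what it's worth, the difference between the loop above and a batched call can be sketched with a hypothetical mock (a minimal sketch; none of the function bodies below are real Haystack APIs, they only stand in for the calls named in the comments):

```python
# Hypothetical mock, not Haystack code: why one batched call can beat a
# per-question loop. A GPU-backed reader can push all 50 question/context
# pairs through the model in a few large forward passes, while a loop pays
# per-call overhead (tokenization setup, transfer, tiny batches) 50 times.

def predict_one(question, context):
    # stand-in for reader.predict_on_texts(question, [context])
    return {"query": question, "context": context}

def predict_batch(questions, context):
    # stand-in for reader.predict_batch(queries=questions, documents=[...]):
    # all pairs are assembled up front so the model sees them as one batch
    return [{"query": q, "context": context} for q in questions]

questions = [f"Question {i}" for i in range(1, 51)]
context = "passage of data to extract the answer"

looped = [predict_one(q, context) for q in questions]  # 50 separate calls
batched = predict_batch(questions, context)            # 1 call

assert batched == looped  # same predictions either way
assert len(batched) == 50
```

The predictions are identical either way; batching only changes how much per-call overhead the model pays, which is why it mainly helps on GPU with a large enough batch_size.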