Follow up prompts #430

Open
jesse-lane-ai opened this issue Jul 1, 2024 · 6 comments
Labels
enhancement (New feature or request), stale (Issue has not had recent activity or appears to be solved; stale issues will be automatically closed)

Comments

@jesse-lane-ai

It's not apparent how to run a series of prompts against the knowledge graph, for example if I wanted to ask several questions. I don't want to make multiple API calls to the same website. Maybe I'm not understanding something.

How do I save the knowledge graph and then iterate prompt requests on it without calling the website again?

@jesse-lane-ai
Author

Is there a workflow to just return the graph?

@jesse-lane-ai
Author

Also, what testing has been done on passing raw source HTML to the source parameter?

@DiTo97
Collaborator

DiTo97 commented Jul 2, 2024

It's not apparent how to run a series of prompts against the knowledge graph, for example if I wanted to ask several questions. I don't want to make multiple API calls to the same website. Maybe I'm not understanding something.

How do I save the knowledge graph and then iterate prompt requests on it without calling the website again?

hi @jesse-lane-ai,

one way to achieve this is to use the cache attribute to store the contents of a website: the cached content is reused automatically under the hood, so you only call the language model once per question instead of re-running the whole pipeline (including the website fetch) every time.

see more on the cache in the graph config's additional parameters section of the documentation.
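As a rough illustration of that workflow, here is a minimal sketch of running several prompts against the same site with caching enabled. The exact cache key name (shown here as "cache_path") and the LLM settings are assumptions, not the confirmed option names; check the additional parameters section of the docs for the real ones.

```python
from scrapegraphai.graphs import SmartScraperGraph

# Shared config; the cache-related key name ("cache_path") is an assumption --
# see the graph config's additional parameters in the docs for the actual option.
graph_config = {
    "llm": {"model": "openai/gpt-4o-mini", "api_key": "YOUR_API_KEY"},  # placeholder model settings
    "cache_path": "./cache",  # reuse fetched content between runs instead of re-fetching
    "verbose": False,
}

questions = [
    "What products are listed on the page?",
    "What is the contact email?",
]

# Each run is expected to reuse the cached website content,
# so only the language-model step repeats per question.
for question in questions:
    graph = SmartScraperGraph(
        prompt=question,
        source="https://example.com",
        config=graph_config,
    )
    print(graph.run())
```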

@DiTo97
Collaborator

DiTo97 commented Jul 2, 2024

Also, what testing has been done on passing raw source HTML to the source parameter?

that should always be possible in the fetch node

any graph using it should work just fine
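For reference, a minimal sketch of passing a raw HTML string as the source instead of a URL; the LLM settings are placeholders, and the expectation (per the comment above) is that the fetch node detects that the source is HTML rather than a URL.

```python
from scrapegraphai.graphs import SmartScraperGraph

raw_html = """
<html>
  <body>
    <h1>ACME Widgets</h1>
    <p>Contact: sales@acme.example</p>
  </body>
</html>
"""

graph = SmartScraperGraph(
    prompt="What is the contact email?",
    source=raw_html,  # raw HTML string instead of a URL
    config={"llm": {"model": "openai/gpt-4o-mini", "api_key": "YOUR_API_KEY"}},  # placeholder settings
)

print(graph.run())
```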

@DiTo97
Collaborator

DiTo97 commented Jul 2, 2024

Is there a workflow to just return the graph?

you mean returning the vector store / knowledge graph?

not yet, but we have already had some requests for it, @VinciGit00

@f-aguzzi added the enhancement label on Aug 3, 2024

dosubot bot commented Jan 9, 2025

Hi, @jesse-lane-ai. I'm Dosu, and I'm helping the Scrapegraph-ai team manage their backlog. I'm marking this issue as stale.

Issue Summary:

  • You raised a concern about making multiple prompts on a knowledge graph without repeated API calls.
  • You are looking for a way to save the knowledge graph for local iterations.
  • @DiTo97 suggested using the cache attribute to store website contents for efficiency.
  • @DiTo97 confirmed that passing raw source HTML to the fetch node should work with any graph.
  • A workflow to return the vector store or knowledge graph is not yet available but has been requested.

Next Steps:

  • Is this issue still relevant to the latest version of the Scrapegraph-ai repository? If so, please comment to keep the discussion open.
  • If there is no further activity, this issue will be automatically closed in 7 days.

Thank you for your understanding and contribution!

@dosubot added the stale label on Jan 9, 2025