
Questions about graph-constrained decoding (following the closed issue GCR Compare with RoG) #5

Open
liane886 opened this issue Dec 28, 2024 · 1 comment

Comments

@liane886

Hi,

Thank you for your valuable contribution. I have a question regarding how the LLM generates appropriate KG-grounded reasoning paths. Specifically, I noticed that the KG information does not appear to be explicitly provided as a prompt when the LLM is tasked with generating the reasoning paths.

How does the LLM manage to produce the relations and entities directly from the KG in this case? Is this because the model has been fine-tuned using ground-truth paths, or is there another mechanism involved?

Best.

@RManLuo
Owner

RManLuo commented Dec 30, 2024

Hi, thanks for your interest in our work. In GCR, we adopt graph-constrained decoding to ensure the faithfulness of LLM generations, which is realized by the KG-Trie: the trie encodes valid KG paths, so the LLM can only generate tokens that continue a path actually present in the KG. You can find more details in our paper.
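To make the mechanism concrete, here is a minimal sketch of trie-constrained decoding. This is illustrative only and not the GCR codebase's actual API: the `KGTrie` class, `score_fn` callback, and greedy loop are all assumptions. The idea is that tokenized KG paths are inserted into a trie, and at each decoding step the model's choice is masked to the tokens that extend a valid path — so no KG context needs to appear in the prompt for the output to stay grounded.

```python
# Illustrative sketch of graph-constrained decoding with a KG-Trie.
# Names (KGTrie, score_fn) are hypothetical, not from the GCR repository.

class KGTrie:
    """A trie over tokenized KG reasoning paths."""

    def __init__(self):
        self.root = {}

    def add_path(self, token_ids):
        """Insert one tokenized KG path into the trie."""
        node = self.root
        for t in token_ids:
            node = node.setdefault(t, {})

    def allowed_next_tokens(self, prefix):
        """Return the tokens that can legally follow `prefix`."""
        node = self.root
        for t in prefix:
            if t not in node:
                return []
            node = node[t]
        return list(node.keys())


def constrained_greedy_decode(score_fn, trie, max_len=16):
    """Greedy decoding where each step is masked to trie-valid tokens.

    score_fn(prefix, token) stands in for the LLM's next-token score.
    """
    seq = []
    for _ in range(max_len):
        allowed = trie.allowed_next_tokens(seq)
        if not allowed:  # reached the end of every matching KG path
            break
        # The model ranks only the KG-valid continuations.
        seq.append(max(allowed, key=lambda t: score_fn(seq, t)))
    return seq


# Toy usage: two KG paths, with the "model" preferring larger token ids.
trie = KGTrie()
trie.add_path([1, 2, 3])  # e.g. (entity, relation, entity) as token ids
trie.add_path([1, 4])
result = constrained_greedy_decode(lambda prefix, t: t, trie)
```

Even though the scoring function here would happily emit any token, the decode can only ever produce `[1, 2, 3]` or `[1, 4]` — truthfulness is enforced by the mask, not by the prompt. In practice the trie is built over the real tokenizer's ids for paths within a few hops of the question entities.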
