The InfraNodus API can be used to generate questions that help you better understand a text and see how it could be developed further in interesting ways.
To do that, you can either save an existing text as an InfraNodus graph or send the text directly with the API request. The response is a set of AI-generated questions that you can then add to your LLM instructions.
Step 1: Add a text to InfraNodus
For example, the contents of this support portal.
Step 2: Create a POST request to the following endpoint:
https://infranodus.com/api/v1/graphAndAdvice?doNotSave=true&addStats=true&optimize=gap&includeGraphSummary=true&includeGraph=false
The POST body parameters to use:
{
"requestMode": "question",
"modelToUse": "gpt-4o-mini",
"name": "support_nodus_labs",
"aiTopics": "true"
}
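The request above can be sketched in Python using only the standard library. This is a minimal, hedged example: it assumes Bearer-token authentication and assumes the text is sent in a `text` field of the JSON body (check your InfraNodus account settings and the API documentation for the exact field names).

```python
import json
import urllib.request

# Endpoint from Step 2, with the gap-optimization query parameters.
API_URL = ("https://infranodus.com/api/v1/graphAndAdvice"
           "?doNotSave=true&addStats=true&optimize=gap"
           "&includeGraphSummary=true&includeGraph=false")

def build_request(api_token: str, text: str) -> urllib.request.Request:
    """Assemble the POST request described in Step 2."""
    payload = {
        "requestMode": "question",
        "modelToUse": "gpt-4o-mini",
        "name": "support_nodus_labs",
        "aiTopics": "true",
        "text": text,  # assumed field name for sending the text directly
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",  # assumed auth scheme
        },
        method="POST",
    )

# To actually send the request:
# with urllib.request.urlopen(build_request("YOUR_TOKEN", "Your text...")) as r:
#     advice = json.loads(r.read())["aiAdvice"]
```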
This tells the InfraNodus API to identify the two topical clusters that are not yet linked and have the biggest gap between them, and then to generate a set of research questions that bridge that gap.
See all the available endpoints at InfraNodus API Access Points.
Step 3: Extract the aiAdvice response
The response body contains an aiAdvice array with the generated questions that bridge the gap:
{
"aiAdvice": [
{
"text": "Question 1?",
"finish_reason": "stop"
},
{
"text": "Question 2?",
"finish_reason": "stop"
},
{
"text": "Question 3?",
"finish_reason": "stop"
}
],
...
}
These questions can be extracted and used in a prompt to generate interesting conversation starters in a chat.
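Extracting the questions from the response shape shown above is a few lines of Python. The prompt wording below is only illustrative; adapt it to your own LLM instructions.

```python
import json

def extract_questions(response_body: str) -> list:
    """Pull the generated question texts out of the aiAdvice array."""
    return [item["text"] for item in json.loads(response_body)["aiAdvice"]]

# Example using the response shape shown above:
sample = json.dumps({"aiAdvice": [
    {"text": "Question 1?", "finish_reason": "stop"},
    {"text": "Question 2?", "finish_reason": "stop"},
]})

questions = extract_questions(sample)
prompt = ("Use these conversation starters:\n"
          + "\n".join(f"- {q}" for q in questions))
```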
Building this Logic into Dify AI Chatbot Workflow
You can integrate this logic into your Dify AI chatbot workflow. To do that, create an HTTP node with the request above and use its results to run a knowledge base search.
You can then send the original query along with the knowledge base search results (as the context) to an LLM node and generate the final response.
You can also augment this workflow with multiple enhancements, such as adding the `graphSummary` data generated by this very same endpoint to your LLM query, so the response touches upon the most important topics in the knowledge base (as we describe in Improve RAG for Your LLM Knowledge using InfraNodus and Open-WebUI, Dify, or ChatGPT).
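The final step of that workflow, combining the user query, the retrieved knowledge base passages, and the graphSummary into one LLM prompt, can be sketched as follows. The function name and prompt layout are illustrative, not part of Dify or InfraNodus:

```python
def build_llm_prompt(user_query, kb_results, graph_summary):
    """Combine the pieces the workflow gathers into a single LLM prompt.

    graph_summary is the graphSummary field returned by the same
    graphAndAdvice endpoint; kb_results are the knowledge base passages
    retrieved on the basis of the generated questions."""
    context = "\n".join(kb_results)
    return (
        f"Main topics in the knowledge base:\n{graph_summary}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )
```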
Here is an example of how this workflow looks in Dify. You can download a sample YML file for the workflow (which you can import into Dify) at https://github.com/infranodus/dify-infranodus/blob/main/generate-questions-knowledge-base.yml
As you can see in the workflow, we also use variable assigners and JSON aggregators to store the results from the first processing run, which reduces the waiting time when the answer is generated.
We also add an If / Then node after the aggregator so that all subsequent user queries (when the user wants to follow up on the original questions) don't include the original question generated by InfraNodus AI, but only the content retrieved from the knowledge base on its basis.
This is especially useful if you want to discover how the insights generated in the first run could be extended to other contexts.
You can test a working version of this chatbot, which is based on the contents of this support portal:
https://aistudio.infranodus.com/chat/MCMNnbYHFrgKMreu