When you build an AI chatbot in n8n, a popular AI automation tool, you may need to connect it to a custom knowledge base so that it retrieves responses from the context you specify. The most typical approach is to connect the AI agent to a vector store, such as Pinecone, and use RAG similarity search to retrieve the statements relevant to the user's query, then augment the prompt with this context to produce better results.
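Under the hood, that similarity-search step boils down to ranking document chunks by how close their embedding vectors are to the query's embedding. Here is a minimal sketch with toy 3-dimensional vectors standing in for a real embedding model's output (the function names are illustrative, not from any particular library):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_vec, chunks, k=2):
    # Rank chunks by similarity to the query and keep the top k
    ranked = sorted(chunks,
                    key=lambda c: cosine_similarity(query_vec, c["vector"]),
                    reverse=True)
    return [c["text"] for c in ranked[:k]]

# Toy "embeddings" in place of a real model's high-dimensional output
chunks = [
    {"text": "Pinecone stores vectors.", "vector": [0.9, 0.1, 0.0]},
    {"text": "n8n automates workflows.", "vector": [0.1, 0.9, 0.0]},
    {"text": "Graphs show relations.",   "vector": [0.0, 0.2, 0.9]},
]
query = [0.8, 0.2, 0.1]  # the user's question, embedded into the same space
context = retrieve_top_k(query, chunks)
# The retrieved chunks are then prepended to the prompt to "augment" it
```

Note that the ranking is purely geometric: each chunk is scored in isolation, which is exactly why this approach struggles with questions about relations between chunks, as discussed below.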
This approach, however, is problematic for several reasons.
The first is that it's quite difficult to set up (see the image above of a sample workflow). You need to create your own Pinecone instance, build a separate workflow to upload and update the documents in the vector store, use an AI embedding model to create vector representations, and so on.
The second, more serious problem is that the traditional similarity search used in RAG systems fails to retrieve the relations present in documents and doesn't have a holistic view of the context. So if you ask a general question, like "what is this about?", it won't be able to find relevant statements to enrich the context. Moreover, it doesn't capture the complex ontological relations between the different document chunks, so when a question requires finding how certain phenomena relate to one another, it won't perform well.
Using GraphRAG as a Knowledge Base in n8n
That's where InfraNodus GraphRAG can be very useful, as it addresses both problems. First, it is much easier to set up: you import any content into your InfraNodus knowledge graphs and then expose it via an HTTP node as a tool available to n8n. The whole process is simple. You upload the data to the graphs (using the visual interface or an n8n workflow) and then expose the GraphRAG endpoints in InfraNodus as external tools for AI agents.
You can find the sample workflows to download here: https://github.com/infranodus/n8n-infranodus-workflow-templates
The request is sent from the AI agent to the respective knowledge graph (depending on the prompt), and InfraNodus generates a relevant response along with the statements used to augment the final answer with that graph's knowledge.
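To make the flow concrete, here is a rough sketch of the kind of request an n8n HTTP node sends on the agent's behalf. The endpoint path and payload field names below are illustrative placeholders, not the documented InfraNodus API; check the actual API reference when configuring the node:

```python
import json
import urllib.request

# Hypothetical base URL and endpoint -- placeholders for illustration only
API_BASE = "https://infranodus.com/api"

def build_graphrag_request(api_key, graph_name, user_query):
    """Assemble the HTTP request an n8n HTTP node would send to one graph."""
    url = f"{API_BASE}/graphrag"  # hypothetical endpoint name
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "graph": graph_name,        # which knowledge graph ("expert") to query
        "prompt": user_query,       # the user's question, forwarded by the agent
        "includeStatements": True,  # hypothetical flag: return supporting statements
    }
    return url, headers, json.dumps(body)

def send(url, headers, payload):
    # Fire the request; the JSON response would carry the statements
    # used to augment the agent's final answer.
    req = urllib.request.Request(url, data=payload.encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In n8n itself you wouldn't write this code: the HTTP Request node holds the URL, headers, and JSON body as fields, and the AI agent fills in the prompt at run time.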
This is especially interesting if you want to provide several contexts to your model (e.g. multiple knowledge bases, books, or categories of research papers on different topics) and let the model synthesize the final answer after receiving responses from each of those graphs.
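The synthesis step described above can be sketched as follows. Each graph acts as an "expert" that is queried separately, and the collected answers are assembled into one prompt for the model's final pass (the `ask_graph` callable stands in for the HTTP call to InfraNodus; all names here are illustrative):

```python
def synthesize_prompt(question, experts, ask_graph):
    # Query each knowledge graph ("expert") and collect its answer,
    # then assemble one prompt for the model's final synthesis pass.
    sections = []
    for name in experts:
        answer = ask_graph(name, question)
        sections.append(f"### Context from '{name}':\n{answer}")
    context = "\n\n".join(sections)
    return (f"{context}\n\n"
            f"Using only the context above, answer: {question}")

# Example with stubbed experts (no real API calls)
stub_answers = {"biology": "Cells divide.", "physics": "Energy is conserved."}
prompt = synthesize_prompt("How do systems sustain themselves?",
                           ["biology", "physics"],
                           lambda graph, q: stub_answers[graph])
```

In the n8n workflow this logic is distributed across nodes rather than written as code: each graph gets its own tool, and the agent's system prompt tells the model to consult all of them before answering.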
You can learn more about this workflow here: Using InfraNodus Knowledge Graphs as Experts for AI Chatbot Agents in n8n
To build the same workflow with a traditional RAG vector store, you'd first need to create a vector store (or namespace) for each domain of knowledge, manually upload the documents to each vector store, and then build a complex retrieval workflow using many more nodes:
Another big disadvantage of a traditional vector store is that you can't visualize your knowledge base to understand the content inside. With InfraNodus, you can always access the underlying "brain" of the knowledge base, visualize the main topics inside, and even edit the graph to emphasize some concepts and topics over others:
To try out this workflow, check our n8n integrations on our GitHub repo: https://github.com/infranodus/n8n-infranodus-workflow-templates
Learn more about our GraphRAG: Portable GraphRAG: Supercharge Your AI Thinking with Knowledge Graphs