Premium and Pro subscribers can use the InfraNodus API to generate AI advice and a knowledge graph from any text. The insights include advanced network graph analytics measures, such as clusters of concepts (communities), word rankings, structural gaps, and more.
Below are the endpoints you can use to access the API. Please note that the API is still in public beta, so we recommend you drop us a message if you intend to use it in production, so that we can verify the endpoints you'll be using and the expected volumes.
You can also find all the endpoints in this Postman JSON file.
Authorization
To authorize, you need to obtain your API token at https://infranodus.com/subscription
You will then need to add this API token to your request header:
Authorization: Bearer your_api_token_here
For example, if you're using Python, you'd make a request that would include:
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your_api_token_here'
}
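As a minimal sketch of a full authorized call (using only the Python standard library; the token value is a placeholder, and the helper name `build_headers` is our own, not part of the API):

```python
import json
import urllib.request

API_TOKEN = "your_api_token_here"  # obtain at https://infranodus.com/subscription

def build_headers(token):
    """Headers that every InfraNodus API call needs."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }

# Sending the request is sketched but commented out, since it needs a real token:
# req = urllib.request.Request(
#     "https://infranodus.com/api/v1/graphAndStatements",
#     data=json.dumps({"name": "demo", "text": "apples and oranges"}).encode("utf-8"),
#     headers=build_headers(API_TOKEN),
#     method="POST",
# )
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
```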
1. Obtain Graph and Statements
This API endpoint lets you submit a text as a string (via POST request) and obtain a graph and statements JSON object as a response.
POST request URL:
https://infranodus.com/api/v1/graphAndStatements
POST request parameters (added to the URL query):
doNotSave: boolean (default: true)
addStats: boolean (default: true)
Note that, by default, your text will NOT be saved to your graphs or our database; it will only be processed on InfraNodus servers in the EU, and you will get the graph object back.
POST request body:
{
    name: string (e.g. "name_of_your_graph"),
    text: string (e.g. "text_string_to_process")
}
Additionally, you can use the following `contextSettings` object in your POST request to change the way your text is processed. This may be useful if you use special syntax (like [[wikilinks]] or #hashtags).
"contextSettings": {
    "partOfSpeechToProcess": "HASHTAGS_AND_WORDS",
    "doubleSquarebracketsProcessing": "PROCESS_AS_MENTIONS",
    "mentionsProcessing": "CONNECT_TO_ALL_CONCEPTS"
}
Response:
The returned object will be in JSON format, following the schema detailed here. Note that we use a modified version of the Graphology JSON graph with some additional metrics and gaps. This means you can use Graphology and related tools, such as Gephi or Sigma.js, to manipulate or visualize this data.
2. Obtain AI-Generated Advice Based on the Knowledge Graph
This API endpoint lets you submit a knowledge graph in InfraNodus format and obtain an AI-generated question or idea that takes the structural and semantic information into account to generate a response.
POST request URL:
https://infranodus.com/api/v1/graphAiAdvice
POST request parameters (added to the URL query):
optimize: [optional], string (gaps (default) | reinforce | develop | imagine)
transcend: [optional]; if present, the response goes beyond the graph structure
When using `optimize=develop`, the algorithm will get the top nodes from all the clusters of the graph, helping develop the current discourse. Otherwise, it focuses on pinnedNodes and topicsToProcess.
When using `optimize=reinforce`, the algorithm will get the top nodes from the top clusters of the graph, reinforcing the current discourse.
When using `optimize=gaps`, the algorithm will identify the gaps in the discourse graph structure and direct the prompt to bridge those gaps. When the graph has a diverse topical structure, the focus will be on the most commonly appearing gap. When the graph is highly connected, the algorithm will encourage exploring the least connected topics in order to shift attention towards the periphery. You can also set the depth of the gap you want to reach with the additional gapDepth parameter: the higher the number, the more peripheral the gap selected and the smaller the topics that come into focus. This is good for iteratively revealing non-obvious relations.
When using `optimize=latent`, the algorithm will identify the ideal points of entry into the graph or discourse that can also potentially connect it to other discourses, thus helping you explore beyond the periphery.
POST request body:
{
    prompt: '', // string, required if userPrompt is not present. Prompt to send to the graph
    userPrompt: [
        {
            role: 'user',
            content: '' // string, prompt to send to the graph
        }
    ], // array, required if prompt is not present. User prompt for a chat-based model (when gpt-4 is used)
    promptContext: '', // string, optional. Text context to send with the graph (max 36Kb)
    promptChatContext: [
        {
            role: 'user',
            content: '' // string with the context of previous messages
        }
    ], // array, optional. Previous chat messages for context (max 56Kb)
    requestMode: '', // string, required. Options: 'question' | 'idea' | 'fact' | 'continue' | 'challenge' | 'response' | 'gptchat' | 'summary' | 'graph summary'
    promptGraph: '', // string, optional. Will be automatically generated if none is provided
    language: '', // string, optional. Options: 'EN', 'FR', 'DE', 'CN', 'TW', 'JP', 'ES', 'PT'
    modelToUse: 'gpt-3.5-turbo', // string, optional. Options: 'gpt-4' | 'gpt-3.5-turbo-instruct'
    pinnedNodes: ['node name 1', 'node name 2'], // array, optional. Focus on specific nodes in the graph
    topicsToProcess: ['2', '4'], // array, optional. Focus on specific topics
    graph: {
        nodes: [],
        edges: [],
        attributes: {
            top_nodes: [],
            top_clusters: [],
            gaps: [],
            statementHashtags: []
        }
    } // object, required. Graph in Graphology JSON structure to generate insights from
}
Note that the following parameters are required; the rest can be auto-generated:
- graph — the JSON of the graph to query in graphology format
- prompt (string) or userPrompt (array for chat-based models) — the LLM model prompt
- requestMode — what should be generated (e.g. a question or a response)
- language — the language that should be used for responding
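A hedged sketch of a minimal request body with only the required fields (the helper name `build_ai_advice_body` and the stub graph are our own illustrations, not part of the API; a real graph would come from the graphAndStatements endpoint):

```python
import json

def build_ai_advice_body(graph, prompt, request_mode="question",
                         language="EN", pinned_nodes=None):
    """Build a minimal JSON body for /api/v1/graphAiAdvice.

    `graph` must be a Graphology-style JSON object (nodes, edges, attributes),
    e.g. one returned by the graphAndStatements endpoint.
    """
    body = {
        "prompt": prompt,
        "requestMode": request_mode,
        "language": language,
        "graph": graph,
    }
    if pinned_nodes:  # optionally focus the advice on specific nodes
        body["pinnedNodes"] = pinned_nodes
    return json.dumps(body)

# A stub graph with the expected top-level shape, for illustration only:
stub_graph = {
    "nodes": [], "edges": [],
    "attributes": {"top_nodes": [], "top_clusters": [], "gaps": [],
                   "statementHashtags": []},
}
body = build_ai_advice_body(stub_graph, "What connects these topics?")
```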
Response:
The returned object will be in JSON format and contain the AI-generated content in an array of text strings.
{
    "aiAdvice": [
        {
            "text": "How does the sensory perception of \"deliciousness\" differ between fruits with similar acidity levels, like oranges and lemons, compared to sweetness-dominant fruits like apples?",
            "finish_reason": "stop"
        }
    ],
    "usage": 338,
    "created_timestamp": 1722261741
}
3. Obtain AI-Generated Advice for a Text
This API endpoint lets you submit plain text and obtain an AI-generated question, statement, or summary that takes the underlying graph structure into account to generate precise results. This endpoint also returns the graph, so you can use it with other requests to produce different insights.
POST request URL:
https://infranodus.com/api/v1/graphAndAdvice
POST request parameters (added to the URL query):
doNotSave: boolean (default: true)
addStats: boolean (default: true)
optimize: [optional], string (gaps (default) | reinforce | develop | imagine)
transcend: [optional]; if present, the response goes beyond the graph structure
includeStatements: [optional], boolean, false by default
When using `optimize=develop`, the algorithm will get the top nodes from all the clusters of the graph, helping develop the current discourse. Otherwise, it focuses on pinnedNodes and topicsToProcess.
When using `optimize=reinforce`, the algorithm will get the top nodes from the top clusters of the graph, reinforcing the current discourse.
When using `optimize=gaps`, the algorithm will identify the gaps in the discourse graph structure and direct the prompt to bridge those gaps. When the graph has a diverse topical structure, the focus will be on the most commonly appearing gap. When the graph is highly connected, the algorithm will encourage exploring the least connected topics in order to shift attention towards the periphery. You can also set the depth of the gap you want to reach with the additional gapDepth parameter: the higher the number, the more peripheral the gap selected and the smaller the topics that come into focus. This is good for iteratively revealing non-obvious relations.
When using `optimize=latent`, the algorithm will identify the ideal points of entry into the graph or discourse that can also potentially connect it to other discourses, thus helping you explore beyond the periphery.
POST request body:
{
    requestMode: '', // string, required. Options: 'question' | 'idea' | 'fact' | 'continue' | 'challenge' | 'response' | 'gptchat' | 'summary' | 'graph summary'
    modelToUse: 'gpt-3.5-turbo', // string, optional. Options: 'gpt-4' | 'gpt-3.5-turbo-instruct'
    pinnedNodes: ['node name 1', 'node name 2'], // array, optional. Focus on specific nodes in the graph
    name: '', // string, required (e.g. "name_of_your_graph")
    text: '', // string, required (e.g. "text string to process")
    aiTopics: false // boolean, optional (default: false). Add AI topic names to top_clusters
}
- `text` — note that your text will be broken into statements. If you'd like to control this, separate the statements in your string with a line break (`\n`)
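A hedged sketch of assembling this request in Python (the helper name `build_text_advice_request` is our own, not part of the API):

```python
import json
from urllib.parse import urlencode

def build_text_advice_request(name, text, request_mode="question",
                              optimize="gaps", include_statements=False):
    """Assemble the URL and JSON body for /api/v1/graphAndAdvice from plain text."""
    query = urlencode({
        "doNotSave": "true",
        "addStats": "true",
        "optimize": optimize,
        "includeStatements": str(include_statements).lower(),
    })
    url = "https://infranodus.com/api/v1/graphAndAdvice?" + query
    # Line breaks in the text enforce statement boundaries
    body = json.dumps({"name": name, "text": text, "requestMode": request_mode})
    return url, body

url, body = build_text_advice_request(
    "fruit_notes",
    "apples and oranges are delicious\nlemons also make sense",
    optimize="develop",
)
```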
Response:
The returned object will be in JSON format and contain the AI-generated content in an array of text strings, as well as the graph produced from the content.
{
    "aiAdvice": [
        {
            "text": "How does the perception of deliciousness in fruits influence consumer choices, and what role does the sensory coherence between fruits like apples, oranges, and lemons play in the creation of fruit blends that make sense to the palate?",
            "finish_reason": "stop"
        }
    ],
    "graph": {
        "graphologyGraph": {}
    },
    "statements": [],
    "usage": integer (tokens used),
    "created_timestamp": timestamp (date created)
}
The field `aiAdvice` will contain up to 3 options.
The field `graph` will contain the `graphologyGraph` JSON object with all the graph stats.
The field `statements` will be included if the `includeStatements` parameter is added to the URL.
The fields `usage` and `created_timestamp` contain meta information about the generated response.
4. Generate DOT Graph from the Graphology JSON Graph or Plain Text
This API endpoint ingests a graph in Graphology JSON format and returns a modified version of Graphviz's DOT graph, which is more compact than JSON and suitable for forwarding to an LLM to add context to a query.
It allows you not only to reduce the prompt size but also to condense the most important information about your knowledge graph based on the criteria you choose in order to help the model generate a more relevant response to your query. We use it in the backend to provide additional context to the AI queries made using InfraNodus.
POST request URL to use for sending the graph:
https://infranodus.com/api/v1/dotGraph
POST request URL to use for sending plain text:
https://infranodus.com/api/v1/dotGraphFromText
POST request parameters (added to the URL query):
optimize: [optional] string (auto (default) | reinforce | develop)
When using `optimize=develop`, and when no pinnedNodes or topicsToProcess are provided, the algorithm will get the top nodes from all the clusters of the graph, helping develop the current discourse.
When using `optimize=reinforce`, and when no pinnedNodes or topicsToProcess are provided, the algorithm will get the top nodes from the top clusters of the graph, reinforcing the current discourse.
When using `optimize=gaps`, and when no pinnedNodes or topicsToProcess are provided, the algorithm will focus on the structural gaps to help bridge the content.
When using `optimize=latent`, the algorithm will identify the ideal points of entry into the graph or discourse that can also potentially connect it to other discourses, thus helping you explore beyond the periphery.
POST request body for sending a graph:
{
    pinnedNodes: [],
    topicsToProcess: [],
    graph: {
        nodes: [],
        edges: [],
        graph: {
            top_nodes: [],
            top_clusters: [],
            gaps: [],
            statementHashtags: []
        }
    }
}
- `graph` (required) should contain the nodes and edges in Graphology JSON format
- `pinnedNodes` — an optional array listing the nodes (and their relations) that should be extracted from the graph
- `topicsToProcess` — an optional array of cluster ids (integers) to be extracted from the graph
If no pinnedNodes or topicsToProcess are provided, InfraNodus will automatically extract the most suitable parts of the graph based on its structure (read more about the logic here).
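A hedged sketch of building this payload in Python (the helper name `build_dot_graph_body` and the stub graph are our own illustrations; a real graph would come from the graphAndStatements endpoint):

```python
import json

def build_dot_graph_body(graph, pinned_nodes=None, topics=None):
    """Build the JSON body for /api/v1/dotGraph.

    If pinnedNodes and topicsToProcess are left empty, InfraNodus extracts
    the most suitable parts of the graph automatically.
    """
    return json.dumps({
        "pinnedNodes": pinned_nodes or [],
        "topicsToProcess": topics or [],
        "graph": graph,
    })

# A stub graph with the expected top-level shape, for illustration only:
stub_graph = {
    "nodes": [], "edges": [],
    "graph": {"top_nodes": [], "top_clusters": [], "gaps": [],
              "statementHashtags": []},
}
body = build_dot_graph_body(stub_graph, pinned_nodes=["apple"], topics=[2])
```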
POST request body for sending a text:
{
    name: "name of the text",
    text: "apple and oranges are delicious fruits \n however, oranges are delicious and lemons also make sense",
    stopwords: ["car", "tool"],
    aiTopics: true // optional, show AI-generated names for topical clusters
}
- `name` — required, the name of the text graph created (can be a random string if the graph is not saved)
- `text` — required, a string with text. Note that your text will be broken into statements. If you'd like to control this, separate the statements in your string with a line break (`\n`)
- `stopwords` — optional, an array that can be used to exclude some words from the analysis. Useful for processing the results of search queries, which are biased towards the search terms.
- `aiTopics` — optional, boolean, whether AI topic names should be added to the clusters (default: false)
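A hedged sketch of building the plain-text payload in Python (the helper name `build_dot_from_text_body` is our own, not part of the API):

```python
import json

def build_dot_from_text_body(name, text, stopwords=None, ai_topics=False):
    """Build the JSON body for /api/v1/dotGraphFromText.

    Line breaks in the text enforce statement boundaries; stopwords are
    excluded from the analysis.
    """
    body = {"name": name, "text": text, "aiTopics": ai_topics}
    if stopwords:
        body["stopwords"] = stopwords
    return json.dumps(body)

body = build_dot_from_text_body(
    "fruit_notes",
    "apple and oranges are delicious fruits\nhowever, oranges are delicious and lemons also make sense",
    stopwords=["car", "tool"],
    ai_topics=True,
)
```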
Response:
The returned object will be in JSON format and contain information about the main relations identified in your original graph:
{
    graphKeywords: "apple <-> orange [label=\"delicious, fruit\"] and lemon <-> make [label=\"sense\"] and orange <-> lemon [label=\"make, sense\"], apple <-> orange [label=\"delicious, fruit\"]",
    clusterIds: [],
    clusterDistance: 0,
    clusterKeywords: "\"delicious orange apple fruit\" and \"lemon make sense\"",
    allClusters: [
        {
            "text": "delicious orange apple fruit"
        },
        {
            "text": "lemon make sense"
        }
    ],
    convertedGraph: {}, // graph object (see above)
    bigrams: [
        "orange <-> delicious",
        "apple <-> orange",
        "delicious <-> fruit",
        "apple <-> delicious",
        "orange <-> fruit",
        "apple <-> fruit",
        "lemon <-> make",
        "make <-> sense",
        "lemon <-> sense"
    ]
}
Appendix 1: AI Request Types
The parameter `requestMode` is used to tell the model what kind of insight it should provide. It influences the final prompt that is sent to the LLM to produce an answer.
Possible values are:
- 'summary' — generates a summary of the text provided, augmented with the graph structure and your custom prompt, if provided
- 'question' — generates a question that bridges the structural gaps found in a graph (can be modified or amplified in a certain direction with a prompt and the previous context, if provided)
- 'paraphrase' — extracts the graph's structure and paraphrases the main topics in one paragraph (like a summary, but generated with less emphasis on the existing context and more focus on the graph's structure and the main topics and concepts detected)
- 'outline' — generates a title and an outline for an article based on the graph's structure
- 'continue' — generates a statement that connects the concepts in the prompt, taking the graph structure into account
- 'response' — generates a direct response to the prompt, taking the graph's structure and previous context into account (difference from 'continue': a stronger connection to the graph structure and to the context text and conversation, if sent)
- 'idea' — generates an innovative idea that bridges the structural gaps found in a graph (can be modified or amplified with a prompt and the previous context, if provided)
- 'fact' — generates a factual statement that bridges the structural gaps found in a graph (can be modified or amplified with a prompt and the previous context, if provided)
- 'challenge' — generates a challenging statement that bridges the structural gaps found in a graph (can be modified or amplified with a prompt and the previous context, if provided)
- 'custom' — your own prompt provided in the prompt parameter value (note that if you decide to add the graph data to it, your prompt will be followed by something like "\n that is also related to ...", followed by a list of graph clusters and concepts, so you might want to adjust your custom prompt to fit the style of the final instruction that will be submitted to the model)