Multimodal Graph Query Using Ollama

The BFS-generated nodes and the user's query were passed to Ollama, and the answer was generated with the following model:

Model Used: llama3.2:1b

For more details, refer to the Ollama GitHub Repository.
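
The model must be available locally before the snippet below will run (for example, via "ollama pull llama3.2:1b"). The BFS traversal itself is not shown on this page; the sketch below is one way the related-node list passed in as bfs might be produced, assuming the graph is an adjacency dict. The graph and start arguments here are illustrative assumptions, not part of the original code.

from collections import deque

def bfs_related_nodes(graph, start, max_depth=2):
    """Collect nodes reachable from start within max_depth hops (assumed helper)."""
    visited = {start}
    queue = deque([(start, 0)])
    related = []
    while queue:
        node, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                related.append(neighbour)
                queue.append((neighbour, depth + 1))
    return related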

Code Snippet

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate

model1 = OllamaLLM(model="llama3.2:1b")

def run_query_chain(question, reviews, bfs):
    """Run the LangChain template for a single query."""
    template = """
    You are a product recommendation assistant.
    You will receive:
    - a list of product nodes with their details (name, category, price, imagePath)
    - a list of related nodes or connections
    - a natural-language question from the user

    Use only this information to answer.
    Answer in a single paragraph based on the input, following this format:
    Query: <the original question>
      Here are a few options:
    1. {{<Product Name>}} ₹{{<Price>}} Category: {{<Category>}}
       Image: {{<Image Path>}}
    2. {{<Product Name>}} ₹{{<Price>}} Category: {{<Category>}}
       Image: {{<Image Path>}}
    Do not include any commentary or explanations outside this format.

    Nodes: {reviews}
    Connections: {bfs}
    Question: {question}
    """
    prompt = ChatPromptTemplate.from_template(template)
    # Pipe the prompt into the local Ollama model (LCEL chain)
    chain = prompt | model1
    # Fill in the template variables and run the chain
    result = chain.invoke({"reviews": reviews, "question": question, "bfs": bfs})
    return result
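
For reference, a hypothetical call might look like the following. The node and connection structures here are illustrative assumptions, since the page does not show the exact data format produced by the BFS step.

# Illustrative sample data (assumed shape, not taken from the original pipeline)
reviews = [
    {"name": "Trail Runner X", "category": "Shoes", "price": 2999,
     "imagePath": "images/trail_runner_x.jpg"},
    {"name": "Road Racer Pro", "category": "Shoes", "price": 4499,
     "imagePath": "images/road_racer_pro.jpg"},
]
bfs = ["Trail Runner X -> Road Racer Pro (same category)"]

answer = run_query_chain("Suggest running shoes under 5000 rupees", reviews, bfs)
print(answer)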