# Deploying Machine Learning Models with Ease: A Complete Guide to Amazon SageMaker Endpoints
## Introduction
Deploying a model is a critical step in any machine learning project. Amazon SageMaker offers a convenient way to build, train, and deploy machine learning models. This article shows you how to host a large language model (LLM) with SageMaker Endpoints and how to interact with it from Python code.
## Main Content
### Overview of SageMaker Endpoints
Amazon SageMaker Endpoints is a powerful service that lets developers deploy and host machine learning models on AWS with ease. Whether you need real-time inference or batch input processing, SageMaker provides a flexible solution.
### Environment Setup
Before you begin, install the required Python packages:
```bash
pip3 install langchain langchain-community boto3
```
### Parameter Settings
You need to set the following parameters when calling `SagemakerEndpoint` (a minimal instantiation is sketched after this list):
- `endpoint_name`: the name of the deployed SageMaker endpoint. It must be unique within an AWS Region.
- `credentials_profile_name`: the name of a profile defined in the `~/.aws/credentials` or `~/.aws/config` file.
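The sketch below shows how these parameters might be passed. The endpoint name, region, and profile are placeholder values, and it assumes the `ContentHandler` class defined later in this article.

```python
from langchain_community.llms import SagemakerEndpoint

# Placeholder values -- substitute your own endpoint name, region, and profile.
llm = SagemakerEndpoint(
    endpoint_name="my-llm-endpoint",      # hypothetical endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",   # profile from ~/.aws/credentials
    content_handler=ContentHandler(),     # defined in "Handling Input and Output" below
    model_kwargs={"temperature": 1e-10},
)
```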
### Using an External boto3 Session
In cross-account scenarios, you can use boto3 and the STS service to access resources in a different account:
```python
import json

import boto3
from langchain.chains.question_answering import load_qa_chain
from langchain_community.llms import SagemakerEndpoint
from langchain_community.llms.sagemaker_endpoint import LLMContentHandler
from langchain_core.prompts import PromptTemplate

# Assume a role in the target account via STS.
roleARN = "arn:aws:iam::123456789:role/cross-account-role"
sts_client = boto3.client("sts")
response = sts_client.assume_role(
    RoleArn=roleARN, RoleSessionName="CrossAccountSession"
)

# Create a SageMaker runtime client that signs requests with the
# temporary credentials returned by STS.
client = boto3.client(
    "sagemaker-runtime",
    region_name="us-west-2",
    aws_access_key_id=response["Credentials"]["AccessKeyId"],
    aws_secret_access_key=response["Credentials"]["SecretAccessKey"],
    aws_session_token=response["Credentials"]["SessionToken"],
)
```
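The `client` created here is later passed to `SagemakerEndpoint` through its `client` parameter (see the complete example below), so inference requests are signed with the assumed role's temporary credentials.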
### Handling Input and Output
A custom `ContentHandler` class handles the transformation of requests and responses:
```python
from typing import Dict

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt and generation parameters into the JSON payload.
        input_str = json.dumps({"inputs": prompt, "parameters": model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Decode the streaming response body and extract the generated text.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
```
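To see exactly what the handler sends to the endpoint, you can call `transform_input` directly; the prompt and parameter values below are made up purely for illustration.

```python
handler = ContentHandler()
payload = handler.transform_input("Hello, world!", {"temperature": 1e-10})
print(payload)  # b'{"inputs": "Hello, world!", "parameters": {"temperature": 1e-10}}'
```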
## Code Example
Below is a complete code example showing how to perform question answering with `SagemakerEndpoint`:
```python
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate

example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""

docs = [Document(page_content=example_doc_1)]

query = "How long was Elizabeth hospitalized?"

prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

content_handler = ContentHandler()

chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpoint-name",  # replace with your deployed endpoint's name
        client=client,                  # boto3 sagemaker-runtime client created above
        model_kwargs={"temperature": 1e-10},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)

response = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
print(response)
```
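With `return_only_outputs=True`, the chain returns a dictionary containing only the generated answer (for this QA chain, under the `output_text` key), which the final `print` statement displays.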
## Common Issues and Solutions
- Cross-account access: use the STS service to obtain temporary credentials, as shown in the boto3 session example above.
- Network restrictions: in some regions you may need an API proxy service to ensure stable access.
## Summary and Further Learning Resources
In this article, you learned how to set up and use SageMaker Endpoints to host a machine learning model and query it from LangChain. For further learning, consult the official Amazon SageMaker documentation and the LangChain documentation.
If this article helped you, feel free to like it and follow my blog. Your support motivates me to keep writing!
---END---