Large AI models are absolutely on fire 🔥 lately. As a workhorse coder, I've been gradually swept by the tide of the times into learning about them; there's been pain and there's been joy, and wow is this stuff hard 😭
This post records the full process of building a Chrome extension with AI, in the hope of offering some ideas to fellow beginners just starting out with AI.
Without further ado, the result first: github.com/mengxun1437…
(The extension was still going through Chrome's review at the time; anyone curious can check out my GitHub repo and do a local build to try it.)
It's live now!!! Extension link
If you do use it, don't forget to configure your Github_TOKEN and the LLM model settings under Settings~
Why did I build this extension?
While investigating a problem recently, I found I had to read through a huge number of GitHub issues, and even after reading them all I still might not find the answer among them. For a code monkey 🐒 that's just too inefficient!
So I thought: why not have AI do this for me?
Time to bring in the tech. After studying AI for this long, it was time to put it into practice!
Thinking through the implementation
It's actually very simple, just two steps.
Step one: there are tons of issues, so first filter out the ones relevant to my problem by keyword (this also trims the LLM's token usage and saves me some money 💰, I'm broke 🥹).
Step two: hand the search results to the LLM and have the AI summarize how to solve the problem (you can also ask the AI other questions based on those results).
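These two steps can be sketched as a tiny pipeline. This is only an illustrative sketch: `searchIssues` and `askLLM` are placeholder names standing in for the real GitHub-search and LLM-call implementations described later in this post.

```typescript
// Hypothetical sketch of the two-step flow; the callbacks stand in for
// the real octokit search and OpenAI call shown later.
type Issue = { title: string; body: string; url: string };

async function answerFromIssues(
  question: string,
  searchIssues: (keywords: string) => Promise<Issue[]>,
  askLLM: (context: string, question: string) => Promise<string>
): Promise<string> {
  // Step 1: keyword search narrows the issue set (and the token bill).
  const issues = await searchIssues(question);
  // Step 2: hand only the relevant issues to the LLM as context.
  const context = issues
    .map((i) => `### ${i.title}\n${i.body}\n${i.url}`)
    .join('\n\n');
  return askLLM(context, question);
}
```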
Let's get to it
Do I need to implement it myself? NO!!! I've never built a Chrome extension before; rolling it by hand would be way too complicated 🤕
For how to set up the Cline plugin and the model, see this doc: Cline plugin setup
During development, be sure to have the AI implement the requirements in a modular way.
Core code walkthrough:
Fetching issue data
// Search for relevant issues
const searchQuery = [
  `repo:${owner}/${repo}`,
  'type:issue',
  ...(onlyClosed ? ['state:closed'] : []),
  ...(query ? [query] : [])
].join(' ');
const response = await octokit.request('GET /search/issues', {
  q: searchQuery,
  per_page: perPage,
  page,
  sort: 'created',
  order: 'desc'
});
// Fetch the full content of a single issue
const response = await octokit.request('GET /repos/{owner}/{repo}/issues/{issue_number}', {
  owner,
  repo,
  issue_number: issueNumber
});
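The query string here is just a joined list of GitHub search qualifiers. Pulled out as a pure helper (a sketch of mine, with parameter names mirroring the snippet), it becomes easy to unit test in isolation:

```typescript
// Pure helper mirroring the query construction above: joins GitHub
// search qualifiers (repo, type, optional state and keywords).
function buildSearchQuery(
  owner: string,
  repo: string,
  query?: string,
  onlyClosed = false
): string {
  return [
    `repo:${owner}/${repo}`,
    'type:issue',
    ...(onlyClosed ? ['state:closed'] : []),
    ...(query ? [query] : [])
  ].join(' ');
}
```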
Calling the LLM
import OpenAI from 'openai';
import { getConfig } from '../storage';

export interface LLMResponse {
  content: string;
  tokensUsed: number;
}

export async function generateWithOpenAI(
  systemPrompt: string,
  prompt: string,
  onStream?: (chunk: string) => void
): Promise<LLMResponse> {
  const llmConfig = (await getConfig())?.llmConfig;
  if (!llmConfig?.apiKey) {
    throw new Error('OpenAI API key not configured');
  }
  const openai = new OpenAI({
    baseURL: llmConfig?.apiUrl || 'https://api.openai.com/v1',
    apiKey: llmConfig.apiKey,
    dangerouslyAllowBrowser: true
  });
  const model = llmConfig?.model || 'gpt-4';
  if (onStream) {
    // Stream mode
    let fullContent = '';
    const stream = await openai.chat.completions.create({
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: prompt }
      ],
      temperature: 0.7,
      stream: true
    });
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content || '';
      fullContent += content;
      onStream(content);
    }
    return {
      content: fullContent,
      tokensUsed: 0 // Can't get token count in streaming mode
    };
  } else {
    // Normal mode
    const response = await openai.chat.completions.create({
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: prompt }
      ],
      temperature: 0.7
    });
    return {
      content: response.choices[0]?.message?.content || '',
      tokensUsed: response.usage?.total_tokens || 0
    };
  }
}
Streaming the response makes the interaction feel snappier for users.
Prompts
My grasp of prompt-writing technique is still pretty fuzzy. This is the prompt I gave the extension; it's probably not great, and I'm considering exposing it later so users can set their own.
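On the consuming side, the onStream callback just needs to accumulate chunks and re-render as they arrive. A minimal sketch of that idea (the extension writes into a DOM node; here a plain string and an injected render callback keep it testable):

```typescript
// Collect streamed chunks into a growing string; pass `onStream` as the
// third argument of generateWithOpenAI to get incremental updates.
function makeStreamSink(render: (text: string) => void) {
  let text = '';
  const onStream = (chunk: string) => {
    text += chunk;
    render(text); // re-render the full accumulated text on every chunk
  };
  return { onStream, get text() { return text; } };
}
```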
export function getIssueSystemPrompt(userData: string) {
  return `You are an experienced Q&A bot that provides answers based on user-provided data and questions.
1. Only respond based on the user's input; do not fabricate information.
2. If your answer includes relevant knowledge sources, please format it appropriately (provide links if available).
3. Ensure your response is in the same language as the user's input.
## User Data
${userData}
## Examples
- **Input:** "What is the capital of France?"
- **Output:** "The capital of France is Paris. [Source](https://en.wikipedia.org/wiki/Paris)"
- **Input:** "¿Cuál es la capital de España?"
- **Output:** "La capital de España es Madrid. [Fuente](https://es.wikipedia.org/wiki/Madrid)"`;
}
export async function analyzeIssue(
  userQuestion: string,
  userData: string,
  onStream?: (chunk: string) => void
) {
  const userPrompt = `My Question Is: ${userQuestion}`;
  return await generateWithOpenAI(getIssueSystemPrompt(userData), userPrompt, onStream);
}
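For clarity, the chat payload that analyzeIssue ultimately sends is two messages: the system prompt carrying the issue data, and the user message carrying the question. A sketch of that assembly (my own illustrative helper, not code from the repo):

```typescript
// Sketch of the two-message chat payload: system prompt holds the
// issue context, user message holds the actual question.
function buildMessages(systemPrompt: string, userQuestion: string) {
  return [
    { role: 'system' as const, content: systemPrompt },
    { role: 'user' as const, content: `My Question Is: ${userQuestion}` }
  ];
}
```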
And with that, the whole thing works. So cool 🎉!
The entire project was roughly 0.7 days of development effort, and all I really had to do was face the model and say: NO NO NO, you should...
An even handier feature!
Right-click on an issue page and the AI automatically summarizes it and offers sensible suggestions, which hugely speeds up how we read issues 😁.
The core prompt looks like this:
export function getSummarySystemPrompt() {
  return `You are a skilled summarizer. Based on the provided GitHub Issue data, summarize the key points discussed in the Issue. Indicate whether there is a conclusion, and if so, state what it is. For bug-related issues, list the proposed solutions and identify which ones are effective. Consider the reactions, such as thumbs up or other emojis, as valuable reference points.
## Requirements
- Summarize key points from the GitHub Issue.
- State if there is a conclusion and what it is.
- For bug issues, list proposed solutions and their effectiveness.
- Use reactions (like emojis) as reference.
## Output Format
- A clear summary with sections for key points, conclusion, and bug solutions.
## Examples
1. **Input**: GitHub Issue data discussing a feature request.
**Output**:
- **Key Points**: Users want feature X for better usability.
- **Conclusion**: The team will consider implementing feature X.
2. **Input**: GitHub Issue data about a bug.
**Output**:
- **Key Points**: Users report bug Y affecting functionality.
- **Conclusion**: No final conclusion yet.
- **Proposed Solutions**:
- Solution A: Effective
- Solution B: Not effective
- Solution C: Needs further testing`;
}
export async function summaryIssue(issueData: string, onStream?: (chunk: string) => void) {
  const userLanguage = (await getConfig())?.language || 'en';
  const userPrompt = `## GitHub Issue Data
${issueData}
## Language
Respond in: ${userLanguage}
`;
  return await generateWithOpenAI(getSummarySystemPrompt(), userPrompt, onStream);
}
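For the right-click flow, the handler first needs to know which issue the user is looking at before it can fetch the data to summarize. A hedged sketch of that step (`parseIssueUrl` is my own hypothetical helper, not code from the repo): it pulls the owner, repo, and issue number out of the page URL so the issue API call shown earlier can be made.

```typescript
// Hypothetical helper: extract owner/repo/issue_number from a GitHub
// issue URL; returns null for pages that aren't issue pages.
function parseIssueUrl(
  url: string
): { owner: string; repo: string; issueNumber: number } | null {
  const m = url.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+)\/issues\/(\d+)/);
  return m ? { owner: m[1], repo: m[2], issueNumber: Number(m[3]) } : null;
}
```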
Looking ahead
There's a lot LLMs can do; if you have better ideas, I'd love to swap notes and learn together.