GPT (Generative Pre-trained Transformer)
Essence: seq2seq, generating output one token at a time (sketched below)
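A minimal sketch of the autoregressive loop at the heart of a GPT-style model: each new token is predicted from all the tokens so far, then appended to the input. Here `toy_model`, `VOCAB`, and `generate` are hypothetical stand-ins; a real GPT would return logits from a Transformer decoder instead of random scores.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_model(tokens):
    """Stand-in for a trained LM: returns one score per vocabulary entry."""
    random.seed(len(tokens))               # deterministic toy scores
    return [random.random() for _ in VOCAB]

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = toy_model(tokens)         # scores over the vocabulary
        next_id = max(range(len(scores)), key=scores.__getitem__)  # greedy decoding
        tokens.append(VOCAB[next_id])      # feed the output back in as input
    return " ".join(tokens)

print(generate(["the", "cat"]))
```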
History (parameter counts):
- GPT (2018): 117M
- GPT-2 (2019): 1542M
- GPT-3 (2020): 175B
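A back-of-the-envelope check on where GPT-2's 1542M figure comes from, assuming the published GPT-2 XL configuration (48 layers, hidden size 1600, vocabulary 50257, context 1024). Each Transformer block contributes roughly 12·d² weights (4·d² for the attention projections plus 8·d² for the two feed-forward matrices); the small residual gap versus 1542M is from details this estimate ignores (biases, layer norms, weight tying).

```python
d_model, n_layers = 1600, 48
vocab, context = 50257, 1024

per_block = 12 * d_model**2               # attention + feed-forward weights
embeddings = (vocab + context) * d_model  # token + position embeddings
total = n_layers * per_block + embeddings

print(f"{total / 1e6:.0f}M")              # ~1557M, close to the reported 1542M
```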
ChatGPT training pipeline (a conceptual sketch follows this list):
- Pre-train: self-supervised learning on raw text (foundation model) -> GPT
- Fine-tune: supervised learning on human-written demonstrations -> ChatGPT
- Reinforcement learning from human feedback: PPO
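A conceptual sketch of how the three stages chain together. Every function body here is a hypothetical stub (the real stages each train a large Transformer); only the data flow between stages is the point.

```python
def pretrain(web_text):
    """Stage 1: self-supervised next-token prediction -> foundation model (GPT)."""
    return {"stage": "foundation", "data": web_text}

def supervised_finetune(model, demonstrations):
    """Stage 2: supervised learning on human-written (prompt, answer) pairs."""
    return {**model, "stage": "sft", "demos": demonstrations}

def rlhf_ppo(model, preference_rankings):
    """Stage 3: fit a reward model on human preference rankings,
    then optimize the policy against it with PPO."""
    reward_model = {"trained_on": preference_rankings}
    return {**model, "stage": "rlhf", "reward_model": reward_model}

chatgpt = rlhf_ppo(
    supervised_finetune(pretrain("web corpus"), [("prompt", "good answer")]),
    [("answer A", "preferred over", "answer B")],
)
print(chatgpt["stage"])  # -> "rlhf"
```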
New research topics:
- Precisely stating your requirements: prompting (see the sketch after this list)
- Correcting a model's errors: neural editing
- Detecting AI-generated text, images, and video
- Privacy leakage: machine unlearning
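A small illustration of prompting: instead of retraining the model, the task is specified entirely in the input text. The few-shot template below is one common pattern; the example reviews and the `build_prompt` helper are illustrative, and any LLM endpoint could complete the resulting prompt.

```python
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_prompt(query):
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:                  # in-context (few-shot) examples
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")  # the model completes this line
    return "\n".join(lines)

print(build_prompt("The plot was dull but the acting was great."))
```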