Faced with OpenAI's new Sora, we are all curious children, eager to touch the unknown. Yet just as we set off on that journey of exploration, someone is always setting up a stall by the roadside, shouting: "Secrets of Sora, special-price course!" Then, once you have eagerly paid and opened the course, you find it contains nothing but "secrets" a primary-school student could figure out. It makes you wonder: is this exploration for a new era, or the same old business in a new bottle?
Just as everyone assumed the "Sora course" wave was about to become the next round of the leek-harvesting contest (in Chinese internet slang, "leeks" are the gullible crowd that gets fleeced over and over), OpenAI calmly announced the official way to apply for testing. The news came like a breath of spring air, clearing away the hype swirling around the market. For those who genuinely want to understand Sora and feel the pulse of the technology with their own hands, this is the best possible news!
Release date predictions
Model comparison
How to apply
The image below is a tweet from Sam Altman recruiting members for the Sora red teaming network to take part in internal testing.
1. Red Teaming Network (official)
Fill in the form as required, submit it, and wait for review.
2. OpenAI official forum (unverified)
Notes
- If you followed the GPT-4 whitelist sign-ups a long time ago, this form will feel familiar. It is not 100% identical, but it is close, so based on past experience it is best to fill it in with US-related information where possible.
- Apply, leave the rest to luck, and wait for a notification.
OpenAI Red Teaming Network
Overview
The term red teaming has been used to encompass a broad range of risk assessment methods for AI systems, including qualitative capability discovery, stress testing of mitigations, automated red teaming using language models, providing feedback on the scale of risk for a particular vulnerability, etc. In order to reduce confusion associated with the term “red team”, help those reading about our methods to better contextualize and understand them, and especially to avoid false assurances, we are working to adopt clearer terminology, as advised in Khlaaf, 2023, however, for simplicity and in order to use language consistent with that we used with our collaborators, we use the term “red team”.
Red teaming is an integral part of our iterative deployment process. Over the past few years, our red teaming efforts have grown from a focus on internal adversarial testing at OpenAI, to working with a cohort of external experts to help develop domain specific taxonomies of risk and evaluating possibly harmful capabilities in new systems. You can read more about our prior red teaming efforts, including our past work with external experts, on models such as DALL·E 2 and GPT-4.
Today, we are launching a more formal effort to build on these earlier foundations, and deepen and broaden our collaborations with outside experts in order to make our models safer. Working with individual experts, research institutions, and civil society organizations is an important part of our process. We see this work as a complement to externally specified governance practices, such as third party audits.
The OpenAI Red Teaming Network is a community of trusted and experienced experts that can help to inform our risk assessment and mitigation efforts more broadly, rather than one-off engagements and selection processes prior to major model deployments. Members of the network will be called upon based on their expertise to help red team at various stages of the model and product development lifecycle. Not every member will be involved with each new model or product, and time contributions will be determined with each individual member, which could be as few as 5–10 hours in one year.
Outside of red teaming campaigns commissioned by OpenAI, members will have the opportunity to engage with each other on general red teaming practices and findings. The goal is to enable more diverse and continuous input, and make red teaming a more iterative process. This network complements other collaborative AI safety opportunities including our Researcher Access Program and open-source evaluations.
How to join
This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact. By becoming a part of this network, you will be a part of our bench of subject matter experts who can be called upon to assess our models and systems at multiple stages of their deployment.
Assessing AI systems requires an understanding of a wide variety of domains, diverse perspectives and lived experiences. We invite applications from experts from around the world and are prioritizing geographic as well as domain diversity in our selection process.
Compensation and confidentiality
All members of the OpenAI Red Teaming Network will be compensated for their contributions when they participate in a red teaming project. While membership in this network won’t restrict you from publishing your research or pursuing other opportunities, you should take into consideration that any involvement in red teaming and other projects are often subject to Non-Disclosure Agreements (NDAs) or remain confidential for an indefinite period.
A leek's guide to self-cultivation
In this age of information overload, every time OpenAI quietly unveils a new technology it stirs up a storm. Take the recent Sora: OpenAI has released nothing more than an eye-catching demo, but that has not stopped the "prescient" internet entrepreneurs. They always seem to seize the moment, rolling out "exclusive" courses that promise to turn you into a Sora expert. The courses are not cheap, of course, yet the actual content is usually little more than a rehash of publicly available information. It brings to mind the old saying that between the people who strike gold and the people who sell shovels, it is the shovel sellers who get rich.
So, dear readers, the next time you see one of those glittering "exclusive secret courses", ask yourself first: do I really need this, or am I just being drawn in by the ever-present marketing hype? Remember, real knowledge and real exploration rarely live inside overpriced courses; they come from your own curiosity about the world.
A thousand words boil down to one sentence: "refusing to be a leek starts with you and me."