About the Team
Trust and Safety is at the foundation of OpenAI’s mission. The team is part of OpenAI’s broader Applied AI group, which is charged with turning OpenAI’s advanced AI model technology into useful products. We see this as a path toward safe and broadly beneficial AGI: by deploying these technologies, we gain practical experience in making them safe and easy for developers and customers to use.
Within the Applied AI group, the Trust and Safety team protects OpenAI’s technologies from abuse. We develop tools and processes to detect, understand, and mitigate large-scale misuse. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.
In the summer of 2020, we introduced GPT-3 as the first product on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. The MIT Technology Review listed GPT-3 as one of its 10 Breakthrough Technologies of the past year (alongside mRNA vaccines!). In the summer of 2021, we launched Copilot, powered by our Codex model, in partnership with GitHub.
About the Role
As a technical analyst on the Trust and Safety team, you will be responsible for developing novel detection techniques to discover and mitigate abuse of OpenAI’s technologies. The Platform Abuse subteam specializes in detecting new threat vectors and scaling our coverage using state-of-the-art techniques. This is an operations role based in our San Francisco office; it requires participation in an on-call rotation and resolving urgent incidents outside of normal work hours.
In the interest of transparency and trust, we’d like to note that this role involves grappling with sensitive uses of OpenAI’s technology, including at times sexual, violent, or otherwise disturbing material.
In this role, you will:
- Detect, respond to, and escalate platform abuse incidents
- Develop new ways to scale our detection coverage, especially by using state-of-the-art large language models and embeddings to improve and automate detection
- Improve our detection and response processes
- Collaborate with engineering, policy, and research teams to improve our tooling and understanding of abusive content
You might thrive in this role if you:
- Have experience developing innovative detection solutions and conducting open-ended research to solve real-world problems
- Have experience with large language models and deploying scaled detection solutions
- Have experience in a technical analysis role or with log analysis tools like Splunk or Humio
- Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Benefits
- Medical, dental, and vision insurance for you and your family
- Mental health and wellness support
- 401(k) plan with 4% matching
- Unlimited time off and 18+ company holidays per year
- Paid parental leave (20 weeks) and family-planning support
- Annual learning & development stipend ($1,500 per year)