OpenAI’s Evals & Understanding Team is responsible for evaluating OpenAI’s models based on performance and safety. The team provides metrics and eval frameworks that allow researchers to understand the safety, efficacy, and performance of models as they’re developed. Products like ChatGPT, DALL·E, plugins, browsing, Code Interpreter, and GPT-4V rely on both human and synthetic data, as well as model-based experimentation, to evaluate success.
Our team builds and deploys the products and experiences necessary to evaluate, debug, and understand our models at scale, drawing on data from a variety of sources. We build the ML operations, data management tooling, quality and eval systems, and model experimentation and insight tools that are leveraged to improve our AI models.
In this role, you will:
- Design, architect, and build the tooling, infrastructure, products, and evals that power our data generation and management platform, including the feedback mechanisms in products like ChatGPT and the interfaces used by AI trainers
- Collaborate closely with product managers, researchers, and the rest of our engineering team to create new products around emerging research capabilities and unsolved customer needs
- Iterate rapidly to improve user and developer experience while advancing scalability, performance, observability, and security
You might thrive in this role if you:
- Have meaningful experience building (and rebuilding) production systems to deliver new product capabilities and handle increasing scale
- Care deeply about the end user experience and take pride in building products to solve customer needs
- Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done
- Build tools to accelerate your own (and your teammates’) workflows, but only when off-the-shelf solutions won’t do
- Are interested in and thoughtful about the impacts of AI technology (see our Charter for examples of our goals) and care deeply about the impact of ML models on people’s lives: how to maximize the benefits and mitigate the possible harms