Responsibilities
As the Data Science Manager for Integrity, you will lead a dynamic team of data scientists focused on trust & safety, fraud prevention, risk analysis, and modeling. Your key responsibilities will include:

- Building a high-performing data science function that adapts swiftly to emerging threats.
- Defining and executing an analytical strategy that guides how OpenAI identifies, measures, and mitigates integrity risks at scale.

This role is highly cross-functional: you will collaborate with Integrity Product and Engineering leaders to set strategic roadmaps, refine team structure, and strengthen technical rigor (including experimentation, causal inference, modeling, and metrics), while cultivating a culture of proactive impact. Because the challenges in this field evolve constantly, you must bring strong judgment, comfort with ambiguity, and the ability to build scalable systems.
About the job
Join OpenAI's Integrity Data Science team, where our mission revolves around the responsible deployment of powerful AI technologies. We are dedicated to ensuring that our users can trust our products by developing measurement systems, experimental practices, and strategies to detect and mitigate misuse, fraud, and evolving adversarial behaviors.
As we expand the scope and urgency of our Integrity initiatives across product surfaces and market strategies, we are seeking a passionate Data Science Manager. In this role, you will scale our team, enhance execution across multiple Integrity domains, and foster collaboration with Product, Engineering, Operations, and related teams (e.g., Growth, Ads).
This position is located at our San Francisco headquarters (in-office).
About OpenAI
OpenAI is at the forefront of artificial intelligence technology, committed to ensuring that AI benefits all of humanity. Our Integrity Data Science team plays a critical role in this mission, working tirelessly to develop systems that protect against misuse and promote trust in our innovations. Join us in shaping a safer AI landscape!