
Data Scientist - Integrity Measurement at OpenAI | San Francisco

OpenAI · San Francisco
On-site · Full-time




Qualifications

  • Proven experience in data science, particularly in trust and safety domains.

  • Strong analytical skills and proficiency in statistical methods.

  • Experience with AI and machine learning methodologies.

  • Ability to handle sensitive content with professionalism and discretion.

  • Effective communication skills for presenting data insights and collaborating with cross-functional teams.

About the job

About the Team

The Applied Foundations team at OpenAI is focused on ensuring our innovative technology remains secure against a range of adversarial threats. We are committed to safeguarding the integrity of our platforms as they expand. Our team plays a crucial role in defending against financial abuse, large-scale attacks, and various forms of misuse that could compromise user experience or operational stability.

The Integrity pillar within Applied Foundations is tasked with developing robust systems that identify and respond to harmful actors and activities on OpenAI’s platforms. As these systems evolve to address significant usage harms, we are seeking skilled data scientists to accurately measure the prevalence of these issues and assess the effectiveness of our responses.

About the Role

We are seeking experienced trust and safety data scientists to enhance, operationalize, and oversee the measurement of complex actor- and network-level harms. The data scientist in this role will own measurement and metrics across several established harm verticals, including estimating the prevalence of on-platform (and occasionally off-platform) harm, and will conduct analyses to uncover gaps and opportunities in our responses.

This position is based in our San Francisco or New York office and may require addressing urgent escalations outside of regular working hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise disturbing material.

In this role, you will:

  • Lead measurement and quantitative analysis for a range of severe, actor- and network-based usage harm verticals.

  • Develop and apply AI-first methodologies for prevalence measurement and other standardized safety metrics, potentially incorporating off-platform indicators and non-traditional datasets.

  • Create metrics that can be utilized for goal-setting or A/B testing where traditional prevalence or top-line metrics may not apply.

  • Manage dashboards and metrics reporting for harm verticals.

  • Perform analyses and generate insights to guide improvements in review, detection, and enforcement processes, while influencing strategic roadmaps.

  • Optimize LLM prompts specifically for measurement purposes.

  • Collaborate with other safety teams to identify key safety issues and develop relevant policies to address safety needs.

  • Provide metrics for leadership to support informed decision-making.
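To give a concrete flavor of the prevalence-measurement work described above: a common baseline approach (not specific to OpenAI's internal methods) is to label a random sample of platform activity and report the harm rate with a confidence interval. The sketch below uses the Wilson score interval, which behaves well for the rare-event rates typical of severe harms; the function name and the example figures are illustrative, not drawn from the posting.

```python
import math

def prevalence_estimate(num_flagged: int, sample_size: int, z: float = 1.96):
    """Estimate harm prevalence from a labeled random sample.

    Returns the point estimate plus a Wilson score confidence
    interval (default z=1.96 for ~95% coverage), which stays
    well-behaved even when the harm rate is near zero.
    """
    p = num_flagged / sample_size
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z**2 / (4 * sample_size**2)
    )
    return p, max(0.0, center - half), min(1.0, center + half)

# Hypothetical example: 12 harmful items found in a random
# sample of 5,000 reviewed items.
p, low, high = prevalence_estimate(12, 5000)
```

The interval, rather than the raw rate alone, is what makes such a metric usable for goal-setting or A/B comparisons: two harm rates are only meaningfully different if their intervals separate.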

About OpenAI

OpenAI is at the forefront of AI research and deployment, dedicated to ensuring that these powerful technologies benefit humanity as a whole. Our commitment to safety and integrity drives our team to innovate continually while addressing complex challenges in the digital landscape.
