
Data Scientist, Integrity Measurement at OpenAI | London, UK

OpenAI · London, UK
On-site · Full-time


Qualifications

Candidates should possess a strong background in data science with a focus on safety and integrity metrics, demonstrated experience in quantitative analysis, and a passion for tackling complex problems in digital environments. Proficiency in programming languages such as Python or R, along with familiarity with machine learning techniques, is essential. Excellent communication skills and the ability to work collaboratively in a fast-paced environment are also required.

About the job

Join the Applied Foundations team at OpenAI, where we are committed to not only revolutionizing technology but also safeguarding it against a wide array of adversarial threats. Our mission is to uphold the integrity of our platforms as they expand. We are at the forefront of defending against financial abuse, scaled attacks, and other forms of misuse that could compromise user experience or operational stability.

The Integrity pillar within Applied Foundations is charged with overseeing the systems that identify and address harmful activities on OpenAI’s platforms. As we enhance our capabilities to tackle some of the most pressing usage harms, we are seeking experienced data scientists to robustly measure the prevalence of these issues and evaluate the effectiveness of our responses.


About the Role

We are looking for seasoned trust and safety data scientists to enhance, operationalize, and monitor measurements for complex actor- and network-level harms. In this role, you will take ownership of measurement and metrics across various established harm verticals, including estimating the prevalence of on-platform (and occasionally off-platform) harm and conducting analyses to uncover gaps and opportunities in our responses.

This position is based in our London office and may require addressing urgent escalations outside of standard working hours. Some areas of concern may involve sensitive content, including sexual, violent, or otherwise disturbing material.

In this role, you will:

  • Take charge of measurement and quantitative analysis for a range of significant actor- and network-based usage harm verticals.

  • Design and implement AI-first methods for prevalence measurement and other safety metrics, potentially utilizing off-platform indicators or non-traditional datasets.

  • Develop metrics suitable for goal-setting or A/B tests when standard top-line metrics are inadequate.

  • Manage dashboards and metrics reporting for harm verticals.

  • Perform analyses and generate insights that drive improvements in review, detection, or enforcement, influencing strategic roadmaps.

  • Optimize LLM prompts for the purpose of measurement.

  • Collaborate with other safety teams to grasp key safety concerns and formulate relevant policies to support safety objectives.

  • Engage in additional duties as required to ensure the integrity and safety of OpenAI’s platforms.

About OpenAI

OpenAI is at the forefront of artificial intelligence research and deployment, dedicated to ensuring that the benefits of AI technology are shared broadly. We prioritize safety, integrity, and ethical considerations in our innovations, making a significant impact in various sectors.
