
Researcher, Trustworthy AI

OpenAI · San Francisco
On-site · Full-time





Qualifications

Minimum of 3 years of research experience in AI safety or related fields, proficiency in Python or similar programming languages, strong analytical skills, and a proven ability to translate complex policy issues into actionable research.

About the job

About Our Team

The Safety Systems team at OpenAI is at the forefront of ensuring our advanced AI models are deployed safely for societal benefit. We are dedicated to OpenAI's mission of building safe AGI and promoting a culture centered on trust and transparency.

Our Trustworthy AI team focuses on actionable research that considers the societal implications of AI development. This includes addressing complex policy challenges by creating mechanisms for public input into AI values and analyzing the effects of anthropomorphism in AI. We strive to convert abstract policy dilemmas into practical, measurable solutions that can help prepare society for more intelligent systems. Our work also emphasizes external validations and assurances for AI, aiming to strengthen independent oversight.

About the Role

We are seeking outstanding research scientists and engineers to strengthen our efforts to prepare society for AGI. The ideal candidate can transform vague policy issues into quantifiable, actionable research.

This position is based at our headquarters in San Francisco, and we provide relocation assistance for new hires.

In This Role, You Will:

  • Develop research methodologies and strategies to investigate the societal impacts of our models in a manner that informs model design.

  • Innovate and execute experiments that facilitate public engagement in shaping model values.

  • Enhance the robustness of external assurances through comprehensive evaluations of outside findings.

  • Support and expand our capacity to mitigate risks associated with flagship model deployments efficiently.

You Will Excel in This Role If You:

  • Are passionate about OpenAI’s vision of developing safe, universally beneficial AGI and resonate with our charter.

  • Demonstrate a strong commitment to AI safety, aiming to enhance the safety of cutting-edge AI systems for practical application.

  • Bring over 3 years of research experience (whether in industry or academia) and proficiency in Python or analogous programming languages.

  • Exhibit excellent problem-solving skills with a track record of translating complex concepts into actionable insights.

About OpenAI

OpenAI is a pioneering research organization dedicated to developing artificial general intelligence (AGI) that is safe and beneficial to humanity. Our Safety Systems team plays a crucial role in ensuring that our AI models are deployed responsibly, with a focus on societal impact and transparency.
