Content Integrity Analyst

OpenAI · San Francisco
Hybrid · Full-time


Qualifications

  • Bachelor's degree in a relevant field

  • Proven experience in Trust & Safety or Content Integrity roles

  • Strong analytical and problem-solving skills

  • Excellent communication and collaboration abilities

  • Ability to handle sensitive content with discretion

  • Familiarity with policy development and enforcement in tech environments

About the job

About Our Team

At OpenAI, our Trust & Safety Operations team plays a pivotal role in safeguarding our platform, ensuring the safety of our users, and protecting the public from potential misuse. We serve a broad spectrum of clients, from individual users to emerging startups and established global enterprises, across our growing range of offerings, including ChatGPT and our API.

As a vital part of our Support organization, we work in close collaboration with Product, Engineering, Legal, Policy, Go To Market, and Operations teams to provide exceptional user experiences while minimizing risks and reducing material harm.

About the Role

We are seeking seasoned Trust & Safety / Content Integrity professionals who are adept at investigating intricate cases, implementing and refining usage policies in real-world contexts, and developing scalable systems that effectively mitigate risks over time. In this role, you will act as a subject-matter expert (SME) for critical escalations, collaborating with cross-functional teams to facilitate swift and defensible resolutions. You'll also play a key role in designing processes, tools, and automated solutions that ensure safe operations at scale.

This position is ideal for individuals who possess strong judgment and sharp analytical skills, and who thrive on transforming ambiguity into clear decisions, repeatable workflows, and effective automation.

Important Note: This position may require handling sensitive content, which could include highly confidential, sexual, violent, or otherwise disturbing material.

Location: San Francisco, CA (hybrid: 3 days in office/week).

Key Responsibilities:

  • Apply Usage Policy with Precision: Interpret and enforce OpenAI’s usage policies in complex scenarios, providing detailed guidance to both customers and internal teams while documenting exceptions and suggesting policy improvements.

  • Mitigate Risks and Harm: Assess and manage content and behavior that pose real-world risks, including high-severity cases, ensuring appropriate escalation and resolution.

  • Act as a Subject-Matter Expert: Support incident responses and high-stakes escalations by delivering clear assessments, recommending actionable next steps, and collaborating with Legal, Compliance, Security, Product, and Engineering teams as required.

  • Develop Scalable Trust Workflows: Create and manage processes for human-in-the-loop labeling, user reporting, appeals, enforcement actions, and continuous quality assurance, upholding high standards of quality.

About OpenAI

OpenAI is a leading artificial intelligence research organization dedicated to ensuring that AI benefits all of humanity. Our team is committed to innovation and safety, providing cutting-edge AI solutions while prioritizing user trust and safety. Join us in shaping the future of technology with responsible AI practices.
