
Trust & Safety Operations Analyst

OpenAI, San Francisco
Hybrid, Full-time



Experience Level

Senior

Qualifications

Candidates should possess a strong analytical background with expertise in risk management, compliance, or safety operations. A proven track record in handling sensitive content and managing complex cases is essential, as are excellent written and verbal communication skills and the ability to collaborate effectively with diverse teams. Knowledge of regulatory frameworks and industry standards related to trust and safety is a plus.

About the job

Join Our Dynamic Team

At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our innovative products, users, and the organization from various threats, including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience.

Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to address the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities

We are looking for seasoned analysts with expertise in one or more of the following domains:

  • Content Integrity & Scaled Enforcement – Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends on a large scale.

  • Emerging Risk Operations – Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.

In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments.

We operate on a hybrid model, with three days per week in our San Francisco office, and we provide relocation assistance to new hires.

Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:

  • Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).

  • Conduct thorough risk assessments and investigations utilizing internal tools, product signals, and external data sources.

  • Act as the incident manager for escalated cases necessitating intricate policy, legal, or regulatory analysis.

  • Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.

  • Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.

About OpenAI

OpenAI is at the forefront of artificial intelligence innovation, committed to advancing digital technology while ensuring the safety and trust of our users. Our diverse team works collaboratively to create cutting-edge solutions that empower individuals and organizations worldwide. We prioritize integrity, compliance, and user safety, making a positive impact in the AI landscape.
