About the job
Join Our Innovative Team
At OpenAI, our Applied team is dedicated to responsibly delivering cutting-edge technology to the world. We have launched transformative tools such as ChatGPT, DALL·E, and the GPT-4 and GPT-3 APIs, and we manage large-scale inference infrastructure. As we continue to innovate, we are committed to ensuring our powerful tools are used ethically and safely.
The Scaled Abuse team operates within our Applied Engineering organization, tackling fraudulent activities on our platform. We are seeking a passionate Data Scientist with expertise in anti-fraud and abuse measures to spearhead the design and development of our next-gen anti-abuse systems.
Your Role
As a Data Scientist on the Scaled Abuse team, you will play a pivotal role in safeguarding OpenAI’s products from exploitation. Your responsibilities will include identifying and addressing emerging misuse patterns and enhancing our detection methods. This is a dynamic challenge: many of the abuse strategies we will face have yet to be conceived.
Primary Responsibilities:
Develop and implement systems for fraud detection and resolution, balancing fraud loss, implementation cost, and customer experience.
Collaborate with finance, security, product, research, and trust & safety teams to effectively combat fraudulent activity on our platform.
Stay current with the latest tools and techniques to keep ahead of sophisticated adversaries.
Leverage advanced models such as GPT-5 to enhance our fraud and abuse countermeasures.
Ideal Candidate:
Experience working within a highly technical trust and safety team, or working closely with policy, content moderation, or security teams.
Proficiency in programming languages (Python preferred) for exploratory data analysis and deriving actionable insights.
Demonstrated ability to propose, design, and conduct rigorous experiments (A/B tests, quasi-experimental designs, etc.) to validate findings.