Safeguards Analyst Human Exploitation Abuse jobs in San Francisco – Browse 517 openings on RoboApply Jobs

Safeguards Analyst Human Exploitation Abuse jobs in San Francisco

Open roles matching “Safeguards Analyst Human Exploitation Abuse” in San Francisco. 517 active listings on RoboApply Jobs.

517 jobs found

1 - 20 of 517 Jobs
Anthropic
Full-time|Remote-Friendly (Travel-Required)|San Francisco, CA | Washington, DC

Join Anthropic as a Safeguards Analyst, where your expertise will play a vital role in combating human exploitation and abuse. In this position, you will utilize analytical skills to assess risks, develop strategies, and implement safeguards that protect vulnerable populations. Your contributions will be essential in shaping policies and practices that promote safety and integrity across our operations. We are looking for a proactive individual who is passionate about human rights and eager to make a significant impact.

Mar 19, 2026
Anthropic
Full-time|Remote-Friendly (Travel-Required)|San Francisco, CA | Washington, DC | New York City, NY

Join Anthropic as a Safeguards Enforcement Analyst, where you will play a pivotal role in safety evaluation and enforcement for our AI systems. This role focuses on analyzing compliance with safeguards and developing strategies to enhance safety protocols. Collaborate with cross-functional teams to assess risks and implement robust solutions that align with our commitment to responsible AI.

Mar 12, 2026
Anthropic
Full-time|Remote-Friendly (Travel-Required)|San Francisco, CA | New York City, NY

Join Anthropic as a Safeguards Policy Analyst focusing on Fraud & Scams. In this pivotal role, you will analyze, develop, and implement policies that protect users from fraudulent activities. Collaborate with cross-functional teams to ensure our systems and processes are secure and user-friendly. Your insights will guide our efforts to mitigate risks and enhance user experience.

Apr 7, 2026
Anthropic
On-site|San Francisco, CA | New York City, NY

Join Anthropic as a Safeguards Analyst focusing on Account Abuse, where you will be instrumental in developing robust systems to detect and combat account misuse on our platform. Your expertise will help us ensure a safe and trustworthy environment for our users. You will be responsible for creating and optimizing frameworks for identifying abuse, evaluating third-party data sources, and collaborating with cross-functional teams to enhance our enforcement operations. This role also involves managing payment fraud operations and assessing new product launches for potential abuse risks. Please note this position may involve exposure to sensitive content and an on-call responsibility across teams.

Feb 6, 2026
Anthropic
Full-time|On-site|San Francisco, CA | New York City, NY

We are seeking a dynamic and detail-oriented Vendor and Contract Manager, Safeguards to join our team at Anthropic. In this role, you will be responsible for overseeing vendor relationships and contract management, ensuring compliance with our safeguards and standards. Your expertise will contribute to our mission of creating safe and beneficial AI systems.

Mar 19, 2026
OpenAI, Inc.
Full-time|Remote|San Francisco

Join Our Team

At OpenAI, our mission is to ensure that general-purpose artificial intelligence serves the greater good for all humanity. We are committed to the real-world deployment of our technologies and their continuous improvement based on practical usage and potential misuse.

The Intelligence and Investigations team plays a critical role in this mission by identifying, examining, and mitigating the misuse of our products, focusing on significant and innovative harms. Our efforts empower partner teams to create data-driven model policies and develop robust safety measures. By gaining a deep understanding of abuse patterns, we help guarantee that OpenAI's products are utilized safely in the creation of impactful and rewarding applications.

About the Position

As a Technical Abuse Investigator within the Intelligence and Investigations team, your primary responsibility will be to detect, investigate, and thwart malicious activities on OpenAI’s platform. You will enhance portions of the investigative process to enable our team to effectively counteract harm on a larger scale. This position uniquely blends traditional investigative acumen with strong technical skills, as much of the work involves navigating intricate datasets to uncover actionable abuse signals, rather than merely reviewing isolated reports.

Beyond performing direct investigations, this role is designed to amplify the capabilities of the broader investigations team. You will work on scaling or automating essential yet intricate processes, crafting and implementing lightweight technical solutions (like notebook templates, data pipelines, or internal utilities) that empower specialized investigators to detect, track, and address abuse more effectively than a single investigator could achieve. Success will not only be measured by the number of investigations completed but also by how efficiently your contributions allow you and your teammates to operate.

You will collaborate closely with engineering, legal, investigations, security, and policy partners to address urgent escalations, examine activities that surpass existing safeguards, and translate investigative findings into scalable detection and enforcement strategies.

This role will require participation in an on-call rotation to manage urgent escalations beyond standard work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material. This position operates in the PST time zone and is open to remote candidates within the United States, although we have a strong preference for applicants based in San Francisco or New York.

Mar 12, 2026
OpenAI
Full-time|Remote|San Francisco

About Our Team

At OpenAI, our mission is to ensure that general-purpose artificial intelligence serves the betterment of all humanity. We believe that the realization of this mission hinges on real-world deployment and continuous iteration based on our experiences.

The Intelligence and Investigations team plays a crucial role in this mission by identifying and probing into misuse of our products, particularly emerging forms of abuse. This work empowers our partner teams to devise data-informed product policies and develop scalable safety measures. A nuanced understanding of abuse enables us to provide users with the tools they need to create positive outcomes with our products.

About This Role

As an Abuse Investigator within the Intelligence and Investigations team, your primary responsibility will be to detect and assess malicious activities on our platform, effectively disrupting any violations of our policies and identifying harmful behaviors. This role necessitates an expert-level comprehension of our products and data, alongside a solid background in investigating threat actors. You will address urgent escalations, particularly those that evade our existing tools and safeguards.

This position demands specialized knowledge in identifying, interpreting, and mitigating risks associated with violent behaviors and terrorist activities. You should have experience in investigating complex and harmful threats, along with the capability to discern ambiguous signals in a multifaceted and adversarial threat landscape. A demonstrated ability to quickly assimilate new processes, systems, and team dynamics while thriving in a dynamic, high-pressure environment is essential.

This role operates on Pacific Standard Time and supports remote work, although you are also welcome to work from our offices in San Francisco, New York, or Washington, D.C. The position includes resolving urgent escalations outside of standard working hours and participating in on-call shifts. Investigations will involve sensitive content, including sexual, violent, or otherwise disturbing material, including issues of child safety. Thus, resilience in managing high-stress environments is crucial.

Key Responsibilities:
- Analyze leads, investigate activities, and disrupt abusive operations in collaboration with our policy, legal, and integrity teams, focusing on violent and terrorist activities, particularly those posing immediate threats to life.
- Create abuse signals and tracking strategies to proactively identify harmful activities on our platform.
- Identify operational workflow enhancements and processes that expedite work while maintaining risk mitigation strategies.

Mar 11, 2026
Arena Intelligence

Founding Abuse Engineer

Full-time|On-site|Bay Area

Location: Bay Area
Role: Founding Abuse Engineer

Arena Intelligence provides transparent, real-world evaluations of AI models. Founded by researchers from UC Berkeley’s SkyLab, the company supports organizations worldwide with trusted benchmarks and monthly leaderboards used by millions. Arena’s team draws on experience from UC Berkeley, Google, Stanford, DeepMind, and Discord, and values truth, speed, and quality. The company encourages curiosity and welcomes people who want to make a difference in practical AI assessment.

Role overview

The Founding Abuse Engineer shapes the strategy and systems that keep Arena’s platform safe from misuse. This position covers everything from designing detection and enforcement tools to investigating threats and defending the integrity of Arena’s leaderboards. The work is both foundational and highly visible, setting standards for future trust and safety efforts as threats evolve.

What you will do
- Design and implement systems to detect and prevent automated abuse and other misuse across Arena’s products
- Investigate threats and respond to new abuse tactics targeting AI model evaluations
- Collaborate with product, infrastructure, model collaborators, policy, and leadership teams to maintain secure and reliable leaderboards
- Define technical strategy for platform integrity and trust
- Establish the foundation for future trust & safety and abuse engineering work

Collaboration

This role works closely with teams across Arena, including product, infrastructure, policy, and leadership. The impact is direct: keeping leaderboards secure, intercepting harmful behavior, and ensuring Arena’s services can be safely deployed.

Apr 22, 2026
Anthropic
Full-time|On-site|San Francisco, CA | New York City, NY

Join our dynamic team at Anthropic as a Software Engineer specializing in Account Abuse. In this pivotal role, you will leverage your coding skills and analytical mindset to develop innovative solutions that enhance our platform’s integrity. You will work collaboratively with cross-functional teams to identify and mitigate potential abuse scenarios, ensuring a safe and secure user experience.

Mar 16, 2026
OpenAI
Full-time|Remote|Remote - US

About Our Team

At OpenAI, we are dedicated to ensuring that general-purpose artificial intelligence serves the greater good of humanity. We firmly believe that achieving this goal necessitates real-world deployment and continuous improvement driven by our learnings.

Our Intelligence and Investigations team plays a crucial role by identifying and probing instances of misuse of our products, particularly focusing on emerging types of abuse. This work empowers our partner teams to establish data-driven product policies and create scalable safety measures. A precise understanding of misuse is essential for us to enable users to leverage our products effectively and safely.

About the Role

As an Abuse Investigator on our Intelligence and Investigations team, your primary responsibility will be to identify and investigate misuse of our platform or services. You will specifically concentrate on cases where users attempt to exploit our platform for prohibited activities, such as developing or distributing biological and/or chemical threats aimed at harming individuals, essential resources/infrastructure, or the environment. OpenAI maintains strict policies in this realm, and you will be tasked with detecting, disrupting, and enforcing compliance against violators.

This position requires specialized expertise, experience in investigating intricate threats, and the capacity to navigate ambiguous signals within a complex and adversarial threat landscape.

You will be required to respond to urgent escalations and present your investigative findings, both in written form and verbally, to key stakeholders in government, industry, and civil society as needed. Additionally, you will contribute to shaping the company’s evolving threat response, integrity monitoring, and mitigation strategies, while working closely on individual cases and enforcement assessments.

This role is designed to be fully remote, although you are welcome to work from our San Francisco office if preferred. Please note that some investigations may involve sensitive content of a violent or graphic nature.

Responsibilities:
- Detect, investigate, and disrupt attempts to misuse OpenAI products in the creation or distribution of biological threats, including dual-use misuse and emerging biothreat vectors. You will also collaborate across related domains (e.g., chemical threats).
- Collaborate closely with teams across Policy, Legal, Integrity, Global Affairs, and Security to conduct thorough investigations, including cross-internet and open-source research to trace and comprehend instances of abuse, ensuring OpenAI’s mitigations adapt to the evolving needs of the sector.

Mar 6, 2026
OpenAI
Full-time|On-site|San Francisco

About the Team

At OpenAI, we are dedicated to ensuring that general-purpose artificial intelligence serves to benefit all of humanity. Our mission involves real-world implementation and ongoing refinement based on the practical use and potential misuse of our technologies.

The Intelligence and Investigations team plays a crucial role in this mission by identifying, analyzing, and investigating the misuse of our products, especially in relation to novel or emerging abuse patterns. Our efforts empower partner teams to formulate data-informed product policies and implement effective safety measures. By gaining a precise understanding of abuse, we ensure that OpenAI's products are utilized safely for meaningful and legitimate applications.

About the Role

As a Child Safety Investigator on the Intelligence & Investigations team, your primary responsibility will be to identify and prevent individuals from using OpenAI’s products to exploit minors, both online and offline. OpenAI has stringent prohibitions against such behavior and is committed to reporting any identified child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), in accordance with applicable laws and our internal policies.

This position demands specialized knowledge, technical proficiency, and the capability to navigate ambiguous, high-stakes situations. You will conduct thorough investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions, including cases that necessitate legal review and external reporting.

Additionally, you will contribute to the development of detection strategies aimed at proactively identifying high-risk behaviors, particularly those that bypass existing safeguards. This role may involve responding to urgent escalations, and investigations may require exposure to sensitive and distressing materials, including sexual or violent content.

In this role, you will:
- Investigate serious child safety violations and disrupt harmful actors in collaboration with Policy, Legal, Integrity, Global Affairs, Security, and Engineering teams, including conducting cross-platform and cross-internet research.
- Support investigations in other high-risk harm areas where child safety issues intersect.
- Perform open-source and cross-platform research to provide context on actors and abuse networks.
- Develop detection signals, behavioral heuristics, and tracking strategies to proactively identify high-risk users using tools such as SQL, Databricks, and Python.
- Collaborate with various teams to enhance the efficacy of safety measures and protocols.

Mar 6, 2026
OpenAI
Full-time|Remote|San Francisco

Role Overview

OpenAI is seeking an Abuse Investigator focused on AI self-improvement risk. This position is based in San Francisco. The role centers on protecting the ethical and responsible use of AI systems.

What You Will Do
- Analyze potential risks connected to AI self-improvement and related technologies.
- Investigate reported incidents or concerns involving AI behavior or misuse.
- Work with teams across the company to strengthen safety protocols and reduce risk.

Apr 15, 2026
City and County of San Francisco
Full-time|On-site|San Francisco

The City and County of San Francisco is seeking a highly skilled and motivated Senior Human Resources Analyst. This pivotal role will involve working across multiple departments citywide, providing guidance and support in various HR functions. The ideal candidate will possess strong analytical skills, a deep understanding of HR policies, and a commitment to fostering a diverse and inclusive workplace. Your contributions will enhance the overall effectiveness of HR programs and initiatives.

Jun 7, 2023
Control Risks
Full-time|$80K/yr - $100K/yr|Remote — San Francisco, California, United States

As a Senior Analyst for Minor Safety at Control Risks, you will be instrumental in assisting a leading global technology firm in safeguarding and fortifying its online platform. This role focuses on the critical task of reviewing and addressing safety and abuse incidents involving minors, conducting in-depth investigations of behavioral abuse and potential threat actors, and delivering high-quality analytical insights in a dynamic environment. Become part of a dedicated, mission-oriented team that collaborates across various disciplines to protect vulnerable populations, allowing the client to operate securely and responsibly in a constantly shifting risk landscape.

Work Schedule: Tuesday to Saturday, 9:00 AM - 5:00 PM local time.

Responsibilities:
- Thoroughly review incident reports related to safety and abuse targeting minors, taking decisive action in accordance with operational policies and ensuring consistent follow-through on every report.
- Conduct investigations into behavioral abuse and threat actors on the client’s platform to analyze techniques, impacts, and attribution accurately.
- Engage in a fast-paced analytic workflow, meeting tight deadlines while maintaining a high standard of analytic excellence, ensuring that reporting deliverables are aligned with best practice intelligence assessments.
- Prepare detailed written reports, presentations, and strategic insights for senior leadership and the broader organization.
- Lead improvement initiatives by offering guidance on policy development and executing projects that enhance existing workflows.
- Review and provide constructive feedback on team members’ work, and design and deliver training programs.
- Identify emerging issues and trends, escalating them as necessary.
- Contribute to enhancing support resources and content.
- Act as a consultative partner with our vendor team, providing expertise in processing all types of requests with exceptional quality and efficiency.
- Serve as a role model and mentor to colleagues, demonstrating flexibility and outstanding teamwork to prioritize competing demands effectively.

Jan 27, 2026
humans&
Full-time|On-site|San Francisco

Technical Staff Member

At humans&, we are dedicated to pioneering a human-centric approach to artificial intelligence. Our mission is to redefine AI by placing individuals and their interpersonal connections at the heart of our innovations.

We invite talented researchers and engineers who have made significant contributions to the cutting edge of AI to join our dynamic team. If you excel in your field and are driven to innovate, we want to hear from you!

Jan 20, 2026
City and County of San Francisco
Full-time|On-site|San Francisco

Join the City and County of San Francisco as a Human Resources Analyst, where you will play a vital role in enhancing workforce efficiency and supporting various city departments. This entry-level position offers an exciting opportunity to contribute to the HR functions that affect the diverse community of San Francisco.

Jun 7, 2023
humans-and
Full-time|On-site|San Francisco

Join our dynamic team at humans-and as a Finance Generalist. In this pivotal role, you will manage various financial functions, ensuring accuracy and compliance while supporting our mission to deliver exceptional service. Your contributions will directly impact our financial health and operational efficiency.

Apr 8, 2026
Jobs for Humanity

Business Systems Analyst

Full-time|On-site|San Francisco

Jobs for Humanity seeks a Business Systems Analyst based in San Francisco. This role centers on bridging business needs with effective technology solutions.

Role overview

The Business Systems Analyst works closely with stakeholders to understand organizational requirements. By analyzing current processes and identifying areas for improvement, the analyst helps shape solutions that support company objectives.

What you will do
- Gather and clarify business requirements from different teams
- Analyze existing workflows and identify opportunities to streamline operations
- Design and help implement new processes or systems that align with business goals

Location

This position is based in San Francisco.

Apr 28, 2026
Anthropic
On-site|San Francisco, CA | New York City, NY

Join Anthropic as a Software Engineer on our Safeguards team, where you will play a pivotal role in developing safety and oversight mechanisms for our AI systems. Your work will focus on monitoring model behaviors, preventing misuse, and ensuring the well-being of our users. You will be responsible for creating systems that detect unwanted model behaviors, enforce compliance with our terms of service, and uphold our commitment to safety, transparency, and accountability.

Jan 29, 2026
mindlance2
Full-time|On-site|San Francisco

Join our team at mindlance2 as a Senior SAP Human Capital Management Functional Analyst, where you will play a critical role in optimizing our HR processes. You will be responsible for analyzing, designing, and implementing solutions within the SAP HCM module, ensuring that our HR systems align with business needs.

Sep 20, 2016
