Experience Level
Senior
Qualifications
Proven experience in software engineering, with expertise in languages such as Java, Python, or JavaScript.
Strong understanding of security principles and practices.
Experience with cloud services (AWS, GCP, Azure) is a plus.
Excellent problem-solving skills and the ability to work collaboratively in a team.
Familiarity with Agile development methodologies.
Understanding of data privacy regulations and best practices.
About the job
Join Quizlet as a Senior Software Engineer specializing in Trust & Safety, where you will play a crucial role in enhancing the security and integrity of our platform. You will collaborate with cross-functional teams to develop robust software solutions that protect our community and ensure a safe learning environment.
About Quizlet Inc.
Quizlet is a leading educational technology company that empowers students to learn more effectively through innovative tools and resources. With a commitment to creating a safe and supportive learning environment, we aim to make education accessible to everyone.
Airbnb started in 2007 when two hosts welcomed three guests into their San Francisco home. Since then, the platform has grown to over 5 million hosts and more than 2 billion guests worldwide. Hosts offer unique stays and experiences that connect travelers with local communities.

Trust Engineering at Airbnb
Trust sits at the heart of Airbnb’s platform. The Trust Engineering team builds technology to keep the community safe and uphold high standards for hosts, guests, homes, and experiences. Their work addresses both online risks, such as account compromise, fake listings, and financial loss, and offline concerns like theft, property damage, and personal safety. The team’s responsibilities include user onboarding, screening, identity, and reputation systems. Trust Engineering leads the technical vision for these systems and integrates them throughout Airbnb’s platform.

Role overview
The Senior Staff Software Engineer, Trust, is a senior individual contributor role. This engineer partners with technical leaders across Airbnb to shape, plan, and deliver a broad roadmap of Trust engineering projects. The position involves extensive collaboration with teams throughout the company. While highly senior, this is still a hands-on engineering role: every Airbnb software engineer, regardless of level, contributes code and development work.

What you will do
Define and drive the long-term vision and strategy for the Trust Platform, setting architectural direction for core systems that support scalable, high-quality fraud detection, safety, and trust decisions across Airbnb.
Work deeply within Trust Platform components, developing system and performance tools, and identifying ways to improve technical quality, operational excellence, and developer experience.
Promote an AI-first engineering approach, using LLM-powered agents to generate and refine code, so you can focus on problem-solving, system design, and quality oversight.
Location This position is remote and based in the United States.
Join Suno as an Engineering Manager on our Trust & Safety team, where you will lead the development and implementation of innovative solutions to enhance user safety and trust on our platform. You will work closely with cross-functional teams to ensure the integrity of our systems and the protection of our users. Your leadership will be vital in driving engineering excellence and fostering a culture of safety and accountability.
Role overview
The Senior Manager, Trust & Safety Policy at Lyft leads the team that shapes and updates policies to protect riders and drivers. This position ensures Lyft’s standards align with legal requirements and promote a secure experience on the platform. The role involves both policy development and hands-on implementation.

Key responsibilities
Guide a team dedicated to creating and carrying out trust and safety policies
Draft and update policies that keep users safe while meeting legal and regulatory standards
Collaborate with colleagues from multiple departments to design solutions that work in practice
Share policy changes and decisions clearly throughout the company

What Lyft looks for
Ability to think strategically and solve complex problems
Strong communication skills
Experience working with teams across different functions
Background in trust and safety, policy, or a related area is helpful

Location
San Francisco, CA
Founded in 2007, Airbnb has transformed travel by connecting hosts and guests in over 190 countries. With a community of more than 5 million hosts, we facilitate unique stays and experiences, enabling guests to immerse themselves in local cultures.

Join Our Trust-Focused Community:
At Airbnb, trust is our top priority, and our dedicated team works tirelessly to ensure a safe and secure platform. As part of the Trust team, you will play a crucial role in safeguarding our community from fraud and ensuring the quality of our hosts, guests, and listings. This involves combating both online threats like account compromise and offline risks such as property damage and personal safety. Your work will help us build and maintain trust within our vibrant community.

Make a Significant Impact:
In your role within the Trust Engineering team, you will design and implement large-scale systems aimed at detecting and mitigating fraudulent activities across our platform. Collaborating closely with product, data science, and operations teams, you will develop real-time risk detection services that adapt to evolving threats, helping to make Airbnb the safest and most trusted platform in the industry.

Your Daily Responsibilities:
Design and operate resilient, scalable distributed systems.
Enhance platform capabilities to counteract the evolving fraud landscape, collaborating with various engineering teams.
Contribute vital insights to the Trust Platform's roadmap and strategic initiatives.
Full-time|$162K/yr - $186K/yr|On-site|United States
Airbnb, Inc. has grown from a small home-sharing idea in San Francisco to a platform connecting over 5 million hosts with more than 2 billion guests worldwide. The company’s mission centers on helping people find a place to belong, while supporting authentic connections between guests and local communities.

Role overview
The Trust Emerging Defense team focuses on building new products and protections to address evolving risks. This group works to ensure peace of mind for everyone using Airbnb, from guests to hosts, by strengthening the platform’s safety and privacy features. As a Software Engineer, Trust, the work directly impacts the security of global communications across phone, VoIP, SMS, in-app chat, and video conferencing. This position plays a key role in preventing fraud and safeguarding the Airbnb community.

What you will do
Design and develop products that protect user safety and privacy.
Work with engineers, data scientists, product managers, and operations to identify opportunities and clarify requirements for fraud detection and prevention.
Build, deploy, and operate machine learning models and pipelines at scale, including both batch and real-time applications.
Improve risk investigation tools to support decisions that help prevent safety or property damage incidents.

Collaboration and impact
This role involves leading a team of engineers and collaborating across departments to launch core company initiatives. The work helps shape a secure, trustworthy communication platform for Airbnb’s global community.
Join Our Dynamic Team
At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our innovative products, users, and the organization from various threats, including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience.

Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to address the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities
We are looking for seasoned analysts with expertise in one or more of the following domains:
Content Integrity & Scaled Enforcement – Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends on a large scale.
Emerging Risk Operations – Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.

In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments.

Our work culture embraces a hybrid model of three days in the San Francisco office each week, and we provide relocation assistance for new hires. Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:
Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).
Conduct thorough risk assessments and investigations utilizing internal tools, product signals, and external data sources.
Act as the incident manager for escalated cases necessitating intricate policy, legal, or regulatory analysis.
Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.
Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.
Join Lyft as a Manager of Trust & Safety Policy, where you will play a crucial role in shaping and implementing policies that ensure the safety and trust of our community. Your leadership will guide strategic initiatives, engage with stakeholders, and drive data-informed decisions to foster a secure environment for our riders and drivers.
Full-time|$193.8K/yr - $285K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY; Chicago, IL
About the Team
The Trust & Safety, Integrity, and Fraud Product team at DoorDash is committed to creating a secure and reliable experience for all users on our platform, including Consumers, Merchants, and Dashers. We address intricate challenges such as fraud prevention, account takeover prevention, authenticity verification, and regulatory compliance, all while ensuring a seamless user experience. Our collaborative efforts with cross-functional teams, including Engineering, Data Science, Compliance, and Risk Operations, drive strategic initiatives that safeguard our business while promoting growth.

About the Role
As DoorDash expands beyond restaurants into a broader marketplace, our commitment to safety and trust remains paramount. We are seeking a Senior Product Manager to spearhead cross-functional teams tackling complex challenges that affect Consumers, Dashers, and Merchants. Your role will encompass various aspects of our business, from new user onboarding and in-app experiences to innovative products and services that are unparalleled in the market. Depending on your expertise, you will either lead a vertical fraud team focused on protecting Consumers or a horizontal fraud platform team dedicated to enhancing our Risk Engine, Data Signals Intelligence, and Automation/Anomaly Detection capabilities. This is an exceptional opportunity to shape the future of DoorDash during a period of rapid growth and significant impact.

You’re excited about this opportunity because you will…
Establish the vision and long-term product strategy for a vertical or horizontal fraud team.
Develop and implement a customer-centric product roadmap in close collaboration with senior leadership, Operations, Data Science, Analytics, Design, and Engineering teams.
As the Trust and Safety Strategy Lead at Faire, you will play a pivotal role in shaping our approach to ensuring security and trust within our marketplace. You will be responsible for developing and executing strategic initiatives that promote a safe environment for our users, driving policy development, and collaborating with various teams to implement safety measures and risk management protocols.

This position is ideal for a strategic thinker with a strong background in trust and safety, who thrives in a fast-paced, innovative environment.
Chime is hiring a Product Manager focused on Trust & Safety in San Francisco. This role centers on protecting the platform and its users by driving initiatives that strengthen safety and reduce fraud.

Role overview
The Product Manager will work with teams across the company to design and launch strategies that address user safety concerns. Efforts will target the identification and prevention of fraudulent activities, ensuring that Chime remains a secure place for members.

Key responsibilities
Develop and implement product strategies to enhance trust and safety
Collaborate with engineering, operations, and other teams to address risks and improve user security
Shape product direction with a focus on maintaining a trustworthy platform

Impact
Your work will directly influence how Chime protects its community, helping to build a safer experience for all users.
About Our Team
At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators.

The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team guarantees that these experiences are safe, compliant, of high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role
We are seeking a seasoned operator to help expand and enhance Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale.

You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires an individual who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities:
Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution.
Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks.
Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation.
Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.
Leverage data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals for enhancements.
Full-time|$248K/yr - $279K/yr|On-site|San Francisco Bay Area
Discord, a platform frequented by over 200 million users monthly, thrives on its vibrant gaming community, where more than 90% of users engage in gaming activities. With a staggering 1.5 billion hours spent playing diverse titles each month, Discord is pivotal in shaping the gaming landscape. Our mission is to enhance social interactions for gamers before, during, and after gameplay.

We are seeking an outstanding Trust & Safety Counsel to join our dynamic legal team. This influential role offers the opportunity to contribute significantly at one of the most exciting companies in the tech industry! As our second Trust & Safety Counsel, you will be integral in supporting our Trust & Safety organization, addressing law enforcement data requests, identifying and removing harmful content and actors, and ensuring compliance with international laws and regulations.
About Our Team
At OpenAI, we believe that security is the cornerstone of our mission to ensure artificial general intelligence serves the greater good of humanity. Our dedicated Security team is responsible for safeguarding OpenAI’s technology, personnel, and products. We take a technical approach in our creations while maintaining operational excellence in our processes. Our core principles include prioritizing impactful work, empowering our researchers, preparing for revolutionary technological advancements, and fostering a strong culture of security.

About the Position
Trusted Computing and Cryptography is a specialized engineering and security unit within OpenAI’s Security organization. Our focus areas include:
Implementing high-performance cryptographic solutions at scale
Managing keys securely, including offline physical backups and multi-party computation strategies
Utilizing trusted hardware enclaves for functionalities ranging from boot measurements to GPU confidential computation

We are on the lookout for a talented software engineer to join our team and enhance the security of our vital computing infrastructure, concentrating on trusted computing and cryptography at scale. This role offers the flexibility of working remotely from anywhere in the US, with occasional travel to our San Francisco headquarters or other locations as necessary. We embrace a hybrid work model that encourages three days in the office weekly and provide relocation assistance for new hires.

Your Responsibilities:
Develop high-quality, performance-critical applications in Rust and Python.
Collaborate with researchers, engineers, and security experts to seamlessly integrate and scale advanced cryptographic methodologies in our production and research systems.
Create foundational libraries that underpin cryptographic operations and integrate security best practices into our infrastructure.
Design, implement, and maintain secure key management systems for our production environments.
Architect and deploy systems that instill trust in our infrastructure, utilizing security technologies such as TPM2, Secure Boot, Nitro Enclaves, confidential computing solutions, Intel SGX, and AMD SEV.
Research, design, and implement operating system-level security measures, including remote attestation and runtime TPM measurements.
Full-time|$196K/yr - $220.5K/yr|On-site|San Francisco Bay Area
Discord is a vibrant platform utilized by over 200 million users each month for a multitude of reasons, with a significant focus on gaming. A staggering 90% of our users engage in gaming activities, collectively spending 1.5 billion hours playing thousands of unique titles on Discord every month. As we look to the future, Discord is poised to play a pivotal role in enhancing the gaming experience by fostering communication and camaraderie among players before, during, and after their gaming sessions.

The Safety Processing team is at the forefront of developing systems that empower Discord to effectively detect, review, and act upon harmful content at scale. Our mission is to build robust infrastructure and decision-making systems that facilitate accurate, efficient, and fair content moderation across the platform. We're on the lookout for a Senior Software Engineer capable of managing intricate, multi-phase projects and delivering high-quality systems that safeguard millions of users on a daily basis.

As a Senior Engineer within the Safety Processing team, you will assume full ownership of projects from the initial design phase through to post-launch monitoring and iterative improvements. You will tackle challenging issues that lie at the intersection of automation, large-scale distributed systems, and content safety. Your contributions will include the development of automated decision systems that enhance our review capacity tenfold, the creation of enforcement infrastructure that manages millions of decisions each day, and the architecture of systems that centralize and streamline safety signal processing. Collaboration with Trust & Safety operations, machine learning teams, policy experts, and product partners will be key to delivering solutions that enhance Discord's safety while upholding our commitment to an exceptional user experience.

You will create products that users love while maintaining Discord's high quality standards, utilizing an 80/20 mindset and a user-focused approach. We value a growth-oriented mindset and encourage diving into new code and technologies to implement safety solutions that protect our user community. This role presents the opportunity to work on systems that have a direct impact on user safety, address innovative technical challenges in content moderation at scale, and help shape the future of how Discord ensures community safety.
Full-time|$208K/yr - $260K/yr|On-site|San Francisco, CA, United States
At Ripple, we are pioneering a future where value is transferred with the same ease as information. Our bold vision is already in motion as we provide innovative cryptocurrency solutions to financial institutions, businesses, governments, and developers. By enhancing the global financial ecosystem, we aim to create economic equity and opportunities for countless individuals worldwide. Join us in doing remarkable work, advancing your career, and collaborating with a supportive team. If you're eager to witness the impact of your work and unlock incredible career advancement, come aboard and help us create tangible value.

THE WORK:
At Ripple, the Identity and Trust Platform is envisioned as the cornerstone of our "One Ripple" initiative. This platform aims to address identity-related challenges across all products and acquisitions by establishing a unified system of record for customers and their identities. It will separate operational identity from outdated systems, harmonize compliance and verification contexts (such as shared Know Your Business readiness), and create a common entitlements layer for consistent product access.

We seek Staff Engineers who are driven by a passion for tackling complex, foundational issues, eager to create significant impacts by addressing identity and trust obstacles in a global financial network, and motivated to design a scalable platform that will enhance the unified customer experience for all of Ripple's current and future offerings. The Identity and Trust Platform is essential for fostering customer trust and establishing a singular system of record.

As the technical leader of a small yet impactful team, your responsibilities will include:
Building the Foundation for 'One Ripple': Establishing the Identity Platform as the singular system of record, decoupling operational identity from outdated systems to facilitate a unified experience across all Ripple products (Payments, Custody, Stablecoins, Ripple Prime, and others).
Driving Strategic Decoupling: Taking ownership of the crucial migration of core identity logic from legacy systems, eliminating years of technical debt and paving the way for their retirement.
Defining the Future of Trust: Designing and implementing the shared entitlements layer and unified compliance context (e.g., shared KYB readiness) to ensure consistent and secure access to products for all customers.
Scaling and Mentoring: Acting as a technical anchor, setting architectural direction, promoting engineering excellence, and mentoring engineers to cultivate a high-performing foundational platform team.
Full-time|$196K/yr - $220.5K/yr|On-site|San Francisco Bay Area
Discord connects over 200 million users each month for various purposes, but a common theme is the love for video gaming. With over 90% of our users engaged in gaming, collectively spending 1.5 billion hours on our platform each month, Discord is key to shaping the future of gaming. Our mission is to enhance the experience of chatting and socializing before, during, and after gaming sessions.

As a Senior Software Engineer on the Safety Experience team, you will play a pivotal role in creating a safer environment for our vast user base. You will spearhead the design, development, and maintenance of features that safeguard users from harmful activities while ensuring compliance with regulations. The work you do is vital for the growth and integrity of Discord, and you will report directly to the Engineering Manager of Safety Experience.
Embrace the Future of Commerce with Whatnot!
At Whatnot, we are proud to be the premier live shopping platform across North America and Europe, where you can buy, sell, and discover the items you cherish. We are transforming e-commerce by integrating community, shopping, and entertainment into a unique experience tailored just for you. Operating as a remote co-located team, we thrive on innovation while being guided by our core values. With a presence in the US, UK, Germany, Ireland, and Poland, we are collectively shaping the future of online marketplaces.

Our live auctions span a diverse range of products, from fashion and beauty to electronics, and collectibles like trading cards, comic books, and even live plants, ensuring there’s something for everyone. And this is just the beginning! As one of the fastest-growing marketplaces, we are on the lookout for innovative and forward-thinking problem solvers across all sectors. Check out the latest updates from Whatnot on our news and engineering blogs and join us in empowering individuals to turn their passions into thriving businesses while fostering connections through commerce.

Your Role
As a Software Engineer within the Trust and Risk, Fraud, and Integrity teams, you will play a pivotal role in developing systems designed to establish clear expectations, promote appropriate behavior, and resolve issues effectively. By merging proactive detection with transparent enforcement, you will help maintain Whatnot as a safe and reliable platform for both buyers and sellers.

Key Responsibilities Include:
Policy Enforcement and Dispute Resolution: Creating systems that clearly communicate behavioral expectations, identify and address policy breaches, and resolve conflicts efficiently.
Reusable, High-Performance Platforms: Designing and maintaining scalable infrastructure that supports internal operations, both automated and manual actioning systems, and advanced detection capabilities.
Intelligent Detection and Prevention: Utilizing machine learning, behavioral analysis, and real-time interventions to counteract evolving abuse patterns and safeguard both buyers and sellers.
Continuous Improvement: Implementing feedback loops and monitoring systems as managed assets for ongoing quality assurance and enhancement of algorithms, methodologies, and operational processes.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
Join Databricks, where we are dedicated to creating the most advanced and secure platform for data and AI. Our commitment to innovation drives us to develop cutting-edge solutions in security, compliance, and governance.

As a vital member of the Trust and Safety Data Science team, you will engage in projects that are essential for maintaining the security and regulatory compliance of the Databricks Platform. Our clients rely on Databricks to safeguard their data while managing millions of virtual machines across three clouds in numerous regions worldwide.

Our engineering teams design highly sophisticated products that address significant real-world challenges. We continuously strive to push the limits of data and AI technology, all while ensuring the security and scalability that are crucial for our customers' success on our platform. We cater to a diverse array of companies with different security and compliance needs. Understanding how our customers utilize our existing features is imperative, involving comprehensive, data-driven analysis of all facets of Databricks' security programs.

Customers entrust us with their most critical data, and our mission is to establish the most reliable data analytics and machine learning platform globally. We are expanding our Trust and Safety Data Science team and seek talented individuals to join our group of “full stack” data scientists. Collaborating closely with engineering and security teams, you will focus on strategic initiatives that enhance the security and safety of Databricks for our clients. Our team employs advanced statistical and machine learning techniques to detect fraud and abuse across our platforms, utilizing state-of-the-art methodologies. For insights into our initiatives, check out our blog post.

Engaging in fraud and abuse detection is dynamic and crucial, offering you a chance to significantly impact the security and efficiency of business operations. For further information, please visit https://www.databricks.com/trust.
Join Our Team as a Software Engineer
At intrinsic-safety, we are at the forefront of developing AI agents that tackle complex decision-making in risk investigations, fraud detection, and identity verification. Our mission revolves around empowering machines to make the most challenging judgment calls efficiently and accurately.

As a dynamic and compact team based in San Francisco, we are addressing challenges that impact billions of transactions and entities. Our clientele includes renowned Fortune 500 companies, global marketplaces, and regulated financial institutions. If you are driven by ownership, quick execution, and collaboration with founders, you will thrive here.
Join Quizlet as a Senior Software Engineer specializing in Trust & Safety, where you will play a crucial role in enhancing the security and integrity of our platform. You will collaborate with cross-functional teams to develop robust software solutions that protect our community and ensure a safe learning environment.
Airbnb started in 2007 when two hosts welcomed three guests into their San Francisco home. Since then, the platform has grown to over 5 million hosts and more than 2 billion guests worldwide. Hosts offer unique stays and experiences that connect travelers with local communities. Trust Engineering at Airbnb Trust sits at the heart of Airbnb’s platform. The Trust Engineering team builds technology to keep the community safe and uphold high standards for hosts, guests, homes, and experiences. Their work addresses both online risks, such as account compromise, fake listings, and financial loss, and offline concerns like theft, property damage, and personal safety. The team’s responsibilities include user onboarding, screening, identity, and reputation systems. Trust Engineering leads the technical vision for these systems and integrates them throughout Airbnb’s platform. Role overview The Senior Staff Software Engineer, Trust, is a senior individual contributor role. This engineer partners with technical leaders across Airbnb to shape, plan, and deliver a broad roadmap of Trust engineering projects. The position involves extensive collaboration with teams throughout the company. While highly senior, this is still a hands-on engineering role, every Airbnb software engineer, regardless of level, contributes code and development work. What you will do Define and drive the long-term vision and strategy for the Trust Platform, setting architectural direction for core systems that support scalable, high-quality fraud detection, safety, and trust decisions across Airbnb. Work deeply within Trust Platform components, developing system and performance tools, and identifying ways to improve technical quality, operational excellence, and developer experience. Promote an AI-first engineering approach, using LLM-powered agents to generate and refine code, so you can focus on problem-solving, system design, and quality oversight. 
Location
This position is remote and based in the United States.
Join suno as an Engineering Manager in our Trust & Safety team, where you will lead the development and implementation of innovative solutions to enhance user safety and trust on our platform. You will work closely with cross-functional teams to ensure the integrity of our systems and the protection of our users. Your leadership will be vital in driving engineering excellence and fostering a culture of safety and accountability.
Role overview
The Senior Manager, Trust & Safety Policy at Lyft leads the team that shapes and updates policies to protect riders and drivers. This position ensures Lyft’s standards align with legal requirements and promote a secure experience on the platform. The role involves both policy development and hands-on implementation.

Key responsibilities
Guide a team dedicated to creating and carrying out trust and safety policies
Draft and update policies that keep users safe while meeting legal and regulatory standards
Collaborate with colleagues from multiple departments to design solutions that work in practice
Share policy changes and decisions clearly throughout the company

What Lyft looks for
Ability to think strategically and solve complex problems
Strong communication skills
Experience working with teams across different functions
Background in trust and safety, policy, or a related area is helpful

Location
San Francisco, CA
Founded in 2007, Airbnb has transformed travel by connecting hosts and guests in over 190 countries. With a community of more than 5 million hosts, we facilitate unique stays and experiences, enabling guests to immerse themselves in local cultures.

Join Our Trust-Focused Community:
At Airbnb, trust is our top priority, and our dedicated team works tirelessly to ensure a safe and secure platform. As part of the Trust team, you will play a crucial role in safeguarding our community from fraud and ensuring the quality of our hosts, guests, and listings. This involves combating both online threats like account compromise and offline risks such as property damage and personal safety. Your work will help us build and maintain trust within our vibrant community.

Make a Significant Impact:
In your role within the Trust Engineering team, you will design and implement large-scale systems aimed at detecting and mitigating fraudulent activities across our platform. Collaborating closely with product, data science, and operations teams, you will develop real-time risk detection services that adapt to evolving threats, helping to make Airbnb the safest and most trusted platform in the industry.

Your Daily Responsibilities:
Design and operate resilient, scalable distributed systems.
Enhance platform capabilities to counteract the evolving fraud landscape, collaborating with various engineering teams.
Contribute vital insights to the Trust Platform's roadmap and strategic initiatives.
Full-time|$162K/yr - $186K/yr|On-site|United States
Airbnb, Inc. has grown from a small home-sharing idea in San Francisco to a platform connecting over 5 million hosts with more than 2 billion guests worldwide. The company’s mission centers on helping people find a place to belong, while supporting authentic connections between guests and local communities.

Role overview
The Trust Emerging Defense team focuses on building new products and protections to address evolving risks. This group works to ensure peace of mind for everyone using Airbnb, from guests to hosts, by strengthening the platform’s safety and privacy features. As a Software Engineer, Trust, the work directly impacts the security of global communications across phone, VoIP, SMS, in-app chat, and video conferencing. This position plays a key role in preventing fraud and safeguarding the Airbnb community.

What you will do
Design and develop products that protect user safety and privacy.
Work with engineers, data scientists, product managers, and operations to identify opportunities and clarify requirements for fraud detection and prevention.
Build, deploy, and operate machine learning models and pipelines at scale, including both batch and real-time applications.
Improve risk investigation tools to support decisions that help prevent safety or property damage incidents.

Collaboration and impact
This role involves leading a team of engineers and collaborating across departments to launch core company initiatives. The work helps shape a secure, trustworthy communication platform for Airbnb’s global community.
Join Our Dynamic Team
At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our innovative products, users, and the organization from various threats, including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are secure, compliant, and reliable for a diverse, global audience.

Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to address the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities
We are looking for seasoned analysts with expertise in one or more of the following domains:
Content Integrity & Scaled Enforcement – Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends on a large scale.
Emerging Risk Operations – Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.

In this role, you will manage high-sensitivity workflows, serve as the incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments.

Our work culture embraces a hybrid model of three days in the San Francisco office each week, and we provide relocation assistance for new hires. Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:
Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).
Conduct thorough risk assessments and investigations utilizing internal tools, product signals, and external data sources.
Act as the incident manager for escalated cases necessitating intricate policy, legal, or regulatory analysis.
Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.
Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.
Join Lyft as a Manager of Trust & Safety Policy, where you will play a crucial role in shaping and implementing policies that ensure the safety and trust of our community. Your leadership will guide strategic initiatives, engage with stakeholders, and drive data-informed decisions to foster a secure environment for our riders and drivers.
Full-time|$193.8K/yr - $285K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY; Chicago, IL
About the Team
The Trust & Safety, Integrity, and Fraud Product team at DoorDash is committed to creating a secure and reliable experience for all users on our platform, including Consumers, Merchants, and Dashers. We address intricate challenges such as fraud prevention, account takeover prevention, authenticity verification, and regulatory compliance, all while ensuring a seamless user experience. Our collaborative efforts with cross-functional teams, including Engineering, Data Science, Compliance, and Risk Operations, drive strategic initiatives that safeguard our business while promoting growth.

About the Role
As DoorDash expands beyond restaurants into a broader marketplace, our commitment to safety and trust remains paramount. We are seeking a Senior Product Manager to spearhead cross-functional teams tackling complex challenges that affect Consumers, Dashers, and Merchants. Your role will encompass various aspects of our business, from new user onboarding and in-app experiences to innovative products and services that are unparalleled in the market. Depending on your expertise, you will either lead a vertical fraud team focused on protecting Consumers or a horizontal fraud platform team dedicated to enhancing our Risk Engine, Data Signals Intelligence, and Automation/Anomaly Detection capabilities. This is an exceptional opportunity to shape the future of DoorDash during a period of rapid growth and significant impact.

You’re excited about this opportunity because you will…
Establish the vision and long-term product strategy for a vertical or horizontal fraud team.
Develop and implement a customer-centric product roadmap in close collaboration with senior leadership, Operations, Data Science, Analytics, Design, and Engineering teams.
As the Trust and Safety Strategy Lead at Faire, you will play a pivotal role in shaping our approach to ensuring the security and trust within our marketplace. You will be responsible for developing and executing strategic initiatives that promote a safe environment for our users, driving policy development, and collaborating with various teams to implement safety measures and risk management protocols.

This position is ideal for a strategic thinker with a strong background in trust and safety, who thrives in a fast-paced, innovative environment.
Chime is hiring a Product Manager focused on Trust & Safety in San Francisco. This role centers on protecting the platform and its users by driving initiatives that strengthen safety and reduce fraud.

Role overview
The Product Manager will work with teams across the company to design and launch strategies that address user safety concerns. Efforts will target the identification and prevention of fraudulent activities, ensuring that Chime remains a secure place for members.

Key responsibilities
Develop and implement product strategies to enhance trust and safety
Collaborate with engineering, operations, and other teams to address risks and improve user security
Shape product direction with a focus on maintaining a trustworthy platform

Impact
Your work will directly influence how Chime protects its community, helping to build a safer experience for all users.
About Our Team
At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators.

The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team guarantees that these experiences are safe, compliant, of high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role
We are seeking a seasoned operator to help expand and enhance the Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale.

You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires an individual who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities:
Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution.
Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks.
Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation.
Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.
Leverage data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals for enhancements.
Full-time|$248K/yr - $279K/yr|On-site|San Francisco Bay Area
Discord, a platform frequented by over 200 million users monthly, thrives on its vibrant gaming community, where more than 90% of users engage in gaming activities. With a staggering 1.5 billion hours spent playing diverse titles each month, Discord is pivotal in shaping the gaming landscape. Our mission is to enhance social interactions for gamers before, during, and after gameplay.

We are seeking an outstanding Trust & Safety Counsel to join our dynamic legal team. This influential role offers the opportunity to contribute significantly at one of the most exciting companies in the tech industry! As our second Trust & Safety Counsel, you will be integral in supporting our Trust & Safety organization, addressing law enforcement data requests, identifying and removing harmful content and actors, and ensuring compliance with international laws and regulations.
About Our Team
At OpenAI, we believe that security is the cornerstone of our mission to ensure artificial general intelligence serves the greater good of humanity. Our dedicated Security team is responsible for safeguarding OpenAI’s technology, personnel, and products. We take a technical approach in our creations while maintaining operational excellence in our processes. Our core principles include prioritizing impactful work, empowering our researchers, preparing for revolutionary technological advancements, and fostering a strong culture of security.

About the Position
Trusted Computing and Cryptography is a specialized engineering and security unit within OpenAI’s Security organization. Our focus areas include:
Implementing high-performance cryptographic solutions at scale
Managing keys securely, including offline physical backups and multi-party computation strategies
Utilizing trusted hardware enclaves for functionalities ranging from boot measurements to GPU confidential computation

We are on the lookout for a talented software engineer to join our team and enhance the security of our vital computing infrastructure, concentrating on trusted computing and cryptography at scale. This role offers the flexibility of working remotely from anywhere in the US, with occasional travel to our San Francisco headquarters or other locations as necessary. We embrace a hybrid work model that encourages three days in the office weekly and provide relocation assistance for new hires.

Your Responsibilities:
Develop high-quality, performance-critical applications in Rust and Python.
Collaborate with researchers, engineers, and security experts to seamlessly integrate and scale advanced cryptographic methodologies in our production and research systems.
Create foundational libraries that underpin cryptographic operations and integrate security best practices into our infrastructure.
Design, implement, and maintain secure key management systems for our production environments.
Architect and deploy systems that instill trust in our infrastructure, utilizing security technologies such as TPM2, Secure Boot, Nitro Enclaves, confidential computing solutions, Intel SGX, and AMD-SEV.
Research, design, and implement operating system-level security measures, including remote attestation and runtime TPM measurements.
Full-time|$196K/yr - $220.5K/yr|On-site|San Francisco Bay Area
Discord is a vibrant platform utilized by over 200 million users each month for a multitude of reasons, with a significant focus on gaming. A staggering 90% of our users engage in gaming activities, collectively spending 1.5 billion hours playing thousands of unique titles on Discord every month. As we look to the future, Discord is poised to play a pivotal role in enhancing the gaming experience by fostering communication and camaraderie among players before, during, and after their gaming sessions.

The Safety Processing team is at the forefront of developing systems that empower Discord to effectively detect, review, and act upon harmful content at scale. Our mission is to build robust infrastructure and decision-making systems that facilitate accurate, efficient, and fair content moderation across the platform. We're on the lookout for a Senior Software Engineer capable of managing intricate, multi-phase projects and delivering high-quality systems that safeguard millions of users on a daily basis.

As a Senior Engineer within the Safety Processing team, you will assume full ownership of projects from the initial design phase through to post-launch monitoring and iterative improvements. You will tackle challenging issues that lie at the intersection of automation, large-scale distributed systems, and content safety. Your contributions will include the development of automated decision systems that enhance our review capacity tenfold, the creation of enforcement infrastructure that manages millions of decisions each day, and the architecture of systems that centralize and streamline safety signal processing. Collaboration with Trust & Safety operations, machine learning teams, policy experts, and product partners will be key to delivering solutions that enhance Discord's safety while upholding our commitment to an exceptional user experience. You will create products that users love while maintaining Discord's high quality standards, utilizing an 80/20 mindset and a user-focused approach. We value a growth-oriented mindset and encourage diving into new code and technologies to implement safety solutions that protect our user community.

This role presents the opportunity to work on systems that have a direct impact on user safety, address innovative technical challenges in content moderation at scale, and help shape the future of how Discord ensures community safety.
Full-time|$208K/yr - $260K/yr|On-site|San Francisco, CA, United States
At Ripple, we are pioneering a future where value is transferred with the same ease as information. Our bold vision is already in motion as we provide innovative cryptocurrency solutions to financial institutions, businesses, governments, and developers. By enhancing the global financial ecosystem, we aim to create economic equity and opportunities for countless individuals worldwide. Join us in doing remarkable work, advancing your career, and collaborating with a supportive team. If you're eager to witness the impact of your work and unlock incredible career advancement, come aboard and help us create tangible value.

THE WORK:
At Ripple, the Identity and Trust Platform is envisioned as the cornerstone of our "One Ripple" initiative. This platform aims to address identity-related challenges across all products and acquisitions by establishing a unified system of record for customers and their identities. It will separate operational identity from outdated systems, harmonize compliance and verification contexts (such as shared Know Your Business readiness), and create a common entitlements layer for consistent product access.

We seek Staff Engineers who are driven by a passion for tackling complex, foundational issues, eager to create significant impacts by addressing identity and trust obstacles in a global financial network, and motivated to design a scalable platform that will enhance the unified customer experience for all of Ripple's current and future offerings. The Identity and Trust Platform is essential for fostering customer trust and establishing a singular system of record.

As the technical leader of a small yet impactful team, your responsibilities will include:
Building the Foundation for 'One Ripple': Establishing the Identity Platform as the singular system of record, decoupling operational identity from outdated systems to facilitate a unified experience across all Ripple products (Payments, Custody, Stablecoins, Ripple Prime, and others).
Driving Strategic Decoupling: Taking ownership of the crucial migration of core identity logic from legacy systems, eliminating years of technical debt and paving the way for their retirement.
Defining the Future of Trust: Designing and implementing the shared entitlements layer and unified compliance context (e.g., shared KYB readiness) to ensure consistent and secure access to products for all customers.
Scaling and Mentoring: Acting as a technical anchor, setting architectural direction, promoting engineering excellence, and mentoring engineers to cultivate a high-performing foundational platform team.
Full-time|$196K/yr - $220.5K/yr|On-site|San Francisco Bay Area
Discord connects over 200 million users each month for various purposes, but a common theme is the love for video gaming. With over 90% of our users engaged in gaming, collectively spending 1.5 billion hours on our platform each month, Discord is key to shaping the future of gaming. Our mission is to enhance the experience of chatting and socializing before, during, and after gaming sessions.

As a Senior Software Engineer on the Safety Experience team, you will play a pivotal role in creating a safer environment for our vast user base. You will spearhead the design, development, and maintenance of features that safeguard users from harmful activities while ensuring compliance with regulations. The work you do is vital for the growth and integrity of Discord, reporting directly to the Engineering Manager of Safety Experience.
Embrace the Future of Commerce with Whatnot!
At Whatnot, we are proud to be the premier live shopping platform across North America and Europe, where you can buy, sell, and discover the items you cherish. We are transforming e-commerce by integrating community, shopping, and entertainment into a unique experience tailored just for you. Operating as a remote co-located team, we thrive on innovation while being guided by our core values. With a presence in the US, UK, Germany, Ireland, and Poland, we are collectively shaping the future of online marketplaces.

Our live auctions span a diverse range of products, from fashion and beauty to electronics, and collectibles like trading cards, comic books, and even live plants, ensuring there’s something for everyone. And this is just the beginning! As one of the fastest-growing marketplaces, we are on the lookout for innovative and forward-thinking problem solvers across all sectors. Check out the latest updates from Whatnot on our news and engineering blogs and join us in empowering individuals to turn their passions into thriving businesses while fostering connections through commerce.

Your Role
As a Software Engineer within the Trust and Risk, Fraud, and Integrity teams, you will play a pivotal role in developing systems designed to establish clear expectations, promote appropriate behavior, and resolve issues effectively. By merging proactive detection with transparent enforcement, you will help maintain Whatnot as a safe and reliable platform for both buyers and sellers.

Key Responsibilities Include:
Policy Enforcement and Dispute Resolution: Creating systems that clearly communicate behavioral expectations, identify and address policy breaches, and resolve conflicts efficiently.
Reusable, High-Performance Platforms: Designing and maintaining scalable infrastructure that supports internal operations, both automated and manual actioning systems, and advanced detection capabilities.
Intelligent Detection and Prevention: Utilizing machine learning, behavioral analysis, and real-time interventions to counteract evolving abuse patterns and safeguard both buyers and sellers.
Continuous Improvement: Implementing feedback loops and monitoring systems as managed assets for ongoing quality assurance and enhancement of algorithms, methodologies, and operational processes.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
Join Databricks, where we are dedicated to creating the most advanced and secure platform for data and AI. Our commitment to innovation drives us to develop cutting-edge solutions in security, compliance, and governance.

As a vital member of the Trust and Safety Data Science team, you will engage in projects that are essential for maintaining the security and regulatory compliance of the Databricks Platform. Our clients rely on Databricks to safeguard their data while managing millions of virtual machines across three clouds in numerous regions worldwide.

Our engineering teams design highly sophisticated products that address significant real-world challenges. We continuously strive to push the limits of data and AI technology, all while ensuring the security and scalability that are crucial for our customers' success on our platform. We cater to a diverse array of companies with different security and compliance needs. Understanding how our customers utilize our existing features is imperative, involving comprehensive, data-driven analysis of all facets of Databricks' security programs.

Customers entrust us with their most critical data, and our mission is to establish the most reliable data analytics and machine learning platform globally. We are expanding our Trust and Safety Data Science team and seek talented individuals to join our group of “full stack” data scientists. Collaborating closely with engineering and security teams, you will focus on strategic initiatives that enhance the security and safety of Databricks for our clients. Our team employs advanced statistical and machine learning techniques to detect fraud and abuse across our platforms, utilizing state-of-the-art methodologies. For insights into our initiatives, check out our blog post.

Engaging in fraud and abuse detection is dynamic and crucial, offering you a chance to significantly impact the security and efficiency of business operations. For further information, please visit https://www.databricks.com/trust.
Join Our Team as a Software Engineer
At intrinsic-safety, we are at the forefront of developing AI agents that tackle complex decision-making in risk investigations, fraud detection, and identity verification. Our mission revolves around empowering machines to make the most challenging judgment calls efficiently and accurately.

As a dynamic and compact team based in San Francisco, we are addressing challenges that impact billions of transactions and entities. Our clientele includes renowned Fortune 500 companies, global marketplaces, and regulated financial institutions. If you are driven by ownership, quick execution, and collaboration with founders, you will thrive here.
Mar 30, 2026