Experience Level
Senior
Qualifications
Proven experience in software engineering, with expertise in languages such as Java, Python, or JavaScript.
Strong understanding of security principles and practices.
Experience with cloud services (AWS, GCP, Azure) is a plus.
Excellent problem-solving skills and the ability to work collaboratively in a team.
Familiarity with Agile development methodologies.
Understanding of data privacy regulations and best practices.
About the job
Join Quizlet as a Senior Software Engineer specializing in Trust & Safety, where you will play a crucial role in enhancing the security and integrity of our platform. You will collaborate with cross-functional teams to develop robust software solutions that protect our community and ensure a safe learning environment.
About Quizlet Inc.
Quizlet is a leading educational technology company that empowers students to learn more effectively through innovative tools and resources. With a commitment to creating a safe and supportive learning environment, we aim to make education accessible to everyone.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
Join Databricks, where we are dedicated to creating the most advanced and secure platform for data and AI. Our commitment to innovation drives us to develop cutting-edge solutions in security, compliance, and governance.

As a vital member of the Trust and Safety Data Science team, you will engage in projects essential to maintaining the security and regulatory compliance of the Databricks Platform. Our clients rely on Databricks to safeguard their data while managing millions of virtual machines across three clouds in numerous regions worldwide.

Our engineering teams design highly sophisticated products that address significant real-world challenges. We continuously push the limits of data and AI technology while ensuring the security and scalability that are crucial to our customers' success on our platform. We serve a diverse array of companies with different security and compliance needs, so understanding how customers use our existing features is imperative and involves comprehensive, data-driven analysis of all facets of Databricks' security programs.

Customers entrust us with their most critical data, and our mission is to build the most reliable data analytics and machine learning platform globally. We are expanding our Trust and Safety Data Science team and seek talented individuals to join our group of "full stack" data scientists. Collaborating closely with engineering and security teams, you will focus on strategic initiatives that enhance the security and safety of Databricks for our clients. Our team employs advanced statistical and machine learning techniques to detect fraud and abuse across our platforms. For insights into our initiatives, check out our blog post.

Fraud and abuse detection is dynamic and crucial work, offering you a chance to significantly impact the security and efficiency of business operations. For further information, please visit https://www.databricks.com/trust.
Full-time|$198K/yr - $247.5K/yr|On-site|San Francisco, CA
About the Role
At Scale AI, we are pioneering the future of Generative AI and enhancing human-AI collaboration. As a member of the Gen AI Ops Trust and Safety team, you will play a crucial role in safeguarding contributor integrity within a vast marketplace encompassing hundreds of thousands of contributors who train foundational models. We are searching for a data-driven Data Science Lead who thrives in a fast-paced environment, thinks systemically, and seamlessly integrates AI coding tools into their daily workflow. This position offers a high degree of autonomy. You will be responsible for the end-to-end development of fraud and abuse detection models, from defining labels to feature engineering, training, and evaluation, through final deployment in production. Working within a compact team, you will achieve remarkable velocity by combining keen analytical insight with AI-enhanced development techniques (using tools like Cursor and Claude Code). If you have previously felt limited by teams that operate slowly or draw a line between 'analysis' and 'building', this role is designed to bridge that gap.
Full-time|$154K/yr - $198K/yr|Hybrid|San Francisco, CA - Hybrid; Denver, CO - Hybrid; New York, NY - Hybrid; United States - Remote
About Gusto
At Gusto, we are dedicated to empowering the small business economy. We take care of essential services such as payroll, health insurance, 401(k)s, and HR, allowing business owners to concentrate on their passion and their customers. With teams based in Denver, San Francisco, and New York, we proudly support over 400,000 small businesses nationwide, and we are committed to building a workplace that reflects and celebrates the diversity of our customers. Discover more about our Total Rewards philosophy.

About the Role:
We are seeking a highly skilled and motivated Staff Data Scientist with a minimum of 7 years of experience in a business environment. In this pivotal role, you will use experimentation, statistical inference, and causal analysis to drive strategic decision-making that enhances our organization's success. The ideal candidate is a trusted data storyteller with strong statistical and programming skills, passionate about using those abilities to help small businesses thrive.

About the Team:
In this position, you will collaborate closely with our Product, Engineering, Design, Finance, and other Data teams to become an expert in your domain's data, define and track metrics that provide insight into our business performance, and dig into our Payroll, Benefits, and HR data to deliver valuable insights and answer critical questions. You will also apply AI-assisted methodologies to accelerate analysis, enhance rigor, and broaden the impact of insights throughout Gusto.

We have multiple senior roles open, each focusing on a different segment of our business.

Here's what you'll do day-to-day:
Lead: Tackle ambiguous challenges, design analytical frameworks, and introduce scalable structures across multiple product domains.
Strategic Partnership: Work alongside product managers, engineering leads, designers, and operations teams to proactively identify opportunities, align on strategies, and steer data-informed decision-making.
Analytical Rigor: Employ advanced statistical methods, causal inference, experimentation, and AI-assisted analytics to identify product performance drivers, distinguishing signal from noise.
Experimentation & Analysis: Conduct rigorous experimentation to validate hypotheses and inform product iterations.
About Our Team
The Safety Systems team is committed to ensuring the safety, robustness, and reliability of AI models in real-world applications. Building on years of practical alignment and applied safety work, the team addresses emerging safety challenges and develops solutions that enable the secure deployment of our advanced models and future AGI, ensuring that AI is both beneficial and trustworthy. Discover more about OpenAI's safety initiatives.

About the Position
As a Data Scientist on the Safety Systems team, you will spearhead a data-driven methodology for analyzing, evaluating, and overseeing the safety of our production systems. You will collaborate with partners across the organization to define key metrics, develop and implement statistical methods that operationalize those metrics, analyze the effects of our products, and create comprehensive dashboards that serve as a reliable source of truth for safety-related inquiries. Most importantly, you will play a pivotal role on the Safety Systems team, working closely with researchers and engineers to further our mission of establishing safe, robust, and reliable AI.

This position is based at our headquarters in San Francisco, and we provide relocation assistance for new employees.

Key Responsibilities:
Lead initiatives to assess and quantify the real-world safety impacts of OpenAI's existing and upcoming products.
Explore novel approaches to enhance our methodologies for measuring and mitigating harm and abuse.
Develop and execute the statistical methods necessary to operationalize safety metrics.
Provide strategic direction and project coordination within the realm of safety.
Foster a data-driven culture in Safety Systems by defining, tracking, and operationalizing metrics at the feature, product, and company levels.
Create and share dashboards, reports, and tools that empower the team and the organization to independently address safety-related questions.
Construct a safety data flywheel and supply safety research with production insights and data for training and evaluation.
Join Our Dynamic Team
At OpenAI, our Trust, Safety & Risk Operations teams are dedicated to protecting our products, users, and the organization from threats including abuse, fraud, scams, and regulatory challenges. We operate at the nexus of operations, compliance, user trust, and safety, collaborating closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to keep our platforms secure, compliant, and reliable for a diverse, global audience. Our team supports users across ChatGPT, our API, enterprise solutions, and developer tools. We handle sensitive inbound inquiries, develop detection and enforcement systems, and scale operational workflows to meet the demands of a fast-paced, high-stakes environment.

Your Role and Responsibilities
We are looking for seasoned analysts with expertise in one or more of the following domains:
Content Integrity & Scaled Enforcement: Proactively identify, review, and respond to policy violations, harmful content, and emerging abuse trends at scale.
Emerging Risk Operations: Detect, assess, and mitigate new and intricate safety, policy, or integrity challenges in the rapidly changing AI landscape.

In this role, you will manage high-sensitivity workflows, serve as incident manager for complex cases, and develop scalable operational systems, including tools, automation, and vendor processes that uphold user safety and trust while fulfilling our legal, ethical, and product commitments. Our work culture follows a hybrid model of three days per week in the San Francisco office, and we provide relocation assistance for new hires. Please be advised that this role may involve exposure to sensitive content, including material that may be sexual, violent, or otherwise unsettling.

Your Key Responsibilities Include:
Manage and resolve high-priority cases within your area of expertise (content enforcement, fraud/scams, compliance, or emerging risks).
Conduct thorough risk assessments and investigations using internal tools, product signals, and external data sources.
Act as incident manager for escalated cases requiring intricate policy, legal, or regulatory analysis.
Collaborate with cross-functional teams to design and implement top-tier operational workflows, decision trees, and automation strategies.
Establish feedback loops and continuous improvement initiatives to enhance operational effectiveness.
Join Lyft as a Manager of Trust & Safety Policy, where you will play a crucial role in shaping and implementing policies that ensure the safety and trust of our community. Your leadership will guide strategic initiatives, engage with stakeholders, and drive data-informed decisions to foster a secure environment for our riders and drivers.
Join Grindr as a Staff Data Scientist in a hybrid role based out of our San Francisco, Los Angeles, or Chicago offices, with in-office presence required on Tuesdays and Thursdays.

Why This Role is Unique
Grindr (NYSE: GRND) is the world's largest LGBTQ+ social application, with over 14 million monthly users globally. We are not just a platform but a vital part of the LGBTQ+ community and a cornerstone of gay culture. As a Staff Data Scientist, you will collaborate closely with product managers, designers, and engineers to create insightful metrics that drive product development. You will design and implement innovative experiments, present data-driven insights for decision-making, and explore new growth strategies through comprehensive analysis. This role allows you to work on deployed models that enhance the user experience for millions, while becoming an informal ambassador for the Data Science team, educating others on effective data utilization. You will be part of a dynamic data organization at Grindr that integrates data scientists, data engineers, and ML/AI engineers into a united, collaborative team. This is a unique opportunity to learn, share knowledge, and make a significant impact alongside industry leaders.

Your Responsibilities
Extract actionable insights from complex, open-ended queries.
Design and assess experiments to evaluate the impact of product changes.
Analyze product data to identify root causes behind metric fluctuations.
Communicate findings effectively to cross-functional stakeholders to inform product strategies.
Develop tools to scale and automate analyses, enhancing company productivity.
Mentor and guide team members, recommending best practices.
Apply an engineering mindset to reduce complexity while maximizing utility and maintainability.
Contribute to the development of future ML solutions to enhance recommendations, detect spam, and better serve our users.
About the Team
The Applied Foundations team at OpenAI is focused on keeping our technology secure against a range of adversarial threats. We are committed to safeguarding the integrity of our platforms as they expand. Our team plays a crucial role in defending against financial abuse, large-scale attacks, and various forms of misuse that could compromise user experience or operational stability. The Integrity pillar within Applied Foundations develops robust systems that identify and respond to harmful actors and activities on OpenAI's platforms. As these systems evolve to address significant usage harms, we are seeking skilled data scientists to accurately measure the prevalence of these issues and assess the effectiveness of our responses.

About the Role
We are searching for experienced trust and safety data scientists who can enhance, operationalize, and oversee the measurement of complex actor- and network-level harms. The selected data scientist will be responsible for measurement and metrics across several established harm verticals, including estimating the prevalence of on-platform (and occasionally off-platform) harm, while also conducting analyses to uncover gaps and opportunities in our responses. This position is based in our San Francisco or New York office and may require addressing urgent escalations outside of regular working hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise disturbing material.

In this role, you will:
Lead measurement and quantitative analysis for a range of severe, actor- and network-based usage harm verticals.
Develop and apply AI-first methodologies for prevalence measurement and other standardized safety metrics, potentially incorporating off-platform indicators and non-traditional datasets.
Create metrics that can be used for goal-setting or A/B testing where traditional prevalence or top-line metrics may not apply.
Manage dashboards and metrics reporting for harm verticals.
Perform analyses and generate insights to guide improvements in review, detection, and enforcement processes, while influencing strategic roadmaps.
Optimize LLM prompts specifically for measurement purposes.
Collaborate with other safety teams to identify key safety issues and develop policies that address safety needs.
Provide metrics that support informed decision-making by leadership.
As the Trust and Safety Strategy Lead at Faire, you will play a pivotal role in shaping our approach to ensuring security and trust within our marketplace. You will be responsible for developing and executing strategic initiatives that promote a safe environment for our users, driving policy development, and collaborating with various teams to implement safety measures and risk management protocols. This position is ideal for a strategic thinker with a strong background in trust and safety who thrives in a fast-paced, innovative environment.
Chime is hiring a Product Manager focused on Trust & Safety in San Francisco. This role centers on protecting the platform and its users by driving initiatives that strengthen safety and reduce fraud.

Role overview
The Product Manager will work with teams across the company to design and launch strategies that address user safety concerns. Efforts will target the identification and prevention of fraudulent activities, ensuring that Chime remains a secure place for members.

Key responsibilities
Develop and implement product strategies to enhance trust and safety
Collaborate with engineering, operations, and other teams to address risks and improve user security
Shape product direction with a focus on maintaining a trustworthy platform

Impact
Your work will directly influence how Chime protects its community, helping to build a safer experience for all users.
About Our Team
At OpenAI, our User Safety & Risk Operations team is dedicated to protecting our platform and users from various forms of abuse, fraud, and emerging threats. We operate at the crucial intersection of product risk, operational scale, and real-time safety response, supporting a diverse range of users from individuals to global enterprises, as well as advertisers and creators. The Ads Trust & Safety Operations team is committed to ensuring the safety of our users, advertisers, and creators across all monetized surfaces. As OpenAI rolls out new revenue-generating formats and partnerships, this team ensures that these experiences are safe, compliant, high quality, and aligned with our overarching safety standards. We work closely with Product, Engineering, Policy, and Legal teams to identify potential risks, develop and enhance enforcement systems, and ensure scalable, high-integrity operations.

About the Role
We are seeking a seasoned operator to help expand and enhance Ads Trust & Safety Operations at OpenAI. In this pivotal role, you will oversee critical Ads T&S workstreams from inception to execution, collaborating closely with Product, Policy, Engineering, Legal, and Operations teams to design scalable enforcement processes, strengthen detection mechanisms, and ensure safe support for Ads and monetization at scale. You will navigate the intersection of strategy and execution, translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows. This position requires someone who is highly operational, excels at execution, and is comfortable providing clarity in uncertain situations. You should be enthusiastic about building scalable systems and processes from the ground up and working in tandem with policy and product teams as we rapidly iterate on advertising strategies and features.

Key Responsibilities:
Oversee complex, high-impact Ads Trust & Safety problem areas from strategy through execution.
Design and scale operational workflows for Ads Trust & Safety, encompassing enforcement models, review processes, escalation paths, and quality frameworks.
Work closely with Product, Policy, and Engineering teams to translate risk and policy requirements into scalable systems, tools, and automation.
Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.
Leverage data to identify trends, gaps, and emerging risks across Ads surfaces, and develop proposals for enhancements.
Full-time|$248K/yr - $279K/yr|On-site|San Francisco Bay Area
Discord, a platform frequented by over 200 million users monthly, thrives on its vibrant gaming community, where more than 90% of users engage in gaming activities. With 1.5 billion hours spent playing diverse titles each month, Discord is pivotal in shaping the gaming landscape. Our mission is to enhance social interactions for gamers before, during, and after gameplay.

We are seeking an outstanding Trust & Safety Counsel to join our dynamic legal team. This influential role offers the opportunity to contribute significantly at one of the most exciting companies in the tech industry. As our second Trust & Safety Counsel, you will be integral in supporting our Trust & Safety organization, addressing law enforcement data requests, identifying and removing harmful content and actors, and ensuring compliance with international laws and regulations.
Join Suno as an Engineering Manager on our Trust & Safety team, where you will lead the development and implementation of solutions that enhance user safety and trust on our platform. You will work closely with cross-functional teams to ensure the integrity of our systems and the protection of our users. Your leadership will be vital in driving engineering excellence and fostering a culture of safety and accountability.
Full-time|On-site|CA - San Francisco; NY - New York City
Employee Applicant Privacy Notice

About Us:
Join us in shaping a brighter financial future. We're revolutionizing personal finance by empowering our members with innovative, mobile-first technology to achieve their goals. As a next-generation financial services company and national bank, we're leading an unprecedented transformation in the industry, making a direct impact on lives every day. Join us to invest in yourself, your career, and the financial world.

Role Overview:
The Borrow Data Science team is in search of a Senior Staff Data Scientist to propel growth in our Home Loans division, enhancing SoFi's data execution capabilities. This pivotal role invites you to apply your analytical expertise to develop robust reporting, pinpoint growth opportunities, and construct machine learning models using tools such as Snowflake, Airflow, dbt, and SageMaker. As a Senior Staff Data Scientist, you will lead data initiatives, balancing urgent needs with high-quality project delivery to key stakeholders through a clear, data-informed methodology. You will take on a technical leadership role, helping to develop reliable, efficient, and scalable data infrastructure, fostering a culture of technical ownership, and documenting your approaches to share strategies throughout the organization. You will conduct analytical deep-dives to proactively identify impactful opportunities that inform future experimentation designs and product roadmaps. In this position, you'll leverage your proficiency in data analysis, statistical modeling, and machine learning to reveal insights that directly shape product strategy and drive revenue growth. This role requires a solid technical foundation (SQL, Python/R, Tableau, statistics), a deep understanding of business metrics, A/B testing, causal inference analysis, and strong collaboration skills.
Role overview
The Senior Manager, Trust & Safety Policy at Lyft leads the team that shapes and updates policies to protect riders and drivers. This position ensures Lyft's standards align with legal requirements and promote a secure experience on the platform. The role involves both policy development and hands-on implementation.

Key responsibilities
Guide a team dedicated to creating and carrying out trust and safety policies
Draft and update policies that keep users safe while meeting legal and regulatory standards
Collaborate with colleagues from multiple departments to design solutions that work in practice
Share policy changes and decisions clearly throughout the company

What Lyft looks for
Ability to think strategically and solve complex problems
Strong communication skills
Experience working with teams across different functions
Background in trust and safety, policy, or a related area is helpful

Location
San Francisco, CA
Full-time|$217K/yr - $303.9K/yr|Remote|Remote - United States
Join Reddit as a Senior Data Scientist in our Consumer Insights team, where you will leverage data to shape the future of community engagement. In this pivotal role, you will identify new opportunities for user growth, enhance engagement strategies, and drive retention through data-driven insights. You will conduct exploratory analyses, guide product strategy, and spearhead innovative experiments, all while collaborating closely with cross-functional teams. Your contributions will significantly impact how millions of users connect with the content and communities they care about.
About the Role
Grow Therapy is hiring a Senior or Staff Data Scientist in San Francisco. This role focuses on using data to inform key decisions and improve therapy services.
Full-time|$170K/yr - $225K/yr|Hybrid|New York, New York, United States; San Francisco, California, United States
About TaskRabbit:
TaskRabbit is a dynamic marketplace platform that seamlessly connects individuals with Taskers to take care of everyday tasks, including furniture assembly, handyman services, moving assistance, and much more.

At TaskRabbit, our mission is to transform lives one task at a time. We celebrate innovation, inclusivity, and dedication. Our workplace culture is collaborative, pragmatic, and fast-paced. We're in search of talented, entrepreneurially minded, data-driven individuals who are passionate about empowering others to pursue their passions. In partnership with IKEA, we're creating more opportunities for individuals to earn a consistent and meaningful income on their own terms by fostering lasting relationships with clients in communities worldwide.

TaskRabbit operates as a hybrid company with team members distributed across the US and EU, and we are proud to be recognized as a Built In Best Places to Work (2022, 2023, 2024) in multiple national and regional categories. Join us at TaskRabbit, where your work is impactful, your ideas are valued, and your potential is unleashed!

This role follows a hybrid schedule, requiring two days of in-office collaboration each week. You can work from either our San Francisco office or our new New York City office, set to open in April 2026.
Full-time|On-site|CA - San Francisco; NY - New York City
Employee Applicant Privacy Notice

Who we are:
At SoFi, we are on a mission to redefine personal finance and empower our members to achieve their financial goals. We are a forward-thinking financial services company and national bank using cutting-edge, mobile-first technology to transform the financial landscape. Join us in making a tangible impact on people's lives while adhering to our core values. Invest in yourself, your career, and the financial future with us.

The Role
The Borrow Data Science team is on the lookout for a Staff Data Scientist to drive growth in our Personal Loans sector by harnessing the power of data. This is an exciting opportunity for an analytical professional to develop comprehensive reporting, uncover growth potential, and construct machine learning models using tools such as Snowflake, Airflow, dbt, and SageMaker. In your position as Staff Data Scientist, you will act as a data leader, managing urgent requests while delivering high-quality projects to critical stakeholders through a structured, data-driven approach. You will also establish reliable, efficient, and scalable data foundations, foster a culture of technical ownership, and document your methods and insights to spread effective strategies within the organization. Your analytical deep-dives will proactively identify impactful opportunities and inform future experimentation and product roadmaps. In this capacity, your expertise in data analysis, statistical modeling, and machine learning will reveal insights that directly contribute to product strategy and revenue growth. This role requires strong technical proficiency (SQL, Python/R, Tableau, statistics), a thorough understanding of business metrics, A/B testing, causal inference analysis, and excellent collaboration skills. You will also engage cross-functionally with teams from engineering, product management, lifecycle marketing, data science, design, operations, finance, risk, legal, compliance, and executive leadership.
Feb 23, 2026