Threat Modeler Preparedness jobs in San Francisco – Browse 171 openings on RoboApply Jobs

Threat Modeler Preparedness jobs in San Francisco

Open roles matching “Threat Modeler Preparedness” with location signals for San Francisco. 171 active listings on RoboApply Jobs.


1 - 20 of 171 Jobs
OpenAI
Full-time|On-site|San Francisco

About the Team
The Preparedness team plays a crucial role within the Safety Systems organization at OpenAI, adhering to our Preparedness Framework. While frontier AI models promise to bring significant benefits to humanity, they also introduce substantial risks. The Preparedness team is dedicated to ensuring that the development of advanced AI models fosters positive outcomes. Our mission includes identifying, monitoring, and preparing for catastrophic risks associated with these technologies.

Key Mission Objectives:
- Monitor and predict the evolving capabilities of frontier AI systems to identify misuse risks that could significantly impact society.
- Establish concrete procedures, infrastructure, and partnerships to mitigate these risks and ensure the safe development of powerful AI systems.

This fast-paced and impactful role connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, facilitating coordination on AGI preparedness.

About the Role
As a Threat Modeler, you will spearhead OpenAI's comprehensive approach to identifying, modeling, and forecasting risks from frontier AI systems. Your work will ensure that our evaluation frameworks, safeguards, and classifications are robust, comprehensive, and future-focused. You will help articulate the rationale behind our most stringent risk-prevention strategies, influencing prioritization and mitigation across various domains. This position acts as a central hub, integrating technical, governance, and policy considerations regarding our approach to frontier AI risks.

Key Responsibilities
- Develop and maintain comprehensive threat models across various misuse areas (biological, cyber, attack planning, etc.).
- Create plausible threat models addressing loss of control, self-improvement, and other potential alignment-related risks from frontier AI systems.
- Forecast risks by merging technical foresight, adversarial simulation, and current trends.
- Collaborate closely with technical partners on capability evaluations and risk assessments.

Mar 4, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
The Preparedness team is a vital segment of the Safety Systems organization at OpenAI, driven by our comprehensive Preparedness Framework. While frontier AI models hold the promise of significant benefits for humanity, they also introduce substantial risks. Our Preparedness team is instrumental in ensuring AI's development fosters positive societal impact by proactively preparing for potential hazards associated with advanced AI models. This team is dedicated to identifying, monitoring, and strategizing against catastrophic risks tied to these technologies.

The mission of our Preparedness team includes:
- Vigilantly tracking and anticipating the evolving capabilities of frontier AI systems, focusing on misuse risks that could have devastating societal consequences.
- Establishing robust procedures, infrastructure, and partnerships to mitigate these risks and responsibly oversee the development of powerful AI systems.

Our work intertwines capability assessment, evaluations, internal red teaming, and risk mitigation strategies for frontier models, all while ensuring comprehensive coordination on AGI preparedness. This fast-paced, impactful work is critical not only for our organization but for society at large.

About the Role
We are seeking outstanding research engineers who can extend the frontiers of our AI models. Ideal candidates will contribute to developing a nuanced understanding of the spectrum of AI safety concerns and will take ownership of specific projects from inception to completion. You will be responsible for ensuring the scientific validity of our frontier preparedness capability evaluations, designing new assessments based on real threat models (including high-stakes domains like CBRN, cyber threats, and other frontier-risk areas) while also maintaining the integrity of existing evaluations to prevent obsolescence or regression. You'll define datasets, evaluation criteria, and threshold guidelines, producing auditable documentation (evaluation cards, capability reports, system-card inputs) that leadership can rely on during critical launches.

In this role, you'll:
- Identify emerging AI safety risks and develop innovative methodologies to explore their potential impacts.
- Build and continually refine evaluations of frontier AI models to assess the extent of risks and capabilities.

Jan 20, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
The Preparedness team plays a critical role within the Safety Systems organization at OpenAI, guided by our Preparedness Framework. As we develop frontier AI models that hold the potential to benefit humanity, we must also recognize the significant risks they pose. The Preparedness team is dedicated to anticipating and mitigating catastrophic risks associated with these advanced AI systems to ensure they foster positive outcomes for society.

Our mission is to:
- Monitor and predict the evolving capabilities of frontier AI systems, particularly focusing on misuse risks that could have catastrophic consequences.
- Establish robust procedures, infrastructure, and partnerships to effectively manage these risks and safely advance the development of powerful AI systems.

Preparedness connects capability assessments, evaluations, internal red teaming, and mitigations for frontier models, alongside comprehensive coordination on AGI preparedness. This is a dynamic, impactful role that significantly influences both the company and society.

About the Role
We are looking for a Data Scientist to join our team, focusing on the development, evaluation, and continuous enhancement of mitigation strategies to prevent extreme harms from AI systems. This position is tailored for an experienced and highly autonomous individual who can navigate ambiguous challenges, conduct rigorous analyses, and translate insights into actionable product and policy improvements. In this role, you will go beyond basic evaluations; you will contribute to the creation of mitigation intelligence and monitoring systems that empower OpenAI to identify issues early, assess effectiveness over time, and minimize both over-blocking and under-blocking of risks.

Your Responsibilities
- Assess and enhance mitigation systems, including classifiers and detection pipelines across various domains (e.g., biosecurity, cybersecurity, and emerging risk areas).
- Identify and analyze false positives and negatives through in-depth error analysis and root cause investigation, and provide clear recommendations for adjusting mitigations.
- Develop monitoring and measurement frameworks to track the effectiveness of mitigation strategies over time.

Feb 26, 2026
SoFi Technologies, Inc.
Full-time|On-site|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

Join SoFi as a Senior Cyber Threat Intelligence Engineer, where you will play a crucial role in safeguarding our digital assets. You will analyze threat data, develop actionable intelligence, and collaborate with cross-functional teams to enhance our security posture. Your expertise will be pivotal in identifying and mitigating potential risks while leveraging advanced analytical tools.

Mar 25, 2026
Cloudflare, Inc.
Full-time|On-site|In-Office

Join Cloudflare as a Senior Threat Researcher, specializing in the East Asia region. In this role, you will leverage your expertise to identify and analyze emerging threats, contribute to the development of threat intelligence, and collaborate with cross-functional teams to enhance our security posture. Your insights will directly influence product development and the strategic direction of our security initiatives.

Feb 6, 2026
Full-time|$120K/yr - $140K/yr|Remote — San Francisco, California, United States

The Senior Cyber Threat Intelligence Analyst is integral to the daily functions of our client's cyber threat intelligence team. Collaborating closely with the Team Lead, this role emphasizes the triage of cyber events, proactive threat hunting, and the enhancement of the Security Operations Center (SOC) technology stack. This is a hands-on opportunity for a cybersecurity enthusiast eager to develop leadership skills while directly aiding in the identification and mitigation of cyber threats.

- Respond to and manage security alerts and incidents in real time.
- Conduct thorough analyses of logs, network traffic, and endpoint data to uncover malicious behavior.
- Provide clear recommendations and escalate critical incidents to the Team Lead and relevant stakeholders.
- Engage in proactive threat hunting to uncover anomalies, suspicious activities, and sophisticated threats.
- Contribute to the development of playbooks and use cases addressing emerging attack methodologies.
- Assist in optimizing and fine-tuning tools such as SIEM, SOAR, and EDR platforms.
- Create detection rules, automation scripts, and dashboards to boost team productivity.
- Collaborate on evaluating new technologies and potential integrations.

Jan 27, 2026
Cloudflare, Inc.
Full-time|Hybrid

Join Cloudflare as a Senior Threat Intelligence Engineer, where you will play a pivotal role in enhancing our security posture by analyzing and mitigating cyber threats. You will collaborate closely with cross-functional teams to develop strategies that protect our global network and safeguard our customers' data. Your expertise will be essential in driving threat intelligence initiatives, ensuring that we remain ahead of emerging threats.

Feb 6, 2026
OpenAI
Full-time|On-site|San Francisco

About Our Team
At OpenAI, we believe that the development of artificial general intelligence must be conducted in a way that is safe and beneficial for all of humanity. Security is paramount to our mission and underpins every aspect of our work. Our Security team is dedicated to safeguarding OpenAI's technology, personnel, and products. We adopt a highly technical approach to our creations while maintaining operational excellence in execution. Our core tenets include prioritizing impactful initiatives, empowering our researchers, anticipating future technological advancements, and fostering a robust security culture.

About the Position
As a Security Engineer specializing in insider threat detection and response, you will collaborate with our talented engineers and researchers to build and secure groundbreaking AI technologies. This role emphasizes the identification and mitigation of insider threats, ensuring the protection of OpenAI's most sensitive assets.

Key Responsibilities:
- Innovate and enhance our detection and response infrastructure to automate comprehensive workflows for detection and investigation.
- Develop, assess, and refine detection rules to guarantee effective and sustainable operations.
- Lead projects across OpenAI's technology landscape focusing on insider threats, including access abuse and intellectual property theft, as well as emerging risks associated with AI infrastructure.
- Collaborate with cross-functional teams such as HR, Legal, and investigative units, providing technical insights and evidence to support thorough investigations.
- Engage in pioneering AI research initiatives, leveraging AI to bolster OpenAI's security framework.

Ideal Candidate Profile:
- A minimum of 5 years of experience in a detection/response or insider risk role; we welcome both mid-level and senior applicants.
- Proficiency with operating systems and platforms, including macOS, Windows, Linux, and Kubernetes, with hands-on experience in cloud infrastructure.
- Strong knowledge of modern adversarial tactics and data exfiltration methods, with experience managing and leading incident responses.
- Demonstrated proficiency in scripting languages such as Python, Bash, or PowerShell.
- Excellent analytical and problem-solving skills, with keen attention to detail.

Nov 19, 2025
Control Risks
Full-time|$160K/yr|Remote — San Francisco, California, United States

The Cyber Threat Intelligence Team Lead is crucial in establishing and guiding a premier Cyber Intelligence program for a key client at Control Risks. This role entails crafting strategies, enhancing capabilities, and leading a dedicated team of security professionals to proactively identify, assess, and respond to cyber threats. The position encompasses providing technical guidance and administrative oversight on all cybersecurity initiatives, ensuring the safeguarding of the client's systems, networks, and sensitive data. The Team Lead collaborates closely with technology and business stakeholders to integrate security considerations into all planning, development, and operational processes.

- Collaborate with client stakeholders to build, manage, and expand a Cyber Threat Intelligence Team from inception.
- Take charge of developing Standard Operating Procedures for threat intelligence operations, tailored to specific client activities and stakeholder needs, including tooling, reporting structures, and incident management outside regular hours.
- Oversee the management of the most severe and critical cybersecurity incidents, supporting incident responders with timely reporting, updates, and investigations to facilitate effective incident response and crisis management.
- Mentor and train threat intelligence analysts, engineers, and threat hunters to enhance their skills and capabilities.
- Establish operational workflows, escalation protocols, and comprehensive playbooks.
- Supervise the triage of cybersecurity events, ensuring swift identification, investigation, and remediation.
- Coordinate incident response activities across IT, Legal, Risk, and other relevant stakeholders.
- Develop metrics, KPIs, and reporting frameworks to evaluate the effectiveness of the Security Operations Center (SOC).
- Lead proactive threat hunting initiatives to uncover potential compromises and undetected malicious activities.
- Integrate threat intelligence into SOC workflows and leverage insights to shape response and prevention strategies.
- Assess and optimize the client's technology stack, including SIEM, SOAR, EDR, and threat intelligence platforms.
- Drive ongoing enhancements in detection rules, automation, and response capabilities.
- Propose emerging tools and processes to elevate operational maturity.
- Conduct regular check-ins, offer coaching and feedback, manage performance reviews and improvement plans, and support career development for team members.
- Act as the primary liaison between team members and the ECS program management team, ensuring timely updates on programs and personnel, and maintaining quality control on client deliverables.
- Collaborate with the Talent Acquisition team in the hiring process to ensure team resources align with client expectations and program requirements.
- Lead onboarding efforts, manage logistics for offboarding, and ensure operational continuity during transitions.

Nov 20, 2025
Cloudflare, Inc.
Full-time|Hybrid

Join Cloudflare's Solutions Engineering team as a Threat Advisory Engineer, where you will play a pivotal role in providing expert insights and strategies to help our clients navigate the complexities of cybersecurity threats. You will engage directly with clients to understand their unique challenges and deliver tailored solutions that enhance their security posture. Your contributions will be vital in building trust and confidence among our clients as we work together to combat evolving threats in the digital landscape.

Feb 6, 2026
Cloudflare, Inc.
Internship|On-site|In-Office

Join Cloudflare as a Threat Detection and Incident Response Intern for the Summer of 2026! This exciting opportunity is designed for students who are passionate about cybersecurity and eager to learn about detecting and responding to threats in a dynamic environment. You will work alongside experienced professionals, gaining hands-on experience that will enhance your skills and prepare you for a successful career in the field.

Feb 6, 2026
Cloudflare, Inc.
Internship|On-site|In-Office

Embark on an exciting journey as a Threat Detection and Incident Response Intern at Cloudflare for the summer of 2026. This internship will provide you with the hands-on experience needed to thrive in the field of cybersecurity. You will work closely with our expert team to monitor, analyze, and respond to security incidents while contributing to innovative projects that protect our global network.

Mar 5, 2026
SoFi
Full-time|Remote|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

Join SoFi as a Lead Cyber Threat Intelligence Engineer and play a pivotal role in safeguarding our digital landscape. In this position, you will lead initiatives aimed at identifying, analyzing, and mitigating potential cyber threats, ensuring the safety and integrity of our systems and data.

Mar 25, 2026
SoFi
Full-time|On-site|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; NY - New York City; TX - Frisco

SoFi is seeking an experienced and strategic Director of Cyber Threat Intelligence to lead our efforts in identifying and mitigating cyber threats. In this pivotal role, you will head our threat intelligence team, collaborating closely with cross-functional teams to enhance our security posture. You will be responsible for analyzing threat data, providing actionable insights, and developing intelligence reports that inform our security strategies.

Mar 25, 2026
Ambience Healthcare
Full-time|$200K/yr - $250K/yr|Hybrid|San Francisco

About Us:
At Ambience Healthcare, we are not just another documentation service; we are pioneering an AI-driven platform that reintroduces humanity into healthcare, creating substantial returns on investment for health systems nationwide. Our innovative technology empowers healthcare providers to concentrate on exceptional patient care by alleviating the administrative burdens that detract from their crucial responsibilities. We provide real-time, coding-aware documentation and clinical workflow assistance across various healthcare settings, including ambulatory, emergency, and inpatient environments, collaborating with the leading health systems in North America.

We are committed to delivering the best solutions for our partners, operating with a strong sense of ownership and a culture that values transparency, positivity, and thoughtful discussion. Our team holds each other to high standards because we understand the significance of the challenges we tackle. Recognized as a leader in enhancing clinician experiences by KLAS Research, featured by Fast Company as one of the Next Big Things in Tech, acknowledged by Inc. as one of the best AI companies in healthcare, and listed as a LinkedIn Top Startup for 2024 and 2025, Ambience is backed by prestigious investors including Oak HC/FT, Andreessen Horowitz (a16z), OpenAI Startup Fund, and Kleiner Perkins. Our journey is just beginning.

The Role:
As a key member of our team, you will spearhead the detection engineering and incident response program within a HIPAA-compliant, AI-driven environment, where the threat landscape includes LLM-powered agents operating across diverse infrastructures. Your responsibilities will include writing production code, architecting security data pipelines, and establishing high standards for detection and response within a rapidly evolving attack surface. This position requires a hybrid work model based in our San Francisco office (3 days per week).

What You'll Own:
- Detection Engineering: Establish a detection pipeline covering our highest-risk surfaces, including AWS, Kubernetes, Okta, endpoints, and SaaS tools. Create environment-specific detections that ensure reliable alerting for the on-call team.
- Incident Response: Develop a comprehensive incident response program, including playbooks, escalation processes, evidence collection, and post-mortems. Ensure all procedures are well documented, practiced, and meet regulatory requirements.

Mar 11, 2026
Zyphra
Full-time|On-site|San Francisco

Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity:
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra's Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities:
- Conduct large-scale audio training operations
- Optimize the performance of our training infrastructure
- Collect, process, and evaluate audio datasets
- Implement architectural and methodological improvements through rigorous testing

What We Seek:
- A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
- Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
- Effective collaboration skills in a fast-paced research environment.
- A quick learner who is eager to embrace and implement new concepts.
- Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications:
- Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
- Experience with training audio autoencoders.
- Solid understanding of signal processing, particularly in audio.
- Familiarity with diffusion models, consistency models, or GANs.
- Experience with large-scale (multi-node) GPU training environments.
- Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
- Interest in large-scale, parallel data processing pipelines.
- Competence in PyTorch and Python programming.
- Experience contributing to large, established codebases with rapid adaptation.

Aug 28, 2025
Earnest
Full-time|$189.5K/yr - $236.9K/yr|Remote|San Francisco, CA (Remote)

Earnest is dedicated to empowering ambitious individuals to make informed financial decisions and create the lives they aspire to lead. Our team, known as Earnies, is passionate about providing borrowers with smarter borrowing solutions that offer a clearer path toward financial empowerment. If you share our enthusiasm for this mission, we invite you to explore the details below and join us in building something exceptional.

The Senior Model Risk Manager will report directly to the Head of Credit Risk. In this role, you will:
- Take ownership of and enhance Earnest's Model Risk Management framework, ensuring that our credit, loss forecasting, fraud, marketing, and finance models are robust, transparent, and scalable.
- Conduct independent end-to-end model validations, from conceptual soundness and data quality to performance monitoring and implementation review, providing constructive feedback to modeling teams.
- Collaborate closely with Data Science and Risk leaders early in the model design process to refine assumptions, enhance methodologies, and uplift modeling standards throughout the organization.
- Supervise model performance monitoring and proactively identify emerging risks, performance drift, or control deficiencies, ensuring timely and effective remediation.
- Produce clear, decision-ready validation reports and effectively communicate technical findings to drive impactful business outcomes and sound risk management decisions.
- Act as a trusted advisor on model governance, enabling Earnest to operate swiftly while maintaining the necessary discipline and controls of a leading lending platform.

Mar 11, 2026
SoFi
Full-time|Remote|WA - Seattle; UT - Cottonwood Heights; CA - San Francisco; TX - Frisco

Join SoFi as a Security Product Lead specializing in Threat Intelligence and Insider Risk. In this pivotal role, you will spearhead initiatives that enhance our security posture and protect our assets from internal and external threats. You will collaborate with cross-functional teams to develop and implement innovative security solutions, ensuring the safety and integrity of our operations.

Mar 12, 2026
World Labs
Full-time|$250K/yr - $325K/yr|On-site|San Francisco

About World Labs:
At World Labs, we create foundational world models capable of perceiving, generating, reasoning, and interacting with the 3D environment. Our mission is to unlock the full potential of AI through spatial intelligence, transforming perception into action, reasoning into insight, and imagination into creation. We believe that spatial intelligence will revolutionize storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical realms. Our world-class team is driven by curiosity and passion, boasting diverse backgrounds in technology, from AI research and systems engineering to product design. This synergy fosters a tight feedback loop between our cutting-edge research and user-empowering products.

Role Overview
We are seeking an innovative Research Scientist specializing in generative modeling, especially diffusion models, to join our modeling team. This position is ideal for individuals with extensive expertise in applying diffusion models to images, videos, or 3D assets and scenes. While not mandatory, experience in any of the following areas will be considered a significant advantage:
- Large-scale model training
- Research in 3D computer vision

In this role, you will work closely with researchers, engineers, and product teams to translate advanced 3D modeling and machine learning techniques into practical applications, ensuring our technology stays at the forefront of visual innovation. This position entails substantial hands-on research and engineering work, taking projects from conception to production deployment.

Key Responsibilities
- Design, implement, and train large-scale diffusion models for generating 3D worlds.
- Develop and experiment with large-scale diffusion models to introduce novel control signals, align with target aesthetic preferences, or optimize for efficient inference.
- Collaborate closely with research and product teams to comprehend and translate product requirements into actionable technical roadmaps.
- Contribute actively to all phases of model development, including data curation, experimentation, evaluation, and deployment.
- Continuously investigate and integrate the latest research in diffusion and generative AI.
- Serve as a key technical resource within the team, mentoring peers and promoting best practices in generative modeling and machine learning engineering.

Feb 18, 2026
Abridge
Full-time|On-site|SF Office

About Abridge
Abridge, established in 2018, is dedicated to enhancing understanding in healthcare. Our innovative AI platform is specifically designed for medical conversations, streamlining clinical documentation processes and allowing clinicians to prioritize patient care. Our advanced technology converts patient-clinician discussions into structured clinical notes in real time, featuring robust EMR integrations. With our unique Linked Evidence and auditable AI, we stand out as the only company that aligns AI-generated summaries with verified ground truth, enabling healthcare providers to trust and validate our outputs swiftly. As leaders in generative AI for healthcare, we are setting benchmarks for the ethical application of AI within health systems.

Our diverse team comprises practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers, all collaborating to empower individuals and enhance healthcare delivery. Our offices are located in San Francisco's Mission District, New York's SoHo neighborhood, and Pittsburgh's East Liberty.

The Role
Are you ready to build robust security measures at the forefront of AI in healthcare? We are seeking a highly skilled and motivated Senior or Staff Threat Detection and Response Engineer to join our pioneering Abridge Security Operations team. As one of our initial engineers, you will play a crucial role in raising the cost for any adversary targeting our organization or our clients. This role demands deep technical knowledge, a builder's mindset, and exceptional communication abilities to foster a security-centric culture across the organization. This is a greenfield opportunity to shape the future of Threat Detection and Response at Abridge. You will excel here if you are passionate about creating solutions from scratch and recognize that modern security fundamentally revolves around large-scale data and automation challenges.

What You'll Do
- Lead investigations into complex, organization-wide security incidents, establishing best practices across security domains including log analysis, digital forensics, and malware analysis.
- Design and implement a strategic roadmap for threat detection capabilities, developing high-fidelity detection systems informed by a deep understanding of advanced threat actor tactics, techniques, and procedures (TTPs).
- Architect scalable incident response processes while driving automation throughout the incident response lifecycle, establishing effective patterns for the organization.
- Act as a key technical leader and influence security practices organization-wide.

Jan 30, 2026
