Experience Level
Entry Level
Qualifications
We are looking for candidates with a strong foundation in machine learning and a passion for applied research. Ideal candidates should possess:
Experience with machine learning frameworks and tools.
Knowledge of AI principles and their application in real-world scenarios.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
A degree in Computer Science, Engineering, or a related field.
About the job
About the Role
Join Thorin as an AI Researcher and play a pivotal role in shaping the core research initiatives that drive our AI innovations. In this position, you will operate at the crossroads of machine learning research, practical model development, and product application, enhancing our understanding and automation of enterprise workflows.
This position merges theoretical research with hands-on implementation, transitioning ideas from conceptual stages through experimentation into functional components that enrich Thorin’s offerings.
Your Responsibilities
Research & Innovation
Conduct innovative machine learning research aligned with real-world product demands.
Investigate new model architectures, training methods, and evaluation techniques specifically designed for understanding and automating organizational workflows.
Model Development & Evaluation
Create, implement, and assess ML/AI methodologies that enhance model efficacy for essential tasks.
Collaborate closely with cross-functional teams to integrate research outcomes into tangible products that meet user needs.
About Thorin
Thorin, incubated by 8VC, is an innovative applied AI company that is transforming productivity within organizations. Our mission is to build proactive, long-term AI agents that continuously monitor workplace interactions across platforms such as Slack, email, and meetings, ensuring seamless task execution without oversight. Unlike conventional AI solutions that respond to input, Thorin provides a dynamic digital representation of business operations, facilitating automation, coordination, and strategic insights over extended periods. With visionary leadership from Joe Lonsdale and backed by 8VC, we are laying the groundwork for AI-enhanced productivity across industries.
Similar jobs
Join the Center for AI Safety (CAIS), a premier research and advocacy organization dedicated to addressing the complex societal challenges posed by artificial intelligence (AI). Our mission focuses on mitigating large-scale risks associated with AI through groundbreaking technical research, strategic initiatives, and proactive policy engagement, in collaboration with our sister organization, the Center for AI Safety Action Fund. As a Senior Research Scientist at CAIS, you will spearhead and execute transformative research aimed at enhancing the safety and reliability of advanced AI systems. You will take ownership of significant open challenges, driving them to successful publication. We seek individuals who set a high standard for research excellence and contribute innovative ideas to elevate our collective understanding. Your role will involve designing and conducting experiments on large language models, developing the necessary tools for large-scale model training and evaluation, and translating findings into publishable research. Close collaboration with CAIS researchers and external academic and industry partners will be essential, utilizing our compute cluster for extensive training and evaluation projects. Research areas include AI honesty, robustness, transparency, and mitigating trojan/backdoor behaviors, all geared towards reducing real-world risks from sophisticated AI systems.
Join the Center for AI Safety (CAIS), a pioneering research and advocacy organization dedicated to addressing the societal-scale risks posed by artificial intelligence. We tackle the most pressing challenges in AI through rigorous technical research, innovative field-building initiatives, and proactive policy engagement, in collaboration with our sister organization, the Center for AI Safety Action Fund.
As a Research Scientist, you will spearhead and conduct transformative research aimed at enhancing the safety and dependability of cutting-edge AI systems. Your responsibilities will include designing and executing experiments on large language models, developing the necessary tools for training and evaluating models at scale, and converting your findings into publishable research. You will work closely with CAIS researchers and external partners from academia and industry, utilizing our compute cluster for large-scale model training and evaluation. Your research will focus on critical areas such as AI honesty, robustness, transparency, and the detection of trojan/backdoor behaviors, all aimed at mitigating real-world risks associated with advanced AI technologies.
At Sciforium, we are at the forefront of AI infrastructure, creating next-generation multimodal AI models and a proprietary high-efficiency serving platform. With substantial backing from AMD, our team is rapidly expanding to develop the complete stack necessary for cutting-edge AI models and real-time applications.
About the Role
We are on the lookout for a talented Senior Research Scientist with expertise in advanced AI and machine learning. This role entails spearheading innovative research projects focusing on large language models, generative media, model architecture, optimization, and scalable training systems. You will engage directly with contemporary ML frameworks, publish original research, and collaborate closely with engineering teams to transition impactful models into production. This position is perfect for a driven researcher excited about pioneering breakthroughs in AI.
What You'll Do
Lead research initiatives in advanced machine learning topics such as LLMs, generative AI, foundational modeling, optimization strategies, diffusion models, and novel Transformer architectures.
Design, implement, and assess new ML algorithms using frameworks like PyTorch and JAX.
Conduct large-scale distributed training experiments utilizing multi-GPU/TPU systems and cutting-edge compute infrastructure.
Enhance performance through debugging frameworks, optimizing speed, and refining training pipelines.
Generate high-quality research outputs including academic papers, internal reports, patents, and reproducible code.
Work collaboratively with engineering and product teams to convert research prototypes into robust production systems.
Stay updated with the latest research advancements to incorporate state-of-the-art techniques into Sciforium's AI roadmap.
Mentor junior researchers and actively contribute to fostering a world-class AI research culture.
Role
As a Research Scientist at OpenEvidence, you will explore the potential of cutting-edge models, leveraging a strong research foundation to build practical systems rather than merely publishing papers. You will take ownership of projects from conception to execution, valuing evaluation and quantitative validation as integral components of your work.
We seek exceptional innovators who are not confined to narrow specializations. Our engineers and scientists collaborate across diverse products and projects, driving impactful work wherever their skills can shine.
About Us
OpenEvidence is the leading medical AI platform globally, rapidly adopted by over 40% of US clinicians within just one year through organic, product-driven growth. With a valuation of $12B, our engineering team comprises 30 talented individuals from prestigious institutions like MIT, Harvard, and Stanford. We believe that transformative products emerge from a select group of exceptional builders who are empowered to take initiative and work swiftly toward focused goals. Join us as we embark on a unique opportunity to establish the standard platform for medical AI.
Culture
We hold that work should engage at a world-class level. Building from 0 to 1 and scaling from 1 to 1000 is akin to a professional sport, where uncompromising excellence is the standard. We assert that the creation of unprecedented technologies requires complete ownership, and significant achievements arise when individuals commit to them.
Who Are You?
If your goal is to clock in and out with minimal engagement, this role is not for you. If you would rather roll up your sleeves and build impactful solutions that influence millions than merely write papers, this position may be your calling. The ideal candidate is a remarkable builder: intelligent, ambitious, resourceful, self-driven, meticulous, motivated, diligent, and humble. We recognize this profile is rare, which is why we have only identified 30 such individuals and are eager to find more.
Location
All engineering roles are in-person, requiring attendance five days a week in either San Francisco or Miami.
Genmo is a pioneering research laboratory dedicated to advancing cutting-edge models for video generation, with the mission of unlocking the creative potential of Artificial General Intelligence (AGI). We invite you to be a part of our innovative team, where you can contribute to shaping the future of AI and expanding the horizons of video generation technology.
Role Overview
We are on the lookout for a talented Research Scientist to join our dynamic team, specializing in alignment and post-training methodologies for large-scale video generation models. In this pivotal role, you will be instrumental in ensuring our diffusion-based video models consistently deliver high-quality, physically accurate, and safe outputs that align with human values and preferences.
Key Responsibilities
Lead groundbreaking research initiatives in alignment and post-training strategies for video generation models, prioritizing enhanced quality, reliability, and alignment with human intent.
Design and implement supervised fine-tuning and reinforcement learning from human feedback (RLHF) pipelines for video generation models.
Establish robust evaluation frameworks to assess model alignment, safety, and output quality.
Create and optimize data collection pipelines for capturing human feedback and preferences.
Conduct experiments to validate alignment techniques and their scalability.
Collaborate with cross-functional teams to incorporate alignment enhancements into our production workflow.
Stay abreast of the latest developments by reviewing academic literature in generative AI and alignment.
Mentor junior researchers and promote a culture of responsible AI development.
Partner closely with product teams to ensure that alignment methods enhance model capabilities.
Qualifications
Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field.
Demonstrated excellence with a strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ICLR) focusing on reinforcement learning, alignment, or generative models.
Extensive experience in implementing and optimizing large-scale training pipelines utilizing PyTorch.
In-depth understanding of reinforcement learning techniques, especially RLHF.
Proficiency in distributed training systems and conducting large-scale experiments.
Proven ability to design and implement robust evaluation strategies for models.
The Center for AI Safety (CAIS) is at the forefront of research and advocacy dedicated to addressing the societal-scale challenges posed by artificial intelligence. Our mission is to mitigate the risks associated with AI through innovative technical research, initiatives to foster the field, and strategic policy engagement. Together with our sister organization, the Center for AI Safety Action Fund, we tackle some of the most pressing issues in AI today. In the role of Senior Research Engineer, you will immerse yourself in the dynamic intersection of pioneering machine learning research and dependable engineering practices. You will own research projects from inception to publication, working autonomously with guidance from an advisor. Your responsibilities include designing and conducting experiments on large language models, developing the necessary tools for large-scale model training and evaluation, and transforming findings into research publications. You will collaborate closely with CAIS researchers, as well as external academic and commercial partners, utilizing our compute cluster for extensive training and evaluation. Your work will cover critical areas such as AI honesty, robustness, transparency, and the investigation of trojan/backdoor behaviors, all aimed at reducing the real-world risks posed by advanced AI systems.
Join Chai Discovery as an AI Research Scientist
At Chai Discovery, we are pioneering the development of advanced AI models aimed at revolutionizing molecule design and transforming drug discovery processes. Our dedicated team is passionate about reimagining how new cures are developed, ultimately aiming to save lives.
Founded by a collective of leading researchers and seasoned Silicon Valley operators, our team has achieved groundbreaking milestones in AI applications for biology. Our founders have made significant contributions to protein language modeling and cutting-edge folding algorithms, and have successfully partnered with top pharmaceutical companies to implement AI solutions. We are proud to be supported by prestigious investors such as OpenAI, Thrive Capital, Dimension, Conviction, Lachy Groom, Amplify, and more.
Role Overview
As an AI Research Scientist, you will engage in innovative research focused on computational modeling for biological applications. Our projects extend beyond protein structure prediction into actual therapeutic engineering, offering you the opportunity to advance the field of AI-driven drug design alongside a team that embodies a balance of critical thinking and optimistic vision.
Your Profile
We are looking for a driven AI leader with extensive research experience and a commitment to pushing boundaries in therapeutic design. The ideal candidate should possess:
Research Expertise: A Ph.D. or equivalent research experience in Machine Learning, Computational Biology, Bioinformatics, Computational Chemistry, or a related discipline, with a robust publication record in leading ML or life-science venues (e.g., NeurIPS, ICML, Nature Methods, PLoS Computational Biology, J. Chem. Inf. Model., etc.).
Technical Proficiency: Fluency in Python and familiarity with deep learning frameworks. Experience in training and evaluating large models using protein, antibody, or small-molecule datasets, or demonstrable transferable ML skills.
Experimental Design & Analysis: Ability to convert scientific inquiries into manageable modeling experiments. Competence in managing extensive biological or chemical datasets, deriving significant metrics, and presenting findings to varied audiences.
Collaboration & Communication: A collaborative spirit, eager to work with cross-functional teams including wet-lab scientists, software engineers, and product leaders.
Full-time|$50K/yr - $70K/yr|On-site|San Francisco, CA
About Handshake
Handshake is the premier career network tailored for the AI economy, connecting 20 million skilled professionals with over 1,600 educational institutions and 1 million employers, including every Fortune 50 company. Our platform empowers career discovery, recruitment, and skill enhancement, facilitating everything from freelance AI training opportunities to internships and full-time roles. Our unique value proposition has catalyzed exceptional growth, and we anticipate tripling our ARR by 2025.
Why join Handshake now:
Influence the evolution of careers in the AI landscape on a global scale, creating visible and meaningful impacts for your community.
Collaborate closely with leading AI labs, Fortune 500 partners, and top-tier educational institutions.
Become part of a team led by experts from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others.
Play a crucial role in building a rapidly expanding enterprise with substantial revenue potential.
The Role
At Handshake AI, we are innovating a new generation of human-data products, from expert annotation platforms to AI interviewers and seamless payment infrastructure, designed to streamline research and enhance model efficacy. As AI models advance, the need for specialized human input is surging, and Handshake is uniquely equipped to meet this demand. We support career platforms at 92% of the top U.S. universities, giving us direct access to verified expert talent across diverse fields.
We are seeking a strategic, impact-driven UX Researcher to help redefine the future of human data. In this role, you will contribute to the development of platforms used by real human experts to generate data for LLMs at leading AI labs.
Your Responsibilities
Lead comprehensive UX research initiatives that drive the development of Handshake AI's human data products, including expert onboarding processes, annotation workflows, evaluation tools, and internal operational systems.
Monitor and analyze user sentiment, usability, and efficiency across systems to facilitate ongoing enhancements.
Extract actionable insights from qualitative research and product analytics to inform design and strategic direction.
Plan and execute both evaluative and generative research throughout the entire product lifecycle, from initial concepts to live applications.
Provide swift, actionable insights that enhance user experience, product quality, and operational efficiency.
Join Mercor as a Data Scientist
At Mercor, we stand at the forefront of labor markets and artificial intelligence research. We collaborate with top-tier AI laboratories and businesses to infuse the human intelligence crucial for the evolution of AI.
Our expansive talent network empowers frontier AI models, mirroring the way educators impart knowledge: sharing insights and experiences that transcend mere coding. Currently, our network boasts over 30,000 experts generating more than $2 million daily.
We are pioneering a new work paradigm where specialized expertise drives AI progress. To realize this vision, we seek a dynamic, fast-paced, and dedicated team. You will collaborate with leading researchers, operators, and AI companies, playing a pivotal role in the systems that are reshaping society.
As a profitable Series C company, Mercor is valued at $10 billion and operates from our new headquarters in San Francisco with an in-office work schedule five days a week.
Your Role
In your first year, you will implement analyses and experiments that enhance key product metrics, including match quality, time-to-hire, candidate experience, and revenue. Your responsibilities will include:
Establishing north-star and feature-specific metrics for our ranking systems, interview analytics, and payout frameworks.
Designing and executing A/B tests and quasi-experiments, translating results into product decisions within the same week.
Creating source-of-truth dashboards and streamlined data models to enable teams to self-serve answers.
Collaborating with engineers to instrument events, enhancing data quality and latency from ingestion to insights.
Rapidly prototyping models (from baseline models to gradient boosting) to optimize matching and scoring.
Assisting in the evaluation of LLM-powered agents through the design of rubrics, human-in-the-loop studies, and guardrail mechanisms.
What Makes You a Great Fit
You possess strong foundational skills in statistics, SQL, and Python, alongside projects you are eager to showcase. You adapt swiftly, frame inquiries, test hypotheses, and deliver results within a day, valuing clarity in communication as much as statistical significance. A keen interest in LLM evaluation, retrieval, and ranking is a plus; you will learn alongside professionals from renowned firms such as Jane Street, Citadel, Databricks, and Stripe.
About Our Team
The Safety Systems team at OpenAI is at the forefront of ensuring our advanced AI models are deployed safely for societal benefit. We are dedicated to OpenAI's mission of building safe AGI and promoting a culture centered on trust and transparency.
Our Trustworthy AI team focuses on actionable research that considers the societal implications of AI development. This includes addressing complex policy challenges by creating mechanisms for public input into AI values and analyzing the effects of anthropomorphism in AI. We strive to convert abstract policy dilemmas into practical, measurable solutions that can help prepare society for more intelligent systems. Our work also emphasizes external validations and assurances for AI, aiming to strengthen independent oversight.
About the Role
We are seeking outstanding research scientists and engineers who can enhance our efforts to prepare society for AGI. The ideal candidate will be able to transform vague policy issues into quantifiable, actionable research.
This position is based at our headquarters in San Francisco, and we provide relocation assistance for new hires.
In This Role, You Will:
Develop research methodologies and strategies to investigate the societal impacts of our models in a manner that informs model design.
Innovate and execute experiments that facilitate public engagement in shaping model values.
Enhance the robustness of external assurances through comprehensive evaluations of outside findings.
Support and expand our capacity to mitigate risks associated with flagship model deployments efficiently.
You Will Excel in This Role If You:
Are passionate about OpenAI's vision of developing safe, universally beneficial AGI and resonate with our charter.
Demonstrate a strong commitment to AI safety, aiming to enhance the safety of cutting-edge AI systems for practical application.
Bring over 3 years of research experience (whether in industry or academia) and proficiency in Python or similar programming languages.
Exhibit excellent problem-solving skills with a track record of translating complex concepts into actionable insights.
About Retell AI
Retell AI builds voice AI technology that helps businesses transform their call center operations. In just 18 months, thousands of companies have adopted Retell's AI voice agents to streamline sales, support, and logistics, work that once required large human teams. Backed by investors including Y Combinator and Alt Capital, Retell has grown annual recurring revenue from $5M to $36M with a focused team of 20. The company's goal for 2026: a modern customer experience platform where AI powers entire contact centers. Retell is developing AI "workers" that can serve as frontline agents, quality assurance analysts, and managers, handling, evaluating, and improving customer interactions on their own.
Named a top 50 AI app by a16z: https://tinyurl.com/5853dt2x
Ranked #4 on Brex's Fast-Growing Software Vendors of 2025: https://www.brex.com/journal/brex-benchmark-december-2025
Featured on the Lean AI Leaderboard: https://leanaileaderboard.com/
Role Overview: Research Scientist – LLM
Retell AI is hiring a Research Scientist focused on large language models (LLMs) and audio processing. This role suits machine learning researchers who want to push the boundaries of real-time AI and see their work in production.
What You Will Do
Investigate new approaches in large language models and audio processing for human-like voice agents.
Design and implement evaluation methods for complex, real-world conversational systems.
Prototype systems to improve reasoning, reduce latency, and enhance conversation quality.
Work closely with engineering and product teams to bring research advances into production.
Impact
Research at Retell directly shapes the capabilities of voice AI agents for thousands of businesses. The work blends advanced research with practical deployment, improving how customers interact with automated systems across industries.
Location
This position is based in the San Francisco Bay Area.
Full-time|$220K/yr - $270K/yr|On-site|San Francisco (Brisbane), CA
Vivodyne: Pioneering Human Data Generation for Pre-Clinical Trials
At Vivodyne, we revolutionize the discovery, design, and development of human therapeutics by utilizing large-scale, lab-grown human organ tissues. Our innovative approach integrates cutting-edge biology, robotics, and artificial intelligence, allowing us to identify and validate novel therapeutic targets while minimizing risks associated with new therapeutic assets. By generating clinically translatable multi-omic data from our proprietary human organ tissues, we produce more human data than all U.S. clinical trials combined. Backed by elite venture funds, we collaborate with leading multinational pharmaceutical companies to enhance drug discovery and significantly reduce animal testing burdens. www.vivodyne.com
Role Overview
The AI Team at Vivodyne faces some of the most compelling challenges in science and engineering. With access to extensive and rich human tissue imagery, we are pushing the boundaries of artificial intelligence in biological applications. We are developing a suite of AI technologies aimed at automating the discovery, development, and de-risking of novel therapies through our unique technology platform, including single-cell 3D phenomics/machine vision, multimodal (multi-omic) translation, and reinforcement learning for robotic planning and study design.
As an AI Senior Scientist, you will apply your expertise in large-scale model development (including Transformer, Diffusion, and hybrid architectures) and generative AI solutions to transform our advanced imaging and multi-omics datasets into groundbreaking scientific insights. You will work closely with biologists, engineers, and AI specialists to create robust, production-ready algorithms and models that facilitate Vivodyne's impactful discoveries.
This position requires on-site work at our Brisbane, California office.
At Intrinsic Safety, we're leveraging innovative technologies to tackle some of the most challenging issues of our digital era using safe and effective AI solutions. Your contributions will play a pivotal role in assisting fraud prevention and Trust & Safety teams, enabling them to concentrate on impactful tasks rather than repetitive manual reviews. We're experiencing rapid growth, partnering with some of the largest and most dynamic social media and online service platforms.
We are on the lookout for an exceptional Technical Account Manager (TAM) to join our core Customer Success team. In this role, you will be crucial in ensuring our customers, particularly our enterprise clients, derive maximum value from Intrinsic Safety by providing expert technical support and guidance. You will act as the essential link between our advanced AI platform and our customers' technical teams, fostering deep platform adoption, troubleshooting complex issues, and facilitating their long-term success. This is a unique opportunity to contribute to our mission by transforming intricate technical challenges into practical solutions that effectively combat fraud and abuse.
This role requires in-person attendance at our San Francisco office.
Your Responsibilities
Post-Integration Technical Partnership: Serve as the main technical liaison for our enterprise customers post-deployment. Gain a thorough understanding of their unique technical environments and data flows as they interact with the Intrinsic Safety platform.
Drive Ongoing Technical Adoption & Optimization: Assist customers in navigating advanced platform features and configurations to ensure they maximize the benefits of Intrinsic Safety's offerings.
Build Scalable Technical Resources: Develop and enhance our knowledge base, technical documentation, and best practice guides to empower both customers and the Customer Success team.
Strategic Technical Troubleshooting & Resolution: Identify and resolve sophisticated technical challenges that arise after implementation, working closely with engineering and product teams to deliver scalable solutions.
Product Evolution: Act as a key contributor to the continual improvement of our product offerings.
AI Research Scientist
Overview
Join Physical Superintelligence, an innovative startup rooted in prestigious institutions such as Harvard, MIT, Johns Hopkins, Oxford, the Institute for Advanced Study, and the Perimeter Institute. We are at the forefront of building AI systems designed to uncover groundbreaking insights in physics on a grand scale. We are in search of talented AI researchers dedicated to developing reinforcement learning agents and training frameworks that propel scientific discovery.
Key Responsibilities
- Develop and optimize AI systems aimed at physics discovery, collaborating with physicists on verification harnesses and engineers on training infrastructure.
- Address critical AI research questions related to agent learning in physics reasoning, action space design for scientific exploration, reward structure development, and scalable training systems.
- Construct and train reinforcement learning agents leveraging cutting-edge methodologies such as PPO, SAC, MuZero, and multi-agent self-play.
- Design agent architectures tailored for physics reasoning and scientific tool utilization.
- Execute training curricula and reward structures for discovery tasks.
- Establish evaluation workflows and benchmarks to assess physics reasoning capabilities.
- Develop instrumentation to analyze agent behavior and learning dynamics.
- Collaborate closely with physicists and engineers to refine system design and architecture.
Candidate Profile
We are looking for candidates with a strong background in developing agents and training models using reinforcement learning. Proficiency in modern machine learning frameworks and experience with distributed training systems are essential, alongside a proven track record of deploying effective AI systems.
Essential Skills:
- Practical experience with contemporary reinforcement learning algorithms including PPO, SAC, MuZero, and multi-agent self-play.
- Proficiency in PyTorch or JAX, with hands-on experience in distributed training using Ray, XLA, or Accelerate, and familiarity with modern pretraining workflows.
Preferred Background:
- A strong foundation in physics or mathematics that enhances intuition for physical reasoning and mathematical modeling.
- Experience applying agents in simulators, games, scientific tool use, or benchmark design employing rigorous experimental methodologies.
About Our TeamThe Safety Systems organization at OpenAI is dedicated to ensuring the responsible development and deployment of our most advanced AI models. We create evaluations, safeguards, and safety frameworks to ensure that our models operate as intended in real-world scenarios.The Preparedness team plays a critical role within the Safety Systems organization, guided by OpenAI’s Preparedness Framework.As frontier AI models hold the potential for significant benefits to humanity, they also present escalating risks. The Preparedness team is essential in preparing for the development of increasingly capable frontier AI models, focusing on identifying, tracking, and preparing for catastrophic risks associated with these technologies.The mission of the Preparedness team includes:Monitoring and predicting the evolving capabilities of frontier AI systems, especially regarding risks that could have catastrophic consequences.Establishing concrete procedures, infrastructure, and partnerships to mitigate these risks and safely advance the development of powerful AI systems.The Preparedness team integrates capability assessment, evaluations, internal red teaming, and mitigations for frontier models, coordinating overall AGI preparedness. This fast-paced and impactful work holds significant importance for both our organization and society.About the RoleAs frontier AI systems become more capable, they exhibit greater autonomy, the ability to pursue long-term goals, adapt to feedback, and utilize tools. While these advancements offer immense potential, they also raise the risk of models behaving in misaligned or deceptive ways, which can be difficult to supervise or contain. 
Addressing the risk of loss of control is a key challenge in the safe development and deployment of advanced AI systems.

In your role as a Researcher focused on loss-of-control mitigations, you will design and implement an end-to-end mitigation strategy aimed at reducing the risk of intentionally subversive or inadequately controllable behaviors in OpenAI's products and internal operations. This position requires strong technical expertise and close cross-functional collaboration to ensure that safeguards are enforceable, scalable, and effective. You will directly contribute to establishing robust protections as model capabilities evolve.
Innovate Boldly. Shape Tomorrow.

Our Vision
Develop everyday AGI: trustworthy, user-friendly agents that transform human–AI collaboration for millions. Software should not merely execute commands; it should collaborate with you, enhancing your daily capabilities.

Why Join AGI, Inc.?
We are a discreet group of pioneering founders and AI researchers with expertise from Stanford, OpenAI, and DeepMind. Our team is at the forefront of mobile and computer-based agents, scaling these innovations for consumers. With years of agent research underpinning our work, we prioritize trustworthiness and reliability as foundational elements, not mere afterthoughts.

Backed by top-tier investors who were instrumental in the rise of the first generation of AI giants, we are positioned to create the next wave: everyday AGI. (Check out the demo)

If you envision possibilities where others see constraints, we invite you to read further.

Role Overview
As an AI Researcher, you will pioneer the development of groundbreaking AI algorithms and strategies for intelligent action-taking agents. Your innovative hypotheses will be transformed into production-ready solutions, pushing the limits of existing agent capabilities.

What to Expect:
- Explore New Frontiers. Develop and prototype advanced architectures in reasoning, long-term planning, memory, multi-agent coordination, and alignment.
- Deliver Results. Conduct live A/B tests on billions of real trajectories, seeing your innovations implemented in production within weeks.
- Merge Theory with User Delight. Collaborate with product engineers to integrate novel algorithms into enjoyable consumer experiences.
- Set the Standard. Define our research rigor, safety metrics, and evaluation frameworks for anything labeled "agentic."
- Elevate Others. Mentor interns, residents, and interdisciplinary innovators, amplifying the team's overall impact.

Main Responsibilities
- Innovate Methodologies. Explore RLHF, scalable supervision, tool-use planning, state-space memory, and any other strategies that unlock new agent skills.
The Center for AI Safety (CAIS) stands at the forefront of research and advocacy, dedicated to addressing the pressing societal-scale risks posed by artificial intelligence. Our mission is to confront AI's greatest challenges through rigorous technical research, innovative field-building initiatives, and impactful policy engagement, in collaboration with our sister organization, the Center for AI Safety Action Fund.

We are committed to maximizing our positive impact through an expansive array of programs. Notable achievements include introducing the most widely utilized AI capabilities measurement, embraced by leading AI companies; operating a substantial compute cluster that supports AI safety research cited over 16,000 times; and publishing a global statement on AI risk endorsed by prominent figures including Geoffrey Hinton, Yoshua Bengio, and leading AI CEOs.

We seek enthusiastic and proactive individuals to manage and implement programs spanning public engagement, operations, publications, special projects, and research. Potential projects may involve collaborating with the creators of #TeamTrees to launch a campaign focused on AGI; aiding researchers in developing benchmarks related to deception and weaponization risks; establishing an AI safety hub in Washington, D.C.; or devising strategies to engage YouTube creators and long-form content producers on AI safety topics. At CAIS, we are a fast-paced, meritocratic organization where responsibilities and leadership opportunities expand for those who demonstrate initiative and consistently deliver results.
Join Our Team as a Research Scientist

At Parallel, we are at the forefront of web infrastructure innovation, enabling businesses across sectors such as sales, marketing, insurance, and technology to harness the power of AI. Our state-of-the-art products empower users to develop superior AI agents with seamless and flexible access to the web.

With $130 million in backing from prominent investors including Kleiner Perkins, Index Ventures, and Spark Capital, we are dedicated to redefining the web for artificial intelligence. As we expand, we are assembling a top-tier team of engineers, designers, marketers, sales experts, researchers, and operational specialists committed to our vision.

Your Role: As a Research Scientist, you will tackle the challenge of training and scaling models designed to enhance web indexing capabilities.

About You: You possess a profound understanding of contemporary models and training methodologies. You enjoy discussing the convergence of search, recommendations, and transformer models, and you are passionate about translating your research into impactful products and systems used by millions.
Join OpenAI as a Research Scientist and explore cutting-edge machine learning innovations. In this role, you will be at the forefront of developing groundbreaking techniques while advancing our team's research initiatives. Collaborate with talented peers across various teams to discover transformative ideas that scale effectively. We seek individuals who are passionate about pushing the boundaries of AI and want to contribute to our unified research vision.
Jan 15, 2026