Machine Learning Engineer Post Training And Evaluation jobs in San Francisco – Browse 5,510 openings on RoboApply Jobs

Machine Learning Engineer Post Training And Evaluation jobs in San Francisco

Open roles matching “Machine Learning Engineer Post Training And Evaluation” with location signals for San Francisco. 5,510 active listings on RoboApply Jobs.


1 - 20 of 5,510 Jobs
Scale AI
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY

At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
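Reward modeling of the kind mentioned above is typically trained with a pairwise preference objective. A minimal sketch of the standard Bradley-Terry loss, using made-up scalar rewards rather than anything from Scale AI's actual stack:

```python
# Bradley-Terry pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
# The reward values here are toy scalars standing in for a reward model's
# outputs on a chosen/rejected response pair.
import math

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    """Loss is small when the chosen response outscores the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

good = pairwise_loss(2.0, -1.0)   # correctly ordered pair -> low loss
bad = pairwise_loss(-1.0, 2.0)    # inverted pair -> high loss
print(f"good={good:.3f} bad={bad:.3f}")
```

Minimizing this loss over many labeled preference pairs is what pushes a reward model to rank preferred responses above rejected ones.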

Mar 26, 2026
Reflection AI
Full-time|On-site|San Francisco

Reflection AI develops open-weight models with the goal of making superintelligence broadly accessible. The team draws on backgrounds from DeepMind, OpenAI, Google Brain, Meta, and Anthropic, and serves a wide range of users including individuals, enterprises, and government organizations.

Role overview
This Machine Learning Engineer position focuses on post-training and evaluation within the Applied AI group in San Francisco. The main responsibility is to fine-tune and evaluate Reflection AI's open-weight models for enterprise customers, adapting them to specific domains and tasks using real customer data. The work covers the entire process: preparing and cleaning datasets, running fine-tuning workflows, building evaluation systems, and deploying models into production. Collaboration is central, both with clients to understand their needs and with research colleagues to advance model capabilities.

What you will do
- Fine-tune open-weight models for customer use cases, including dataset preparation, configuring training (such as SFT, preference optimization, and reinforcement fine-tuning), and iterating based on evaluation feedback.
- Design and maintain evaluation infrastructure: create evaluation suites, curate test sets, set baselines, and measure improvements on key customer tasks.
- Prepare training data from raw customer sources by assessing data quality, cleaning and formatting, identifying noisy or adversarial samples, and building reproducible data pipelines.
- Troubleshoot training and inference by analyzing loss curves, diagnosing data issues, and identifying problematic training dynamics.
- Deploy fine-tuned models in hybrid environments (public cloud, VPC, on-premises) to ensure reliable, high-performance inference in production.
- Contribute to developing playbooks, evaluation benchmarks, and best practices for fine-tuning and evaluation as the team's approach evolves.
Requirements
Hands-on experience in applied machine learning, especially fine-tuning language models: preparing datasets, running training loops, evaluating results, and deploying models. Familiarity with SFT, DPO, RLHF, or related techniques is required. Strong understanding of evaluation methods, with the ability to design evaluations, interpret training metrics, and accurately assess model performance.

Location: San Francisco
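The evaluation work described here (curating test sets, setting baselines, measuring improvements) can be sketched as a tiny harness. Everything below is a hypothetical illustration: the two model functions are stand-ins for real inference calls, and exact-match accuracy is just one possible scoring rule:

```python
# Minimal evaluation-suite sketch: score a candidate model against a
# baseline on a curated test set and report the delta.

def baseline_model(prompt: str) -> str:
    # Placeholder behavior: echo the last token of the prompt.
    return prompt.split()[-1]

def finetuned_model(prompt: str) -> str:
    # Placeholder for a fine-tuned model that answers correctly.
    answers = {"capital of France?": "Paris", "2 + 2 =": "4"}
    return answers.get(prompt, prompt.split()[-1])

TEST_SET = [  # curated (prompt, reference answer) pairs
    ("capital of France?", "Paris"),
    ("2 + 2 =", "4"),
]

def accuracy(model, test_set):
    hits = sum(model(p) == ref for p, ref in test_set)
    return hits / len(test_set)

base = accuracy(baseline_model, TEST_SET)
tuned = accuracy(finetuned_model, TEST_SET)
print(f"baseline={base:.2f} finetuned={tuned:.2f} delta={tuned - base:+.2f}")
```

In practice the same shape scales up: swap the stub functions for inference endpoints and the scoring rule for task-appropriate metrics.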

Apr 22, 2026
Exa
Full-time|On-site|San Francisco, California

At Exa, we are pioneering the next generation of search engines designed for the era of artificial intelligence, starting from the foundational silicon architecture. Our ambitious indexing operation is unparalleled, allowing us to crawl the vast open web at an extraordinary scale. We harness cutting-edge embedding models to comprehend this data and utilize our high-performance Rust-based vector database alongside a $5M H200 GPU cluster, which powers tens of thousands of machines simultaneously.

The Machine Learning (ML) division is central to this mission, focusing on the training of foundational models that enhance search capabilities. Our vision is to create systems capable of swiftly filtering the world's knowledge to deliver precisely what you need, regardless of the complexity of your inquiry, effectively transforming the web into a robust, searchable database.

To achieve this ambitious goal, we must define what constitutes "effective search". This is where your expertise will play a crucial role. We are seeking a talented Machine Learning Evaluations Engineer to develop and implement our evaluation framework at Exa. This position entails exploring methodologies to assess search engines in a world dominated by large language models (LLMs) and crafting the most thorough, innovative, and impactful evaluation suite. Your decisions will influence the future of search optimization and directly affect the research team's focus, shaping the company's strategic direction.

Oct 15, 2025
OpenAI
Full-time|On-site|San Francisco

OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.

Role overview
Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.

Collaboration
Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.

Location
This position is based in San Francisco.

Apr 29, 2026
Baseten
Full-time|On-site|San Francisco

Join Baseten as a Post-Training Research Engineer and contribute to groundbreaking advancements in machine learning and AI. In this role, you will leverage your engineering skills to analyze and enhance models post-training, ensuring optimal performance and efficiency.

Mar 23, 2026
Thinking Machines Lab
Post-Training Researcher


Full-time|$350K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We strive to build a future where everyone has access to the knowledge and tools essential for making AI work effectively for their unique objectives. Our team comprises scientists, engineers, and innovators who have contributed to some of the most widely adopted AI products, including ChatGPT and Character.ai, as well as notable open-weight models like Mistral and popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The Post-Training Researcher position is pivotal to our roadmap. It serves as a crucial connection between raw model intelligence and a system that is genuinely beneficial, safe, and collaborative for human users. This role uniquely combines fundamental research with practical engineering, as we do not differentiate between these functions internally. Candidates will be expected to produce high-performance code and analyze technical reports. This position is ideal for individuals who relish both deep theoretical inquiry and hands-on experimentation, aiming to influence the foundational aspects of AI learning.

Note: This position is classified as an 'evergreen role', meaning we continuously accept applications in this research domain. Given the high volume of applications, an immediate match for your skills and experience may not always be available. However, we encourage you to apply; we regularly review submissions and reach out as new opportunities arise. You are welcome to apply again after gaining more experience, but we ask that you refrain from applying more than once every six months.
Additionally, specific postings for singular roles may be available for distinct projects or team needs, in which case you are welcome to apply directly in conjunction with this evergreen role.

What You'll Do
- Develop and optimize recipes: refine post-training recipes, encompassing various datasets, training stages, and hyperparameters, while assessing their impact on multiple performance metrics.
- Iterate on evaluations: engage in a continuous process of defining evaluation metrics, optimizing them, and recognizing their limitations. You will be accountable for enhancing performance metrics and ensuring they are meaningful.
- Debug and analyze: during the fine-tuning of training configurations, you may encounter results that appear inconsistent. You will be responsible for troubleshooting and cultivating a deeper understanding to apply to subsequent challenges.
- Scale and investigate: assess and expand the capabilities of our models while exploring potential improvements.

Nov 23, 2025
gleanwork
Full-time|Remote|San Francisco Bay Area

Join gleanwork as a Machine Learning Engineer specializing in LLM evaluations and observability. In this role, you will be instrumental in developing cutting-edge machine learning systems that deepen our understanding of large language models and improve their effectiveness. You will collaborate with cross-functional teams to drive the integration of advanced analytics and machine learning solutions.

Mar 16, 2026
Generalist
Full-time|On-site|San Francisco Bay Area (San Mateo) or Boston (Somerville)

About the Role
In the realm of machine learning, pretraining lays the foundation for a general model, while post-training refines that model, enhancing its utility, controllability, safety, and performance in real-world applications. As a Post-Training Research Scientist, you will transform large pretrained robot models into production-ready systems through methodologies such as fine-tuning, reinforcement learning, steering, human feedback, task specialization, evaluation, and on-robot validation at scale. This position offers a unique opportunity for individuals from diverse backgrounds to evolve into full-stack ML roboticists, adept at swiftly identifying challenges across machine learning and control domains. This is where innovative research converges with practical implementation.

Your Responsibilities Include:
- Crafting fine-tuning and adaptation strategies tailored for specific robotic tasks and embodiments.
- Developing methodologies to enhance reliability, robustness, and controllability of robotic systems.
- Establishing evaluation frameworks to assess real-world robot performance beyond just offline metrics.
- Collaborating with ML infrastructure teams to optimize inference-time performance, including latency, stability, and memory usage.
- Utilizing advanced techniques such as imitation learning, reinforcement learning, distillation, synthetic data, and curriculum learning.
- Bridging the gap between model outputs and tangible outcomes in the physical world.

You Might Excel in This Role If You:
- Possess experience in fine-tuning large models for downstream applications, including RLHF, imitation learning, reinforcement learning, distillation, and domain adaptation.
- Have a background in embodied AI, robotics, or real-world machine learning systems.
- Demonstrate a strong commitment to evaluation, benchmarking, and failure analysis.
- Are comfortable troubleshooting and debugging across the entire ML stack, from analyzing loss curves to understanding robot behavior.
- Enjoy rapid iteration and thrive on real-world feedback loops.
- Aspire to connect foundational models with practical deployment scenarios.

About Generalist
At Generalist, we are dedicated to realizing the vision of general-purpose robots. We envision a future where industries and homes benefit from collaborative interactions between humans and machines, enabling us to achieve more than ever before. Our focus is on building embodied foundation models, starting with dexterity, and advancing the frontiers of data, models, and hardware to empower robots to intelligently engage with their environments.

Feb 12, 2026
Reducto
Full-time|On-site|San Francisco Office

Join Reducto as a Machine Learning Evaluation Engineer where you will play a critical role in assessing and enhancing machine learning models. You will collaborate closely with data scientists and engineers to ensure our systems are efficient and accurate, bringing innovative solutions to challenging problems in the machine learning space.

Mar 16, 2026
Scale AI
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY

Artificial Intelligence is increasingly becoming a pivotal element across all sectors of society. At Scale AI, we are committed to accelerating the evolution of AI applications. For nearly a decade, we have been the premier AI data foundry, propelling groundbreaking advancements in areas such as generative AI, defense applications, and autonomous vehicles. Following our recent investment from Meta, we are intensifying our efforts to develop advanced post-training algorithms that are essential for sophisticated agents in enterprises worldwide.

The Enterprise ML Research Lab is at the forefront of this AI revolution, leveraging a suite of proprietary research, tools, and resources to support our enterprise clients. As a Staff Machine Learning Research Engineer focusing on Agent Post-training, you will be instrumental in creating our next-generation Agent Reinforcement Learning training platform. Your work will enable the training of top-tier Agents that deliver state-of-the-art results in real-world enterprise applications. You will incorporate cutting-edge research into our training framework, empowering ML Research Engineers on the Enterprise AI team to deploy use cases ranging from next-generation AI cybersecurity firewalls to training foundational healthtech search models. If you are passionate about shaping the future of the GenAI movement, we welcome your application!

Mar 26, 2026
Scale AI
Full-time|$280K/yr - $380K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY

At Scale AI, we are the premier partner for data and evaluation in the rapidly evolving field of artificial intelligence. Our commitment to advancing the assessment and benchmarking of large language models (LLMs) positions us at the forefront of AI innovation. We are dedicated to creating leading-edge LLM evaluation methodologies that set new benchmarks for model performance. Our research teams collaborate with the top AI laboratories in the industry to provide high-quality data, accelerate progress in generative AI research, and inform what excellence looks like in this domain. As a Staff Machine Learning Research Scientist on our LLM Evals team, you will spearhead the creation of novel evaluation methodologies, metrics, and benchmarks to assess the strengths and weaknesses of cutting-edge LLMs. Your work will shape our internal strategies and influence the broader AI research community, making this role essential for establishing best practices in data-driven AI development.

Mar 26, 2026
Scale AI
Full-time|$216.3K/yr - $300.3K/yr|On-site|San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC

Senior Machine Learning Engineer - Model Evaluations for the Public Sector

The Public Sector Machine Learning team at Scale AI pioneers the deployment of cutting-edge AI systems, including Large Language Models (LLMs), agentic models, and comprehensive multimodal pipelines, within critical government operations. We establish robust evaluation frameworks that ensure these models function reliably, safely, and effectively in real-world scenarios. As a Senior Machine Learning Engineer, you will architect, implement, and enhance automated evaluation pipelines that empower our clients to trust and effectively utilize advanced AI systems in defense, intelligence, and federal missions.

Your Responsibilities Include:
- Creating and maintaining automated evaluation pipelines for machine learning models, focusing on functional, performance, robustness, and safety metrics, including evaluations based on LLM judges.
- Designing test datasets and benchmarks to assess generalization, bias, explainability, and potential failure modes.
- Building evaluation frameworks for LLM agents, including the infrastructure for scenario-based and environment-based testing.
- Conducting comparative analyses of model architectures, training procedures, and evaluation results.
- Implementing tools for continuous monitoring, regression testing, and quality assurance of machine learning systems.
- Designing and executing stress tests and red-teaming workflows to identify vulnerabilities and edge cases.
- Collaborating with operations teams and subject matter experts to generate high-quality evaluation datasets.

This position requires an active security clearance or the ability to obtain one.
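LLM-judge evaluation with a regression gate, as described in this posting, follows a common pattern: score each response with a grader, aggregate, and gate on a threshold. A self-contained sketch, with a deterministic stub in place of a real grader-model call and a made-up rubric and threshold:

```python
# Evaluation pipeline sketch: LLM-judge scoring plus a regression gate.

def judge(prompt: str, response: str) -> int:
    """Return a 1-5 rubric score. Stubbed as a keyword check; a real
    pipeline would prompt a grader model with the rubric instead."""
    return 5 if "because" in response else 2

EVAL_CASES = [
    ("Why is the sky blue?",
     "Rayleigh scattering, because shorter wavelengths scatter more."),
    ("Summarize the report.",
     "The report is summarized."),
]

REGRESSION_THRESHOLD = 3.0  # minimum mean judge score to pass the gate

def run_pipeline(cases):
    scores = [judge(p, r) for p, r in cases]
    mean = sum(scores) / len(scores)
    return {"scores": scores, "mean": mean,
            "passed": mean >= REGRESSION_THRESHOLD}

report = run_pipeline(EVAL_CASES)
print(report)
```

A continuous-monitoring setup would run this gate on every model revision and fail the build when the mean score regresses below the threshold.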

Mar 26, 2026
Baseten
Full-time|On-site|San Francisco

Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.

Mar 17, 2026
Scale AI
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY

As AI continues to play a crucial role across various sectors, Scale AI is committed to accelerating the evolution of AI applications. For nearly a decade, we have been at the forefront of AI data solutions, driving significant innovations such as generative AI, defense technologies, and autonomous vehicles. With recent funding from Meta, we are intensifying our efforts to develop cutting-edge post-training algorithms essential for enhancing the performance of complex enterprise agents globally. The Enterprise ML Research Lab is at the forefront of this AI transformation. Our team is dedicated to creating a suite of proprietary research and resources tailored for our enterprise clientele. As a Machine Learning Systems Research Engineer, you will play a pivotal role in developing algorithms for our next-generation Agent Reinforcement Learning (RL) training platform, support large-scale training operations, and integrate state-of-the-art technologies to optimize our machine learning systems. You will collaborate with other ML Research Engineers and AI Architects on the Enterprise AI team to apply these training algorithms to various client use cases, from next-gen AI cybersecurity firewalls to foundational healthtech search models. If you are passionate about shaping the future of AI, we want to hear from you!

Mar 26, 2026
Achira
Full-time|On-site|San Francisco Office

Why Join Achira?
- Become part of an elite team comprising scientists, machine learning researchers, and engineers dedicated to transforming the predictability of the physical microcosm and revolutionizing drug discovery.
- Explore uncharted territories: we are on a mission to innovate next-generation model architectures that merge AI with chemistry.
- Engage in large-scale operations: harness massive computational resources, extensive datasets, and ambitious objectives.
- Take ownership of significant projects from inception to deployment on large-scale infrastructures.
- Thrive in a culture that values precision, speed, execution, and a proactive mindset.

About the Position
At Achira, we are committed to developing state-of-the-art foundation models that tackle the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as world models of the physical microcosm, incorporating machine learning interaction potentials (MLIPs), neural network potentials (NNPs), and various generative models.

We are seeking a Machine Learning Research Engineer (MLRE) who excels at the intersection of advanced machine learning and rigorous research methodologies. You will collaborate closely with our research scientists to design and enhance intelligent training systems that propel us beyond contemporary architectures into a new era of ML-driven molecular modeling. Your mission is clear yet ambitious: to establish the foundational frameworks for training atomistic simulation models at scale. This entails a deep dive into architecture, data, optimizers, losses, training metrics, and representation learning, all while constructing high-performance systems that maximize the potential of our models. In this role, you will be instrumental in creating a blueprint for pretraining FSMs similar to today's large-scale generative AI systems, making a significant impact on drug discovery. At Achira, you will have the chance to pioneer models that comprehend and simulate the physical world at an atomic level, achieving unprecedented speed and accuracy.

Sep 26, 2025
Lila Sciences
Full-time|$116K/yr - $170K/yr|Hybrid|Cambridge, MA USA; San Francisco, CA USA

Your Role at Lila Sciences
We are in search of a talented Machine Learning Research Engineer with a focus on LLM post-training. In this pivotal role, you will architect and oversee large-scale training systems, enhance the performance of extensive models, and incorporate state-of-the-art methodologies to boost efficiency and throughput.

Key Responsibilities
- Develop Ray-based distributed training infrastructure for LLMs and multi-modal models.
- Implement performance optimizations for large-scale model training, including training and optimization workflows such as SFT, MoE, and long-context scaling.
- Manage the orchestration of leading-edge and open-source LLMs alongside intricate compute-intensive tools.
- Create scalable pipelines for data preprocessing and experiment orchestration, utilizing tools for efficient data loading, pipeline parallelism, and optimizer tuning.
- Establish system-level performance benchmarks and debugging utilities.

Mar 4, 2026
Pluralis Research
Full-time|On-site|San Francisco

Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
- Multi-Cloud Infrastructure: create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
- Distributed Training Systems: design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.
- Real-World Networking: develop systems that simulate and manage real-world network conditions, such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.
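The real-world networking work described above (latency injection, packet loss) can be illustrated with a toy channel wrapper. This is a hypothetical sketch, not Pluralis infrastructure; the parameters and payloads are invented:

```python
# Toy network-condition injection: each send may be dropped with some
# probability, and delivered sends incur base latency plus random jitter.
import random

random.seed(1)  # seeded so the simulation is reproducible

def lossy_send(payload, latency_ms=50.0, loss_prob=0.2):
    """Simulate one send. Returns (delivered, observed_latency_ms)."""
    if random.random() < loss_prob:
        return (False, None)                   # packet dropped
    jitter = random.uniform(0.0, latency_ms)   # up to latency_ms of jitter
    return (True, latency_ms + jitter)

results = [lossy_send({"grad_chunk": i}) for i in range(10)]
delivered = sum(1 for ok, _ in results if ok)
print(f"delivered {delivered}/10 sends")
```

A training platform would layer retries, checkpointing, and health monitoring on top of a channel like this so that node churn does not stall the run.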

Apr 1, 2026
Cartesia
Full-time|On-site|*HQ - San Francisco, CA

Join Cartesia: Pioneering AI Innovation
At Cartesia, we are on a mission to redefine the landscape of artificial intelligence. Our goal is to create the next generation of AI that is interactive, ubiquitous, and capable of continuous reasoning across vast streams of audio, video, and text data. With an impressive foundation built on our pioneering work in State Space Models (SSMs) at the Stanford AI Lab, our team is uniquely positioned to advance model architectures that will make on-device reasoning a reality. Backed by prominent investors like Index Ventures and Lightspeed Venture Partners, along with a network of 90+ advisors, including top experts in AI, we are committed to pushing the boundaries of model innovation and systems engineering.

About the Role
We believe that the next significant advancement in model intelligence will stem from enhanced post-training methods and alignment strategies. As a Post-Training Researcher, you will be at the forefront of developing systems and methodologies that ensure our multimodal models are not just adaptive, but also aligned with human intentions. In this role, you will collaborate across machine learning research, alignment, and infrastructure, crafting innovative techniques for preference optimization, model evaluation, and feedback-driven learning.
You will investigate how feedback signals can enhance reasoning capabilities across various modalities while establishing the necessary infrastructure to scale and improve these processes. Your contributions will be pivotal in shaping the learning and improvement trajectory of Cartesia's foundational models, ultimately enhancing their connection with users.

Your Impact
- Lead research initiatives aimed at enhancing the capabilities and alignment of multimodal models.
- Create cutting-edge post-training methods and evaluation frameworks to assess model advancements.
- Collaborate closely with research, product, and platform teams to establish best practices for specialized model development.
- Design, debug, and scale experimental systems to ensure reliability and reproducibility throughout training cycles.
- Convert research insights into production-ready systems that enhance model reasoning, consistency, and alignment with human values.

Oct 21, 2025
Orchard
Full-time|On-site|San Francisco

Join Orchard as a Machine Learning Engineer and play a pivotal role in transforming data into actionable insights. In this dynamic position, you will leverage your expertise in machine learning algorithms and data analysis to develop innovative solutions that enhance our products and services. We are looking for a proactive team player who thrives in a fast-paced environment and possesses strong problem-solving skills. You will collaborate with cross-functional teams, engage with large datasets, and contribute to the design and implementation of machine learning models.

Mar 14, 2026
Thinking Machines Lab
Post-Training Researcher


Full-time|$350K/yr - $475K/yr|On-site|San Francisco

At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary for AI to serve their unique needs and aspirations. Our team comprises scientists, engineers, and builders who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, as well as open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The role of a Post-Training Researcher is pivotal to our strategic vision. This position serves as the essential link between raw model intelligence and a practical, safe, and collaborative system for human users. Our research in post-training data sits at the intersection of human insights and machine learning. By integrating human and synthetic data techniques alongside innovative methodologies, we capture the subtleties of human behavior to inform and guide our models. We investigate and model the mechanisms that derive value for individuals, enabling us to articulate, predict, and enhance human preferences, behaviors, and satisfaction. Our objective is to translate research concepts into actionable data through meticulously planned data labeling and collection initiatives, while also understanding the science behind high-quality data that effectively trains our models. Additionally, we develop and assess quantitative metrics to evaluate the success and impact of our data and training strategies.

Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with optimal ways for humans to supervise, guide, and collaborate with models. This interdisciplinary role merges research, data operations, and technical implementation, pushing the boundaries of aligned, human-centered AI systems. The position combines foundational research and practical engineering, as we do not differentiate between these roles internally. You will be expected to write high-performance code and comprehend technical reports. This role is perfect for individuals who thrive on deep theoretical exploration and hands-on experimentation, eager to shape the foundational aspects of AI learning.

Note: This is an evergreen role that we maintain continuously to express interest in this research area. We receive a high volume of applications, and while there may not always be an immediate fit for your skills and experience, we encourage you to apply. We regularly review applications and reach out to candidates as new opportunities arise. You are welcome to reapply after gaining more experience, but please limit applications to once every six months. You may also notice separate postings for specific, targeted roles.

Nov 23, 2025
