Research Engineer Llm Post Training At Lila Sciences Cambridge Ma San Francisco Ca jobs in San Francisco – Browse 11,614 openings on RoboApply Jobs
Research Engineer - LLM Post-Training at Lila Sciences | Cambridge, MA / San Francisco, CA
Lila Sciences
Cambridge, MA USA; San Francisco, CA USA
Hybrid | Full-time | $116K/yr - $170K/yr
Qualifications for Success
- Demonstrated expertise with distributed ML training frameworks (e.g., Megatron-LM, TorchTitan, DeepSpeed, Ray).
- Solid software engineering proficiency in Python; C++ contributions to kernels are advantageous.
- Comprehensive understanding of large-scale model training techniques.
- Experience in cloud or HPC environments.
Preferred Qualifications
- Previous involvement with scientific datasets or domain-specific modeling.
- Contributions to open-source ML frameworks.
About the job
Your Role at Lila Sciences
We are in search of a talented Machine Learning Research Engineer with a focus on LLM post-training. In this pivotal role, you will architect and oversee large-scale training systems, enhance the performance of extensive models, and incorporate state-of-the-art methodologies to boost efficiency and throughput.
Key Responsibilities
Develop Ray-based distributed training infrastructure for LLMs and multi-modal models.
Implement performance optimizations for large-scale training and post-training workflows, such as SFT, MoE, and long-context scaling.
Manage the orchestration of leading-edge and open-source LLMs alongside intricate compute-intensive tools.
Create scalable pipelines for data preprocessing and experiment orchestration, utilizing tools for efficient data loading, pipeline parallelism, and optimizer tuning.
Establish system-level performance benchmarks and debugging utilities.
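At the core of the distributed training work these responsibilities describe is synchronous data parallelism: each worker computes gradients on its own data shard, the gradients are averaged across workers (an all-reduce), and every replica applies the same update. A toy sketch of that step in plain Python; the worker gradients and learning rate here are hypothetical stand-ins, not Lila's actual stack (which would use Ray and GPU-backed frameworks):

```python
from statistics import fmean

def all_reduce_mean(worker_grads):
    """Average per-parameter gradients across workers, as a synchronous
    data-parallel all-reduce would."""
    n_params = len(worker_grads[0])
    return [fmean(g[i] for g in worker_grads) for i in range(n_params)]

def sgd_step(params, worker_grads, lr=0.1):
    """One synchronous data-parallel update: average gradients, then step."""
    avg = all_reduce_mean(worker_grads)
    return [p - lr * g for p, g in zip(params, avg)]

# Two workers computed gradients on different data shards:
params = sgd_step([1.0, -2.0], [[0.2, 0.4], [0.6, 0.0]])
print(params)  # each parameter moves against the averaged gradient
```

Because every replica sees the same averaged gradient, all copies of the model stay in sync without ever exchanging raw data.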
About Lila Sciences
Lila Sciences stands as the world's pioneering scientific superintelligence platform and autonomous laboratory dedicated to life sciences, chemistry, and materials science. We are at the forefront of a transformative era of limitless exploration, leveraging AI to revolutionize every facet of the scientific method. Our mission is to introduce scientific superintelligence to tackle humanity's most pressing challenges, empowering scientists to deliver solutions in human health, climate, and sustainability at an unprecedented scale and speed. Discover more about our vision at www.lila.ai.
Join Cartesia: Pioneering AI Innovation
At Cartesia, we are on a mission to redefine the landscape of artificial intelligence. Our goal is to create the next generation of AI that is interactive, ubiquitous, and capable of continuous reasoning across vast streams of audio, video, and text data. With an impressive foundation built on our pioneering work in State Space Models (SSMs) at the Stanford AI Lab, our team is uniquely positioned to advance model architectures that will make on-device reasoning a reality.
Backed by prominent investors like Index Ventures and Lightspeed Venture Partners, along with a network of 90+ advisors, including top experts in AI, we are committed to pushing the boundaries of model innovation and systems engineering.
About the Role
We believe that the next significant advancement in model intelligence will stem from enhanced post-training methods and alignment strategies. As a Post-Training Researcher, you will be at the forefront of developing systems and methodologies that ensure our multimodal models are not just adaptive, but also aligned with human intentions.
In this role, you will collaborate across machine learning research, alignment, and infrastructure, crafting innovative techniques for preference optimization, model evaluation, and feedback-driven learning. You will investigate how feedback signals can enhance reasoning capabilities across various modalities while establishing the necessary infrastructure to scale and improve these processes. Your contributions will be pivotal in shaping the learning and improvement trajectory of Cartesia's foundational models, ultimately enhancing their connection with users.
Your Impact
- Lead research initiatives aimed at enhancing the capabilities and alignment of multimodal models.
- Create cutting-edge post-training methods and evaluation frameworks to assess model advancements.
- Collaborate closely with research, product, and platform teams to establish best practices for specialized model development.
- Design, debug, and scale experimental systems to ensure reliability and reproducibility throughout training cycles.
- Convert research insights into production-ready systems that enhance model reasoning, consistency, and alignment with human values.
Full-time|$232K/yr - $304K/yr|On-site|San Francisco, CA USA
Your Impact at Lila Sciences
Join our innovative team as a Senior Principal Software Engineer at Lila Sciences, where you will play a pivotal role in developing the next generation of our AI-driven scientific platform. As a key technical leader, you will work collaboratively with machine learning researchers, platform engineers, and scientists to build cutting-edge applications and services that facilitate AI-driven scientific processes. Your leadership will enable seamless collaboration across diverse teams. This is your chance to design a state-of-the-art, production-grade system from the ground up, utilizing advanced AI frameworks, lab automation software, and scalable cloud infrastructure. In our dynamic and fast-paced environment, your technical acumen and innovative problem-solving skills will redefine the possibilities of AI in scientific research. If you are enthusiastic about building intelligent systems that can revolutionize the scientific landscape, we invite you to apply!
Full-time|$144K/yr - $210K/yr|On-site|San Francisco, CA USA
Your Contribution at Lila Sciences
We are on the lookout for a Senior Frontend Software Engineer to join our innovative software team and contribute to the development of our next-generation, AI-powered scientific platform. In this pivotal role, you will design, build, and optimize the user interfaces and supporting services that power intelligent, data-driven applications. Your focus will span UIs, high-performance APIs, and databases, while ensuring the reliability of services that merge advanced AI frameworks with intricate scientific analytics and laboratory operations. You will collaborate closely with machine learning researchers, platform engineers, and scientists to develop systems capable of managing diverse workloads and achieving seamless scalability, including structured SQL databases, data lakes, and vector databases. This is a unique opportunity to leverage your extensive frontend and backend expertise to impact a state-of-the-art AI platform that drives real scientific advancements. If you are enthusiastic about building efficient and elegant systems, we are eager to connect with you!
Your Responsibilities
- UI Design & Development: Create high-performance, secure, and well-documented user interfaces and APIs that integrate with AI-driven applications.
- Database Management: Develop schemas and oversee various data systems (SQL, NoSQL, vector DBs, etc.) for optimal performance and scalability.
- Application Development: Lead the implementation of both frontend and backend services with a focus on performance, maintainability, and reliability.
- Performance Optimization: Identify and resolve system bottlenecks, guaranteeing high availability and low-latency performance across large-scale workloads.
- Infrastructure & Cloud Services: Utilize AWS services, Kubernetes, and modern DevOps methodologies to design and deploy production-grade systems at scale.
- Collaborative Innovation: Partner with ML researchers, engineers, and scientists to integrate data pipelines, APIs, and cloud infrastructure into scientific workflows.
Qualifications for Success
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: 2-6 years of engineering experience building and deploying large-scale systems in production; expertise in either frontend or backend development is essential.
- Technical Skills: Proficiency in TypeScript, React, and Python is required.
Join Baseten as a Post-Training Research Engineer and contribute to groundbreaking advancements in machine learning and AI. In this role, you will leverage your engineering skills to analyze and enhance models post-training, ensuring optimal performance and efficiency.
Role overview
OpenAI is looking for a Researcher focused on Agentic Post-Training, based in San Francisco. This role centers on analyzing and improving how AI systems behave after their initial training. The goal is to broaden the capabilities of AI and refine how models respond in complex situations.
What you will do
- Study and assess agentic behaviors in trained AI models
- Create new approaches to strengthen these behaviors after training
- Collaborate with a talented team on projects that shape the future of artificial intelligence research
Collaboration and impact
This position involves hands-on research with other specialists at OpenAI. The work directly supports the advancement of AI capabilities and helps define new benchmarks for agentic performance in artificial intelligence.
Advancing Self-Improving Superintelligence
At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems, which are often rigid and brittle, our approach aims to build adaptable AI that continually evolves through experience.
Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real time.
Join our elite team of researchers and engineers dedicated to tackling AI's most significant challenges: creating machines that can reason, remember, and learn as humans do.
This position requires in-person attendance (no hybrid option) at our downtown San Francisco office, five days a week.
Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.
Full-time|On-site|San Francisco Bay Area (San Mateo) or Boston (Somerville)
About the Role
In the realm of machine learning, pretraining lays the foundation for a general model, while post-training refines that model, enhancing its utility, controllability, safety, and performance in real-world applications. As a Post-Training Research Scientist, you will transform large pretrained robot models into production-ready systems through methodologies such as fine-tuning, reinforcement learning, steering, human feedback, task specialization, evaluation, and on-robot validation at scale. This position offers a unique opportunity for individuals from diverse backgrounds to evolve into full-stack ML roboticists, adept at swiftly identifying challenges across machine learning and control domains. This is where innovative research converges with practical implementation.
Your Responsibilities Include:
- Crafting fine-tuning and adaptation strategies tailored for specific robotic tasks and embodiments.
- Developing methodologies to enhance the reliability, robustness, and controllability of robotic systems.
- Establishing evaluation frameworks to assess real-world robot performance beyond offline metrics.
- Collaborating with ML infrastructure teams to optimize inference-time performance, including latency, stability, and memory usage.
- Utilizing advanced techniques such as imitation learning, reinforcement learning, distillation, synthetic data, and curriculum learning.
- Bridging the gap between model outputs and tangible outcomes in the physical world.
You Might Excel in This Role If You:
- Possess experience in fine-tuning large models for downstream applications, including RLHF, imitation learning, reinforcement learning, distillation, and domain adaptation.
- Have a background in embodied AI, robotics, or real-world machine learning systems.
- Demonstrate a strong commitment to evaluation, benchmarking, and failure analysis.
- Are comfortable troubleshooting and debugging across the entire ML stack, from analyzing loss curves to understanding robot behavior.
- Enjoy rapid iteration and thrive on real-world feedback loops.
- Aspire to connect foundational models with practical deployment scenarios.
About Generalist
At Generalist, we are dedicated to realizing the vision of general-purpose robots. We envision a future where industries and homes benefit from collaborative interactions between humans and machines, enabling us to achieve more than ever before. Our focus is on building embodied foundation models, starting with dexterity, and advancing the frontiers of data, models, and hardware to empower robots to intelligently engage with their environments.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary for AI to serve their unique needs and aspirations. Our team comprises scientists, engineers, and builders who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, as well as open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
The Post-Training Researcher role is pivotal to our strategic vision. It serves as the essential link between raw model intelligence and a practical, safe, and collaborative system for human users.
Our research in post-training data sits at the intersection of human insights and machine learning. By integrating human and synthetic data techniques alongside innovative methodologies, we capture the subtleties of human behavior to inform and guide our models. We investigate and model the mechanisms that create value for individuals, enabling us to articulate, predict, and enhance human preferences, behaviors, and satisfaction. Our objective is to translate research concepts into actionable data through meticulously planned data labeling and collection initiatives, while also understanding the science behind high-quality data that effectively trains our models. Additionally, we develop and assess quantitative metrics to evaluate the success and impact of our data and training strategies.
Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with optimal ways for humans to supervise, guide, and collaborate with models. This interdisciplinary role merges research, data operations, and technical implementation, pushing the boundaries of aligned, human-centered AI systems.
This position combines foundational research and practical engineering, as we do not differentiate between these roles internally. You will be expected to write high-performance code and comprehend technical reports. This role is ideal for individuals who thrive on deep theoretical exploration and hands-on experimentation, eager to shape the foundational aspects of AI learning.
Note: This is an evergreen role that we keep open continuously for candidates interested in this research area. We receive a high volume of applications, and while there may not always be an immediate fit for your skills and experience, we encourage you to apply. We regularly review applications and reach out to candidates as new opportunities arise. You are welcome to reapply after gaining more experience, but please limit applications to once every six months. You may also notice separate postings for specific, targeted positions.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.
Role overview
Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.
Collaboration
Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.
Location
This position is based in San Francisco.
Genmo is a pioneering research laboratory dedicated to advancing cutting-edge models for video generation, with the mission of unlocking the creative potential of Artificial General Intelligence (AGI). We invite you to be a part of our innovative team, where you can contribute to shaping the future of AI and expanding the horizons of video generation technology.
Role Overview:
We are on the lookout for a talented Research Scientist to join our dynamic team, specializing in alignment and post-training methodologies for large-scale video generation models. In this pivotal role, you will be instrumental in ensuring our diffusion-based video models consistently deliver high-quality, physically accurate, and safe outputs that align with human values and preferences.
Key Responsibilities:
- Lead groundbreaking research initiatives in alignment and post-training strategies for video generation models, prioritizing enhanced quality, reliability, and alignment with human intent.
- Design and implement supervised fine-tuning and reinforcement learning from human feedback (RLHF) pipelines for video generation models.
- Establish robust evaluation frameworks to assess model alignment, safety, and output quality.
- Create and optimize data collection pipelines for capturing human feedback and preferences.
- Conduct experiments to validate alignment techniques and their scalability.
- Collaborate with cross-functional teams to incorporate alignment enhancements into our production workflow.
- Stay abreast of the latest developments by reviewing academic literature in generative AI and alignment.
- Mentor junior researchers and promote a culture of responsible AI development.
- Partner closely with product teams to ensure that alignment methods enhance model capabilities.
Qualifications:
- Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field.
- Demonstrated excellence with a strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ICLR) focusing on reinforcement learning, alignment, or generative models.
- Extensive experience implementing and optimizing large-scale training pipelines using PyTorch.
- In-depth understanding of reinforcement learning techniques, especially RLHF.
- Proficiency with distributed training systems and conducting large-scale experiments.
- Proven ability to design and implement robust evaluation strategies for models.
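RLHF pipelines like those described above typically begin by fitting a reward model on pairwise human preferences, using a Bradley-Terry style objective. A minimal sketch of that loss in plain Python; the scalar scores here are hypothetical reward-model outputs, not part of Genmo's actual pipeline:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry pairwise loss for fitting a reward model on human
    preference pairs: -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss falls as the reward model learns to separate the pair:
print(preference_loss(0.0, 0.0))   # no separation yet
print(preference_loss(2.0, -1.0))  # chosen clearly preferred, smaller loss
```

Minimizing this loss over many labeled pairs pushes the reward model to score preferred outputs above rejected ones, which then supplies the training signal for the RL stage.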
Full-time|$176K/yr - $304K/yr|Hybrid|Cambridge, MA USA; San Francisco, CA USA
Your Contribution at Lila
As a Machine Learning Research Scientist I/II specializing in LLM Inference, you will spearhead research initiatives focused on the training and deployment of large language models for scientific applications.
Your Responsibilities
- Develop and refine post-training strategies for LLMs, including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Reinforcement Learning with verifiers.
- Design efficient inference mechanisms and compute strategies for complex tool utilization in various environments.
- Create scalable evaluation metrics to assess LLM performance in scientific reasoning tasks.
- Investigate the boundaries of cutting-edge LLM methodologies for scientific challenges and analyze their limitations.
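"Reinforcement Learning with verifiers" replaces a learned reward model with a programmatic check of the model's output. A toy sketch of such a reward function; the answer format and checker are illustrative assumptions, not Lila's implementation:

```python
def verifier_reward(model_output: str, reference: float, tol: float = 1e-6) -> float:
    """Binary reward: 1.0 if the model's final numeric answer matches the
    reference within tolerance, else 0.0. The signal comes from verification,
    not from human preference labels."""
    try:
        answer = float(model_output.strip().split()[-1])
    except (ValueError, IndexError):
        return 0.0
    return 1.0 if abs(answer - reference) <= tol else 0.0

print(verifier_reward("The melting point is 1234.0", 1234.0))  # → 1.0
print(verifier_reward("roughly 1200", 1234.0))                 # → 0.0
```

Because the reward is computed rather than learned, it cannot be gamed the way a reward model can, which is why verifier-based RL is attractive for scientific reasoning tasks with checkable answers.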
About Mercor
Mercor sits at the forefront of labor markets and artificial intelligence research, collaborating with premier AI laboratories and enterprises to harness the human intelligence crucial for AI evolution.
Our expansive talent network empowers the training of cutting-edge AI models, akin to how educators impart knowledge to students—sharing insights, experiences, and contexts that transcend mere code. Currently, our network comprises over 30,000 experts, generating collective earnings exceeding $2 million daily.
At Mercor, we are pioneering a unique category of work where expertise fuels AI progress. Realizing this vision necessitates a bold, fast-paced, and deeply dedicated team. You will collaborate with researchers, operators, and AI firms at the vanguard of building systems that redefine society.
As a profitable Series C company, Mercor is valued at $10 billion and maintains an in-office presence five days a week at our new headquarters in San Francisco.
About the Role
In your capacity as a Research Engineer at Mercor, you will operate at the intersection of engineering and applied AI research. You will play a pivotal role in post-training and Reinforcement Learning with Verifiable Rewards (RLVR), synthetic data generation, and large-scale evaluation workflows essential for advancing frontier language models.
Your contributions will help train large language models to adeptly utilize tools, exhibit agentic behavior, and engage in real-world reasoning within production environments. You will be instrumental in shaping rewards, conducting post-training experiments, and constructing scalable systems to enhance model performance. Your responsibilities will also include designing and evaluating datasets, creating scalable data augmentation pipelines, and developing rubrics and evaluators that expand the learning potential of LLMs.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We strive to build a future where everyone has access to the knowledge and tools essential for making AI work effectively for their unique objectives. Our team comprises scientists, engineers, and innovators who have contributed to some of the most widely adopted AI products, including ChatGPT and Character.ai, as well as notable open-weight models like Mistral and popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
The Post-Training Researcher position is pivotal to our roadmap. It serves as a crucial connection between raw model intelligence and a system that is genuinely beneficial, safe, and collaborative for human users.
This role uniquely combines fundamental research with practical engineering, as we do not differentiate between these functions internally. Candidates will be expected to produce high-performance code and analyze technical reports. This position is ideal for individuals who relish both deep theoretical inquiry and hands-on experimentation, aiming to influence the foundational aspects of AI learning.
Note: This position is classified as an 'evergreen role', meaning we continuously accept applications in this research domain. Given the high volume of applications, an immediate match for your skills and experience may not always be available. However, we encourage you to apply; we regularly review submissions and reach out as new opportunities arise. You are welcome to apply again after gaining more experience, but we ask that you refrain from applying more than once every six months. Additionally, specific postings for singular roles may be available for distinct projects or team needs, in which case you are welcome to apply directly in conjunction with this evergreen role.
What You’ll Do
- Develop and Optimize Recipes: Refine post-training recipes, encompassing various datasets, training stages, and hyperparameters, while assessing their impact on multiple performance metrics.
- Iterate on Evaluations: Engage in a continuous process of defining evaluation metrics, optimizing them, and recognizing their limitations. You will be accountable for enhancing performance metrics and ensuring they remain meaningful.
- Debug and Analyze: During the fine-tuning of training configurations, you may encounter results that appear inconsistent. You will be responsible for troubleshooting and cultivating a deeper understanding to apply to subsequent challenges.
- Scale and Investigate: Assess and expand the capabilities of our models while exploring potential improvements.
Full-time|$250K/yr - $450K/yr|On-site|San Francisco
About AfterQuery
AfterQuery builds training data and evaluation frameworks used by leading AI labs around the world. The team partners with advanced research groups to create high-quality datasets and run detailed evaluations that go beyond standard benchmarks. As a small, post-Series A company based in San Francisco, every team member plays a key role in shaping how future AI models learn and improve.
Role Overview
The Post-Training Research Scientist focuses on proving the impact of AfterQuery's datasets. This work involves designing and running training experiments to isolate how specific data influences model performance. Projects span Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) post-training, with an emphasis on measuring effects on capability, generalization, and alignment. Working closely with partner labs, the scientist turns data into clear, verifiable results: showing exactly how a dataset leads to measurable improvements under defined conditions. The work is experimental and directly shapes the value of AfterQuery's products.
What You Will Do
- Run controlled SFT and RL experiments to measure how datasets affect model outcomes.
- Quantify gains in areas like reasoning, tool use, long-horizon tasks, and specialized workflows.
- Share findings with partner labs to support sales and demonstrate value.
- Work with internal subject matter experts to improve data quality based on experimental results.
What We Look For
- Strong background in LLM training and evaluation methods.
- Curiosity about how data structure, selection, and quality shape model behavior.
- Skill in designing experiments, executing quickly, and drawing practical insights from complex results.
- Comfort working across fields such as finance, software engineering, and policy.
- Focus on real-world implementation, not just theory.
- Research experience at the undergraduate or master's level is preferred; a PhD is not required.
Compensation
$250,000 - $450,000 total compensation plus equity
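The controlled experiments described above follow one pattern: hold everything fixed except the dataset under test and compare the resulting metric. A toy sketch of that ablation logic; the trainer and its "accuracy" are simulated stand-ins, not AfterQuery's actual experiment harness:

```python
import random

def train_and_eval(dataset, seed=0):
    """Hypothetical stand-in for an SFT run: the simulated 'accuracy' rises
    with the average quality score of the training examples."""
    rng = random.Random(seed)
    quality = sum(dataset) / max(len(dataset), 1)
    return 0.5 + 0.4 * quality + rng.uniform(-0.01, 0.01)

def ablation_delta(baseline_data, candidate_data, seed=0):
    """Metric change from adding candidate_data, with the seed held fixed
    so the only difference between the two runs is the data."""
    with_data = train_and_eval(baseline_data + candidate_data, seed)
    without = train_and_eval(baseline_data, seed)
    return with_data - without

delta = ablation_delta([0.2, 0.3], [0.9, 0.95, 0.85])
print(delta)  # positive: the candidate data improved the simulated metric
```

Fixing seeds and configurations across the paired runs is what makes the measured delta attributable to the dataset rather than to training noise.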
About Retell AI
Retell AI builds voice AI technology that helps businesses transform their call center operations. In just 18 months, thousands of companies have adopted Retell's AI voice agents to streamline sales, support, and logistics, work that once required large human teams. Backed by investors including Y Combinator and Alt Capital, Retell has grown annual recurring revenue from $5M to $36M with a focused team of 20. The company's goal for 2026: a modern customer experience platform where AI powers entire contact centers. Retell is developing AI "workers" that can serve as frontline agents, quality assurance analysts, and managers, handling, evaluating, and improving customer interactions on their own.
Named a top 50 AI app by a16z: https://tinyurl.com/5853dt2x
Ranked #4 on Brex's Fast-Growing Software Vendors of 2025: https://www.brex.com/journal/brex-benchmark-december-2025
Featured on the Lean AI Leaderboard: https://leanaileaderboard.com/
Role Overview: Research Scientist – LLM
Retell AI is hiring a Research Scientist focused on large language models (LLMs) and audio processing. This role suits machine learning researchers who want to push the boundaries of real-time AI and see their work in production.
What You Will Do
- Investigate new approaches in large language models and audio processing for human-like voice agents
- Design and implement evaluation methods for complex, real-world conversational systems
- Prototype systems to improve reasoning, reduce latency, and enhance conversation quality
- Work closely with engineering and product teams to bring research advances into production
Impact
Research at Retell directly shapes the capabilities of voice AI agents for thousands of businesses. The work blends advanced research with practical deployment, improving how customers interact with automated systems across industries.
Location
This position is based in the San Francisco Bay Area.
Join the Center for AI Safety (CAIS), a premier research and advocacy institution dedicated to minimizing large-scale societal risks associated with artificial intelligence. We tackle the most pressing challenges in AI through innovative technical research, community-building initiatives, and active policy engagement, alongside our sister organization, the Center for AI Safety Action Fund.
As a Research Engineer, you will operate at the forefront of advanced machine learning research and dependable engineering practice. Your role will involve designing and executing experiments on large language models, developing the tools needed for extensive model training and evaluation, and translating findings into publishable research. You will work collaboratively with CAIS researchers and external academic and commercial partners, utilizing our compute cluster to conduct large-scale training and evaluations. Your work will focus on critical areas such as AI honesty, robustness, transparency, and the identification of trojan/backdoor behaviors, all aimed at mitigating real-world risks posed by sophisticated AI systems.
About Cartesia
At Cartesia, our vision is to create the future of artificial intelligence: intelligent systems that are seamlessly integrated into daily life. We aim to overcome current limitations by enabling models to continuously understand and analyze vast streams of audio, video, and text data, ranging from 1 billion text tokens to 1 trillion video tokens, right on your device.

Our pioneering team, comprised of PhDs from the Stanford AI Lab, has developed State Space Models (SSMs), a groundbreaking approach to training efficient, large-scale foundation models. With a rich blend of expertise in model innovation and systems engineering, alongside a product-focused engineering team, we are committed to developing and delivering cutting-edge AI models and user experiences. Supported by prominent investors including Index Ventures and Lightspeed Venture Partners, as well as many esteemed advisors and over 90 angel investors from diverse industries, we are at the forefront of AI advancements.

About the Role
In our quest to create truly global AI, we must train our models on datasets that represent the vast diversity of languages and cultures around the world. We are looking for a Research Engineer to take charge of the quality and comprehensiveness of the data that drives our models.
As our in-house expert in global data, you will ensure that our models excel across multiple languages, leveraging your keen understanding of linguistic subtleties and your enthusiasm for building inclusive, large-scale datasets.

Your Impact
Design and construct extensive datasets for model training, conducting controlled experiments to evaluate their effect on model performance.
Develop assessments for speech models through both manual annotation and automated evaluation metrics.
Utilize data generation techniques to enhance model intelligence and reduce biases.
Create automated quality control systems to validate and filter the generated data.
Collaborate with product teams to ensure optimal support for key languages and markets.

What You Bring
Proven experience in developing or working with extensive multilingual datasets.
Familiarity with generative models, including speech, text, or multimodal systems.
Ability to guide human annotation and evaluation across various languages.
Strong analytical skills and a passion for data-driven decision-making.
Full-time|$116K/yr - $170K/yr|Hybrid|Cambridge, MA USA; San Francisco, CA USA
Your Role at Lila Sciences
We are in search of a talented Machine Learning Research Engineer with a focus on LLM post-training. In this pivotal role, you will architect and oversee large-scale training systems, enhance the performance of extensive models, and incorporate state-of-the-art methodologies to boost efficiency and throughput.

Key Responsibilities
Develop Ray-based distributed training infrastructure for LLMs and multi-modal models.
Implement performance optimizations for large-scale training and optimization workflows, such as SFT, MoE training, and long-context scaling.
Manage the orchestration of leading-edge and open-source LLMs alongside intricate compute-intensive tools.
Create scalable pipelines for data preprocessing and experiment orchestration, utilizing tools for efficient data loading, pipeline parallelism, and optimizer tuning.
Establish system-level performance benchmarks and debugging utilities.
Join Cartesia: Pioneering AI Innovation
At Cartesia, we are on a mission to redefine the landscape of artificial intelligence. Our goal is to create the next generation of AI that is interactive, ubiquitous, and capable of continuous reasoning across vast streams of audio, video, and text data. With an impressive foundation built on our pioneering work in State Space Models (SSMs) at the Stanford AI Lab, our team is uniquely positioned to advance model architectures that will make on-device reasoning a reality.

Backed by prominent investors like Index Ventures and Lightspeed Venture Partners, along with a network of 90+ advisors, including top experts in AI, we are committed to pushing the boundaries of model innovation and systems engineering.

About the Role
We believe that the next significant advancement in model intelligence will stem from enhanced post-training methods and alignment strategies. As a Post-Training Researcher, you will be at the forefront of developing systems and methodologies that ensure our multimodal models are not just adaptive, but also aligned with human intentions. In this role, you will collaborate across machine learning research, alignment, and infrastructure, crafting innovative techniques for preference optimization, model evaluation, and feedback-driven learning.
You will investigate how feedback signals can enhance reasoning capabilities across various modalities while establishing the necessary infrastructure to scale and improve these processes. Your contributions will be pivotal in shaping the learning and improvement trajectory of Cartesia's foundational models, ultimately enhancing their connection with users.

Your Impact
Lead research initiatives aimed at enhancing the capabilities and alignment of multimodal models.
Create cutting-edge post-training methods and evaluation frameworks to assess model advancements.
Collaborate closely with research, product, and platform teams to establish best practices for specialized model development.
Design, debug, and scale experimental systems to ensure reliability and reproducibility throughout training cycles.
Convert research insights into production-ready systems that enhance model reasoning, consistency, and alignment with human values.
Full-time|$232K/yr - $304K/yr|On-site|San Francisco, CA USA
Your Impact at Lila Sciences Join our innovative team as a Senior Principal Software Engineer at Lila Sciences, where you will play a pivotal role in developing the next generation of our AI-driven scientific platform. As a key technical leader, you will work collaboratively with machine learning researchers, platform engineers, and scientists to build cutting-edge applications and services that facilitate AI-driven scientific processes. Your leadership will enable seamless collaboration across diverse teams. This is your chance to design a state-of-the-art, production-grade system from the ground up, utilizing advanced AI frameworks, lab automation software, and scalable cloud infrastructure. In our dynamic and fast-paced environment, your technical acumen and innovative problem-solving skills will redefine the possibilities of AI in scientific research. If you are enthusiastic about building intelligent systems that can revolutionize the scientific landscape, we invite you to apply!
Full-time|$144K/yr - $210K/yr|On-site|San Francisco, CA USA
Your Contribution at Lila Sciences
We are on the lookout for a Senior Frontend Software Engineer to join our software team and contribute to the development of our next-generation, AI-powered scientific platform. In this pivotal role, you will design, build, and optimize the systems that power intelligent, data-driven applications. Your focus will be on creating user interfaces, services, high-performance APIs, and databases, while ensuring the reliability of services that merge advanced AI frameworks with intricate scientific analytics and laboratory operations. You will collaborate closely with machine learning researchers, platform engineers, and scientists to develop systems capable of managing diverse workloads and scaling seamlessly, including structured SQL databases, data lakes, and vector databases. This is a unique opportunity to leverage your frontend and backend expertise to shape a state-of-the-art AI platform that drives real scientific advancements. If you are enthusiastic about building efficient and elegant systems, we are eager to connect with you!

Your Responsibilities
UI Design & Development: Create high-performance, secure, and well-documented user interfaces and APIs that integrate with AI-driven applications.
Database Management: Develop schemas and oversee various data systems (SQL, NoSQL, vector DBs, etc.) for optimal performance and scalability.
Application Development: Lead the implementation of both frontend and backend services with a focus on performance, maintainability, and reliability.
Performance Optimization: Identify and resolve system bottlenecks, guaranteeing high availability and low-latency performance across large-scale workloads.
Infrastructure & Cloud Services: Utilize AWS services, Kubernetes, and modern DevOps methodologies to design and deploy production-grade systems at scale.
Collaborative Innovation: Partner with ML researchers, engineers, and scientists to integrate data pipelines, APIs, and cloud infrastructure into scientific workflows.

Qualifications for Success
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 2-6 years of engineering experience building and deploying large-scale systems in production; expertise in frontend or backend development is essential.
Technical Skills: Proficiency in TypeScript, React, and Python is required.
Join Baseten as a Post-Training Research Engineer and contribute to groundbreaking advancements in machine learning and AI. In this role, you will leverage your engineering skills to analyze and enhance models post-training, ensuring optimal performance and efficiency.
Role overview OpenAI is looking for a Researcher focused on Agentic Post-Training, based in San Francisco. This role centers on analyzing and improving how AI systems behave after their initial training. The goal is to broaden the capabilities of AI and refine how models respond in complex situations. What you will do Study and assess agentic behaviors in trained AI models Create new approaches to strengthen these behaviors after training Collaborate with a talented team on projects that shape the future of artificial intelligence research Collaboration and impact This position involves hands-on research with other specialists at OpenAI. The work directly supports the advancement of AI capabilities and helps define new benchmarks for agentic performance in artificial intelligence.
Advancing Self-Improving Superintelligence
At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems, which are often rigid and brittle, our approach aims to build adaptable AI that continually evolves through experience.

Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real time.

Join our team of researchers and engineers dedicated to tackling AI's most significant challenges: creating machines that can reason, remember, and learn as humans do.

This position requires in-person attendance (no hybrid options) at our downtown San Francisco office, five days a week.
Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.
Full-time|On-site|San Francisco Bay Area (San Mateo) or Boston (Somerville)
About the Role
In machine learning, pretraining lays the foundation for a general model, while post-training refines that model, enhancing its utility, controllability, safety, and performance in real-world applications. As a Post-Training Research Scientist, you will transform large pretrained robot models into production-ready systems through methodologies such as fine-tuning, reinforcement learning, steering, human feedback, task specialization, evaluation, and on-robot validation at scale. This position offers a unique opportunity for individuals from diverse backgrounds to evolve into full-stack ML roboticists, adept at swiftly identifying challenges across machine learning and control domains. This is where innovative research converges with practical implementation.

Your Responsibilities Include:
Crafting fine-tuning and adaptation strategies tailored for specific robotic tasks and embodiments.
Developing methodologies to enhance reliability, robustness, and controllability of robotic systems.
Establishing evaluation frameworks to assess real-world robot performance beyond just offline metrics.
Collaborating with ML infrastructure teams to optimize inference-time performance, including latency, stability, and memory usage.
Utilizing advanced techniques such as imitation learning, reinforcement learning, distillation, synthetic data, and curriculum learning.
Bridging the gap between model outputs and tangible outcomes in the physical world.

You Might Excel in This Role If You:
Possess experience in fine-tuning large models for downstream applications, including RLHF, imitation learning, reinforcement learning, distillation, and domain adaptation.
Have a background in embodied AI, robotics, or real-world machine learning systems.
Demonstrate a strong commitment to evaluation, benchmarking, and failure analysis.
Are comfortable troubleshooting and debugging across the entire ML stack, from analyzing loss curves to understanding robot behavior.
Enjoy rapid iteration and thrive on real-world feedback loops.
Aspire to connect foundational models with practical deployment scenarios.

About Generalist
At Generalist, we are dedicated to realizing the vision of general-purpose robots. We envision a future where industries and homes benefit from collaborative interactions between humans and machines, enabling us to achieve more than ever before. Our focus is on building embodied foundation models, starting with dexterity, and advancing the frontiers of data, models, and hardware to empower robots to intelligently engage with their environments.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary for AI to serve their unique needs and aspirations. Our team comprises scientists, engineers, and builders who have developed some of the most widely used AI products, such as ChatGPT and Character.ai, as well as open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The Post-Training Researcher role is pivotal to our strategic vision. This position serves as the essential link between raw model intelligence and a practical, safe, and collaborative system for human users.

Our research in post-training data sits at the intersection of human insights and machine learning. By integrating human and synthetic data techniques alongside innovative methodologies, we capture the subtleties of human behavior to inform and guide our models. We investigate and model the mechanisms that create value for individuals, enabling us to articulate, predict, and enhance human preferences, behaviors, and satisfaction. Our objective is to translate research concepts into actionable data through carefully planned data labeling and collection initiatives, while also understanding the science behind high-quality data that effectively trains our models. Additionally, we develop and assess quantitative metrics to evaluate the success and impact of our data and training strategies.

Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with optimal ways for humans to supervise, guide, and collaborate with models. This interdisciplinary role merges research, data operations, and technical implementation, pushing the boundaries of aligned, human-centered AI systems.

This position combines foundational research and practical engineering, as we do not differentiate between these roles internally. You will be expected to write high-performance code and comprehend technical reports. This role is perfect for individuals who thrive on deep theoretical exploration and hands-on experimentation, eager to shape the foundational aspects of AI learning.

Note: This is an evergreen role that we maintain continuously to express interest in this research area. We receive a high volume of applications, and while there may not always be an immediate fit for your skills and experience, we encourage you to apply. We regularly review applications and reach out to candidates as new opportunities arise. You are welcome to reapply after gaining more experience, but please limit applications to once every six months. You may also notice separate postings for specific, targeted positions.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase. Role overview Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete. Collaboration Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role. Location This position is based in San Francisco.
Genmo is a pioneering research laboratory dedicated to advancing cutting-edge models for video generation, with the mission of unlocking the creative potential of Artificial General Intelligence (AGI). We invite you to be a part of our innovative team, where you can contribute to shaping the future of AI and expanding the horizons of video generation technology.

Role Overview:
We are on the lookout for a talented Research Scientist to join our dynamic team, specializing in alignment and post-training methodologies for large-scale video generation models. In this pivotal role, you will be instrumental in ensuring our diffusion-based video models consistently deliver high-quality, physically accurate, and safe outputs that align with human values and preferences.

Key Responsibilities:
Lead groundbreaking research initiatives in alignment and post-training strategies for video generation models, prioritizing enhanced quality, reliability, and alignment with human intent.
Design and implement supervised fine-tuning and reinforcement learning from human feedback (RLHF) pipelines for video generation models.
Establish robust evaluation frameworks to assess model alignment, safety, and output quality.
Create and optimize data collection pipelines for capturing human feedback and preferences.
Conduct experiments to validate alignment techniques and their scalability.
Collaborate with cross-functional teams to incorporate alignment enhancements into our production workflow.
Stay abreast of the latest developments by reviewing academic literature in generative AI and alignment.
Mentor junior researchers and promote a culture of responsible AI development.
Partner closely with product teams to ensure that alignment methods enhance model capabilities.

Qualifications:
Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field.
Demonstrated excellence with a strong publication record in top-tier conferences (e.g., NeurIPS, ICML, ICLR) focusing on reinforcement learning, alignment, or generative models.
Extensive experience implementing and optimizing large-scale training pipelines using PyTorch.
In-depth understanding of reinforcement learning techniques, especially RLHF.
Proficiency with distributed training systems and conducting large-scale experiments.
Proven ability to design and implement robust evaluation strategies for models.
Full-time|$176K/yr - $304K/yr|Hybrid|Cambridge, MA USA; San Francisco, CA USA
Your Contribution at Lila
As a Machine Learning Research Scientist I/II specializing in LLM Inference, you will spearhead research initiatives focused on the training and deployment of large language models for scientific applications.

Your Responsibilities
Develop and refine post-training strategies for LLMs, including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Reinforcement Learning with verifiers.
Design efficient inference mechanisms and compute strategies for complex tool utilization in various environments.
Create scalable evaluation metrics to assess LLM performance in scientific reasoning tasks.
Investigate the boundaries of cutting-edge LLM methodologies for scientific challenges and analyze their limitations.
About Mercor
Mercor sits at the forefront of labor markets and artificial intelligence research, collaborating with premier AI laboratories and enterprises to harness the human intelligence crucial for AI evolution. Our expansive talent network powers the training of cutting-edge AI models, much as educators impart knowledge to students, sharing insights, experiences, and contexts that transcend mere code. Currently, our network comprises over 30,000 experts, generating collective earnings exceeding $2 million daily.

At Mercor, we are pioneering a unique category of work where expertise fuels AI progress. Realizing this vision necessitates a bold, fast-paced, and deeply dedicated team. You will collaborate with researchers, operators, and AI firms at the vanguard of building systems that redefine society. A profitable Series C company, Mercor is valued at $10 billion and maintains an in-office presence five days a week at our new headquarters in San Francisco.

About the Role
As a Research Engineer at Mercor, you will operate at the intersection of engineering and applied AI research. You will play a pivotal role in post-training and Reinforcement Learning with Verifiable Rewards (RLVR), synthetic data generation, and large-scale evaluation workflows essential for advancing frontier language models. Your contributions will help train large language models to adeptly utilize tools, exhibit agentic behavior, and engage in real-world reasoning within production environments. You will be instrumental in shaping rewards, conducting post-training experiments, and constructing scalable systems to enhance model performance. Your responsibilities will also include designing and evaluating datasets, creating scalable data augmentation pipelines, and developing rubrics and evaluators that expand the learning potential of LLMs.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We strive to build a future where everyone has access to the knowledge and tools essential for making AI work effectively for their unique objectives. Our team comprises scientists, engineers, and innovators who have contributed to some of the most widely adopted AI products, including ChatGPT and Character.ai, as well as notable open-weight models like Mistral and popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The Post-Training Researcher position is pivotal to our roadmap. It serves as a crucial connection between raw model intelligence and a system that is genuinely beneficial, safe, and collaborative for human users. This role uniquely combines fundamental research with practical engineering, as we do not differentiate between these functions internally. Candidates will be expected to produce high-performance code and analyze technical reports. This position is ideal for individuals who relish both deep theoretical inquiry and hands-on experimentation, aiming to influence the foundational aspects of AI learning.

Note: This position is classified as an 'evergreen role', meaning we continuously accept applications in this research domain. Given the high volume of applications, an immediate match for your skills and experience may not always be available. However, we encourage you to apply; we regularly review submissions and reach out as new opportunities arise. You are welcome to apply again after gaining more experience, but we ask that you refrain from applying more than once every six months. Additionally, postings for specific projects or team needs may appear separately; you are welcome to apply to those directly in conjunction with this evergreen role.

What You'll Do
Develop and Optimize Recipes: Refine post-training recipes, encompassing various datasets, training stages, and hyperparameters, while assessing their impact on multiple performance metrics.
Iterate on Evaluations: Engage in a continuous process of defining evaluation metrics, optimizing them, and recognizing their limitations. You will be accountable for enhancing performance metrics and ensuring they are meaningful.
Debug and Analyze: During the fine-tuning of training configurations, you may encounter results that appear inconsistent. You will be responsible for troubleshooting and cultivating a deeper understanding to apply to subsequent challenges.
Scale and Investigate: Assess and expand the capabilities of our models while exploring potential improvements.
Full-time|$250K/yr - $450K/yr|On-site|San Francisco
About AfterQuery AfterQuery builds training data and evaluation frameworks used by leading AI labs around the world. The team partners with advanced research groups to create high-quality datasets and run detailed evaluations that go beyond standard benchmarks. As a small, post-Series A company based in San Francisco, every team member plays a key role in shaping how future AI models learn and improve. Role Overview The Post-Training Research Scientist focuses on proving the impact of AfterQuery's datasets. This work involves designing and running training experiments to isolate how specific data influences model performance. Projects span Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) post-training, with an emphasis on measuring effects on capability, generalization, and alignment. Working closely with partner labs, the scientist turns data into clear, verifiable results: showing exactly how a dataset leads to measurable improvements under defined conditions. The work is experimental and directly shapes the value of AfterQuery's products. What You Will Do Run controlled SFT and RL experiments to measure how datasets affect model outcomes. Quantify gains in areas like reasoning, tool use, long-horizon tasks, and specialized workflows. Share findings with partner labs to support sales and demonstrate value. Work with internal subject matter experts to improve data quality based on experimental results. What We Look For Strong background in LLM training and evaluation methods. Curiosity about how data structure, selection, and quality shape model behavior. Skill in designing experiments, executing quickly, and drawing practical insights from complex results. Comfort working across fields such as finance, software engineering, and policy. Focus on real-world implementation, not just theory. Research experience at the undergraduate or master's level is preferred; a PhD is not required. Compensation $250,000 - $450,000 total compensation plus equity
Jan 6, 2026