Research Engineer Specializing In Multimodal Data Scaling jobs in San Francisco – Browse 6,193 openings on RoboApply Jobs
Research Engineer Specializing In Multimodal Data Scaling jobs in San Francisco
Open roles matching “Research Engineer Specializing In Multimodal Data Scaling” with location signals for San Francisco. 6,193 active listings on RoboApply Jobs.
6,193 jobs found
Research Engineer Specializing in Multimodal Data Scaling
Experience Level
Entry Level
Qualifications
Proven experience in data engineering, research, or a related field.
Strong programming skills in languages such as Python or R.
Familiarity with machine learning frameworks and data processing tools.
Ability to work collaboratively in a fast-paced environment.
A passion for pushing the boundaries of data research.
About the job
Join worldlabs as a Research Engineer focused on scaling multimodal data. In this dynamic role, you will leverage cutting-edge technologies and methodologies to enhance data processing capabilities. You will be responsible for developing innovative solutions that integrate various data types and drive impactful research outcomes.
About worldlabs
worldlabs is a pioneering technology firm located in San Francisco, committed to leveraging data science to solve complex problems. We are dedicated to innovation and fostering a collaborative environment where creativity thrives.
Eventual Computing builds tools that help AI teams work with large, complex datasets. Based in San Francisco, the company supports projects in robotics, autonomous vehicles, and advanced video generation. Its open-source engine, Daft, is already in use at organizations with demanding data needs. The team focuses on making data curation and model training more efficient, so the right datasets are always within reach. The office is located in the Mission district, where collaboration with leading AI labs and infrastructure companies is part of daily work.
Role overview
The Research Engineer - Multimodal Data will join the Visual Understanding team. This position centers on building solutions to make vast amounts of video and sensor data accessible and easy to query. The work directly supports researchers who need to find and use specific datasets quickly.
What you will do
Develop and refine systems that process petabytes of multimodal data, including video and sensor streams.
Apply vision-language models to improve how data is discovered and retrieved.
Define and influence the roadmap for visual understanding features.
Train models to streamline large-scale data annotation and improve efficiency for research teams.
Zyphra is a cutting-edge artificial intelligence firm located in the heart of San Francisco, California, dedicated to advancing technology across various modalities.
About the Position
We are seeking a Data Engineer - Multimodal Systems to play a pivotal role in the enhancement and expansion of Zyphra's datasets and data pipelines. This position offers a unique opportunity to collaborate with diverse teams and contribute to innovative data solutions. You will engage in the collection of extensive datasets and the development and optimization of high-performance parallel data pipelines.
Your Responsibilities Will Include
Executing large-scale data collection across multiple modalities, including text, audio, and image.
Designing and implementing highly efficient, parallelized data processing pipelines that integrate various modalities.
Conducting rigorous experimental ablations to evaluate the effectiveness of new data enhancements.
Candidate Requirements
Proven ability in implementation and prototyping.
Capability to transform ideas into experimental frameworks swiftly.
Strong collaborative skills, thriving in a dynamic research environment.
Eagerness to learn and apply new concepts effectively.
Exceptional communication and teamwork skills, capable of contributing to both research and large-scale engineering projects.
Preferred Qualifications
Experience in the collection, management, and processing of large datasets.
Familiarity with parallel programming frameworks in Python, such as Dask.
In-depth understanding of state-of-the-art dataset curation practices.
A detail-oriented mindset with a passion for data integrity and verification.
Strong foundation in experimental methodologies for conducting thorough ablation studies and hypothesis testing.
Knowledge and interest in large-scale, highly parallel data processing systems.
Proficiency in PyTorch and Python.
Experience with large, complex codebases and the ability to quickly become productive within them.
Published research in respected machine learning venues.
Postgraduate degree in a relevant field is a plus.
Full-time|On-site|San Francisco (London/Europe - OK)
Tavus – Multimodal AI Model Optimization
Research Engineer
At Tavus, we are pioneering the human aspect of AI technology. Our objective is to make human-AI interactions as seamless and natural as in-person conversations, allowing for a human touch in areas that were once considered unscalable. We accomplish this through groundbreaking research in multimodal AI, focusing on human-to-human communication modeling (encompassing language, audio, and video) and the development of audio-visual avatar behaviors. Our innovative models drive applications ranging from text-to-video AI avatars to real-time conversational video experiences across sectors such as healthcare, recruitment, sales, and education. By empowering AI to perceive, listen, and engage with an authentic human-like presence, we are laying the groundwork for the next generation of AI workers, assistants, and companions.
As a Series B company, we are supported by renowned investors, including Sequoia, Y Combinator, and Scale VC. Join us as we shape the future of human-AI interaction.
The Role
We are seeking an accomplished Research Scientist/Engineer with expertise in model optimization to be a vital part of our core AI team. The ideal candidate thrives in dynamic startup environments, is adept at setting priorities independently, and is open to making calculated decisions. We are moving swiftly and need individuals who can help navigate our path forward.
Your Mission
Transform state-of-the-art research models into fast, efficient, and production-ready systems through techniques such as sparsification, distillation, and quantization.
Oversee the optimization lifecycle for critical models: establish metrics, conduct experiments, and evaluate trade-offs among latency, cost, and quality.
Collaborate closely with researchers and engineers to convert innovative concepts into deployable solutions.
Requirements
Extensive experience in deep learning with PyTorch.
Practical experience in model optimization and compression, including knowledge distillation, pruning/sparsification, quantization, and mixed precision.
Familiarity with efficient architectures such as low-rank adapters.
Strong grasp of inference performance and GPU/accelerator fundamentals.
Proficient in Python coding and adherence to best practices in research engineering.
Experience with large models and datasets in cloud environments.
Capability to read ML literature, reproduce results, and modify ideas accordingly.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
Artificial Intelligence is revolutionizing every aspect of our lives. At Scale AI, we are dedicated to accelerating the advancement of AI applications across industries. For nearly a decade, we have established ourselves as a premier AI data foundry, powering groundbreaking innovations in AI, including generative AI, defense systems, and autonomous technologies. With our recent investment from Meta, we are committed to enhancing our state-of-the-art post-training algorithms to achieve unparalleled performance for complex agents serving enterprises globally.
The Enterprise ML Research Lab is at the forefront of this AI evolution. Our team develops a suite of proprietary research, tools, and resources tailored for our enterprise clients. As a Machine Learning Research Engineer on the Data Foundation team, you will engage in pioneering research to optimize the data flywheel that drives our entire machine learning ecosystem. Your work will involve exploring synthetic environments, defining tasks, building agents for trace analysis, and contributing to a cutting-edge framework that automates agent building through advanced evaluation techniques. You will create top-tier agents that deliver state-of-the-art results by leveraging sophisticated post-training and agent-building algorithms.
If you are passionate about influencing the future of Generative AI, we encourage you to apply!
About Tavus
Tavus is at the forefront of innovation in human computing. Our mission is to develop AI Humans: an advanced interface that bridges the gap between individuals and machines, eliminating the friction found in current technologies. Our state-of-the-art human simulation models empower machines to see, hear, respond, and even exhibit realistic appearances, facilitating genuine, face-to-face interactions. AI Humans integrate the emotional insight of humans with the scalability and dependability of machines, making them reliable agents accessible 24/7, in any language, on our terms.
Imagine having access to an affordable therapist, a personal trainer that fits your schedule, or a team of medical assistants dedicated to providing personalized care for every patient. With Tavus, individuals, enterprises, and developers have the tools to create AI Humans that connect, comprehend, and act with empathy on a large scale.
We are a Series A company supported by esteemed investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in shaping a future where machines and humans genuinely understand one another.
The Position
We are seeking an AI Researcher to join our core AI team and advance the frontiers of multimodal conversational intelligence. If you excel in dynamic environments, enjoy transforming abstract concepts into functional code, and derive motivation from pushing the boundaries of possibility, this role is designed for you.
Your Responsibilities
Engage in research focusing on Foundational Multimodal Models, specifically in the realm of Conversational Avatars (such as Neural Avatars and Talking-Heads).
Develop models for video, audio, and language sequences utilizing Autoregressive and Predictive Architectures (e.g., V-JEPA) and/or Diffusion methodologies, with a focus on temporal and sequential data rather than static images.
Collaborate closely with the Applied ML team to implement your research into production systems.
Remain at the forefront of multimodal learning and assist us in defining what “cutting edge” will mean in the future.
Ideal Candidate Profile
PhD (or nearing completion) in a relevant field, or equivalent practical research experience.
Experience in multimodal machine learning, particularly focused on conversational interfaces.
Bland Inc. seeks a Machine Learning Researcher specializing in Multimodal Large Language Models (LLMs) to join the team in San Francisco. The focus is on advancing AI systems that integrate language with other types of data.
Role overview
This position centers on research and development aimed at improving how AI models process and understand information from multiple sources, such as text combined with images or other modalities.
What you will do
Investigate how language interacts with additional data types within multimodal LLMs
Create and evaluate new methods to enhance AI model performance
Work closely with colleagues on projects designed to push the boundaries of machine learning
Location
This role is based in San Francisco.
About Us
Tavus is an innovative research lab at the forefront of human computing technology. Our mission is to create AI Humans: advanced interfaces that bridge the gap between individuals and machines, eliminating the friction found in current systems. Our real-time human simulation models empower machines to see, hear, respond, and appear realistic, facilitating genuine, face-to-face conversations. With AI Humans, we blend the emotional intelligence inherent in humans with the extensive reach and reliability of machines, enabling them to serve as capable and trusted agents available 24/7, capable of communicating in any language.
Envision a therapist accessible to everyone, a personal trainer that tailors sessions to your schedule, or a fleet of medical assistants dedicated to providing personalized attention to every patient. Tavus enables individuals, enterprises, and developers to create AI Humans that connect, empathize, and act with understanding on a large scale.
Backed by prestigious investors like Sequoia Capital, Y Combinator, and Scale Venture Partners, we are a Series A company ready to shape the future of human-machine interaction. Join us in transforming a future where humans and machines genuinely comprehend one another.
The Role
We are seeking a passionate AI Researcher to join our core AI team and advance the science of audio-visual avatar generation. If you thrive in dynamic startup environments, enjoy experimenting with generative models, and are excited to see your research translated into production, you will find a welcoming home here.
Your Mission
Conduct research and develop cutting-edge audio-visual generation models for conversational agents (e.g., Neural Avatars, Talking Heads).
Focus on models that intricately align with conversation flows, ensuring seamless integration of verbal and non-verbal cues.
Experiment with diffusion models (DDPMs, LDMs, etc.), long-video generation, and audio synthesis.
Collaborate closely with the Applied ML team to transition your research into practical applications.
Stay updated on the latest breakthroughs in multimodal generation and contribute to the evolution of this field.
You Will Excel If You Have
A PhD (or nearing completion) in a relevant discipline, or equivalent hands-on research experience.
Proficiency in applying image/video generation techniques and a solid understanding of machine learning principles.
About Our Team
At OpenAI, we are dedicated to ensuring that artificial general intelligence (AGI) serves the greater good of humanity. Our API has emerged as the most widely embraced AI platform in the industry, catering to a diverse clientele ranging from startups and independent developers to Fortune 500 companies. By leveraging our multimodal APIs, which encompass real-time interactions, text-to-speech (TTS), speech generation, and image creation, we empower users to effectively harness the full spectrum of AI capabilities at scale.
About the Role
We are on the lookout for an Engineering Manager to spearhead our multimodal API product suite. In this pivotal role, you will guide a talented team focused on delivering cutting-edge APIs for real-time processing, speech transcription, speech generation, and image creation. You will be instrumental in shaping the product roadmap and developing the tools that enable developers to connect with millions of end users through AI-driven audio, video, and imagery.
In this role, you will:
Lead, mentor, and cultivate a high-performing engineering team dedicated to multimodal API products, including our real-time API, Whisper transcription models, TTS speech generation models, and DALL·E image generation APIs.
Collaborate with product managers, designers, and various stakeholders to articulate the strategic vision and product roadmap.
Work alongside our research teams to enhance our core multimodal models tailored for API customer use cases.
Steer technical and architectural decisions with a focus on scalability, robustness, and user experience.
Promote a culture of innovation, continuous improvement, and accountability within your team.
Qualifications
Demonstrated experience in managing engineering teams that successfully deliver complex, high-quality products at scale.
Strong technical expertise with proficiency in modern software engineering practices and system architecture.
Exceptional collaboration and communication skills to effectively engage with diverse teams and stakeholders.
Familiarity with or a strong passion for multimodal AI, encompassing speech technologies, real-time systems, and image generation.
Adept at thriving in a fast-paced, dynamic startup environment.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
Artificial Intelligence (AI) is becoming increasingly crucial across all sectors of society. At Scale AI, our mission is to expedite the advancement of AI applications. With nine years of experience, we have established ourselves as the leading AI data foundry, facilitating groundbreaking developments in AI, including generative AI, defense applications, and autonomous vehicles. Following our recent investment from Meta, we are committed to enhancing our capabilities by developing cutting-edge post-training algorithms that are essential for optimizing complex agents in enterprises globally.
The Enterprise ML Research Lab is at the forefront of this AI revolution. We are dedicated to crafting a suite of proprietary research tools and resources that cater to all of our enterprise clients. As a Machine Learning Research Engineer focusing on Agents, you will apply our Agent Reinforcement Learning (RL) training and building algorithms to real-world enterprise datasets across our clients and benchmarks. Your role will involve developing top-tier Agents that achieve state-of-the-art results through a blend of post-training and agent-building algorithms.
If you are passionate about influencing the trajectory of the modern Generative AI movement, we would love to hear from you!
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary to make AI work for their unique objectives. Comprising a team of scientists, engineers, and innovators, we have developed some of the most widely employed AI products, including ChatGPT and Character.ai, as well as open-weight models such as Mistral and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
At Thinking Machines, we prioritize a multimodal-first approach. We are seeking new team members to push the boundaries of visual perception and multimodal learning. Our focus is on understanding the interplay between vision and language at scale. We design innovative architectures that integrate pixels and text, create datasets and evaluation methods that assess real-world comprehension, and develop representations that enable models to connect abstract concepts with the physical world. Our aim is to build multimodal systems that seamlessly integrate into real-world applications.
Your work will be at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You will contribute to the development of architectures, data, and evaluation tools that teach AI to perceive, comprehend, and collaborate effectively. The ideal candidate is inquisitive about multimodal interfaces, possesses experience in conducting large-scale experiments, and is adept at contributing to complex engineering systems. While we seek individuals with expertise in multimodality, our collaborative environment encourages all new hires to work across modalities as a unified team.
This role merges foundational research with practical engineering, since we do not differentiate between these roles internally. You will be expected to write high-performance code and analyze technical reports. This position is perfect for someone who enjoys both deep theoretical inquiry and hands-on experimentation and is eager to influence the foundational aspects of AI learning.
Note: This is an "evergreen role" that we keep open continuously to express interest in this research area. We receive a high volume of applications, and there may not always be an immediate position that perfectly matches your experience and skills. We encourage you to apply regardless. Applications are reviewed regularly, and we reach out to candidates as new opportunities arise. You are welcome to reapply as your experience grows, but please refrain from applying more than once every six months. Additionally, we may post specific roles for particular project or team needs; you are welcome to apply to those directly in addition to this evergreen role.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robotic assistant for every household. Our dynamic team, composed of talented engineers, designers, and operators, is based in San Francisco. We have a rich background from industry leaders such as Tesla, Cruise, OpenAI, Google, and Pixar, and we have successfully delivered products to hundreds of millions of users, honing our ability to create exceptional products and experiences.
We pride ourselves on maintaining a streamlined team structure that fosters swift decision-making and minimizes bureaucracy. Each member is considered an Individual Contributor, granted substantial autonomy, ownership, and accountability. Our culture enables us to work across the technology stack with an emphasis on rapid iteration and execution.
What We Seek in Candidates
Candidates for all positions at The Bot Company must exhibit remarkable sharpness and the capacity to thrive in high-pressure environments. We expect candidates to showcase:
Exceptional Cognitive Abilities: You possess quick thinking, instant learning capabilities, and the ability to reason across diverse domains.
Engineering Curiosity: You demonstrate an innate desire to understand how systems function, even beyond your area of expertise.
Performance-Driven Attitude: You excel in fast-paced settings, effectively navigate ambiguity, and thrive under demanding circumstances.
Machine Learning: Multimodal Foundation Models
We are developing unified foundation models capable of reasoning across text, images, video, and kinematics to inform intelligent robotic behaviors. You will engage with large-scale multimodal networks, overseeing the complete process from data handling to model training and deployment.
Your Responsibilities
Construct Native Multimodal Policies: Create architectures where vision, language, and other modalities are represented in a unified manner.
Enhance Cross-Modal Reasoning: Explore and implement strategies to ensure that the model not only correlates modalities but also comprehends them (e.g., linking visual physics to kinematic constraints).
Manage the Training Loop from Start to Finish: Design, execute, troubleshoot, and refine large-scale training experiments; identify failure points, enhance data mixtures, and tighten evaluations to achieve measurable improvements.
Deploy and Refine Real Systems: Integrate models into practical robotic frameworks, enhance robot code for model deployment, and optimize performance for edge inference.
Role overview
The Principal Research Scientist – Scaling at Databricks leads research projects that advance how the company’s data analytics platform handles large workloads. This San Francisco-based role focuses on designing and improving algorithms that enable efficient large-scale data processing and machine learning. Collaboration is central, with regular work alongside engineering, product, and research teams.
What you will do
Lead research to develop algorithms that scale for data analytics applications.
Work with colleagues across engineering, product, and research to strengthen machine learning capabilities.
Use deep expertise to shape the direction and architecture of the Databricks platform.
Drive new ideas and solutions that influence the future of data science and analytics at Databricks.
Location
This role is based in San Francisco, California.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.
At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.
If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
As AI continues to play a crucial role across various sectors, Scale AI is committed to accelerating the evolution of AI applications. For nearly a decade, we have been at the forefront of AI data solutions, driving significant innovations such as generative AI, defense technologies, and autonomous vehicles. With recent funding from Meta, we are intensifying our efforts to develop cutting-edge post-training algorithms essential for enhancing the performance of complex enterprise agents globally.
The Enterprise ML Research Lab is at the forefront of this AI transformation. Our team is dedicated to creating a suite of proprietary research and resources tailored for our enterprise clientele. As a Machine Learning Systems Research Engineer, you will play a pivotal role in developing algorithms for our next-generation Agent Reinforcement Learning (RL) training platform, support large-scale training operations, and integrate state-of-the-art technologies to optimize our machine learning systems. You will collaborate with other ML Research Engineers and AI Architects on the Enterprise AI team to apply these training algorithms to various client use cases, from next-gen AI cybersecurity firewalls to foundational healthtech search models.
If you are passionate about shaping the future of AI, we want to hear from you!
About Hike Medical Hike Medical is building the future of musculoskeletal care by combining advanced technology with practical healthcare solutions. Based in San Francisco’s Rincon Hill, the team develops a platform that spans three core areas: an AI-powered vision system for rapid web-based foot scans that generate custom 3D-printed orthotics, an AI agent platform that manages the entire DME workflow from intake through claims, and SoleForge, a high-scale 3D printing facility for custom medical devices. Hike Medical partners with some of the world’s largest employers and major orthotics and prosthetics organizations. Fortune 50 companies trust the platform to support employee well-being, and a broad network of clinical partners keeps the company connected to real-world needs. Custom insoles are just the starting point. The long-term goal is to reshape the industry with bionic devices: AI-designed, robotically manufactured orthotic and prosthetic products. The company aims to reach this milestone by 2040. Learn more at bionics2040.com. With $22 million raised across Seed and Series A rounds from leading investors, Hike Medical offers a results-oriented culture for those interested in the intersection of AI, manufacturing, and healthcare.
Join VOLT, a trailblazer in crafting advanced AI perception systems that enhance safety and security through real-time risk detection in the physical world.
We are on the lookout for a Senior Applied AI & Machine Learning Engineer dedicated to designing, optimizing, and deploying multimodal AI models capable of functioning reliably in diverse real-world scenarios. This is a hands-on role focused on transitioning models from conceptual data to practical production, encompassing both edge devices and cloud infrastructures.
In this position, you will engage with vision, video, and language-based models that interpret real-world scenes and events, ensuring their accuracy, latency, robustness, and cost-effectiveness in production systems.
Reporting directly to the Head of Engineering, you will play a pivotal role in advancing VOLT AI’s core perception platform.
About Our Team
At OpenAI, we are dedicated to ensuring that artificial general intelligence (AGI) serves and benefits all of humanity. A vital component of this mission involves developing models that genuinely understand and resonate with human preferences. Our Human Data team is instrumental in making this a reality.
The Human Data engineering team is responsible for creating sophisticated systems that facilitate scalable and high-quality human feedback, which is crucial for training and refining OpenAI's most advanced models. Our engineers work in close collaboration with top-tier researchers to implement alignment techniques, from initial experimental concepts to production-ready feedback loops.
Position Overview
We are seeking passionate software engineers to become part of the Human Data team, tasked with developing the platforms, prototypes, tools, and infrastructure essential for training, aligning, and evaluating our AI models. In this role, you will collaborate with researchers and cross-functional teams to actualize alignment concepts, influence the training of future models, and enhance how our models engage with the real world.
We are looking for individuals who thrive on technical ownership, enjoy working across the stack, and are eager to tackle complex challenges in a dynamic, impactful environment.
This position is based in San Francisco, CA, and follows a hybrid work model of three days in the office each week. We also provide relocation assistance for new hires.
Your Responsibilities
Develop and maintain robust full-stack systems for feedback collection, data labeling, and evaluation pipelines while ensuring high levels of security.
Convert experimental alignment research into scalable production infrastructure, including inference and model training systems.
Design and enhance user-facing tools and backend services to support high-quality data workflows.
Collaborate with researchers, engineers, and program leads to refine feedback loops and model interaction strategies.
Lead infrastructure improvements that promote faster iterations and scaling across OpenAI’s cutting-edge models, from internal research tools to production-level ChatGPT.
Qualifications
Proven software engineering skills with experience in building scalable production systems.
A strong preference for full-stack development with end-to-end ownership, from backend pipelines to user interfaces.
Driven by high-impact projects and capable of navigating ambiguous challenges.
Full-time|$251.7K/yr - $330K/yr|On-site|San Francisco Bay Area, CA
Our Mission
At Altos Labs, we are dedicated to restoring cell health and resilience through innovative cell rejuvenation techniques aimed at reversing diseases, injuries, and disabilities that can arise throughout life. For further insights, please visit our website at altoslabs.com.

Our Value
Our singular Altos Value is: Everyone Owns Achieving Our Inspiring Mission.

Diversity at Altos
We firmly believe that diverse perspectives are crucial for scientific innovation. At Altos, exceptional scientists and industry leaders collaborate globally to further our shared mission. We prioritize Belonging, ensuring all employees feel valued for their unique perspectives, and we hold ourselves accountable for maintaining a diverse and inclusive environment.

Your Contributions to Altos
As a member of our team, you will accelerate and enhance our efforts in developing unified, multi-modal generative foundation models tailored for multiscale biology. You will be a key player in multidisciplinary teams that create the computational platforms essential for Altos to fulfill its mission. In this position, you will collaborate with other scientists and engineers across the Institute of Computation to design, develop, and scale cutting-edge foundation models that address biological inquiries and assist in discovering novel interventions for aging and disease. Your focus will be on synthesizing unstructured multimodal signals with structured relational data and knowledge graphs that depict biological realities. The ideal candidate will excel in a dynamic environment that values teamwork, transparency, scientific excellence, originality, and integrity.
Join worldlabs as a Research Engineer focused on scaling multimodal data. In this dynamic role, you will leverage cutting-edge technologies and methodologies to enhance data processing capabilities. You will be responsible for developing innovative solutions that integrate various data types and drive impactful research outcomes.
Eventual Computing builds tools that help AI teams work with large, complex datasets. Based in San Francisco, the company supports projects in robotics, autonomous vehicles, and advanced video generation. Its open-source engine, Daft, is already in use at organizations with demanding data needs. The team focuses on making data curation and model training more efficient, so the right datasets are always within reach. The office is located in the Mission district, where collaboration with leading AI labs and infrastructure companies is part of daily work.

Role overview
The Research Engineer - Multimodal Data will join the Visual Understanding team. This position centers on building solutions that make vast amounts of video and sensor data accessible and easy to query. The work directly supports researchers who need to find and use specific datasets quickly.

What you will do
- Develop and refine systems that process petabytes of multimodal data, including video and sensor streams.
- Apply vision-language models to improve how data is discovered and retrieved.
- Define and influence the roadmap for visual understanding features.
- Train models to streamline large-scale data annotation and improve efficiency for research teams.
Zyphra is a cutting-edge artificial intelligence firm located in the heart of San Francisco, California, dedicated to advancing technology across various modalities.

About the Position:
We are seeking a Data Engineer - Multimodal Systems to play a pivotal role in the enhancement and expansion of Zyphra's datasets and data pipelines. This position offers a unique opportunity to collaborate with diverse teams and contribute to innovative data solutions. You will engage in the collection of extensive datasets and the development and optimization of high-performance parallel data pipelines.

Your Responsibilities Will Include:
- Executing large-scale data collection across multiple modalities, including text, audio, and image.
- Designing and implementing highly efficient, parallelized data processing pipelines that integrate various modalities.
- Conducting rigorous experimental ablations to evaluate the effectiveness of new data enhancements.

Candidate Requirements:
- Proven ability in implementation and prototyping.
- Capability to transform ideas into experimental frameworks swiftly.
- Strong collaborative skills; thrives in a dynamic research environment.
- Eagerness to learn and apply new concepts effectively.
- Exceptional communication and teamwork skills, capable of contributing to both research and large-scale engineering projects.

Preferred Qualifications:
- Experience in the collection, management, and processing of large datasets.
- Familiarity with parallel programming frameworks in Python, such as Dask.
- In-depth understanding of state-of-the-art dataset curation practices.
- A detail-oriented mindset with a passion for data integrity and verification.
- Strong foundation in experimental methodologies for conducting thorough ablation studies and hypothesis testing.
- Knowledge of, and interest in, large-scale, highly parallel data processing systems.
- Proficiency in PyTorch and Python.
- Experience with large, complex codebases and the ability to become productive within them quickly.
- Published research in respected machine learning venues.
- A postgraduate degree in a relevant field is a plus.
Full-time|On-site|San Francisco (London/Europe - OK)
Tavus – Multimodal AI Model Optimization
Research Engineer

At Tavus, we are pioneering the human aspect of AI technology. Our objective is to make human-AI interactions as seamless and natural as in-person conversations, allowing for a human touch in areas that were once considered unscalable. We accomplish this through groundbreaking research in multimodal AI, focusing on human-to-human communication modeling (encompassing language, audio, and video) and the development of audio-visual avatar behaviors. Our models drive applications ranging from text-to-video AI avatars to real-time conversational video experiences across sectors such as healthcare, recruitment, sales, and education. By empowering AI to perceive, listen, and engage with an authentic human-like presence, we are laying the groundwork for the next generation of AI workers, assistants, and companions. As a Series B company, we are supported by renowned investors, including Sequoia, Y Combinator, and Scale VC. Join us as we shape the future of human-AI interaction.

The Role
We are seeking an accomplished Research Scientist/Engineer with expertise in model optimization to be a vital part of our core AI team. The ideal candidate thrives in dynamic startup environments, sets priorities independently, and is open to making calculated decisions. We are moving swiftly and need individuals who can help navigate our path forward.

Your Mission
- Transform state-of-the-art research models into fast, efficient, and production-ready systems through techniques such as sparsification, distillation, and quantization.
- Oversee the optimization lifecycle for critical models: establish metrics, conduct experiments, and evaluate trade-offs among latency, cost, and quality.
- Collaborate closely with researchers and engineers to convert innovative concepts into deployable solutions.

Requirements
- Extensive experience in deep learning with PyTorch.
- Practical experience in model optimization and compression, including knowledge distillation, pruning/sparsification, quantization, and mixed precision.
- Familiarity with efficient architectures such as low-rank adapters.
- Strong grasp of inference performance and GPU/accelerator fundamentals.
- Proficient Python coding and adherence to best practices in research engineering.
- Experience with large models and datasets in cloud environments.
- Ability to read ML literature, reproduce results, and adapt ideas accordingly.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
Artificial intelligence is revolutionizing every aspect of our lives. At Scale AI, we are dedicated to accelerating the advancement of AI applications across industries. For nearly a decade, we have established ourselves as a premier AI data foundry, powering groundbreaking innovations in AI, including generative AI, defense systems, and autonomous technologies. With our recent investment from Meta, we are committed to enhancing our state-of-the-art post-training algorithms to achieve unparalleled performance for complex agents serving enterprises globally.

The Enterprise ML Research Lab is at the forefront of this AI evolution. Our team develops a suite of proprietary research, tools, and resources tailored for our enterprise clients. As a Machine Learning Research Engineer on the Data Foundation team, you will engage in pioneering research to optimize the data flywheel that drives our entire machine learning ecosystem. Your work will involve exploring synthetic environments, defining tasks, building agents for trace analysis, and contributing to a cutting-edge framework that automates agent building through advanced evaluation techniques. You will create top-tier agents that deliver state-of-the-art results by leveraging sophisticated post-training and agent-building algorithms. If you are passionate about influencing the future of generative AI, we encourage you to apply!
About Tavus
Tavus is at the forefront of innovation in human computing. Our mission is to develop AI Humans: an advanced interface that bridges the gap between individuals and machines, eliminating the friction found in current technologies. Our state-of-the-art human simulation models empower machines to see, hear, respond, and even exhibit realistic appearances, facilitating genuine, face-to-face interactions. AI Humans integrate the emotional insight of humans with the scalability and dependability of machines, making them reliable agents accessible 24/7, in any language, on our terms. Imagine having access to an affordable therapist, a personal trainer that fits your schedule, or a team of medical assistants dedicated to providing personalized care for every patient. With Tavus, individuals, enterprises, and developers have the tools to create AI Humans that connect, comprehend, and act with empathy on a large scale. We are a Series A company supported by esteemed investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in shaping a future where machines and humans genuinely understand one another.

The Position
We are seeking an AI Researcher to join our core AI team and advance the frontiers of multimodal conversational intelligence. If you excel in dynamic environments, enjoy transforming abstract concepts into functional code, and derive motivation from pushing the boundaries of possibility, this role is designed for you.

Your Responsibilities
- Engage in research on foundational multimodal models, specifically in the realm of conversational avatars (such as neural avatars and talking heads).
- Develop models for video, audio, and language sequences utilizing autoregressive and predictive architectures (e.g., V-JEPA) and/or diffusion methodologies, with a focus on temporal and sequential data rather than static images.
- Collaborate closely with the Applied ML team to implement your research in production systems.
- Remain at the forefront of multimodal learning and help us define what "cutting edge" will mean in the future.

Ideal Candidate Profile
- PhD (or nearing completion) in a relevant field, or equivalent practical research experience.
- Experience in multimodal machine learning, particularly focused on conversational interfaces.
Bland Inc. seeks a Machine Learning Researcher specializing in multimodal large language models (LLMs) to join the team in San Francisco. The focus is on advancing AI systems that integrate language with other types of data.

Role overview
This position centers on research and development aimed at improving how AI models process and understand information from multiple sources, such as text combined with images or other modalities.

What you will do
- Investigate how language interacts with additional data types within multimodal LLMs.
- Create and evaluate new methods to enhance AI model performance.
- Work closely with colleagues on projects designed to push the boundaries of machine learning.

Location
This role is based in San Francisco.
About Us
Tavus is an innovative research lab at the forefront of human computing technology. Our mission is to create AI Humans: advanced interfaces that bridge the gap between individuals and machines, eliminating the friction found in current systems. Our real-time human simulation models empower machines to see, hear, respond, and appear realistic, facilitating genuine, face-to-face conversations. With AI Humans, we blend the emotional intelligence inherent in humans with the extensive reach and reliability of machines, enabling them to serve as capable and trusted agents available 24/7 and able to communicate in any language. Envision a therapist accessible to everyone, a personal trainer that tailors sessions to your schedule, or a fleet of medical assistants dedicated to providing personalized attention to every patient. Tavus enables individuals, enterprises, and developers to create AI Humans that connect, empathize, and act with understanding on a large scale. Backed by prestigious investors like Sequoia Capital, Y Combinator, and Scale Venture Partners, we are a Series A company ready to shape the future of human-machine interaction.

The Role
We are seeking a passionate AI Researcher to join our core AI team and advance the science of audio-visual avatar generation. If you thrive in dynamic startup environments, enjoy experimenting with generative models, and are excited to see your research translated into production, you will find a welcoming home here.

Your Mission
- Conduct research and develop cutting-edge audio-visual generation models for conversational agents (e.g., neural avatars, talking heads).
- Focus on models that align closely with conversation flows, ensuring seamless integration of verbal and non-verbal cues.
- Experiment with diffusion models (DDPMs, LDMs, etc.), long-video generation, and audio synthesis.
- Collaborate closely with the Applied ML team to transition your research into practical applications.
- Stay updated on the latest breakthroughs in multimodal generation and contribute to the evolution of the field.

You Will Excel If You Have:
- A PhD (or nearing completion) in a relevant discipline, or equivalent hands-on research experience.
- Proficiency in applying image/video generation techniques and a solid understanding of machine learning principles.
About Our Team:
At OpenAI, we are dedicated to ensuring that artificial general intelligence (AGI) serves the greater good of humanity. Our API has become one of the most widely adopted AI platforms in the industry, serving a diverse clientele ranging from startups and independent developers to Fortune 500 companies. Through our multimodal APIs, which encompass real-time interactions, text-to-speech (TTS), speech generation, and image creation, we empower users to harness the full spectrum of AI capabilities at scale.

About the Role:
We are looking for an Engineering Manager to lead our multimodal API product suite. In this pivotal role, you will guide a talented team focused on delivering cutting-edge APIs for real-time processing, speech transcription, speech generation, and image creation. You will be instrumental in shaping the product roadmap and developing the tools that enable developers to reach millions of end users through AI-driven audio, video, and imagery.

In this role, you will:
- Lead, mentor, and cultivate a high-performing engineering team dedicated to multimodal API products, including our real-time API, Whisper transcription models, TTS speech generation models, and DALL·E image generation APIs.
- Collaborate with product managers, designers, and other stakeholders to articulate the strategic vision and product roadmap.
- Work alongside our research teams to enhance our core multimodal models for API customer use cases.
- Steer technical and architectural decisions with a focus on scalability, robustness, and user experience.
- Promote a culture of innovation, continuous improvement, and accountability within your team.

Qualifications:
- Demonstrated experience managing engineering teams that successfully deliver complex, high-quality products at scale.
- Strong technical expertise and proficiency in modern software engineering practices and system architecture.
- Exceptional collaboration and communication skills for engaging effectively with diverse teams and stakeholders.
- Familiarity with, or a strong passion for, multimodal AI, encompassing speech technologies, real-time systems, and image generation.
- Adept at thriving in a fast-paced, dynamic startup environment.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
Artificial intelligence (AI) is becoming increasingly crucial across all sectors of society. At Scale AI, our mission is to expedite the advancement of AI applications. With nine years of experience, we have established ourselves as the leading AI data foundry, facilitating groundbreaking developments in AI, including generative AI, defense applications, and autonomous vehicles. Following our recent investment from Meta, we are committed to developing the cutting-edge post-training algorithms that are essential for optimizing complex agents in enterprises globally.

The Enterprise ML Research Lab is at the forefront of this AI revolution. We are dedicated to crafting a suite of proprietary research tools and resources for our enterprise clients. As a Machine Learning Research Engineer focusing on Agents, you will apply our Agent Reinforcement Learning (RL) training and agent-building algorithms to real-world enterprise datasets across our clients and benchmarks. Your role will involve developing top-tier agents that achieve state-of-the-art results through a blend of post-training and agent-building algorithms. If you are passionate about influencing the trajectory of the modern generative AI movement, we would love to hear from you!
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary to make AI work for their unique objectives. Our team of scientists, engineers, and innovators has developed some of the most widely used AI products, including ChatGPT and Character.ai, open-weight models such as Mistral, and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
At Thinking Machines, we prioritize a multimodal-first approach. We are seeking new team members to push the boundaries of visual perception and multimodal learning, with a focus on understanding the interplay between vision and language at scale. We design innovative architectures that integrate pixels and text, create datasets and evaluation methods that assess real-world comprehension, and develop representations that enable models to connect abstract concepts with the physical world. Our aim is to build multimodal systems that integrate seamlessly into real-world applications.

Your work will sit at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You will contribute to the development of the architectures, data, and evaluation tools that teach AI to perceive, comprehend, and collaborate effectively. The ideal candidate is curious about multimodal interfaces, has experience conducting large-scale experiments, and is adept at contributing to complex engineering systems. While we seek individuals with expertise in multimodality, our collaborative environment encourages all new hires to work across modalities as a unified team.

This role merges foundational research with practical engineering, since we do not differentiate between these roles internally. You will be expected to write high-performance code and analyze technical reports. This position is ideal for someone who enjoys both deep theoretical inquiry and hands-on experimentation and is eager to shape the foundational aspects of how AI learns.

Note: This is an "evergreen" role that we keep open continuously for candidates interested in this research area. We receive a high volume of applications, and there may not always be an immediate position that matches your experience and skills, but we encourage you to apply regardless. Applications are reviewed regularly, and we reach out to candidates as new opportunities arise. You are welcome to reapply as your experience grows, but please wait at least six months between applications. We may also post specific roles for particular project or team needs; you are welcome to apply to those directly in addition to this evergreen role.
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats. In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
The Bot Company
At The Bot Company, we are on a mission to create an innovative robotic assistant for every household. Our dynamic team, composed of talented engineers, designers, and operators, is based in San Francisco. We have a rich background from industry leaders such as Tesla, Cruise, OpenAI, Google, and Pixar, and we have successfully delivered products to hundreds of millions of users, honing our ability to create exceptional products and experiences. We pride ourselves on maintaining a streamlined team structure that fosters swift decision-making and minimizes bureaucracy. Each member is considered an individual contributor, granted substantial autonomy, ownership, and accountability. Our culture enables us to work across the technology stack with an emphasis on rapid iteration and execution.

What We Seek in Candidates
Candidates for all positions at The Bot Company must exhibit remarkable sharpness and the capacity to thrive in high-pressure environments. We expect candidates to showcase:
- Exceptional cognitive abilities: quick thinking, rapid learning, and the ability to reason across diverse domains.
- Engineering curiosity: an innate desire to understand how systems function, even beyond your area of expertise.
- A performance-driven attitude: you excel in fast-paced settings, effectively navigate ambiguity, and thrive under demanding circumstances.

Machine Learning: Multimodal Foundation Models
We are developing unified foundation models capable of reasoning across text, images, video, and kinematics to inform intelligent robotic behaviors. You will engage with large-scale multimodal networks, overseeing the complete process from data handling to model training and deployment.

Your Responsibilities
- Construct native multimodal policies: create architectures where vision, language, and other modalities are represented in a unified manner.
- Enhance cross-modal reasoning: explore and implement strategies to ensure that the model not only correlates modalities but also comprehends them (e.g., linking visual physics to kinematic constraints).
- Manage the training loop end to end: design, execute, troubleshoot, and refine large-scale training experiments; identify failure points, enhance data mixtures, and tighten evaluations to achieve measurable improvements.
- Deploy and refine real systems: integrate models into practical robotic frameworks, enhance robot code for model deployment, and optimize performance for edge inference.
Role overview
The Principal Research Scientist – Scaling at Databricks leads research projects that advance how the company's data analytics platform handles large workloads. This San Francisco-based role focuses on designing and improving algorithms that enable efficient large-scale data processing and machine learning. Collaboration is central, with regular work alongside engineering, product, and research teams.

What you will do
- Lead research to develop algorithms that scale for data analytics applications.
- Work with colleagues across engineering, product, and research to strengthen machine learning capabilities.
- Use deep expertise to shape the direction and architecture of the Databricks platform.
- Drive new ideas and solutions that influence the future of data science and analytics at Databricks.

Location
This role is based in San Francisco, California.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid, automated training and evaluation of LLMs and data quality.

At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation. If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; New York, NY
As AI continues to play a crucial role across various sectors, Scale AI is committed to accelerating the evolution of AI applications. For nearly a decade, we have been at the forefront of AI data solutions, driving significant innovations such as generative AI, defense technologies, and autonomous vehicles. With recent funding from Meta, we are intensifying our efforts to develop cutting-edge post-training algorithms essential for enhancing the performance of complex enterprise agents globally.

The Enterprise ML Research Lab leads this AI transformation. Our team is dedicated to creating a suite of proprietary research and resources tailored for our enterprise clientele. As a Machine Learning Systems Research Engineer, you will play a pivotal role in developing algorithms for our next-generation Agent Reinforcement Learning (RL) training platform, support large-scale training operations, and integrate state-of-the-art technologies to optimize our machine learning systems. You will collaborate with other ML Research Engineers and AI Architects on the Enterprise AI team to apply these training algorithms to various client use cases, from next-gen AI cybersecurity firewalls to foundational healthtech search models. If you are passionate about shaping the future of AI, we want to hear from you!
About Hike Medical
Hike Medical is building the future of musculoskeletal care by combining advanced technology with practical healthcare solutions. Based in San Francisco's Rincon Hill, the team develops a platform that spans three core areas: an AI-powered vision system for rapid web-based foot scans that generate custom 3D-printed orthotics, an AI agent platform that manages the entire DME workflow from intake through claims, and SoleForge, a high-scale 3D printing facility for custom medical devices.

Hike Medical partners with some of the world's largest employers and major orthotics and prosthetics organizations. Fortune 50 companies trust the platform to support employee well-being, and a broad network of clinical partners keeps the company connected to real-world needs. Custom insoles are just the starting point. The long-term goal is to reshape the industry with bionic devices: AI-designed, robotically manufactured orthotic and prosthetic products. The company aims to reach this milestone by 2040. Learn more at bionics2040.com. With $22 million raised across Seed and Series A rounds from leading investors, Hike Medical offers a results-oriented culture for those interested in the intersection of AI, manufacturing, and healthcare.
Join VOLT, a trailblazer in advanced AI perception systems that enhance safety and security through real-time risk detection in the physical world.

We are seeking a Senior Applied AI & Machine Learning Engineer to design, optimize, and deploy multimodal AI models capable of functioning reliably in diverse real-world scenarios. This is a hands-on role focused on taking models from concept and data to production, spanning both edge devices and cloud infrastructure. In this position, you will work with vision-, video-, and language-based models that interpret real-world scenes and events, ensuring their accuracy, latency, robustness, and cost-effectiveness in production systems. Reporting directly to the Head of Engineering, you will play a pivotal role in advancing VOLT AI's core perception platform.
Feb 19, 2026