Anthropic
Remote-Friendly (Travel Required) | San Francisco, CA | New York City, NY
Remote | Full-time
Experience Level
Entry Level
Qualifications
Strong background in machine learning and AI model evaluation techniques
Experience with programming languages such as Python
Ability to work collaboratively in a remote team environment
Excellent analytical and problem-solving skills
About the job
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.
Key responsibilities
Design and implement evaluations for Anthropic's AI models
Collaborate with team members to enhance model performance
Contribute to research that pushes the boundaries of AI systems
Location
Remote-friendly (travel required)
San Francisco, CA
New York City, NY
About Anthropic
Anthropic is at the forefront of AI research, dedicated to developing safe and beneficial AI technologies. Our mission is to ensure that AI systems are aligned with human intentions and values. We foster a collaborative and innovative environment that encourages curiosity and creativity.
Similar jobs
About Tavus
Tavus is at the forefront of innovation in human computing. Our mission is to develop AI Humans: an advanced interface that bridges the gap between individuals and machines, eliminating the friction found in current technologies. Our state-of-the-art human simulation models empower machines to see, hear, respond, and even exhibit realistic appearances, facilitating genuine, face-to-face interactions. AI Humans integrate the emotional insight of humans with the scalability and dependability of machines, making them reliable agents accessible 24/7, in any language, on our terms.
Imagine having access to an affordable therapist, a personal trainer that fits your schedule, or a team of medical assistants dedicated to providing personalized care for every patient. With Tavus, individuals, enterprises, and developers have the tools to create AI Humans that connect, comprehend, and act with empathy on a large scale.
We are a Series A company supported by esteemed investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in shaping a future where machines and humans genuinely understand one another.
The Position
We are seeking an AI Researcher to join our core AI team and advance the frontiers of multimodal conversational intelligence. If you excel in dynamic environments, enjoy transforming abstract concepts into functional code, and derive motivation from pushing the boundaries of possibility, this role is designed for you.
Your Responsibilities
Engage in research on foundational multimodal models, specifically conversational avatars (such as Neural Avatars and Talking Heads)
Develop models for video, audio, and language sequences using autoregressive and predictive architectures (e.g., V-JEPA) and/or diffusion methods, with a focus on temporal and sequential data rather than static images
Collaborate closely with the Applied ML team to implement your research in production systems
Remain at the forefront of multimodal learning and help define what "cutting edge" will mean in the future
Ideal Candidate Profile
PhD (or nearing completion) in a relevant field, or equivalent practical research experience
Experience in multimodal machine learning, particularly conversational interfaces
Full-time|On-site|San Francisco (London/Europe - OK)
Tavus – Multimodal AI Model Optimization
Research Engineer
At Tavus, we are pioneering the human aspect of AI technology. Our objective is to make human-AI interactions as seamless and natural as in-person conversations, allowing for a human touch in areas once considered unscalable.
We accomplish this through groundbreaking research in multimodal AI, focusing on modeling human-to-human communication (language, audio, and video) and developing audio-visual avatar behaviors. Our models drive applications ranging from text-to-video AI avatars to real-time conversational video experiences across sectors such as healthcare, recruitment, sales, and education. By empowering AI to perceive, listen, and engage with an authentic human-like presence, we are laying the groundwork for the next generation of AI workers, assistants, and companions.
As a Series B company, we are backed by renowned investors, including Sequoia, Y Combinator, and Scale VC. Join us as we shape the future of human-AI interaction.
The Role
We are seeking an accomplished Research Scientist/Engineer with expertise in model optimization to join our core AI team. The ideal candidate thrives in dynamic startup environments, sets priorities independently, and is comfortable making calculated decisions. We move swiftly and need people who can help chart our path forward.
Your Mission
Transform state-of-the-art research models into fast, efficient, production-ready systems using techniques such as sparsification, distillation, and quantization
Own the optimization lifecycle for critical models: establish metrics, run experiments, and evaluate trade-offs among latency, cost, and quality
Collaborate closely with researchers and engineers to turn innovative concepts into deployable solutions
Requirements
Extensive experience in deep learning with PyTorch
Practical experience with model optimization and compression, including knowledge distillation, pruning/sparsification, quantization, and mixed precision
Familiarity with efficient architectures such as low-rank adapters
Strong grasp of inference performance and GPU/accelerator fundamentals
Proficient Python coding and research-engineering best practices
Experience with large models and datasets in cloud environments
Ability to read ML literature, reproduce results, and adapt ideas
About Us
Tavus is an innovative research lab at the forefront of human computing technology. Our mission is to create AI Humans: advanced interfaces that bridge the gap between individuals and machines, eliminating the friction found in current systems. Our real-time human simulation models empower machines to see, hear, respond, and appear realistic, facilitating genuine, face-to-face conversations. With AI Humans, we blend the emotional intelligence inherent in humans with the extensive reach and reliability of machines, enabling them to serve as capable and trusted agents available 24/7, in any language.
Envision a therapist accessible to everyone, a personal trainer that tailors sessions to your schedule, or a fleet of medical assistants dedicated to providing personalized attention to every patient. Tavus enables individuals, enterprises, and developers to create AI Humans that connect, empathize, and act with understanding at scale.
Backed by prestigious investors like Sequoia Capital, Y Combinator, and Scale Venture Partners, we are a Series A company ready to shape the future of human-machine interaction.
The Role
We are seeking a passionate AI Researcher to join our core AI team and advance the science of audio-visual avatar generation. If you thrive in dynamic startup environments, enjoy experimenting with generative models, and are excited to see your research translated into production, you will find a welcoming home here.
Your Mission
Conduct research and develop cutting-edge audio-visual generation models for conversational agents (e.g., Neural Avatars, Talking Heads)
Focus on models that align closely with conversation flows, ensuring seamless integration of verbal and non-verbal cues
Experiment with diffusion models (DDPMs, LDMs, etc.), long-video generation, and audio synthesis
Collaborate closely with the Applied ML team to transition your research into practical applications
Stay current on breakthroughs in multimodal generation and contribute to the evolution of the field
You Will Excel If You Have
A PhD (or nearing completion) in a relevant discipline, or equivalent hands-on research experience
Proficiency in image/video generation techniques and a solid understanding of machine learning principles
The Bot Company
At The Bot Company, we are on a mission to create an innovative robotic assistant for every household. Our team of talented engineers, designers, and operators is based in San Francisco, with backgrounds at industry leaders such as Tesla, Cruise, OpenAI, Google, and Pixar. We have delivered products to hundreds of millions of users, honing our ability to create exceptional products and experiences.
We maintain a streamlined team structure that fosters swift decision-making and minimizes bureaucracy. Every member is an individual contributor with substantial autonomy, ownership, and accountability. Our culture lets us work across the technology stack with an emphasis on rapid iteration and execution.
What We Seek in Candidates
Candidates for all positions at The Bot Company must exhibit remarkable sharpness and the capacity to thrive in high-pressure environments. We expect candidates to demonstrate:
Exceptional cognitive abilities: quick thinking, rapid learning, and the ability to reason across diverse domains
Engineering curiosity: an innate desire to understand how systems function, even beyond your area of expertise
A performance-driven attitude: you excel in fast-paced settings, navigate ambiguity effectively, and thrive under demanding circumstances
Machine Learning: Multimodal Foundation Models
We are developing unified foundation models capable of reasoning across text, images, video, and kinematics to inform intelligent robotic behaviors. You will work on large-scale multimodal networks, owning the complete process from data handling to model training and deployment.
Your Responsibilities
Construct native multimodal policies: create architectures where vision, language, and other modalities are represented in a unified manner
Enhance cross-modal reasoning: explore and implement strategies to ensure the model not only correlates modalities but also comprehends them (e.g., linking visual physics to kinematic constraints)
Manage the training loop end to end: design, execute, troubleshoot, and refine large-scale training experiments; identify failure points, improve data mixtures, and tighten evaluations to achieve measurable gains
Deploy and refine real systems: integrate models into practical robotic frameworks, enhance robot code for model deployment, and optimize performance for edge inference
Eventual Computing builds tools that help AI teams work with large, complex datasets. Based in San Francisco, the company supports projects in robotics, autonomous vehicles, and advanced video generation. Its open-source engine, Daft, is already in use at organizations with demanding data needs. The team focuses on making data curation and model training more efficient, so the right datasets are always within reach. The office is located in the Mission district, where collaboration with leading AI labs and infrastructure companies is part of daily work.
Role overview
The Research Engineer - Multimodal Data will join the Visual Understanding team. This position centers on building solutions that make vast amounts of video and sensor data accessible and easy to query. The work directly supports researchers who need to find and use specific datasets quickly.
What you will do
Develop and refine systems that process petabytes of multimodal data, including video and sensor streams
Apply vision-language models to improve how data is discovered and retrieved
Define and influence the roadmap for visual understanding features
Train models to streamline large-scale data annotation and improve efficiency for research teams
Join VOLT, a trailblazer in advanced AI perception systems that enhance safety and security through real-time risk detection in the physical world.
We are looking for a Senior Applied AI & Machine Learning Engineer to design, optimize, and deploy multimodal AI models that function reliably in diverse real-world scenarios. This is a hands-on role focused on taking models from conceptual data to production, spanning both edge devices and cloud infrastructure.
In this position, you will work with vision, video, and language-based models that interpret real-world scenes and events, ensuring their accuracy, latency, robustness, and cost-effectiveness in production systems. Reporting directly to the Head of Engineering, you will play a pivotal role in advancing VOLT AI's core perception platform.
Full-time|$251.7K/yr - $330K/yr|On-site|San Francisco Bay Area, CA
Our Mission
At Altos Labs, we are dedicated to restoring cell health and resilience through innovative cell rejuvenation techniques aimed at reversing diseases, injuries, and disabilities that can arise throughout life. For further insights, please visit our website at altoslabs.com.
Our Value
Our singular Altos Value is: Everyone Owns Achieving Our Inspiring Mission.
Diversity at Altos
We firmly believe that diverse perspectives are crucial for scientific innovation. At Altos, exceptional scientists and industry leaders collaborate globally to further our shared mission. We prioritize belonging, ensuring all employees feel valued for their unique perspectives, and we hold ourselves accountable for maintaining a diverse and inclusive environment.
Your Contributions to Altos
As a member of our team, you will accelerate and enhance our efforts to develop unified, multimodal generative foundation models tailored for multiscale biology. You will be a key player on multidisciplinary teams that create the computational platforms essential for Altos to fulfill its mission.
In this position, you will collaborate with scientists and engineers across the Institute of Computation to design, develop, and scale cutting-edge foundation models that address biological questions and help discover novel interventions for aging and disease. Your focus will be on synthesizing unstructured multimodal signals with structured relational data and knowledge graphs that depict biological realities.
The ideal candidate will excel in a dynamic environment that values teamwork, transparency, scientific excellence, originality, and integrity.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary to make AI work for their unique objectives. Our team of scientists, engineers, and innovators has developed some of the most widely used AI products, including ChatGPT and Character.ai, as well as open-weight models such as Mistral and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About the Role
At Thinking Machines, we prioritize a multimodal-first approach and are seeking new team members to push the boundaries of visual perception and multimodal learning. Our focus is on understanding the interplay between vision and language at scale. We design architectures that integrate pixels and text, create datasets and evaluation methods that assess real-world comprehension, and develop representations that let models connect abstract concepts with the physical world. Our aim is to build multimodal systems that integrate seamlessly into real-world applications.
Your work will sit at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You will contribute to the architectures, data, and evaluation tools that teach AI to perceive, comprehend, and collaborate effectively. The ideal candidate is curious about multimodal interfaces, has experience running large-scale experiments, and can contribute to complex engineering systems. While we seek individuals with expertise in multimodality, our collaborative environment encourages all new hires to work across modalities as a unified team.
This role merges foundational research with practical engineering; we do not differentiate between these roles internally. You will be expected to write high-performance code and analyze technical reports. This position is ideal for someone who enjoys both deep theoretical inquiry and hands-on experimentation and is eager to influence the foundational aspects of AI learning.
Note: This is an evergreen role that we keep open continuously to gauge interest in this research area. We receive a high volume of applications, and there may not always be an immediate position that matches your experience and skills, but we encourage you to apply regardless. Applications are reviewed regularly, and we reach out to candidates as new opportunities arise. You are welcome to reapply as your experience grows, but please refrain from applying more than once every six months. We may also post specific roles for particular project or team needs; you are welcome to apply to those directly in addition to this evergreen role.
Bland Inc. seeks a Machine Learning Researcher specializing in Multimodal Large Language Models (LLMs) to join the team in San Francisco. The focus is on advancing AI systems that integrate language with other types of data.
Role overview
This position centers on research and development aimed at improving how AI models process and understand information from multiple sources, such as text combined with images or other modalities.
What you will do
Investigate how language interacts with additional data types within multimodal LLMs
Create and evaluate new methods to enhance AI model performance
Work closely with colleagues on projects designed to push the boundaries of machine learning
Location
This role is based in San Francisco.
Overview
Join us at Spellbrush as we innovate in the world of gaming! We are developing an immersive 3D first-person adventure game where an AI companion drives the gameplay experience. Our goal is to create a game that integrates large language models (LLMs) in a way that enhances storytelling and player engagement, moving beyond simple chat interactions.
About Spellbrush
At Spellbrush, we are dedicated to crafting exceptional anime games. As the leading generative AI studio, we are the creative force behind niji・journey. Our mission is clear: to harness AI in bringing vibrant characters to life and to redefine narrative-driven gaming experiences.
Our Project
We have developed an innovative in-house LLM storytelling system that seamlessly integrates AI with narrative and gameplay, offering players a depth of interaction that transcends traditional gaming.
Role Overview
As an integral member of our small but highly skilled team, you will have the opportunity to shape the future of gaming. You will collaborate with leading minds in the industry, including the creator of Warudo and a Google DeepMind veteran behind Project Astra. The role affords substantial creative and research freedom to pioneer LLM-driven storytelling.
Join Our Innovative Team
Tavus is a forward-thinking research laboratory at the forefront of human-computer interaction. We are dedicated to creating AI Humans, a revolutionary interface that bridges the gap between individuals and machines, eliminating the friction commonly encountered in today's technology. Our advanced human simulation models empower machines to see, hear, respond, and even exhibit lifelike appearances, facilitating genuine, face-to-face interactions. By merging human emotional intelligence with machine efficiency, our AI Humans serve as reliable, empathetic agents available around the clock, in any language, tailored to user needs.
Picture a therapist within reach for everyone, a personal trainer that fits seamlessly into your schedule, or a comprehensive team of medical assistants providing focused attention to every patient. With Tavus, individuals and organizations can develop AI Humans that foster connection, understanding, and responsiveness on a grand scale.
As a Series A startup, we are backed by prestigious investors such as Sequoia Capital, Y Combinator, and Scale Venture Partners. Join us in crafting a future where humans and machines communicate effortlessly.
Your Role
We seek an AI Researcher to join our dynamic AI team and explore the frontiers of large language modeling within conversational AI. If you excel in fast-paced startup settings, enjoy experimenting with innovative ideas, and are eager to see your contributions realized in production, you will thrive here.
Your Mission
Conduct in-depth research on large language modeling and its adaptation for conversational avatars (e.g., Neural Avatars, Talking Heads)
Create methodologies to model both verbal and non-verbal communication, adjusting avatar behaviors dynamically in real time
Experiment with fine-tuning, adaptation, and conditioning techniques to enhance the expressiveness, control, and task specificity of LLMs
Collaborate with the Applied ML team to transition research from prototype to full-scale deployment
Stay informed on the latest advancements and contribute to defining the next breakthroughs
Join Cartesia as a Model Architecture Researcher
At Cartesia, our vision is to revolutionize AI by creating interactive intelligence that is seamlessly integrated into daily life. Unlike current models, our goal is to develop systems capable of processing extensive streams of audio, video, and text (1 billion text tokens, 10 billion audio tokens, and 1 trillion video tokens) directly on devices.
Our founding team, which originated in the Stanford AI Lab, developed State Space Models (SSMs), a groundbreaking foundation for training efficient, large-scale models. Our diverse team merges deep expertise in model innovation with a design-focused engineering approach, allowing us to create and deploy state-of-the-art models and applications. Backed by leading investors such as Index Ventures and Lightspeed Venture Partners, along with industry veterans and advisors, we are poised to shape the future of AI.
Your Contribution
Drive forward-thinking research in neural network architecture, focusing on alternative models such as state space models, efficient transformers, and hybrid architectures
Create innovative architectures that improve model performance, inference speed, and adaptability across environments, from cloud infrastructure to on-device deployment
Develop advanced model capabilities, including statefulness, long-range memory, and novel conditioning mechanisms to boost expressiveness and generalization
Analyze architectural decisions and their effects on model characteristics such as scalability, robustness, latency, and energy consumption
Build frameworks and tools to assess architectural advances, benchmarking their performance in both research and production contexts
Collaborate with interdisciplinary teams to translate architectural insights into scalable systems that deliver real-world impact
Your Qualifications
Extensive experience in architecture design, with a focus on advanced models such as state space models, transformers, and RNN/CNN variants
In-depth understanding of the interplay between architectural designs and system constraints, particularly in cloud and on-device deployments
Strong proficiency in the design and evaluation of neural network architectures
Join Achira in shaping the future of deep learning with cutting-edge generative, representational, and simulation models for molecules and materials. Our mission is to create foundational models that render the atomistic universe understandable, predictable, and designable.
Why Choose Achira?
Be part of an elite, cross-disciplinary team of ML researchers, physicists, chemists, and engineers who are redefining atomistic simulation through expansive foundation models
Advance the integration of deep learning with the principles of nature, merging generative AI, probabilistic reasoning, and molecular physics
Engage in projects at an unparalleled scale, tackling extensive datasets, computational challenges, and ambitious goals
Take full ownership of your research journey, from ideation and architecture to training, evaluation, and deployment
Flourish in a dynamic culture that values rigor, speed, creativity, and impact over bureaucracy
Position Overview
As a Generative AI Researcher at Achira, you will contribute to the development of foundation simulation models: large-scale systems designed to learn the structure, dynamics, and energetics of the atomistic realm. These models will unite deep representation learning, generative modeling, and sophisticated simulation techniques.
Your responsibilities will include:
Crafting and training state-of-the-art deep generative models, including diffusion, autoregressive, flow-based, and latent-variable architectures focused on molecules, materials, and atomic systems
Creating expressive representations of molecular and atomistic structures and dynamics using equivariant graph neural networks, geometric transformers, and latent encoders that respect physical symmetries and constraints
Innovating advanced sampling and simulation techniques that blend probabilistic inference, deep learning, and reinforcement learning for efficient exploration and simulation of learned energy landscapes
Developing models that comprehend, generate, and simulate the physical world, merging reasoning, simulation, and predictive capabilities
Collaborating with physicists and chemists to validate models against ab initio, molecular dynamics, and experimental datasets
Rapidly prototyping, benchmarking, and iterating, converting research concepts into reusable, scalable model components across Achira's foundation model suite
Zyphra is an innovative artificial intelligence company located in San Francisco, California.
The Opportunity
Join Zyphra's Audio Team as a Research Engineer - Audio & Speech Models, where you will help develop cutting-edge open-source text-to-speech and audio models. Your contributions will span the full model training process, from data collection and processing to the design of innovative architectures and training approaches.
Your Responsibilities
Conduct large-scale audio training runs
Optimize the performance of our training infrastructure
Collect, process, and evaluate audio datasets
Implement architectural and methodological improvements through rigorous testing
What We Seek
A strong research mindset with the ability to take projects from ideation to implementation and documentation
Proficiency in rapid prototyping and implementation, allowing for swift experimentation
Effective collaboration skills in a fast-paced research environment
A quick learner who is eager to embrace and implement new concepts
Excellent communication abilities, enabling contributions to both research and engineering at scale
Preferred Qualifications
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition
Experience training audio autoencoders
Solid understanding of signal processing, particularly for audio
Familiarity with diffusion models, consistency models, or GANs
Experience with large-scale (multi-node) GPU training environments
Strong understanding of experimental methodology for rigorous tests and ablations
Interest in large-scale, parallel data processing pipelines
Competence in PyTorch and Python programming
Experience contributing to large, established codebases with rapid adaptation
Zyphra is a cutting-edge artificial intelligence firm headquartered in San Francisco, California.
Position Overview
As a Research Scientist specializing in Model Architectures, you will play a pivotal role on Zyphra's AI Architecture Research Team. Your responsibilities will include designing and evaluating innovative model architectures and training methodologies aimed at enhancing essential modeling capabilities (e.g., loss per FLOP or loss per parameter) and tackling core limitations of current models. You will collaborate closely with our pre-training team to ensure your findings are integrated into our next-generation models.
Qualifications
A strong research acumen and intuition
Proven ability to take research projects from initial conception to execution and final write-up
Exceptional implementation and prototyping skills, with the ability to swiftly turn ideas into experimental results
A collaborative spirit and the ability to thrive in a fast-paced research environment
A deep curiosity and enthusiasm for understanding intelligence
Requirements
Experience with long-term memory, RAG/retrieval systems, dynamic/adaptive computation, and alternative credit assignment strategies
Knowledge of reinforcement learning, control theory, and signal processing techniques
A passion for exploring and critically evaluating unconventional ideas, with the ability to maintain a unique perspective
Familiarity with modern training pipelines and the hardware requirements for designing efficient architectures on GPU hardware
Strong understanding of experimental methodology for rigorous ablations and hypothesis testing
High proficiency in PyTorch and Python programming
Ability to quickly assimilate into large pre-existing codebases and contribute effectively
Prior publication of machine learning research in reputable venues
Postgraduate degree in a scientific discipline (e.g., Computer Science, Electrical Engineering, Mathematics, Physics)
Why Join Zyphra?
We emphasize a structured research methodology that systematically addresses ambitious challenges in AI.
Join worldlabs as a Research Engineer focused on scaling multimodal data. In this dynamic role, you will leverage cutting-edge technologies and methodologies to enhance data processing capabilities. You will be responsible for developing innovative solutions that integrate various data types and drive impactful research outcomes.
Overview
At Spellbrush, we are revolutionizing the gaming experience by crafting a 3D first-person adventure game where an AI companion plays a pivotal role. Imagine MiSide enhanced with large language models, seamlessly integrated into gameplay rather than merely serving as a role-play chatbot.
About Us
At Spellbrush, we are dedicated to creating exceptional anime games, and we proudly stand as a global leader in generative AI. Our flagship project is niji・journey. Our mission is straightforward: to use AI to animate characters and redefine narrative-driven gaming.
What We're Creating
We have engineered an innovative in-house LLM storytelling system that merges AI, narrative, and gameplay, transcending the limitations of conventional chat-only encounters. The result is an AI companion that collaborates with players to solve puzzles, retains memories across different worlds, and alters the progression of each chapter.
About the Role
Join our team to redefine video game experiences. You will collaborate with leading minds in the industry, including the creator of Warudo and Cytoid, a Google DeepMind veteran behind Project Astra, and top-tier AI researchers. As an integral early member of this team, you will enjoy significant artistic and research autonomy in shaping what could be the next era of LLM-driven storytelling.
Full-time | $211.2K/yr - $290K/yr | On-site | San Francisco Bay Area, CA; San Diego, CA
Our Mission
At Altos Labs, we are dedicated to revitalizing cell health and resilience through innovative cell rejuvenation techniques to reverse disease, injury, and the disabilities that arise throughout life. Discover more about our vision at altoslabs.com.
Our Values
Our core value is simple yet powerful: Everyone Owns Achieving Our Inspiring Mission.
Diversity at Altos
We understand that diverse perspectives are crucial for scientific breakthroughs and exploration. At Altos, exceptional scientists and industry leaders collaborate from around the globe to drive our shared mission forward. We prioritize a culture of belonging, ensuring that every employee feels valued for their unique contributions. We are all responsible for maintaining a diverse and inclusive workplace.
Your Contributions to Altos
Join Altos Labs in creating a premier AI ecosystem aimed at addressing the most intricate challenges in human biology. You will be instrumental in designing and developing high-performance, scalable solutions that integrate high-dimensional biomedical imaging with molecular and linguistic data. Your role will involve implementing large-scale multimodal data fusion, advancing beyond basic image analysis to develop predictive models that span various biological domains. You will engage directly with data and coding, partnering with our engineering team to ensure these models are scalable, efficiently trainable in distributed cloud environments, and accessible to our global research network.
Key Responsibilities
Model Development: Create, implement, and train large-scale foundational models (e.g., Vision Transformers, multimodal LLMs) capable of embedding spatial data and integrating diverse modalities.
Innovative Data Fusion: Apply cutting-edge cross-domain mapping and fusion techniques to align heterogeneous biological datasets.
Scaling & Training: Develop and oversee high-performance ML pipelines designed to handle petabyte-scale image collections and multi-omics data streams on cloud infrastructure.
Technical Collaboration: Work closely with experimental scientists and software engineers to convert biological complexity into high-performance code and reliable distributed systems.
Who You Are
We seek a technical expert who excels at unraveling "unsolvable" challenges through programming and meticulous experimentation. We welcome candidates at the Scientist I, Scientist II, or Senior Scientist levels.
Remote | Remote-Friendly (Travel Required) | San Francisco, CA
Join Anthropic as a Senior Research Scientist on our Reward Models team, where you will spearhead groundbreaking research aimed at enhancing our understanding of human preferences at scale. Your innovative contributions will directly influence how our AI models, including Claude, align with human values and optimize for user needs. You will delve into the forefront of reward modeling for large language models, designing novel architectures and training methodologies for Reinforcement Learning from Human Feedback (RLHF). Your research will explore advanced evaluation techniques, including rubric-based grading, and tackle challenges such as reward hacking. Collaboration is key, as you'll work alongside teams in Finetuning, Alignment Science, and our broader research organization to ensure your findings result in tangible advancements in AI capabilities and safety. This role offers you an opportunity to address critical AI alignment challenges, leveraging cutting-edge models and substantial computational resources to further the science of safe and capable AI systems.
Jan 29, 2026