Technical Staff Member Applied Vision Post Training jobs in San Francisco – Browse 1,776 openings on RoboApply Jobs
Technical Staff Member - Applied Vision (Post Training)
Experience Level
Entry Level
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Strong understanding of computer vision concepts and algorithms.
Familiarity with machine learning frameworks and tools.
Excellent problem-solving skills and a collaborative mindset.
Ability to thrive in a fast-paced, innovative environment.
About the job
Join Liquid AI as a Technical Staff Member specializing in Applied Vision. In this dynamic role, you will leverage cutting-edge technology to develop innovative solutions and enhance our product offerings. This position is ideal for recent graduates with a passion for technology and a desire to make a meaningful impact in the field of artificial intelligence.
About Liquid AI
Liquid AI is at the forefront of artificial intelligence innovation, dedicated to transforming industries through advanced technology solutions. Our team is composed of passionate experts committed to pushing the boundaries of what's possible. Join us in shaping the future of AI!
Overview
Join Listen Labs as we respond to a surge in market demand with an ambitious 6-month product roadmap. We are expanding our engineering team and looking for a highly skilled technical expert (our current team includes three IOI medalists) who is eager to build a transformative product that reshapes decision-making for businesses. If you have a passion for solving intricate problems from start to finish, we want to connect with you.
About Listen Labs
Listen Labs is an AI-driven research platform that helps teams extract insights from customer interviews in hours rather than months. We enable our clients to analyze conversations, identify key themes, and make faster, more informed product decisions.
Why Work with Us?
Exceptional Team: Founded by seasoned entrepreneurs with a successful AI exit, along with talent from companies such as Jane Street, Twitter, Stripe, Affirm, Bain, and Goldman Sachs, our team boasts impressive credentials, including IOI and ICPC backgrounds.
Rapid Growth: As a 40-person team backed by Sequoia Capital, we have scaled from $0 to a $14 million run-rate in less than a year. We prioritize craftsmanship and thrive on collaboration with individuals who take ownership.
Impressive Traction: We are growing rapidly across sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and Procter & Gamble.
Proven Performance: We maintain an industry-leading win rate driven by our uniquely differentiated product.
Market Validation: We consistently attract customers from diverse segments, winning six-figure contracts that facilitate quick expansions.
Viral Product: Our interviews reach tens of thousands of viewers, driving product-led growth, organic expansion, and daily inbound interest from Fortune 500 companies.
Technical Challenges Await
Research Agent Development: Unlike a traditional software purchase, hiring a firm like McKinsey buys you opinions, expertise, and execution. We aim to give users an AI agent with complete knowledge of our platform and of research best practices, assisting them with project setup, conducting interviews, and analyzing responses.
Human Database Creation: One of our core offerings is the ability to identify target users effectively (e.g., "power users of ChatGPT and Excel"). We are building a comprehensive database that connects users with the insights they need.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across deployment environments ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission.
The Opportunity
This unique position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects from start to finish for some of the world’s leading enterprises, while also playing a vital role in the ongoing development of Liquid’s core models. In this role, you will not have to choose between impactful customer work and foundational development; you will enjoy deep involvement in both. You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid’s post-training capabilities. If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company.
What We're Looking For
We seek an individual who:
Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation.
Thinks end-to-end: You will connect the dots across data generation, training, alignment, and evaluation as a cohesive system.
Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications.
Communicates clearly: You can interpret customer needs and communicate effectively with internal technical teams, providing constructive feedback when necessary.
The Work
Serve as the technical lead for post-training engagements with enterprise clients.
Translate client requirements into actionable post-training specifications and workflows.
Design and implement data generation, filtering, and quality assessment methodologies.
Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes.
Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
Join Reka as a Member of the Technical Staff in Applied AI!
Leverage cutting-edge AI models to tackle intricate real-world challenges.
Engage in close collaboration with researchers and fellow team members to explore the latest developments in AI and ML.
Partner with our customers to seamlessly integrate our innovative models into their existing technology frameworks.
Drive business success with a strong sense of product ownership and accountability.
Be part of a pioneering team in a rapidly growing environment, taking on diverse roles.
At Tzafon, we are pioneering the development of scalable computing systems and pushing the boundaries of machine intelligence with our foundation model lab. With offices in San Francisco, Zurich, and Tel Aviv, we have secured over $12 million in funding to fuel our mission of expanding the horizons of AI technology. Our team comprises engineers and scientists with extensive expertise in machine learning infrastructure and research. Founded by IOI and IMO medalists, PhDs, and seasoned professionals from top tech firms, we specialize in training advanced models and constructing robust infrastructure to automate tasks across real-world scenarios.
In this role, you will collaborate closely with our product and post-training teams to deploy Large Action Models that drive impactful results. Your responsibilities will include building evaluation frameworks, establishing benchmarks, and creating fine-tuning pipelines to ensure optimal model performance.
Technical Staff Member in Applied AI
About the Opportunity
We are seeking a highly skilled Technical Staff Member specializing in generative modeling to bridge the gap between our advanced models and the clients who rely on them. You will collaborate with a diverse team of machine learning experts, protein engineers, and biologists to revolutionize biological control and disease treatment. Your role will involve gaining a comprehensive understanding of our proprietary generative models and leveraging that expertise to deploy, adapt, and optimize these models in client environments, particularly within the pharmaceutical and biotech industries. This hybrid position requires a research-oriented mindset to deeply understand our models, paired with the communication skills necessary to translate that knowledge into production systems that yield scientific value for our collaborators.
About Us
At Latent Labs, we are pioneering frontier models that decode the fundamentals of biology. Our ambitious goals are driven by curiosity and a commitment to scientific excellence. Before founding Latent Labs, our team co-developed DeepMind's Nobel Prize-winning AlphaFold, pioneered latent diffusion, and created groundbreaking lab data management systems along with high-throughput protein screening platforms. Here, you will work alongside some of the brightest minds in generative AI and biology. We value interdisciplinary collaboration, continuous learning, and teamwork. Our team offsites foster a culture of trust and connection between our London and San Francisco offices. We are looking for innovators who are passionate about solving complex problems and making a positive global impact. Join us on our ambitious mission.
Your Qualifications
Expertise in Machine Learning: You are a proficient ML researcher with a strong background in generative modeling, evidenced by contributions to notable open-source projects, impactful product launches, or significant publications in leading venues such as NeurIPS, ICML, ICLR, or Nature. You possess a deep understanding of generative model architectures, training dynamics, and inference behavior.
Proficient ML Developer: You produce robust, tested, and maintainable ML code. Your experience includes using version control and code review systems. You are adept at rapid prototyping while also able to write elegant production code. Additionally, you have experience building systems that deploy large models via APIs and executing inference tasks in production environments.
Our Mission
At Reflection AI, our goal is to create open superintelligence and ensure its accessibility for everyone. We are pioneering open-weight models for a range of users, including individuals, enterprises, and even nation-states. Our talented team comprises AI researchers and industry veterans from leading organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, and Anthropic.
Role Overview
Develop systems that convert robust pre-trained models into aligned and versatile agents.
Lead research and engineering efforts to advance post-training practices, focusing on data curation and large-scale optimization.
Create data generation frameworks, reward models, reinforcement learning algorithms, and techniques for inference-time scaling.
Collaborate with both pre-training and post-training teams to achieve significant enhancements in model capabilities.
Help refine our understanding of how large models learn to reason, follow instructions, and evolve through reinforcement learning.
Your Profile
Solid grasp of machine learning principles with hands-on experience in large-scale LLM training.
Proficient engineering skills, with the ability to navigate intricate ML codebases and distributed systems.
Experience enhancing model performance through data, reward modeling, or reinforcement learning techniques.
Track record of leading ambitious research or engineering projects resulting in measurable improvements.
Thrives in a dynamic, high-agency startup atmosphere; oriented toward action and clarity in execution.
Ability to work seamlessly across research and infrastructure boundaries.
Excellent communication skills and a collaborative mindset.
Driven by a passion for pushing the boundaries of intelligence.
What We Provide
At Reflection AI, we believe that to truly build open superintelligence, it must be rooted in a strong foundation. By joining us, you will help build from the ground up within a compact, highly skilled team. Together, we will shape the future of our company and the landscape of open foundation models. We aim for you to accomplish the most impactful work of your career, with the assurance that you and your loved ones are well supported.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations span industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.
The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that run seamlessly on devices, meeting stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead. This team owns the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
Join Our Team at XDOF
At XDOF, we are building the future of robotics by focusing on the critical element that drives innovation: data. Our mission is to create robust data collection systems and annotation pipelines that serve as the backbone of advanced foundation models in robotics. We are seeking a passionate Research Engineer / Scientist to spearhead technical initiatives at the intersection of vision-language models and robotic learning. Your role will involve transforming egocentric and teleoperation video into high-quality training data for VLA models, as well as contributing directly to the development of these models. In addition to pipeline construction, you will pursue research aimed at enhancing the utility of robot data. This includes extracting new metadata such as contact events, affordance labels, and dynamics priors from video to unlock capabilities that current methodologies overlook. You will investigate the impact of structured annotations on cross-embodiment transfer, automatic curriculum generation, and the development of world models that predict the factors essential for manipulation. In our view, the data layer is not merely a supporting element; it is integral to the research itself.
Your Responsibilities
Design and implement vision-language pipelines for processing egocentric and teleoperation video, focusing on structured captioning, temporal grounding, action-conditioned scene understanding, and large-scale semantic annotation.
Develop and assess representations that connect visual perception, language, and robotic actions, encompassing VLAs, video prediction, and world models.
Create and enhance data curation systems that evaluate the quality, diversity, and comprehensiveness of extensive robot demonstration datasets.
Work with bimanual and high-DoF manipulation data, including real teleoperation footage and simulation-generated rollouts.
Collaborate closely with partner labs to identify data requirements and establish a feedback loop between data quality and downstream policy performance.
Stay current with research in VLAs, video foundation models, flow matching, DiT architectures, and egocentric pretraining, translating insights into practical applications.
Join Our Team
At Liquid AI, we are not just creating AI models; we are revolutionizing the very fabric of intelligence. Originating from MIT, our objective is to develop efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) excel in environments where others falter: on-device, at the edge, and under real-time constraints. We are not simply refining existing concepts; we are pioneering the future of AI. We recognize that exceptional talent drives remarkable technology. The Liquid team is a collective of elite engineers, researchers, and innovators dedicated to crafting the next generation of AI solutions. Whether you are designing model architectures, enhancing our development platforms, or facilitating enterprise integrations, your contributions will significantly influence the evolution of intelligent systems. While San Francisco and Boston are preferred locations, we welcome applicants from other regions within the United States.
Our Mission
At Reflection AI, our goal is to develop open superintelligence and make it universally accessible. We are pioneering open-weight models tailored for individuals, agents, enterprises, and even entire nations. Our diverse team comprises talented AI researchers and industry veterans from prestigious organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and many more.
Role Overview
Construct and enhance distributed training systems that drive the pre-training of cutting-edge models.
Collaborate with research teams to design and execute extensive training runs for foundation models.
Create infrastructure that enables efficient training across thousands of GPUs using contemporary distributed training frameworks.
Enhance training throughput, stability, and efficiency for large-scale model training tasks.
Work closely with pre-training researchers to convert experimental concepts into scalable, production-ready training systems.
Boost performance of distributed training jobs by optimizing communication, memory management, and GPU utilization.
Develop and maintain training pipelines that accommodate large-scale datasets, checkpointing, and iterative experiments.
Identify and resolve performance bottlenecks within distributed training systems, including model parallelism, GPU communication, and training runtime environments.
Contribute to systems that enable rapid experimentation and iteration on novel training methods.
Join our innovative team at Liquid AI as a Member of the Technical Staff specializing in audio applications. In this post-training role, you will apply your knowledge of cutting-edge audio technologies, contributing to the development of advanced machine learning solutions. This position is ideal for individuals who are eager to work in a collaborative environment and are passionate about audio technology and its applications in artificial intelligence.
At Catalog, we are pioneering the commerce infrastructure for AI: creating the essential framework that enables digital agents not only to explore the web but also to comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.
Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.
Who You Are
You have experience creating beloved and trusted products from the ground up.
You combine technical proficiency with a keen product sense and data-driven intuition.
You are well-versed in AI technologies.
You prioritize speed, write clean code, and ensure thorough instrumentation.
You seek a high level of ownership within a small, talent-rich team based in San Francisco.
Challenges You Will Tackle
Develop and deploy agentic-search APIs that deliver structured, real-time product data in milliseconds.
Build checkout systems enabling agents to conduct transactions with any merchant.
Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
Establish a product graph and ranking pipeline that adapts based on actual user outcomes.
Preferred Qualifications
Proven experience shipping data-centric products in a live environment.
Experience with recommendation systems or information retrieval methodologies.
Familiarity with API development, search indexing, and data pipeline construction.
Our Work Culture
We operate as a small, high-trust, and highly motivated team, fostering in-person collaboration in North Beach, San Francisco. Our process is to debate, decide, and execute. If your profile aligns with our needs, we will contact you to arrange two or three brief technical interviews, followed by an onsite visit to our office, where you will collaborate on a small project, exchange ideas, and meet the team.
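The retrieval-layer challenge above calls for optimizing recall and precision. As a small, hypothetical illustration of how such a layer might be scored offline, here is a precision/recall-at-k sketch; the metric definitions are standard, but the function name and SKU identifiers are made up for the example:

```python
# Hypothetical offline evaluation of a retrieval layer: score a ranked
# result list against a ground-truth set of relevant items. Names and
# data are illustrative, not any company's actual API.
def precision_recall_at_k(retrieved, relevant, k):
    top_k = retrieved[:k]                 # only the first k ranked results count
    hits = len(set(top_k) & set(relevant))
    precision = hits / k                  # fraction of returned items that are relevant
    recall = hits / len(relevant)         # fraction of relevant items that were returned
    return precision, recall

retrieved = ["sku_1", "sku_7", "sku_3", "sku_9"]  # ranked search results
relevant = {"sku_3", "sku_7"}                     # ground-truth matches
print(precision_recall_at_k(retrieved, relevant, 3))  # (0.6666666666666666, 1.0)
```

Tracking both numbers matters because they pull against cost: returning more candidates raises recall but lowers precision and increases downstream ranking expense.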
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.
The Opportunity
The Training Infrastructure team builds the distributed systems that power our next-generation Liquid Foundation Models. As our operations expand, we aim to design, implement, and enhance the infrastructure crucial for large-scale training. This role centers on high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will work within a small, agile team, creating vital systems from the ground up rather than maintaining pre-existing infrastructure. While San Francisco and Boston are preferred, we are open to other locations.
What We're Looking For
We are seeking an individual who:
Embraces the complexity of distributed systems: Our team keeps extensive training runs stable, troubleshoots training failures across GPU clusters, and enhances overall performance.
Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
Excels in uncertain environments: Our systems must support evolving model architectures. You will make decisions based on incomplete information and iterate rapidly.
Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.
The Work
Design and develop core systems that make large training runs fast and reliable.
Create scalable distributed training infrastructure for GPU clusters.
Implement and refine parallelism and sharding strategies for evolving architectures.
Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
Develop data loading systems that eliminate I/O bottlenecks for multimodal datasets.
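Among the work items above are sharding strategies and data loading for GPU clusters. As a minimal, hypothetical sketch of one common building block, here is deterministic strided sharding of dataset indices across ranks, so that each sample is read exactly once per epoch; real systems typically rely on framework-provided distributed samplers rather than a hand-rolled helper like this:

```python
# Toy strided sharding: rank r of world_size W reads indices r, r+W, r+2W, ...
# The shards are disjoint and together cover the whole dataset, so no sample
# is read twice in an epoch and no rank sits idle longer than one extra step.
def shard(indices, rank, world_size):
    return indices[rank::world_size]

dataset = list(range(10))
shards = [shard(dataset, r, 4) for r in range(4)]
print(shards)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

Strided (rather than contiguous) slicing keeps per-rank shard sizes within one sample of each other, which bounds the load imbalance across the cluster.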
Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.
About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.
Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero's San Francisco headquarters, with three days a week in the office. As a founding member, you will collaborate directly with the CEO, CTO, and early customers to shape the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.
What You Will Do
Design and build intelligent AI systems to analyze large volumes of unstructured data.
Deliver full-stack features based on real user feedback.
Improve the product experience so AI agents are both reliable and easy for engineers to use.
Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.
What We Look For
Over five years of relevant professional or open-source experience.
Comfort working in environments with uncertainty and evolving challenges.
Strong product focus and a drive for customer satisfaction.
Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
Previous startup experience is a plus.
Location
This position is based in San Francisco, on-site three days per week at TierZero's HQ.
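One responsibility listed above is building ML pipelines with embedding stores, vector search, and retrieval-augmented generation (RAG). As a rough, self-contained illustration of the retrieval step only, here is a sketch using a toy bag-of-words similarity; the function names and toy embedding are assumptions for the example, not TierZero's stack, where a learned embedding model and a vector database would stand in:

```python
# Toy RAG retrieval step: "embed" documents, then rank them by cosine
# similarity to the query. Real pipelines swap in a neural embedding
# model and an approximate-nearest-neighbor vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Placeholder bag-of-words "embedding"; stands in for a model call.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Vector search: rank stored documents by similarity to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "incident response runbook for database outages",
    "guide to deploying the payments service",
    "postmortem template for infrastructure incidents",
]
print(retrieve("database incident", docs, k=1))
# ['incident response runbook for database outages']
```

In a full RAG pipeline, the top-k passages returned here would be appended to the LLM prompt so the model can ground its answer in retrieved context.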
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work.
This role is based at TierZero's San Francisco headquarters on a hybrid schedule, with three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.
What you will do
Design and develop AI systems that handle large volumes of unstructured data.
Build full-stack product features, informed by direct feedback from users.
Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
Experiment with open-source and emerging large language models to compare different approaches.
Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.
Requirements
Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
At least 5 years of professional experience or significant open-source contributions.
Comfort with shifting priorities and tackling new technical problems.
Strong product focus and commitment to customer outcomes.
Openness to learning from a team with a track record of delivering over $10 billion in value.
Ability to work onsite in San Francisco three days per week.
Bonus: Experience in a startup setting and familiarity with startup dynamics.
TierZero is looking for a Founding Member of Technical Staff to help shape the direction of its technology from the ground up. This role is based at the company's San Francisco headquarters.
Role overview
As an early technical hire, you will work closely with engineers and product managers to build new products and features. The work centers on designing, coding, and delivering software solutions that address client needs and support TierZero's growth.
Impact
Contributions in this role will directly influence the company's future. The team values initiative and hands-on problem solving, giving each member a chance to make a visible difference in how the company evolves.
Collaboration
This position involves regular collaboration with a small, focused team. Input and ideas from every member help guide product direction and technical decisions.
Join Baseten as a Post-Training Applied Researcher, where you will be at the forefront of innovative research applications. Your expertise will help bridge the gap between training and real-world applications, making a tangible impact in the industry.
TierZero seeks a Founding Member of Technical Staff to play a key role in building the company’s technology from the earliest stages. This position is based at the San Francisco headquarters and offers the chance to collaborate directly with founders and engineers.
Role overview
As an early team member, you will help design and develop new products and systems. The work involves close collaboration with others in the office, shaping both the technical direction and the culture of the engineering team.
What you will do
Develop core technology in partnership with founders and engineers.
Contribute ideas and code that guide the evolution of TierZero’s products.
Help define engineering standards and establish best practices.
Location
This position is based onsite at the San Francisco HQ.
Join Liquid AI as a Technical Staff Member specializing in Applied Vision. In this dynamic role, you will leverage cutting-edge technology to develop innovative solutions and enhance our product offerings. This position is ideal for recent graduates with a passion for technology and a desire to make a meaningful impact in the field of artificial intelligence.
Overview: Join Listen Labs as we respond to a surge in market demand with an ambitious 6-month product roadmap. We are expanding our engineering team and are on the lookout for a highly skilled technical expert (our current team includes three IOI medalists) who is eager to build a transformative product that reshapes decision-making for businesses. If you have a passion for solving intricate problems from start to finish, we want to connect with you.About Listen LabsListen Labs is an AI-driven research platform designed to help teams quickly extract insights from customer interviews in a matter of hours rather than months. We empower our clients by enabling them to analyze conversations, identify key themes, and make faster, more informed product decisions.Why Work with Us?Exceptional Team: Founded by seasoned entrepreneurs with a successful AI exit, along with talent from renowned companies such as Jane Street, Twitter, Stripe, Affirm, Bain, and Goldman Sachs, our team boasts impressive credentials including IOI and ICPC backgrounds.Rapid Growth: As a 40-person team backed by Sequoia Capital, we have achieved a remarkable growth trajectory, scaling from $0 to a $14 million run-rate in less than a year. 
We prioritize craftsmanship and thrive on collaboration with individuals who take ownership.
Impressive Traction: We are experiencing rapid growth across various sectors, securing enterprise clients such as Google, Microsoft, Nestlé, and Procter & Gamble.
Proven Performance: We maintain an industry-leading win rate driven by our uniquely differentiated product.
Market Validation: We consistently attract customers from diverse segments, achieving six-figure contracts that facilitate quick expansions.
Viral Product: Our interviews reach tens of thousands of viewers, promoting product-led growth, organic expansion, and daily interest from Fortune 500 companies.

Technical Challenges Await
Research Agent Development: Unlike buying traditional software, hiring a firm like McKinsey buys opinions, expertise, and execution. We aim to provide users with an AI agent that possesses complete knowledge of our platform and research best practices, assisting them with project setup, interview conduction, and response analysis.
Human Database Creation: One of our core offerings is the ability to identify target users effectively (e.g., "power users of ChatGPT and Excel"). We are building a comprehensive database that connects users with the insights they need.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across various deployment environments, ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission.

The Opportunity
This position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects end to end for some of the world's leading enterprises, while also playing a vital role in the ongoing development of Liquid's core models. You will not have to choose between impactful customer work and foundational development; you will be deeply involved in both.
You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid's post-training capabilities. If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company.

What We're Looking For
We seek an individual who:
Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation.
Thinks end-to-end: You will connect data generation, training, alignment, and evaluation as a cohesive system.
Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications.
Communicates clearly: You can interpret customer needs and communicate effectively with internal technical teams, providing constructive feedback when necessary.

The Work
Serve as the technical lead for post-training engagements with enterprise clients.
Translate client requirements into actionable post-training specifications and workflows.
Design and implement data generation, filtering, and quality assessment methodologies.
Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes.
Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
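The "task-specific evaluations" this listing describes typically pair prompts with reference answers and a scoring rule, then average scores across cases. A minimal, dependency-free sketch of that pattern (all names here are hypothetical illustrations, not Liquid AI's actual tooling):

```python
# Minimal task-specific evaluation harness (illustrative sketch only;
# names are hypothetical, not any company's real evaluation stack).

def exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 if the normalized prediction equals the reference."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def run_eval(model_fn, cases, metric=exact_match):
    """Apply `model_fn` to each prompt and average the metric over all cases."""
    scores = [metric(model_fn(c["prompt"]), c["reference"]) for c in cases]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # A canned stand-in "model" so the harness runs without any ML dependency.
    canned = {"capital of France?": "Paris", "2+2?": "4"}
    model = lambda prompt: canned.get(prompt, "")
    cases = [
        {"prompt": "capital of France?", "reference": "paris"},
        {"prompt": "2+2?", "reference": "4"},
        {"prompt": "capital of Spain?", "reference": "Madrid"},
    ]
    print(run_eval(model, cases))  # 2 of 3 cases correct
```

Real harnesses swap `exact_match` for task-appropriate metrics (rubric scoring, LLM-as-judge, pass@k), but the loop structure stays the same.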
Join Reka as a Member of the Technical Staff in Applied AI!
Leverage cutting-edge AI models to tackle intricate real-world challenges.
Engage in close collaboration with researchers and fellow team members to explore the latest developments in AI and ML.
Partner with our customers to seamlessly integrate our innovative models into their existing technology frameworks.
Drive business success with a strong sense of product ownership and accountability.
Be part of a pioneering team in a rapidly growing environment, taking on diverse roles.
At Tzafon, we are pioneering the development of scalable computing systems and pushing the boundaries of machine intelligence with our foundation model lab. With offices in San Francisco, Zurich, and Tel Aviv, we have secured over $12 million in funding to fuel our mission of expanding the horizons of AI technology.

Our team comprises engineers and scientists with extensive expertise in machine learning infrastructure and research. Founded by IOI and IMO medalists, PhDs, and seasoned professionals from top tech firms, we specialize in training advanced models and constructing robust infrastructure to automate tasks across real-world scenarios.

In this role, you will collaborate closely with our product and post-training teams to deploy Large Action Models that drive impactful results. Your responsibilities will include building evaluation frameworks, establishing benchmarks, and creating fine-tuning pipelines to ensure optimal model performance.
Technical Staff Member in Applied AI

About the Opportunity
We are seeking a highly skilled Technical Staff Member specializing in generative modeling to bridge the gap between our advanced models and the clients who rely on them. You will collaborate with a diverse team of machine learning experts, protein engineers, and biologists to revolutionize biological control and disease treatment. Your role will involve gaining a comprehensive understanding of our proprietary generative models and leveraging that expertise to deploy, adapt, and optimize these models in client environments, particularly within the pharmaceutical and biotech industries. This hybrid position requires a research-oriented mindset to deeply understand our models, paired with the communication skills necessary to translate that knowledge into production systems that yield scientific value for our collaborators.

About Us
At Latent Labs, we are pioneering frontier models that decode the fundamentals of biology. Our ambitious goals are driven by curiosity and a commitment to scientific excellence. Before founding Latent Labs, our team co-developed DeepMind's Nobel Prize-winning AlphaFold, innovated latent diffusion, and created groundbreaking lab data management systems along with high-throughput protein screening platforms. Here, you will work alongside some of the brightest minds in generative AI and biology. We value interdisciplinary collaboration, continuous learning, and teamwork. Our team offsites foster a culture of trust and connection between our London and San Francisco offices. We are looking for innovators who are passionate about solving complex problems and making a positive global impact.
Join us on our ambitious mission.

Your Qualifications
Expertise in Machine Learning: You are a proficient ML researcher with a strong background in generative modeling, evidenced by your contributions to notable open-source projects, impactful product launches, or significant publications in leading venues such as NeurIPS, ICML, ICLR, or Nature. You possess a deep understanding of generative model architectures, training dynamics, and inference behavior.
Proficient ML Developer: You produce robust, tested, and maintainable ML code. Your experience includes using version control and code review systems. You are adept at rapid prototyping while also being able to write elegant production code. Additionally, you have experience building systems that deploy large models via APIs and run inference in production environments.
Our Mission
At Reflection AI, our goal is to create open superintelligence and ensure its accessibility for everyone. We are pioneering open-weight models for various users, including individuals, enterprises, and even nation-states. Our talented team comprises AI researchers and industry veterans from leading organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, and Anthropic.

Role Overview
Develop systems that convert robust pre-trained models into aligned and versatile agents.
Lead research and engineering efforts to advance post-training practices, focusing on data curation and large-scale optimization.
Create data generation frameworks, reward models, reinforcement learning algorithms, and techniques for inference-time scaling.
Collaborate with both pre-training and post-training teams to achieve significant enhancements in model capabilities.
Help refine our understanding of how large models learn to reason, follow instructions, and evolve through reinforcement learning.

Your Profile
Solid grasp of machine learning principles with hands-on experience in large-scale LLM training.
Proficient engineering skills, with the ability to navigate intricate ML codebases and distributed systems.
Experience in enhancing model performance through data, reward modeling, or reinforcement learning techniques.
Track record of leading ambitious research or engineering projects resulting in measurable improvements.
Thrives in a dynamic, high-agency startup atmosphere; oriented towards action and clarity in execution.
Ability to work seamlessly across research and infrastructure boundaries.
Excellent communication skills and a collaborative mindset.
Driven by a passion for pushing the boundaries of intelligence.

What We Provide
At Reflection AI, we believe that to truly build open superintelligence, it must be rooted in a strong foundation. By joining us, you will contribute to building from the ground up within a compact, highly skilled team.
Together, we will shape the future of our company and the landscape of open foundational models.We aim for you to accomplish the most impactful work of your career, with the assurance that you and your loved ones are well-supported.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.

The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead. This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
Join Our Team at XDOF
At XDOF, we are at the forefront of building the future of robotics by focusing on the critical element that drives innovation: data. Our mission is to create robust data collection systems and annotation pipelines that serve as the backbone of advanced foundation models in robotics.

We are seeking a passionate Research Engineer / Scientist to spearhead technical initiatives at the intersection of vision-language models and robotic learning. Your role will involve transforming egocentric and teleoperation video into high-quality training data for VLA models, as well as contributing to the development of these models directly.

In addition to pipeline construction, you will pursue research aimed at enhancing the utility of robot data. This includes uncovering new metadata, such as contact events, affordance labels, and dynamics priors, from video to unlock capabilities that current methodologies overlook. You will investigate the impact of structured annotations on cross-embodiment transfer, automatic curriculum generation, and the development of world models that predict the factors essential for manipulation.
In our view, the data layer is not merely a supporting element; it is integral to the research itself.

Your Responsibilities
Design and implement vision-language pipelines for processing egocentric and teleoperation video, focusing on structured captioning, temporal grounding, action-conditioned scene understanding, and large-scale semantic annotation.
Develop and assess representations that connect visual perception, language, and robotic actions, encompassing VLAs, video prediction, and world models.
Create and enhance data curation systems that evaluate the quality, diversity, and comprehensiveness of extensive robot demonstration datasets.
Engage with bimanual and high-DoF manipulation data, including real teleoperation footage and simulation-generated rollouts.
Collaborate closely with partner labs to identify data requirements and establish a feedback loop between data quality and downstream policy performance.
Stay updated on cutting-edge research in VLAs, video foundation models, flow matching, DiT architectures, and egocentric pretraining, translating insights into practical applications.
Join Our Team
At Liquid AI, we are not just creating AI models; we are revolutionizing the very fabric of intelligence. Originating from MIT, our objective is to develop efficient AI systems across all scales. Our Liquid Foundation Models (LFMs) excel in environments where others falter: on-device, at the edge, and under real-time constraints. We are not simply refining existing concepts; we are pioneering the future of AI.

We recognize that exceptional talent drives remarkable technology. The Liquid team is a collective of elite engineers, researchers, and innovators dedicated to crafting the next generation of AI solutions. Whether you are designing model architectures, enhancing our development platforms, or facilitating enterprise integrations, your contributions will significantly influence the evolution of intelligent systems.

While San Francisco and Boston are preferred locations, we welcome applicants from other regions within the United States.
Our Mission
At Reflection AI, our goal is to develop open superintelligence and make it universally accessible. We are pioneering open-weight models tailored for individuals, agents, enterprises, and even entire nations. Our diverse team comprises talented AI researchers and industry veterans from prestigious organizations such as DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and many more.

Role Overview
Construct and enhance distributed training systems that drive the pre-training of cutting-edge models.
Collaborate with research teams to design and execute extensive training runs for foundational models.
Create infrastructure that facilitates efficient training across thousands of GPUs, leveraging contemporary distributed training frameworks.
Enhance training throughput, stability, and efficiency for extensive model training tasks.
Work closely with pre-training researchers to convert experimental concepts into scalable, production-ready training systems.
Boost performance of distributed training jobs through optimization of communication, memory management, and GPU utilization.
Develop and maintain training pipelines that accommodate large-scale datasets, checkpointing, and iterative experiments.
Identify and resolve performance bottlenecks within distributed training systems, including model parallelism, GPU communication, and training runtime environments.
Contribute to the creation of systems that promote swift experimentation and iteration on novel training methods.
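The data-parallel setup implied by "training across thousands of GPUs" reduces, at each step, to averaging per-worker gradients before a synchronous update. A dependency-free sketch of that core idea (purely illustrative; real systems perform this with collective operations such as all-reduce over NCCL, not Python lists):

```python
# Illustrative sketch of the gradient-averaging step at the heart of
# synchronous data-parallel training. Hypothetical and simplified; real
# frameworks use collective ops (e.g. all-reduce) across GPUs.

def all_reduce_mean(per_worker_grads):
    """Average gradient vectors element-wise across workers.

    per_worker_grads: list of gradient vectors, one per worker.
    Returns the mean gradient every worker would then apply.
    """
    n_workers = len(per_worker_grads)
    dim = len(per_worker_grads[0])
    return [sum(g[i] for g in per_worker_grads) / n_workers for i in range(dim)]

def sgd_step(params, grad, lr=0.1):
    """One synchronous SGD update using the averaged gradient."""
    return [p - lr * g for p, g in zip(params, grad)]

if __name__ == "__main__":
    grads = [[1.0, 2.0], [3.0, 4.0]]   # two workers' local gradients
    mean = all_reduce_mean(grads)       # element-wise mean: [2.0, 3.0]
    print(sgd_step([0.5, 0.5], mean))
```

Because every worker applies the same averaged gradient, all replicas stay in sync; optimizing how that average is computed and communicated is much of what "distributed training performance" work involves.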
Join our innovative team at liquid-ai as a Member of the Technical Staff specializing in audio applications. In this post-training role, you will apply your knowledge of cutting-edge audio technologies, contributing to the development of advanced machine learning solutions. This position is ideal for individuals who are eager to work in a collaborative environment and are passionate about audio technology and its applications in artificial intelligence.
At Catalog, we are pioneering the commerce infrastructure for AI: creating the essential framework that enables digital agents to not only explore the web but also comprehend, analyze, and engage with products. Our innovations drive the future of AI-driven shopping experiences, fundamentally transforming how consumers discover and purchase items online.

Role Overview
As a Technical Staff Member, you will be instrumental in developing core systems, shaping our engineering culture, and transitioning our vision from prototype to a robust platform. This role requires full-stack expertise and a commitment to owning and resolving challenges from start to finish.

Who You Are
You have experience creating beloved and trusted products from the ground up.
You combine technical proficiency with a keen product sense and data-driven intuition.
You are well-versed in AI technologies.
You prioritize speed, write clean code, and ensure thorough instrumentation.
You seek a high level of ownership within a small, talent-rich team based in San Francisco.

Challenges You Will Tackle
Develop and deploy agentic-search APIs that deliver structured, real-time product data in milliseconds.
Build checkout systems enabling agents to conduct transactions with any merchant.
Create an embeddings and retrieval layer that optimizes recall, precision, and cost efficiency.
Establish a product graph and ranking pipeline that adapts based on actual user outcomes.

Preferred Qualifications
Proven experience shipping data-centric products in a live environment.
Experience with recommendation systems or information retrieval methodologies.
Familiarity with API development, search indexing, and data pipeline construction.

Our Work Culture
We operate with a small, high-trust, and highly motivated team, fostering an environment of in-person collaboration in North Beach, San Francisco.
Our process involves debate, decision-making, and execution. If your profile aligns with our needs, we will contact you to arrange 2-3 brief technical interviews, followed by an onsite meeting in our office where you will collaborate on a small project, exchange ideas, and meet the team.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.

The Opportunity
The Training Infrastructure team builds the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to design, implement, and enhance the infrastructure crucial for large-scale training. This role centers on high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up instead of working with pre-existing infrastructure. While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We are seeking an individual who:
Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance.
Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
Excels in uncertain environments: Our systems are designed to support evolving model architectures.
You will make decisions based on incomplete information and iterate rapidly.
Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.

The Work
Design and develop core systems that ensure quick and reliable large training runs.
Create scalable distributed training infrastructure for GPU clusters.
Implement and refine parallelism and sharding strategies for evolving architectures.
Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
Develop data loading systems that eliminate I/O bottlenecks for multimodal datasets.
Join our dynamic team at Adyen as a Technical Staff Member in San Francisco! We are seeking innovative minds passionate about technology and problem-solving. In this role, you will collaborate with cross-functional teams to craft solutions that enhance our services and improve customer experiences.
About TierZero
TierZero helps engineering teams use AI to build and ship code more efficiently. The platform targets the bottleneck of human speed in production, giving teams tools for faster incident response, better operational visibility, and shared knowledge. TierZero is backed by $7M in funding from investors including Accel and SV Angel. Companies like Discord, Drata, and Framer trust TierZero to strengthen their infrastructure for AI-driven engineering.

Role Overview: Founding Member of Technical Staff
This is an on-site role based at TierZero's San Francisco headquarters, with three days a week in the office. As a founding member, you will collaborate directly with the CEO, CTO, and early customers to shape the direction of both product and systems. The work spans hands-on development and close engagement with users and leadership.

What You Will Do
Design and build intelligent AI systems to analyze large volumes of unstructured data.
Deliver full-stack features based on real user feedback.
Improve the product experience so AI agents are both reliable and easy for engineers to use.
Develop systems that automatically evaluate LLM outputs and advance agentic reasoning using self-play and feedback loops.
Create machine learning pipelines, including data ingestion, feature generation, embedding stores, retrieval-augmented generation (RAG), vector search, and graph databases.
Prototype with open-source and new LLMs, comparing their strengths and weaknesses.
Build scalable infrastructure for long-running, multi-step agents, with attention to memory, state, and asynchronous workflows.

What We Look For
Over five years of relevant professional or open-source experience.
Comfort working in environments with uncertainty and evolving challenges.
Strong product focus and a drive for customer satisfaction.
Interest in large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools.
Previous startup experience is a plus.
Location This position is based in San Francisco. Expect to work on-site three days per week at TierZero’s HQ.
TierZero builds tools that help engineering teams deliver and manage code efficiently. The platform enables quicker incident response, clearer operational visibility, and shared knowledge among engineers. Backed by $7 million from investors like Accel and SV Angel, TierZero supports clients such as Discord, Drata, and Framer as they strengthen infrastructure for AI-driven work.

This role is based at TierZero's San Francisco headquarters on a hybrid schedule, with three days onsite each week. As a founding member of the technical staff, you will work directly with the CEO, CTO, and customers to influence the direction of TierZero's core products and systems. The position calls for flexibility as priorities shift and close collaboration across the company.

What you will do
Design and develop AI systems that handle large volumes of unstructured data.
Build full-stack product features, informed by direct feedback from users.
Enhance the product so agents are intelligent, reliable, and easy for engineers to use.
Create systems to automatically evaluate outputs from large language models and improve agentic reasoning through self-play and feedback.
Construct machine learning pipelines, including data ingestion, feature creation, embedding stores, retrieval-augmented generation (RAG) pipelines, vector search, and graph databases.
Experiment with open-source and emerging large language models to compare different approaches.
Develop scalable infrastructure for long-running, multi-step agents, including memory, state management, and asynchronous workflows.

Requirements
Interest in working with large language models, managed cloud platforms, cloud infrastructure, and observability tools.
At least 5 years of professional experience or significant open-source contributions.
Comfort with shifting priorities and tackling new technical problems.
Strong product focus and commitment to customer outcomes.
Openness to learning from a team with a track record of delivering over $10 billion in value.
Ability to work onsite in San Francisco three days per week.
Bonus: prior startup experience.
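The RAG pipeline components these TierZero postings list (embedding stores, retrieval, vector search) can be illustrated with a minimal in-memory retriever. Everything below is a hypothetical sketch: production systems use a learned embedding model and a vector database, not bag-of-words counts:

```python
# Minimal retrieval sketch for a RAG pipeline: "embed" documents, then
# return the most similar ones for a query. Purely illustrative; real
# pipelines use learned embeddings and a dedicated vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    docs = [
        "incident response runbook for database outages",
        "quarterly marketing plan",
        "postmortem of the last database incident",
    ]
    # The two database-related documents outrank the unrelated one.
    print(retrieve("database incident", docs, k=2))
```

In a full RAG loop, the retrieved passages would then be prepended to the LLM prompt; the retrieval step shown here is what the embedding store and vector search components serve.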
tierzero is looking for a Founding Member of Technical Staff to help shape the direction of its technology from the ground up. This role is based at the company's San Francisco headquarters.

Role overview
As an early technical hire, you will work closely with engineers and product managers to build new products and features. The work centers on designing, coding, and delivering software solutions that address client needs and support tierzero's growth.

Impact
Contributions in this role will directly influence the company's future. The team values initiative and hands-on problem solving, giving each member a chance to make a visible difference in how the company evolves.

Collaboration
This position involves regular collaboration with a small, focused team. Input and ideas from every member help guide product direction and technical decisions.
Join Baseten as a Post-Training Applied Researcher, where you will be at the forefront of innovative research applications. Your expertise will help bridge the gap between training and real-world applications, making a tangible impact in the industry.
Apr 27, 2026