Member Of Technical Staff Multi Modal Vision At Liquid Ai San Francisco jobs in San Francisco – Browse 11,536 openings on RoboApply Jobs
11,536 jobs found
Member of Technical Staff - Multi-Modal Vision at Liquid AI | San Francisco
Qualifications
Minimum Qualifications
- Proven hands-on experience in training or evaluating Vision-Language Models (VLMs), with a strong emphasis on experimental rigor.
- Capability to convert research concepts into scalable implementations, with an iterative approach to hypothesis refinement.
- Strong proficiency in Python and at least one deep learning framework.
- M.S. or Ph.D. in Computer Science, Mathematics, or a related domain; or equivalent practical experience.

Preferred Experience
- Experience in building or optimizing multimodal training or data pipelines.
- Familiarity with distributed training frameworks (e.g., DeepSpeed, FSDP, Megatron-LM).
- Experience with multimodal post-training techniques (e.g., SFT, preference optimization, reinforcement learning methods).
- Expertise in dataset design and data quality assessment (including quality and diversity evaluation, long-tail mining).
- Prior contributions to open-source projects (code, data, models) on platforms like GitHub or Hugging Face.
- Published research in leading AI conferences (e.g., NeurIPS, ICML, CVPR, ECCV, ICLR, ACL).
- Experience in computer vision or visual representation learning.
About the job
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.
The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead.
This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across various deployment environments, ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission.

The Opportunity
This unique position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects from start to finish for some of the world's leading enterprises, while also playing a vital role in the ongoing development of Liquid's core models. In this role, you will not have to choose between impactful customer work and foundational development; instead, you will enjoy deep involvement in both. You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid's post-training capabilities. If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company.

What We're Looking For
We seek an individual who:
- Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation.
- Thinks end-to-end: You will connect the dots across data generation, training, alignment, and evaluation as a cohesive system.
- Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications.
- Communicates clearly: You can interpret customer needs and effectively communicate with internal technical teams, providing constructive feedback when necessary.

The Work
- Serve as the technical lead for post-training engagements with enterprise clients.
- Translate client requirements into actionable post-training specifications and workflows.
- Design and implement data generation, filtering, and quality assessment methodologies.
- Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes.
- Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
Join the Innovative Team at Liquid AI
Founded as a spin-off from MIT's CSAIL, Liquid AI is at the forefront of developing cutting-edge AI systems that operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our technology is designed to ensure low latency, efficient memory usage, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services as we rapidly scale our operations. We are seeking talented individuals who are passionate about technology and innovation.

Your Role in Our Team
As a GPU Performance Engineer, your expertise will be critical in enhancing our models and workflows beyond the capabilities of standard frameworks. You will be responsible for designing and deploying custom CUDA kernels, conducting hardware-level profiling, and transforming research concepts into production code that yields tangible improvements in our training, post-training, and inference pipelines. Our dynamic team values initiative and ownership, and we are looking for a candidate who thrives on tackling complex challenges related to memory hierarchies, tensor cores, and profiling outputs.

While San Francisco and Boston are preferred, we welcome applications from other locations.
About Us
Modal is at the forefront of AI infrastructure. We provide seamless access to GPUs, quick container startups, and integrated storage solutions, simplifying the process of model training, batch job execution, and low-latency inference. Leading companies such as Suno, Lovable, and Substack trust Modal to transition from prototypes to full-scale production without the complexities of infrastructure management.

Our rapidly expanding team operates out of NYC, San Francisco, and Stockholm. With a 9-figure annual recurring revenue (ARR), we recently achieved a valuation of $1.1 billion after our Series B funding round. Thousands of customers, including Lovable, Scale AI, Substack, and Suno, depend on us for their AI workload needs.

Joining Modal means becoming part of a dynamic, rapidly growing AI infrastructure company with substantial opportunities for personal and professional advancement. Our team comprises creators of well-known open-source projects (e.g., Seaborn, Luigi), academic researchers, international competition medalists, and seasoned engineering and product leaders.

The Role
We are seeking a strategic Solutions Architect to influence technical strategy across our key enterprise accounts. In this position, you will serve as the technical partner to Enterprise Account Executives, managing intricate evaluations, developing infrastructure modernization strategies, and promoting multi-product adoption across AI and ML applications. This is a strategic, consultative role that demands a robust architectural background, a strong executive presence, and the ability to influence major infrastructure decisions worth millions. You will collaborate directly with CTOs, VPs of Engineering, and leaders of ML platforms to redefine how AI infrastructure is built and operated. If you excel in fast-paced technical sales contexts and are eager to shape the infrastructure that drives modern AI innovations, we would love to hear from you.
About Liquid AI
Founded as a spinoff of MIT CSAIL, Liquid AI specializes in developing versatile AI systems designed for optimized performance across various deployment platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory consumption, privacy, and reliability sets us apart. We collaborate with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek remarkable talent to join our journey.

The Opportunity
As we establish our solutions architecture function from the ground up, you will play a pivotal role as one of our inaugural Solutions Architects. Collaborating closely with the Head of Solutions Architecture and the go-to-market organization, you will manage customer engagements from inception to completion. Our models are specifically engineered for environments constrained by memory, latency, and power, encompassing edge devices, mobile applications, embedded systems, and on-premises infrastructure where traditional models cannot operate. You will engage with this boundary daily. Our clientele ranges from AI-native startups to established enterprises venturing into AI for the first time. Your mission is to bridge the gap between our models' capabilities and customers' expectations, delivering on that promise from technical validation through to go-live.
Join Liquid AI as a Member of Technical Staff specializing in Applied Vision. In this dynamic role, you will leverage cutting-edge technology to develop innovative solutions and enhance our product offerings. This position is ideal for recent graduates with a passion for technology and a desire to make a meaningful impact in the field of artificial intelligence.
Join Our Team
At Liquid AI, we are not just creating AI models; we are revolutionizing the very fabric of intelligence. Originating from MIT, our objective is to develop efficient AI systems across all scales. Our Liquid Foundation Models (LFMs) excel in environments where others falter: on-device, at the edge, and under real-time constraints. We are not simply refining existing concepts; we are pioneering the future of AI.

We recognize that exceptional talent drives remarkable technology. The Liquid team is a collective of elite engineers, researchers, and innovators dedicated to crafting the next generation of AI solutions. Whether you are designing model architectures, enhancing our development platforms, or facilitating enterprise integrations, your contributions will significantly influence the evolution of intelligent systems.

While San Francisco and Boston are preferred locations, we welcome applicants from other regions within the United States.
About Liquid AI
Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey.

The Opportunity
Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you.

What We're Looking For
We are seeking an individual who:
- Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default.
- Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges.
- Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. You view constraints as opportunities for innovative design.
- Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results.

The Work
- Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale.
- Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks.
- Fine-tune and adapt audio models to customer-specific use cases, taking charge from requirement gathering through to deployment.
- Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams.
- Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.
About Liquid AI
Born from the innovative environment of MIT CSAIL, Liquid AI develops cutting-edge general-purpose AI systems that operate seamlessly across various deployment environments, from data center accelerators to on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome exceptional talent to our dynamic team.

The Opportunity
Join us in establishing the product function that will transform Liquid AI's technological advancements into scalable, repeatable solutions for enterprise clients. As a key member of our Product team, you will work closely with technical leaders to define, package, and launch AI solutions that meet market needs. This role requires daily collaboration with ML engineers, GTM leaders, and enterprise customers to understand the value of our technology and effectively deliver it. The position offers significant ownership, allowing you to treat your solution area like your own startup within our organization.

What We're Looking For
We are seeking an individual who embodies the following qualities:
- Customer Obsession: Prioritizes understanding customer needs through direct feedback rather than assumptions.
- Self-Direction: Takes initiative to dive deep into problems without prompting, and is comfortable navigating uncertainty to propose effective solutions.
- Technical Fluency: Engages confidently with ML engineers and researchers, understanding the complexities of deploying AI systems in real-world applications.
- Founder Mentality: Treats their solution area as a startup, owning outcomes across various functions, from technical architecture to go-to-market strategies.

The Work
- Oversee one or more go-to-market ready solutions from inception to scalable customer deployment.
- Analyze customer interactions to extract insights and identify productization opportunities.
- Collaborate with ML and inference teams to develop tools that streamline implementation.
- Define Ideal Customer Profiles (ICPs), pricing strategies, and packaging for scalable solutions.
- Partner with GTM teams to enhance outbound sales efforts around productized offerings.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.

The Opportunity
The Training Infrastructure team is at the forefront of building the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to innovate, implement, and enhance the infrastructure crucial for large-scale training. This role is centered on high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up instead of working with pre-existing infrastructure.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We are seeking an individual who:
- Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance.
- Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
- Excels in uncertain environments: Our systems are designed to support evolving model architectures. You will be making decisions based on incomplete information and rapidly iterating.
- Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.

The Work
- Design and develop core systems that ensure quick and reliable large training runs.
- Create scalable distributed training infrastructure for GPU clusters.
- Implement and refine parallelism and sharding strategies for evolving architectures.
- Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
- Develop data loading systems to eliminate I/O bottlenecks for multimodal datasets.
Liquid AI, a spin-off from MIT's CSAIL, develops AI systems designed to run efficiently on standard CPUs. The team emphasizes low latency, minimal memory consumption, and strong reliability. Liquid AI works with major players in consumer electronics, automotive, life sciences, and financial services, and is expanding its team as the company grows.

Role overview
The Executive Assistant supports the C-suite and executive leadership at Liquid AI's San Francisco office. This position serves as a central point for keeping leaders aligned and informed across Go-To-Market, Product, and Engineering. The Executive Assistant ensures smooth information flow between meetings, decisions, and teams, and also coordinates events and partner visits at the office.

What makes a strong candidate
- Information hub: Tracks project status, decisions, and commitments across different functions. Responds to inquiries directly or knows how to find accurate answers.
- Proactive planning: Reviews upcoming meetings, identifies preparation gaps, flags conflicts, and ensures materials are ready ahead of time.
- Clear communication: Liaises with external partners, delivers concise updates to technical leaders, and manages follow-ups across teams.
- Adaptability: Navigates shifting priorities, changing schedules, and new workstreams with resilience and flexibility.

Key responsibilities
- Serves as the operational backbone for senior leaders by tracking action items, maintaining continuity between meetings, and keeping details organized.
- Manages complex and dynamic calendars, evaluates meeting requests, aligns schedules with strategic priorities, and protects time for high-impact work.
- Maintains visibility into cross-functional workstreams, providing leadership with relevant context without requiring them to seek updates.
- Oversees logistics and preparation for partner and customer visits, ensuring a welcoming and effective experience at the San Francisco office.
Liquid AI, a company spun out of MIT CSAIL, develops general-purpose AI systems designed for efficiency, privacy, and reliability across a wide range of platforms. The team partners with enterprises in fields such as consumer electronics, automotive, life sciences, and financial services. As Liquid AI expands, the company is seeking new team members to help shape the direction of AI technology.

Role overview
This Product Marketing Manager position is the first of its kind at Liquid AI and reports directly to the VP of Marketing in San Francisco. The role bridges product development, communications, and go-to-market planning. The Product Marketing Manager will play a key part in ensuring Liquid AI's innovations reach the right audiences and will help establish the foundation for future marketing initiatives. Success in this position requires a strategic mindset, the ability to understand complex technical products, and an understanding of both enterprise clients and technical users. Adaptability and a willingness to build new processes from the ground up are important for this role.

What matters at Liquid AI
- Builder: Uses modern AI tools (including Claude Code) for content creation, campaign management, and prototyping. Able to create demos or proofs-of-concept for launches and sales, and guide others in these areas.
- Translator: Communicates complex technical details in clear, credible ways for the market. Knows when to consult engineering for more information before a launch and balances immediate needs with long-term planning.
- Operator: Establishes repeatable systems such as kickoffs, briefs, and retrospectives, rather than focusing only on one-off projects.
- Cross-functional collaborator: Works effectively with product, go-to-market, communications, and sales teams to drive campaigns across multiple channels.
- Networker: Brings recommendations for tools, contractors, and agencies to help scale marketing and communications. Comfortable working with AI tools and freelancers.

Main responsibilities
- Lead competitive analysis and market research to refine Liquid AI's positioning and identify new growth opportunities.
- Collaborate with cross-functional teams to develop and execute product marketing strategies.
Role overview
This Product Engineer position at Liquid AI centers on shaping the company's internal data and agent platform. The work involves designing, building, and launching solutions that reinforce the product lineup. Collaboration is key, with regular interaction across multiple teams.

What you will do
- Partner with colleagues from various disciplines to define and deliver technical solutions
- Develop and maintain systems that support internal data and agent platform requirements
- Facilitate smooth integration between platforms and focus on optimizing system performance

Location
This role is based in San Francisco.
About Liquid Labs
At Liquid AI, research has always been at the forefront of our mission. Liquid Labs serves as a dedicated internal research accelerator, facilitating groundbreaking advancements in the development of intelligent, personalized, and adaptive machines. Our roots extend back to MIT CSAIL, where pioneering work on Liquid Neural Networks established a new category of efficient sequence-processing architectures. This research laid the groundwork for our Liquid Foundation Models (LFMs), which are scalable, multimodal models designed for real-world applications in resource-constrained settings. In Liquid Labs, we continue this legacy by advancing the realm of efficient, adaptive intelligence through both fundamental research and practical engineering efforts. We collaborate closely with Liquid's core foundation model and systems teams to turn theoretical concepts into deployable capabilities, setting the stage for a new era of powerful and efficient intelligent systems.

About the Role
As a Research Engineer at Liquid Labs, you will be part of a dynamic, high-impact team pushing the boundaries of adaptive intelligence. You will be responsible for designing and implementing innovative architectures, training methodologies, and inference strategies to expand the potential of efficient AI. Your work will blend research and engineering, as you translate scientific concepts into functional systems, publish findings that advance the field, and deploy solutions that redefine what is achievable.

While we prefer candidates from San Francisco and Boston, we welcome applications from other locations within the United States.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in creating versatile AI systems designed for optimal performance across various deployment platforms, including data center accelerators and on-device hardware. Our technology emphasizes low latency, minimal memory consumption, privacy, and dependability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are on the lookout for exceptional talent to join our team.

The Opportunity
The Data team at Liquid AI drives the development of our Liquid Foundation Models, focusing on pre-training, vision, audio, and emerging modalities. With the stagnation of public data sources, the effectiveness of our models increasingly relies on specially curated datasets. We are seeking engineers with a machine learning mindset who can efficiently gather, filter, and synthesize high-quality data at scale. At Liquid AI, we regard data as a research challenge rather than an infrastructural issue. Our engineers conduct experiments, design ablations, and assess how data-related decisions impact model quality. We will align you with a team where you can experience rapid growth and make a significant impact, be it in pre-training, post-training reinforcement learning, vision-language, audio, or multimodal applications.

While we prefer candidates in San Francisco and Boston, we are open to considering other locations.

What We're Looking For
We are in search of a candidate who:
- Thinks like a researcher and executes like an engineer: You should be able to formulate hypotheses, conduct experiments, and evaluate results. Our engineers produce research-level code while our researchers implement production systems.
- Learns quickly and adapts: You will be working in rapidly evolving modalities, so the ability to quickly grasp new domains and thrive in ambiguity is essential.
- Prioritizes data quality: We hold data quality in high regard; tasks such as filtering, deduplication, augmentation, and evaluation are key responsibilities, not afterthoughts.
- Solves problems autonomously: Data engineers operate within training groups (pre-training and multimodal). While collaboration is crucial, we expect ownership and self-direction.

The Work
- Develop and maintain data processing, filtering, and selection pipelines at scale.
- Establish pipelines for pretraining, midtraining, supervised fine-tuning, and preference optimization datasets.
- Design synthetic data generation systems utilizing large language models (LLMs), structured prompting, and domain-specific generative techniques.
About tierzero
tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters.

Role Overview: Founding Member of Technical Staff
This is a hands-on role shaping tierzero's core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule: three days each week in the office.

What You'll Do
- Design and build intelligent AI systems that process large volumes of unstructured data
- Deliver full-stack features informed by real-time user feedback
- Improve usability so AI agents are both effective and trustworthy for engineers
- Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play
- Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases
- Prototype with open-source LLMs to understand their strengths and weaknesses
- Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows

Who We're Looking For
- 5+ years of professional experience or significant open-source contributions
- Interest in LLMs, MCPs, cloud infrastructure, and observability tools
- Comfort working in changing, ambiguous situations
- Product-focused and customer-first mindset
- Experience learning from and collaborating with engineers from diverse backgrounds
- Bonus: previous experience in a startup setting

Work Location
Hybrid schedule: three days per week in person at the San Francisco HQ.
About tierzero
tierzero helps engineering teams build and deploy code with greater speed and operational clarity in an AI-driven world. The company focuses on improving incident response, operational visibility, and knowledge sharing for engineers. Backed by $7 million in funding from investors like Accel and SV Angel, tierzero supports large-scale systems for clients such as Discord, Drata, and Framer.

Role Overview: Founding Member of Technical Staff
This role is based at tierzero's San Francisco headquarters, with in-person work required three days a week. As a founding member of the technical team, you will help design and build core products and systems from the ground up. Collaboration is central: expect to work closely with the CEO, CTO, and customers. Projects span a wide range of technical challenges and product areas.

What You Will Do
- Design and implement intelligent AI systems that process and reason over large volumes of unstructured data.
- Develop full-stack features, incorporating direct feedback from users.
- Improve the product experience so intelligent agents are practical and reliable for engineers.
- Create systems that automatically evaluate LLM outputs and refine agent reasoning using self-play and feedback loops.
- Build machine learning pipelines covering data ingestion, feature generation, embedding stores, RAG pipelines, vector search, and graph databases.
- Prototype and experiment with open-source and advanced LLMs to weigh different approaches.
- Set up scalable infrastructure for long-running, multi-step agents, including memory management, state handling, and asynchronous workflows.

What We Look For
- At least 5 years of professional or open-source experience in a relevant technical field.
- Comfort working in a setting that changes and evolves quickly.
- Strong product focus and an understanding of customer needs.
- Interest in LLMs, MCPs, cloud infrastructure, and observability tools.
- Ability to learn from and collaborate with engineers who have delivered over $10 billion in value.
- Commitment to working onsite in San Francisco three days per week.
- Startup experience is a plus.
About Liquid AI Originating from the prestigious MIT CSAIL, Liquid AI crafts cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey. The Opportunity Join us at the exciting crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. This unique position allows you to balance impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and utilized by developers worldwide. 
If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI. What We're Looking For We seek a proactive individual who: Takes ownership: Manages open-source partnerships from initial outreach to ongoing collaboration. Thinks community-first: Integrates documentation, tutorials, integrations, and support into a seamless developer experience. Is pragmatic: Focuses on developer adoption and partner success rather than superficial metrics. Communicates clearly: Bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships. The Work Serve as the primary liaison for open-source partners. Assist in model releases with both marketing and technical content. Create tutorials, articles, and guides on training and utilizing our foundation models. Enhance and maintain LFM documentation for clarity and thoroughness. Collect community feedback and communicate insights to internal teams.
tierzero seeks a Founding Member of Technical Staff to join the team in San Francisco. This in-person position requires working from the SF headquarters at least three days per week. Role overview This role centers on close collaboration with a group of engineers who have collectively delivered over $10 billion in value during their careers. Expect to work side by side with teammates, sharing ideas and building strong connections in the office. The environment often shifts, so adaptability and comfort with changing priorities are important. Key responsibilities Work directly with experienced engineers to design and build new products Prioritize customer needs and satisfaction in product decisions Develop solutions using large language models (LLMs), the Model Context Protocol (MCP), cloud infrastructure, and observability tools Requirements Minimum 5 years of professional engineering experience or a strong record of open-source contributions Experience in startups and familiarity with their unique challenges is a plus Location This position is based in San Francisco. In-office presence is required three days each week for collaboration.
Overview: Join Listen Labs as we embark on an exciting journey to revolutionize decision-making for companies through cutting-edge AI technology. With a robust product roadmap planned for the next six months, we are expanding our engineering team. We are in search of a highly technical individual who thrives on solving complex problems and is eager to contribute to our mission. If you are passionate about innovation and want to be part of a team that includes several IOI medalists, we want to hear from you! About Listen Labs: Listen Labs is at the forefront of AI-powered research, enabling teams to extract valuable insights from customer interviews in a matter of hours rather than months. Our platform assists users in analyzing conversations, identifying key themes, and making informed product decisions swiftly. Why Join Us? Exceptional Team: Our founding team consists of seasoned entrepreneurs with a proven track record in AI, alongside top talents from renowned organizations such as Jane Street, Twitter, Stripe, and Goldman Sachs. Rapid Growth: Backed by Sequoia Capital, we have grown from zero to a $14M run-rate in under a year, with a dedicated team of 40. Impressive Clientele: We are witnessing significant traction across various sectors, securing enterprise clients like Google, Microsoft, and Nestlé. Product Excellence: Our differentiated product offers an industry-leading win rate, which is a testament to our commitment to quality. Market Success: Our customer base is expanding rapidly, with numerous six-figure contracts leading to further growth. Viral Impact: Our product's interviews reach tens of thousands of viewers, driving organic growth and interest from Fortune 500 companies.
About Liquid AI Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey. The Opportunity The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead. This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
About Liquid AI Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across various deployment environments, ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission. The Opportunity This unique position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects from start to finish for some of the world’s leading enterprises, while also playing a vital role in the ongoing development of Liquid’s core models. In this role, you will not have to choose between impactful customer work and foundational development; instead, you will enjoy deep involvement in both. 
You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid’s post-training capabilities. If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company. What We're Looking For We seek an individual who: Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation. Thinks end-to-end: You will connect the dots across data generation, training, alignment, and evaluation as a cohesive system. Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications. Communicates clearly: You can interpret customer needs and effectively communicate with internal technical teams, providing constructive feedback when necessary. The Work Serve as the technical lead for post-training engagements with enterprise clients. Translate client requirements into actionable post-training specifications and workflows. Design and implement data generation, filtering, and quality assessment methodologies. Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes. Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
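To make the quality-assessment and task-specific-evaluation work above concrete, here is a hedged Python sketch, not a Liquid AI implementation: an exact-match evaluation plus a margin-based filter over preference pairs of the kind used before preference alignment. Field names like `chosen_score` are illustrative assumptions, not an actual schema.

```python
def normalize(text):
    """Cheap canonical form (lowercase, collapsed whitespace) for comparison."""
    return " ".join(text.lower().split())

def exact_match_accuracy(predictions, references):
    """Task-specific evaluation: fraction of predictions matching the reference."""
    pairs = list(zip(predictions, references))
    return sum(normalize(p) == normalize(r) for p, r in pairs) / len(pairs)

def filter_preference_pairs(pairs, min_margin=0.5):
    """Quality gate: keep only pairs where the chosen response beats the
    rejected one by a clear score margin (field names are hypothetical)."""
    return [p for p in pairs
            if p["chosen_score"] - p["rejected_score"] >= min_margin]
```

Insights from evaluations like the first function are what feed back into the core post-training workflows; gates like the second keep noisy pairs out of alignment training.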
Join the Innovative Team at Liquid AI Founded as a spin-off from MIT’s CSAIL, Liquid AI is at the forefront of developing cutting-edge AI systems that operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our technology is designed to ensure low latency, efficient memory usage, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services as we rapidly scale our operations. We are seeking talented individuals who are passionate about technology and innovation. Your Role in Our Team As a GPU Performance Engineer, your expertise will be critical in enhancing our models and workflows beyond the capabilities of standard frameworks. You will be responsible for designing and deploying custom CUDA kernels, conducting hardware-level profiling, and transforming research concepts into production code that yields tangible improvements in our pipelines (training, post-training, and inference). Our dynamic team values initiative and ownership, and we are looking for a candidate who thrives on tackling complex challenges related to memory hierarchies, tensor cores, and profiling outputs. While San Francisco and Boston are preferred, we welcome applications from other locations.
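Hardware-level profiling of the kind this role describes often starts from the roofline model, which caps a kernel's attainable throughput by either peak compute or memory bandwidth times its arithmetic intensity. A minimal sketch; the accelerator numbers (300 TFLOP/s peak, 2 TB/s bandwidth) are hypothetical and chosen purely for illustration:

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

def attainable_gflops(intensity, peak_gflops, mem_bw_gbs):
    """Roofline model: performance is bounded by peak compute or by
    memory bandwidth multiplied by arithmetic intensity."""
    return min(peak_gflops, intensity * mem_bw_gbs)

# Hypothetical accelerator: 300 TFLOP/s peak, 2 TB/s HBM bandwidth.
peak, bw = 300_000.0, 2_000.0  # GFLOP/s, GB/s
# An elementwise op doing 1 FLOP per 12 bytes moved is heavily memory-bound:
ai = arithmetic_intensity(flops=1.0, bytes_moved=12.0)
print(attainable_gflops(ai, peak, bw))  # far below peak
```

Kernels landing on the bandwidth roof motivate exactly the custom-kernel work mentioned above (fusion, better use of the memory hierarchy and tensor cores).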
About Us: Modal is at the forefront of AI infrastructure. We provide seamless access to GPUs, quick container startups, and integrated storage solutions, simplifying the process of model training, batch job execution, and low-latency inference. Leading companies such as Suno, Lovable, and Substack trust Modal to transition from prototypes to full-scale production without the complexities of infrastructure management. Our rapidly expanding team operates out of NYC, San Francisco, and Stockholm. With a remarkable 9-figure annual recurring revenue (ARR), we recently achieved a valuation of $1.1 billion after our successful Series B funding round. Thousands of customers, including Lovable, Scale AI, Substack, and Suno, depend on us for their AI workload needs. Joining Modal means becoming part of a dynamic, rapidly growing AI infrastructure company with substantial opportunities for personal and professional advancement. Our team comprises creators of well-known open-source projects (e.g., Seaborn, Luigi), academic researchers, international competition medalists, and seasoned engineering and product leaders with extensive experience. The Role: We are seeking a strategic Solutions Architect to influence technical strategies across our key enterprise accounts. In this position, you will serve as the technical partner to Enterprise Account Executives, managing intricate evaluations, developing infrastructure modernization strategies, and promoting multi-product adoption across AI and ML applications. This is a strategic, consultative role that demands a robust architectural background, a strong executive presence, and the ability to impact major infrastructure decisions worth millions. 
You will collaborate directly with CTOs, VPs of Engineering, and leaders of ML platforms to redefine the construction and operation of AI infrastructure.If you excel in fast-paced technical sales contexts and are eager to shape the infrastructure that drives modern AI innovations, we would love to hear from you.
About Liquid AI Founded as a spinoff of MIT CSAIL, Liquid AI specializes in developing versatile AI systems designed for optimized performance across various deployment platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory consumption, privacy, and reliability sets us apart. We collaborate with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek remarkable talent to join our journey. The Opportunity As we establish our solutions architecture function from the ground up, you will play a pivotal role as one of our inaugural Solutions Architects. Collaborating closely with the Head of Solutions Architecture and the go-to-market organization, you will manage customer engagements from inception to completion. Our models are specifically engineered for environments constrained by memory, latency, and power, encompassing edge devices, mobile applications, embedded systems, and on-premises infrastructure where traditional models cannot operate. You will engage with this boundary daily. Our clientele ranges from AI-native startups to established enterprises venturing into AI for the first time. Your mission is to bridge the gap between our models' capabilities and customers' expectations, delivering on that promise from technical validation through to go-live.
Join Liquid AI as a Technical Staff Member specializing in Applied Vision. In this dynamic role, you will leverage cutting-edge technology to develop innovative solutions and enhance our product offerings. This position is ideal for recent graduates with a passion for technology and a desire to make a meaningful impact in the field of artificial intelligence.
Join Our Team At Liquid AI, we are not just creating AI models; we are revolutionizing the very fabric of intelligence. Originating from MIT, our objective is to develop efficient AI systems across all scales. Our Liquid Foundation Models (LFMs) excel in environments where others falter: on-device, at the edge, and under real-time constraints. We are not simply refining existing concepts; we are pioneering the future of AI. We recognize that exceptional talent drives remarkable technology. The Liquid team is a collective of elite engineers, researchers, and innovators dedicated to crafting the next generation of AI solutions. Whether you are designing model architectures, enhancing our development platforms, or facilitating enterprise integrations, your contributions will significantly influence the evolution of intelligent systems. While San Francisco and Boston are preferred locations, we welcome applicants from other regions within the United States.
About Liquid AI Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey. The Opportunity Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you. What We're Looking For We are seeking an individual who: Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default. Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges. Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. 
You view constraints as opportunities for innovative design. Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results. The Work Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale. Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks. Fine-tune and adapt audio models to cater to customer-specific use cases, taking charge from requirement gathering through to deployment. Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams. Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.
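The quality filtering step in such audio pipelines can be illustrated with a toy gate on clip duration and loudness. This is a minimal sketch under stated assumptions: clips are already decoded to float sample lists, and the thresholds are invented for illustration; real pipelines operate on audio files and tune these gates per dataset.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a clip (0.0 for an empty clip)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

def keep_clip(samples, sample_rate, min_sec=1.0, max_sec=30.0, min_rms=0.01):
    """Quality gate: drop clips that are too short, too long, or near-silent."""
    duration = len(samples) / sample_rate
    return (min_sec <= duration <= max_sec) and rms(samples) >= min_rms

def filter_clips(clips, sample_rate=16_000):
    """Apply the gate across a batch of decoded clips."""
    return [c for c in clips if keep_clip(c, sample_rate)]
```

Gates like these run before augmentation so that compute is not spent on unusable audio.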
About Liquid AI Born from the innovative environment of MIT CSAIL, Liquid AI develops cutting-edge general-purpose AI systems that operate seamlessly across various deployment environments, from data center accelerators to on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome exceptional talent to our dynamic team. The Opportunity Join us in establishing the product function that will transform Liquid AI's technological advancements into scalable, repeatable solutions for enterprise clients. As a key member of our Product team, you will work closely with technical leaders to define, package, and launch AI solutions that meet market needs. This role requires daily collaboration with ML engineers, GTM leaders, and enterprise customers to understand the value of our technology and effectively deliver it. 
This position offers significant ownership, allowing you to treat your solution area like your own startup within our organization. What We're Looking For We are seeking an individual who embodies the following qualities: Customer Obsession: Prioritizes understanding customer needs through direct feedback rather than assumptions. Self-Direction: Takes initiative to dive deep into problems without prompting, and is comfortable navigating uncertainty to propose effective solutions. Technical Fluency: Engages confidently with ML engineers and researchers, understanding the complexities of deploying AI systems in real-world applications. Founder Mentality: Treats their solution area as a startup, owning outcomes across various functions, from technical architecture to go-to-market strategies. The Work Oversee one or more go-to-market ready solutions from inception to scalable customer deployment. Analyze customer interactions to extract insights and identify productization opportunities. Collaborate with ML and inference teams to develop tools that streamline implementation. Define Ideal Customer Profiles (ICPs), pricing strategies, and packaging for scalable solutions. Partner with GTM teams to enhance outbound sales efforts around productized offerings.
About Liquid AI Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission. The Opportunity The Training Infrastructure team is at the forefront of building the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to innovate, implement, and enhance the infrastructure crucial for large-scale training. This role is centered around high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up instead of working with pre-existing infrastructure. While San Francisco and Boston are preferred, we are open to other locations. What We're Looking For We are seeking an individual who: Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance. Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure. Excels in uncertain environments: Our systems are designed to support evolving model architectures. 
You will be making decisions based on incomplete information and rapidly iterating. Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise. The Work Design and develop core systems that ensure quick and reliable large training runs. Create scalable distributed training infrastructure for GPU clusters. Implement and refine parallelism and sharding strategies for evolving architectures. Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation. Develop data loading systems to eliminate I/O bottlenecks for multimodal datasets.
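Eliminating I/O bottlenecks in data loading, as described above, usually comes down to overlapping reads with compute. A toy sketch using a background thread and a bounded queue for back-pressure; real loaders add sharding, multiple worker processes, and pinned-memory transfers to the GPU:

```python
import queue
import threading

def prefetch(batches, buffer_size=4):
    """Yield batches produced by a background thread so (simulated) I/O
    overlaps with downstream compute; the bounded queue applies back-pressure."""
    q = queue.Queue(maxsize=buffer_size)
    _END = object()  # sentinel marking the end of the stream

    def worker():
        for batch in batches:
            q.put(batch)  # blocks when the buffer is full
        q.put(_END)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = q.get()
        if batch is _END:
            return
        yield batch
```

While the training step consumes one batch, the worker is already fetching the next, hiding read latency behind compute.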
Liquid AI, a spin-off from MIT's CSAIL, develops AI systems designed to run efficiently across a range of hardware, from data center accelerators to on-device CPUs. The team emphasizes low latency, minimal memory consumption, and strong reliability. Liquid AI works with major players in consumer electronics, automotive, life sciences, and financial services, and is expanding its team as the company grows. Role overview The Executive Assistant supports the C-suite and executive leadership at Liquid AI's San Francisco office. This position serves as a central point for keeping leaders aligned and informed across Go-To-Market, Product, and Engineering. The Executive Assistant ensures smooth information flow between meetings, decisions, and teams, and also coordinates events and partner visits at the office. What makes a strong candidate Information hub: Tracks project status, decisions, and commitments across different functions. Responds to inquiries directly or knows how to find accurate answers. Proactive planning: Reviews upcoming meetings, identifies preparation gaps, flags conflicts, and ensures materials are ready ahead of time. Clear communication: Liaises with external partners, delivers concise updates to technical leaders, and manages follow-ups across teams. Adaptability: Navigates shifting priorities, changing schedules, and new workstreams with resilience and flexibility. Key responsibilities Serves as the operational backbone for senior leaders by tracking action items, maintaining continuity between meetings, and keeping details organized. Manages complex and dynamic calendars, evaluates meeting requests, aligns schedules with strategic priorities, and protects time for high-impact work. Maintains visibility into cross-functional workstreams, providing leadership with relevant context without requiring them to seek updates. Oversees logistics and preparation for partner and customer visits, ensuring a welcoming and effective experience at the San Francisco office.
Liquid AI, a company spun out of MIT CSAIL, develops general-purpose AI systems designed for efficiency, privacy, and reliability across a wide range of platforms. The team partners with enterprises in fields such as consumer electronics, automotive, life sciences, and financial services. As Liquid AI expands, the company is seeking new team members to help shape the direction of AI technology. Role overview This Product Marketing Manager position is the first of its kind at Liquid AI and reports directly to the VP of Marketing in San Francisco. The role bridges product development, communications, and go-to-market planning. The Product Marketing Manager will play a key part in ensuring Liquid AI’s innovations reach the right audiences and will help establish the foundation for future marketing initiatives. Success in this position requires a strategic mindset, the ability to understand complex technical products, and an understanding of both enterprise clients and technical users. Adaptability and a willingness to build new processes from the ground up are important for this role. What matters at Liquid AI Builder: Uses modern AI tools (including Claude Code) for content creation, campaign management, and prototyping. Able to create demos or proofs-of-concept for launches and sales, and guide others in these areas. Translator: Communicates complex technical details in clear, credible ways for the market. Knows when to consult engineering for more information before a launch and balances immediate needs with long-term planning. Operator: Establishes repeatable systems such as kickoffs, briefs, and retrospectives, rather than focusing only on one-off projects. Cross-functional collaborator: Works effectively with product, go-to-market, communications, and sales teams to drive campaigns across multiple channels. Networker: Brings recommendations for tools, contractors, and agencies to help scale marketing and communications. 
Comfortable working with AI tools and freelancers. Main responsibilities Lead competitive analysis and market research to refine Liquid AI’s positioning and identify new growth opportunities. Collaborate with cross-functional teams to develop and execute product marketing strategies.
Role overview This Product Engineer role at Liquid AI centers on shaping the company’s internal data and agent platform. The work involves designing, building, and launching solutions that reinforce the product lineup. Collaboration is key, with regular interaction across multiple teams. What you will do Partner with colleagues from various disciplines to define and deliver technical solutions Develop and maintain systems that support internal data and agent platform requirements Facilitate smooth integration between platforms and focus on optimizing system performance Location This role is based in San Francisco.
About Liquid Labs At Liquid AI, research has always been at the forefront of our mission. Liquid Labs serves as a dedicated internal research accelerator, facilitating groundbreaking advancements in the development of intelligent, personalized, and adaptive machines. Our roots extend back to MIT CSAIL, where pioneering work on Liquid Neural Networks established a new category of efficient sequence-processing architectures. This research laid the groundwork for our Liquid Foundation Models (LFMs), which are scalable, multimodal models designed for real-world applications in resource-constrained settings. In Liquid Labs, we continue this legacy by advancing the realm of efficient, adaptive intelligence through both fundamental research and practical engineering efforts. We collaborate closely with Liquid’s core foundation model and systems teams to turn theoretical concepts into deployable capabilities, setting the stage for a new era of powerful and efficient intelligent systems. About The Role: As a Research Engineer at Liquid Labs, you will be part of a dynamic, high-impact team pushing the boundaries of adaptive intelligence. You will be responsible for designing and implementing innovative architectures, training methodologies, and inference strategies to expand the potential of efficient AI. Your work will blend research and engineering, as you translate scientific concepts into functional systems, publish findings that advance the field, and deploy solutions that redefine what is achievable. While we prefer candidates from San Francisco and Boston, we welcome applications from other locations within the United States.
About Liquid AI Founded as a spin-off from MIT CSAIL, Liquid AI specializes in creating versatile AI systems designed for optimal performance across various deployment platforms, including data center accelerators and on-device hardware. Our technology emphasizes low latency, minimal memory consumption, privacy, and dependability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are on the lookout for exceptional talent to join our team. The Opportunity The Data team at Liquid AI drives the development of our Liquid Foundation Models, focusing on pre-training, vision, audio, and emerging modalities. With the stagnation of public data sources, the effectiveness of our models increasingly relies on specially curated datasets. We are seeking engineers with a machine learning mindset who can efficiently gather, filter, and synthesize high-quality data at scale. At Liquid AI, we regard data as a research challenge rather than an infrastructural issue. Our engineers conduct experiments, design ablations, and assess how data-related decisions impact model quality. We will align you with a team where you can experience rapid growth and make a significant impact, be it in pre-training, post-training reinforcement learning, vision-language, audio, or multimodal applications. While we prefer candidates in San Francisco and Boston, we are open to considering other locations. What We're Looking For We are in search of a candidate who: Thinks like a researcher and executes like an engineer: You should be able to formulate hypotheses, conduct experiments, and evaluate results. 
Our engineers produce research-level code while our researchers implement production systems. Learns quickly and adapts: You will be working in rapidly evolving modalities, so the ability to quickly grasp new domains and thrive in ambiguity is essential. Prioritizes data quality: We hold data quality in high regard; tasks such as filtering, deduplication, augmentation, and evaluation are key responsibilities, not afterthoughts. Solves problems autonomously: Data engineers operate within training groups (pre-training and multimodal). While collaboration is crucial, we expect ownership and self-direction. The Work Develop and maintain data processing, filtering, and selection pipelines at scale. Establish pipelines for pretraining, midtraining, supervised fine-tuning, and preference optimization datasets. Design synthetic data generation systems utilizing large language models (LLMs), structured prompting, and domain-specific generative techniques.
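Deduplication, one of the filtering responsibilities named above, can be sketched with exact hashing over normalized text. This is a minimal illustration under stated assumptions (the `text` field is an invented record shape); production pipelines typically layer near-duplicate detection such as MinHash on top of exact hashing.

```python
import hashlib

def _fingerprint(text):
    """Hash a whitespace- and case-normalized form so trivial variants collide."""
    norm = " ".join(text.lower().split())
    return hashlib.sha256(norm.encode("utf-8")).hexdigest()

def dedup(records):
    """Keep the first occurrence of each distinct text; drop exact duplicates."""
    seen, unique = set(), []
    for rec in records:
        key = _fingerprint(rec["text"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

Hashing a normalized form rather than the raw string means casing and whitespace variants are treated as the same document, which is usually the desired behavior for pretraining corpora.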
About tierzero tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters. Role Overview: Founding Member of Technical Staff This is a hands-on role shaping tierzero’s core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule: three days each week in the office. What You’ll Do Design and build intelligent AI systems that process large volumes of unstructured data Deliver full-stack features informed by real-time user feedback Improve usability so AI agents are both effective and trustworthy for engineers Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases Prototype with open-source LLMs to understand their strengths and weaknesses Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows Who We’re Looking For 5+ years of professional experience or significant open-source contributions Interest in LLMs, MCPs, cloud infrastructure, and observability tools Comfort working in changing, ambiguous situations Product-focused and customer-first mindset Experience learning from and collaborating with engineers from diverse backgrounds Bonus: Previous experience in a startup setting Work Location Hybrid schedule: three days per week in-person at the San Francisco HQ.
About Liquid AI
Originating from MIT CSAIL, Liquid AI crafts cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey.

The Opportunity
Join us at the crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. This position allows you to balance impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and utilized by developers worldwide.
If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI.

What We're Looking For
We seek a proactive individual who:
Takes ownership: Manages open-source partnerships from initial outreach to ongoing collaboration.
Thinks community-first: Integrates documentation, tutorials, integrations, and support into a seamless developer experience.
Is pragmatic: Focuses on developer adoption and partner success rather than superficial metrics.
Communicates clearly: Bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships.

The Work
Serve as the primary liaison for open-source partners.
Assist in model releases with both marketing and technical content.
Create tutorials, articles, and guides on training and utilizing our foundation models.
Enhance and maintain LFM documentation for clarity and thoroughness.
Collect community feedback and communicate insights to internal teams.
Overview:
Join Listen Labs as we work to revolutionize decision-making for companies through cutting-edge AI technology. With a robust product roadmap planned for the next six months, we are expanding our engineering team. We are looking for a highly technical individual who thrives on solving complex problems and is eager to contribute to our mission. If you are passionate about innovation and want to be part of a team that includes several IOI medalists, we want to hear from you!

About Listen Labs:
Listen Labs is at the forefront of AI-powered research, enabling teams to extract valuable insights from customer interviews in a matter of hours rather than months. Our platform assists users in analyzing conversations, identifying key themes, and making informed product decisions swiftly.

Why Join Us?
Exceptional Team: Our founding team consists of seasoned entrepreneurs with a proven track record in AI, alongside top talent from organizations such as Jane Street, Twitter, Stripe, and Goldman Sachs.
Rapid Growth: Backed by Sequoia Capital, we have grown from zero to a $14M run-rate in under a year, with a dedicated team of 40.
Impressive Clientele: We are seeing significant traction across sectors, with enterprise clients like Google, Microsoft, and Nestlé.
Product Excellence: Our differentiated product offers an industry-leading win rate, a testament to our commitment to quality.
Market Success: Our customer base is expanding rapidly, with numerous six-figure contracts leading to further growth.
Viral Impact: Our product's interviews reach tens of thousands of viewers, driving organic growth and interest from Fortune 500 companies.
Feb 25, 2026