Experience Level
Entry Level
Qualifications
We are looking for candidates with a strong background in software engineering, particularly in product development and data management. Ideal candidates should possess:
Bachelor's Degree in Computer Science, Engineering, or a related field
Experience with programming languages such as Python, Java, or C++
Familiarity with cloud platforms and data analytics tools
Strong problem-solving skills and the ability to work collaboratively in a team environment
About the job
As a Product Engineer at liquid-ai, you will shape the company’s internal data and agent platform. The work involves designing, building, and launching solutions that reinforce the product lineup. Collaboration is key, with regular interaction across multiple teams.
What you will do
Partner with colleagues from various disciplines to define and deliver technical solutions
Develop and maintain systems that support internal data and agent platform requirements
Facilitate smooth integration between platforms and focus on optimizing system performance
Location
This role is based in San Francisco.
About liquid-ai
liquid-ai is a cutting-edge technology company focused on leveraging artificial intelligence to deliver innovative solutions. Our mission is to empower businesses through intelligent data management and automation, fostering efficiency and growth. Join us and be part of a team that is driving the future of technology.
Similar jobs
Join the Innovative Team at Liquid AI
Founded as a spin-off from MIT’s CSAIL, Liquid AI is at the forefront of developing cutting-edge AI systems that operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our technology is designed to ensure low latency, efficient memory usage, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services as we rapidly scale our operations. We are seeking talented individuals who are passionate about technology and innovation.
Your Role in Our Team
As a GPU Performance Engineer, your expertise will be critical in enhancing our models and workflows beyond the capabilities of standard frameworks. You will be responsible for designing and deploying custom CUDA kernels, conducting hardware-level profiling, and transforming research concepts into production code that yields tangible improvements in our pipelines (training, post-training, and inference). Our dynamic team values initiative and ownership, and we are looking for a candidate who thrives on tackling complex challenges related to memory hierarchies, tensor cores, and profiling outputs.
While San Francisco and Boston are preferred, we welcome applications from other locations.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in creating versatile AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware, focusing on low latency, minimal memory consumption, privacy, and dependability. Our collaborations extend across industries including consumer electronics, automotive, life sciences, and financial services. As we undergo rapid expansion, we are on the lookout for outstanding individuals to join our journey.
The Opportunity
The Vision-Language Models (VLM) team is dedicated to developing cutting-edge vision-language models that function seamlessly on devices, adhering to stringent latency and memory requirements without compromising quality. Having already launched four premier models, we are excited about what lies ahead.
This team is responsible for the complete VLM pipeline, encompassing research on novel architectures, training algorithms, data curation, evaluation, and deployment. You will be part of a dedicated, hands-on team that directly engages with models and works closely with our pretraining, post-training, and infrastructure teams. Your success will be gauged by the performance of the models we deliver.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across various deployment environments, ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission.
The Opportunity
This unique position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects from start to finish for some of the world’s leading enterprises, while also playing a vital role in the ongoing development of Liquid’s core models.
In this role, you will not have to choose between impactful customer work and foundational development; instead, you will enjoy deep involvement in both. You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid’s post-training capabilities.
If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company.
What We're Looking For
We seek an individual who:
Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation.
Thinks end-to-end: You will connect the dots across data generation, training, alignment, and evaluation as a cohesive system.
Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications.
Communicates clearly: You can interpret customer needs and effectively communicate with internal technical teams, providing constructive feedback when necessary.
The Work
Serve as the technical lead for post-training engagements with enterprise clients.
Translate client requirements into actionable post-training specifications and workflows.
Design and implement data generation, filtering, and quality assessment methodologies.
Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes.
Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
At Gimlet Labs, we are pioneering the first heterogeneous neocloud tailored for AI workloads. As the demand for AI systems grows, traditional infrastructure faces significant limitations in terms of power, capacity, and cost. Our innovative platform addresses these challenges by decoupling AI workloads from the hardware, intelligently partitioning tasks, and directing each component to the most suitable hardware for optimal performance and efficiency. This method allows for the creation of heterogeneous systems that span multiple vendors and generations of hardware, including the latest cutting-edge accelerators, achieving substantial improvements in performance and cost-effectiveness.
Building upon this robust foundation, Gimlet is developing a production-grade neocloud designed for agentic workloads. Our customers can effortlessly deploy and manage their workloads with stable, production-ready APIs, eliminating the complexities of hardware selection, placement, or low-level performance optimization.
We collaborate with foundational labs, hyperscalers, and AI-native companies to drive real production workloads capable of scaling to gigawatt-class AI data centers.
We are currently seeking a dedicated Member of Technical Staff specializing in kernels and GPU performance. In this role, you will work closely with accelerators and execution hardware to extract maximum performance from AI workloads across diverse and rapidly evolving platforms. You will analyze low-level execution behaviors, design and optimize kernels, and ensure consistent performance across both established and emerging hardware.
This position is perfect for engineers who thrive on deep performance analysis, enjoy exploring hardware trade-offs, and are passionate about transforming theoretical peak performance into tangible real-world outcomes.
About Liquid AI
Originating from MIT CSAIL, Liquid AI specializes in the development of general-purpose AI systems designed to operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our focus is on delivering low latency, efficient memory usage, privacy, and reliability. We collaborate with organizations in diverse sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek outstanding talent to join our mission.
The Opportunity
The Training Infrastructure team is at the forefront of building the distributed systems that empower our next-generation Liquid Foundation Models. As our operations expand, we aim to innovate, implement, and enhance the infrastructure crucial for large-scale training.
This role is centered around high ownership of training systems, emphasizing runtime, performance, and reliability rather than a typical platform or SRE function. You will collaborate within a small, agile team, creating vital systems from the ground up instead of working with pre-existing infrastructure.
While San Francisco and Boston are preferred, we are open to other locations.
What We're Looking For
We are seeking an individual who:
Embraces the complexity of distributed systems: Our team is dedicated to maintaining stability during extensive training runs, troubleshooting training failures across GPU clusters, and enhancing overall performance.
Is passionate about building: We value team members who take pride in developing robust, efficient, and reliable infrastructure.
Excels in uncertain environments: Our systems are designed to support evolving model architectures. You will be making decisions based on incomplete information and rapidly iterating.
Aligns with team goals and delivers results: The best engineers on our team align with collective priorities while providing data-driven feedback when challenges arise.
The Work
Design and develop core systems that ensure quick and reliable large training runs.
Create scalable distributed training infrastructure for GPU clusters.
Implement and refine parallelism and sharding strategies for evolving architectures.
Optimize distributed efficiency through topology-aware collectives, communication/compute overlap, and straggler mitigation.
Develop data loading systems to eliminate I/O bottlenecks for multimodal datasets.
About the Position
At Wafer, we are on a mission to enhance the intelligence per watt by developing AI systems that can self-optimize. Our journey begins with GPU kernels, and we aim to revolutionize every aspect of ML systems and AI infrastructure. We are a compact, dynamic team of four, supported by renowned investors including Fifty Years, Y Combinator, Jeff Dean, and Woj Zaremba, co-founder of OpenAI. We are seeking passionate engineers eager to innovate at the convergence of AI agents and systems programming.
In this role, you will collaborate closely with our founding team to create the systems that power our GPU optimization platform. Your projects will range from the agent framework that refines kernels to the profiling infrastructure that interfaces with NCU and ROCprofiler, as well as the compiler tools that scrutinize PTX and SASS.
About Liquid AI
Founded as a spinoff of MIT CSAIL, Liquid AI specializes in developing versatile AI systems designed for optimized performance across various deployment platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory consumption, privacy, and reliability sets us apart. We collaborate with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek remarkable talent to join our journey.
The Opportunity
As we establish our solutions architecture function from the ground up, you will play a pivotal role as one of our inaugural Solutions Architects. Collaborating closely with the Head of Solutions Architecture and the go-to-market organization, you will manage customer engagements from inception to completion.
Our models are specifically engineered for environments constrained by memory, latency, and power, encompassing edge devices, mobile applications, embedded systems, and on-premises infrastructure where traditional models cannot operate. You will engage with this boundary daily.
Our clientele ranges from AI-native startups to established enterprises venturing into AI for the first time. Your mission is to bridge the gap between our models' capabilities and customers' expectations, delivering on that promise from technical validation through to go-live.
About Liquid Labs
At Liquid AI, research has always been at the forefront of our mission. Liquid Labs serves as a dedicated internal research accelerator, facilitating groundbreaking advancements in the development of intelligent, personalized, and adaptive machines.
Our roots extend back to MIT CSAIL, where pioneering work on Liquid Neural Networks established a new category of efficient sequence-processing architectures. This research laid the groundwork for our Liquid Foundation Models (LFMs), which are scalable, multimodal models designed for real-world applications in resource-constrained settings.
In Liquid Labs, we continue this legacy by advancing the realm of efficient, adaptive intelligence through both fundamental research and practical engineering efforts.
We collaborate closely with Liquid’s core foundation model and systems teams to turn theoretical concepts into deployable capabilities, setting the stage for a new era of powerful and efficient intelligent systems.
About The Role
As a Research Engineer at Liquid Labs, you will be part of a dynamic, high-impact team pushing the boundaries of adaptive intelligence. You will be responsible for designing and implementing innovative architectures, training methodologies, and inference strategies to expand the potential of efficient AI.
Your work will blend research and engineering, as you translate scientific concepts into functional systems, publish findings that advance the field, and deploy solutions that redefine what is achievable.
While we prefer candidates from San Francisco and Boston, we welcome applications from other locations within the United States.
Join Our Team
At Liquid AI, we are not just creating AI models; we are revolutionizing the very fabric of intelligence. Originating from MIT, our objective is to develop efficient AI systems across all scales. Our Liquid Foundation Models (LFMs) excel in environments where others falter—on-device, at the edge, and under real-time constraints. We are not simply refining existing concepts; we are pioneering the future of AI.
We recognize that exceptional talent drives remarkable technology. The Liquid team is a collective of elite engineers, researchers, and innovators dedicated to crafting the next generation of AI solutions. Whether you are designing model architectures, enhancing our development platforms, or facilitating enterprise integrations, your contributions will significantly influence the evolution of intelligent systems.
While San Francisco and Boston are preferred locations, we welcome applicants from other regions within the United States.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in creating versatile AI systems designed for optimal performance across various deployment platforms, including data center accelerators and on-device hardware. Our technology emphasizes low latency, minimal memory consumption, privacy, and dependability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are on the lookout for exceptional talent to join our team.
The Opportunity
The Data team at Liquid AI drives the development of our Liquid Foundation Models, focusing on pre-training, vision, audio, and emerging modalities. With the stagnation of public data sources, the effectiveness of our models increasingly relies on specially curated datasets. We are seeking engineers with a machine learning mindset who can efficiently gather, filter, and synthesize high-quality data at scale.
At Liquid AI, we regard data as a research challenge rather than an infrastructural issue. Our engineers conduct experiments, design ablations, and assess how data-related decisions impact model quality. We will align you with a team where you can experience rapid growth and make a significant impact, be it in pre-training, post-training reinforcement learning, vision-language, audio, or multimodal applications.
While we prefer candidates in San Francisco and Boston, we are open to considering other locations.
What We're Looking For
We are in search of a candidate who:
Thinks like a researcher and executes like an engineer: You should be able to formulate hypotheses, conduct experiments, and evaluate results. Our engineers produce research-level code while our researchers implement production systems.
Learns quickly and adapts: You will be working in rapidly evolving modalities, so the ability to quickly grasp new domains and thrive in ambiguity is essential.
Prioritizes data quality: We hold data quality in high regard; tasks such as filtering, deduplication, augmentation, and evaluation are key responsibilities, not afterthoughts.
Solves problems autonomously: Data engineers operate within training groups (pre-training and multimodal). While collaboration is crucial, we expect ownership and self-direction.
The Work
Develop and maintain data processing, filtering, and selection pipelines at scale.
Establish pipelines for pretraining, midtraining, supervised fine-tuning, and preference optimization datasets.
Design synthetic data generation systems utilizing large language models (LLMs), structured prompting, and domain-specific generative techniques.
About Liquid AI
Born from the innovative environment of MIT CSAIL, Liquid AI develops cutting-edge general-purpose AI systems that operate seamlessly across various deployment environments—from data center accelerators to on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome exceptional talent to our dynamic team.
The Opportunity
Join us in establishing the product function that will transform Liquid AI's technological advancements into scalable, repeatable solutions for enterprise clients. As a key member of our Product team, you will work closely with technical leaders to define, package, and launch AI solutions that meet market needs. This role requires daily collaboration with ML engineers, GTM leaders, and enterprise customers to understand the value of our technology and effectively deliver it. This position offers significant ownership, allowing you to treat your solution area like your own startup within our organization.
What We're Looking For
We are seeking an individual who embodies the following qualities:
Customer Obsession: Prioritizes understanding customer needs through direct feedback rather than assumptions.
Self-Direction: Takes initiative to dive deep into problems without prompting, and is comfortable navigating uncertainty to propose effective solutions.
Technical Fluency: Engages confidently with ML engineers and researchers, understanding the complexities of deploying AI systems in real-world applications.
Founder Mentality: Treats their solution area as a startup, owning outcomes across various functions, from technical architecture to go-to-market strategies.
The Work
Oversee one or more go-to-market ready solutions from inception to scalable customer deployment.
Analyze customer interactions to extract insights and identify productization opportunities.
Collaborate with ML and inference teams to develop tools that streamline implementation.
Define Ideal Customer Profiles (ICPs), pricing strategies, and packaging for scalable solutions.
Partner with GTM teams to enhance outbound sales efforts around productized offerings.
Liquid AI, a spin-off from MIT's CSAIL, develops AI systems designed to run efficiently on standard CPUs. The team emphasizes low latency, minimal memory consumption, and strong reliability. Liquid AI works with major players in consumer electronics, automotive, life sciences, and financial services, and is expanding its team as the company grows.
Role overview
The Executive Assistant supports the C-suite and executive leadership at Liquid AI's San Francisco office. This position serves as a central point for keeping leaders aligned and informed across Go-To-Market, Product, and Engineering. The Executive Assistant ensures smooth information flow between meetings, decisions, and teams, and also coordinates events and partner visits at the office.
What makes a strong candidate
Information hub: Tracks project status, decisions, and commitments across different functions. Responds to inquiries directly or knows how to find accurate answers.
Proactive planning: Reviews upcoming meetings, identifies preparation gaps, flags conflicts, and ensures materials are ready ahead of time.
Clear communication: Liaises with external partners, delivers concise updates to technical leaders, and manages follow-ups across teams.
Adaptability: Navigates shifting priorities, changing schedules, and new workstreams with resilience and flexibility.
Key responsibilities
Serves as the operational backbone for senior leaders by tracking action items, maintaining continuity between meetings, and keeping details organized.
Manages complex and dynamic calendars, evaluates meeting requests, aligns schedules with strategic priorities, and protects time for high-impact work.
Maintains visibility into cross-functional workstreams, providing leadership with relevant context without requiring them to seek updates.
Oversees logistics and preparation for partner and customer visits, ensuring a welcoming and effective experience at the San Francisco office.
Liquid AI, a company spun out of MIT CSAIL, develops general-purpose AI systems designed for efficiency, privacy, and reliability across a wide range of platforms. The team partners with enterprises in fields such as consumer electronics, automotive, life sciences, and financial services. As Liquid AI expands, the company is seeking new team members to help shape the direction of AI technology.
Role overview
This Product Marketing Manager position is the first of its kind at Liquid AI and reports directly to the VP of Marketing in San Francisco. The role bridges product development, communications, and go-to-market planning. The Product Marketing Manager will play a key part in ensuring Liquid AI’s innovations reach the right audiences and will help establish the foundation for future marketing initiatives. Success in this position requires a strategic mindset, the ability to understand complex technical products, and an understanding of both enterprise clients and technical users. Adaptability and a willingness to build new processes from the ground up are important for this role.
What matters at Liquid AI
Builder: Uses modern AI tools (including Claude Code) for content creation, campaign management, and prototyping. Able to create demos or proofs-of-concept for launches and sales, and guide others in these areas.
Translator: Communicates complex technical details in clear, credible ways for the market. Knows when to consult engineering for more information before a launch and balances immediate needs with long-term planning.
Operator: Establishes repeatable systems such as kickoffs, briefs, and retrospectives, rather than focusing only on one-off projects.
Cross-functional collaborator: Works effectively with product, go-to-market, communications, and sales teams to drive campaigns across multiple channels.
Networker: Brings recommendations for tools, contractors, and agencies to help scale marketing and communications. Comfortable working with AI tools and freelancers.
Main responsibilities
Lead competitive analysis and market research to refine Liquid AI’s positioning and identify new growth opportunities.
Collaborate with cross-functional teams to develop and execute product marketing strategies.
Join our dynamic team at Reka as a GPU Performance Engineer, where you will leverage your expertise in Python and large-scale model training to enhance our training infrastructure. You will play a pivotal role in optimizing model performance, contributing to critical technical decisions, and improving our post-training processes, including reinforcement learning and fine-tuning. Your contributions will also focus on enhancing the efficiency and scalability of our model serving infrastructure.
About tierzero
tierzero helps engineering teams build and deploy code with greater speed and operational clarity in an AI-driven world. The company focuses on improving incident response, operational visibility, and knowledge sharing for engineers. Backed by $7 million in funding from investors like Accel and SV Angel, tierzero supports large-scale systems for clients such as Discord, Drata, and Framer.
Role Overview: Founding Member of Technical Staff
This role is based at tierzero's San Francisco headquarters. In-person work is required three days a week. As a founding member of the technical team, you will help design and build core products and systems from the ground up. Collaboration is central: expect to work closely with the CEO, CTO, and customers. Projects span a wide range of technical challenges and product areas.
What You Will Do
Design and implement intelligent AI systems that process and reason over large volumes of unstructured data.
Develop full-stack features, incorporating direct feedback from users.
Improve the product experience so intelligent agents are practical and reliable for engineers.
Create systems that automatically evaluate LLM outputs and refine agent reasoning using self-play and feedback loops.
Build machine learning pipelines covering data ingestion, feature generation, embedding stores, RAG pipelines, vector search, and graph databases.
Prototype and experiment with open-source and advanced LLMs to weigh different approaches.
Set up scalable infrastructure for long-running, multi-step agents, including memory management, state handling, and asynchronous workflows.
What We Look For
At least 5 years of professional or open-source experience in a relevant technical field.
Comfort working in a setting that changes and evolves quickly.
Strong product focus and an understanding of customer needs.
Interest in LLMs, MCPs, cloud infrastructure, and observability tools.
Ability to learn from and collaborate with engineers who have delivered over $10 billion in value.
Commitment to working onsite in San Francisco three days per week.
Startup experience is a plus.
About Liquid AI
Born from the innovation of MIT CSAIL, Liquid AI is at the forefront of developing general-purpose AI systems that operate seamlessly across various deployment platforms, including data center accelerators and on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek extraordinary talent to join our mission.
The Opportunity
Join our Edge Inference team, where we transform Liquid Foundation Models into highly optimized machine code for resource-limited devices such as smartphones, laptops, Raspberry Pis, and smartwatches. As key contributors to llama.cpp, we establish the infrastructure necessary for efficient on-device AI. You will collaborate closely with our technical lead to tackle complex challenges that demand a profound understanding of machine learning architectures and hardware constraints. This role offers high ownership, allowing your code to be deployed in production environments and directly influence model performance on real devices.
While San Francisco and Boston are preferred, we welcome applicants from other locations.
About tierzero
tierzero builds tools that help engineering teams manage production code with stronger incident response, better operational visibility, and collaborative knowledge sharing. Companies like Discord, Drata, and Framer use tierzero to support their infrastructure in an AI-driven landscape. Backed by $7 million from investors including Accel and SV Angel, tierzero is growing quickly from its San Francisco headquarters.
Role Overview: Founding Member of Technical Staff
This is a hands-on role shaping tierzero’s core product and systems from the ground up. The founding technical team works closely with the CEO, CTO, and early customers to solve real engineering challenges. The position is based in San Francisco, with a hybrid schedule: three days each week in the office.
What You’ll Do
Design and build intelligent AI systems that process large volumes of unstructured data
Deliver full-stack features informed by real-time user feedback
Improve usability so AI agents are both effective and trustworthy for engineers
Develop systems for automated evaluation of LLM outputs, including feedback loops and self-play
Construct machine learning pipelines for data ingestion, feature generation, embedding storage, retrieval-augmented generation (RAG), vector search, and graph databases
Prototype with open-source LLMs to understand their strengths and weaknesses
Create scalable infrastructure for complex, multi-step agents, focusing on memory, state management, and asynchronous workflows
Who We’re Looking For
5+ years of professional experience or significant open-source contributions
Interest in LLMs, MCPs, cloud infrastructure, and observability tools
Comfort working in changing, ambiguous situations
Product-focused and customer-first mindset
Experience learning from and collaborating with engineers from diverse backgrounds
Bonus: Previous experience in a startup setting
Work Location
Hybrid schedule: three days per week in-person at the San Francisco HQ.
TierZero seeks a Founding Member of Technical Staff to join the team in San Francisco. This in-person position requires working from the SF headquarters at least three days per week.
Role overview
This role centers on close collaboration with a group of engineers who have collectively delivered over $10 billion in value during their careers. Expect to work side by side with teammates, sharing ideas and building strong connections in the office. The environment often shifts, so adaptability and comfort with changing priorities are important.
Key responsibilities
Work directly with experienced engineers to design and build new products
Prioritize customer needs and satisfaction in product decisions
Develop solutions using large language models (LLMs), multi-cloud platforms (MCPs), cloud infrastructure, and observability tools
Requirements
Minimum 5 years of professional engineering experience or a strong record of open-source contributions
Experience in startups and familiarity with their unique challenges is a plus
Location
This position is based in San Francisco. In-office presence is required three days each week for collaboration.
About Liquid AI
Originating from the prestigious MIT CSAIL, Liquid AI crafts cutting-edge, general-purpose AI systems designed for optimal efficiency across a variety of platforms, from data center accelerators to edge devices. Our solutions prioritize low latency, minimal memory requirements, privacy, and reliability. We collaborate with industry leaders in consumer electronics, automotive, life sciences, and financial services, and as we expand rapidly, we are looking for exceptional talent to join our journey.
The Opportunity
Join us at the exciting crossroads of advanced foundation models and the open-source community. In this pivotal role, you will oversee developer relations and community engagement, influencing how our models are adopted, documented, and integrated throughout the AI ecosystem. This unique position allows you to balance impactful community work with essential technical contributions, giving you the chance to shape how our models are represented and utilized by developers worldwide.
If you are passionate about excellent documentation, enhancing developer experience, and democratizing access to powerful AI models, this is your chance to influence the future of open-source AI.
What We're Looking For
We seek a proactive individual who:
Takes ownership: Manages open-source partnerships from initial outreach to ongoing collaboration.
Thinks community-first: Integrates documentation, tutorials, integrations, and support into a seamless developer experience.
Is pragmatic: Focuses on developer adoption and partner success rather than superficial metrics.
Communicates clearly: Bridges the gap between technical teams and external partners, representing Liquid's interests while fostering genuine relationships.
The Work
Serve as the primary liaison for open-source partners.
Assist in model releases with both marketing and technical content.
Create tutorials, articles, and guides on training and utilizing our foundation models.
Enhance and maintain LFM documentation for clarity and thoroughness.
Collect community feedback and communicate insights to internal teams.
Feb 4, 2026