
Edge Inference Engineer - Member of Technical Staff

Liquid AI | San Francisco / Remote
Full-time

What We're Looking For

We seek a candidate who:

- Works autonomously: You will independently determine solutions to meet performance goals for target devices, diagnosing bottlenecks and iterating on prototypes until objectives are achieved.
- Thinks at the hardware level: You understand cache hierarchies, memory access patterns, and instruction-level optimization, allowing you to identify code inefficiencies without relying solely on profilers.
- Bridges ML and systems: You have a solid grasp of the mathematical principles behind neural networks (including matrix operations and attention mechanisms) and can translate that into optimized code implementations.
- Ships production code: Your contributions will feed into open-source projects and customer devices, so you write maintainable and extendable code.
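As a rough illustration of the math-to-code translation described above, here is a minimal NumPy sketch of scaled dot-product attention. The function name, shapes, and data are illustrative assumptions, not from this posting; a production edge inference kernel would instead be fused, quantized, and tuned for the target device's cache hierarchy.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Naive attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V: arrays of shape (seq_len, d). This reference version is
    memory-bound; an optimized implementation would block and fuse
    these matrix operations rather than materialize the full
    (seq_len, seq_len) score matrix.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V                             # (seq_len, d)

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Gaps between a reference implementation like this and what runs efficiently on a smartwatch-class device are exactly the kind of problem the role describes.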

About the job

About Liquid AI

Born from the innovation of MIT CSAIL, Liquid AI is at the forefront of developing general-purpose AI systems that operate seamlessly across various deployment platforms, including data center accelerators and on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek extraordinary talent to join our mission.

The Opportunity

Join our Edge Inference team, where we transform Liquid Foundation Models into highly optimized machine code for resource-limited devices such as smartphones, laptops, Raspberry Pis, and smartwatches. As key contributors to llama.cpp, we establish the infrastructure necessary for efficient on-device AI. You will collaborate closely with our technical lead to tackle complex challenges that demand a profound understanding of machine learning architectures and hardware constraints. This role offers high ownership, allowing your code to be deployed in production environments and directly influence model performance on real devices.

While San Francisco and Boston are preferred, we welcome applicants from other locations.

