Experience Level
Entry Level
Qualifications
• Bachelor's Degree in Computer Science, Engineering, or related field
• Proven experience with machine learning frameworks (e.g., TensorFlow, PyTorch)
• Proficiency in programming languages such as Python and R
• Familiarity with data visualization tools (e.g., Tableau, Matplotlib)
• Strong analytical and critical thinking skills
• Excellent communication and teamwork abilities
About the job
Join Orchard as a Machine Learning Engineer and play a pivotal role in transforming data into actionable insights. In this dynamic position, you will leverage your expertise in machine learning algorithms and data analysis to develop innovative solutions that enhance our products and services.
We are looking for a proactive team player who thrives in a fast-paced environment and possesses strong problem-solving skills. You will collaborate with cross-functional teams, engage with large datasets, and contribute to the design and implementation of machine learning models.
About Orchard
Orchard is an innovative tech company based in San Francisco, dedicated to harnessing the power of data to improve user experiences and drive business growth. We foster a culture of collaboration and creativity, empowering our employees to push the boundaries of technology.
Similar jobs
Zyphra is a pioneering artificial intelligence company located in Palo Alto, California, committed to revolutionizing the way humans interact with technology.

About the Position
As a Machine Learning Engineer at Zyphra, you will play a vital role in advancing our Agentic Systems and Interaction initiatives. You will lead the development of a cutting-edge desktop and browser-based agent designed to autonomously navigate the web, manage filesystem interactions, and execute intricate user tasks. Your responsibilities will encompass frontend interface development, secure sandbox environments, large-scale document search and retrieval, and the integration of language and vision models.

Your Contributions Will Include:
• Crafting and implementing an advanced agentic system capable of seamless interactions with browsers, operating systems, and enterprise filesystems.
• Developing robust search and retrieval pipelines for expansive structured and unstructured datasets.
• Integrating large language models (LLMs), vision models, reinforcement learning, and scaffolding frameworks to facilitate autonomous, multi-step decision-making.
• Engineering secure virtualized runtimes and backend services for agent execution.
• Demonstrating a strong commitment to building production-grade ML systems that redefine the capabilities of software agents.
• Embracing velocity and curiosity, particularly in dynamic and ambiguous settings.

Qualifications:
• Expertise in Python and proficiency in building and debugging complex machine learning applications.
• Hands-on experience with desktop operating systems (Windows and macOS), including APIs for screen reading, file manipulation, and accessibility frameworks.
• Proven track record in developing browser extensions or automation tools with precise control over browser functionalities (mouse interactions, tabs, DOM manipulation).
• Solid understanding of large language models (LLMs), prompting strategies, and orchestration frameworks for multi-step reasoning.
• Capability to navigate the entire ML stack, from model integration to serving infrastructure.
• Experience in designing or working with secure and virtualized execution environments.
• Exceptional communication and collaboration skills across product, research, and engineering teams.

Preferred Qualifications:
• Experience in building or integrating retrieval systems.
Full-time|$250K/yr - $385K/yr|Hybrid|San Francisco, CA
Superhuman embraces a hybrid working model designed to offer team members the ideal balance of focused work and collaborative, in-person interactions that cultivate trust, innovation, and a vibrant team culture.

About Superhuman
Superhuman, now inclusive of Grammarly, is an AI productivity platform dedicated to unleashing the superhuman potential within everyone. Our suite of applications and agents extends AI capabilities across 1 million+ applications and websites. Our products include Grammarly's writing assistance, Coda's collaborative workspaces, Mail's inbox management, and Go, a proactive AI assistant that intuitively understands context and provides automated support. Since our inception in 2009, Superhuman has empowered over 40 million individuals, 50,000 organizations, and 3,000 educational institutions globally to reduce busywork and concentrate on what truly matters. Discover more at superhuman.com and explore our core values here.

The Opportunity
Join us in developing a groundbreaking platform for AI Agents, designed to collaboratively tackle complex tasks using Superhuman's intuitive UI. As a Machine Learning Engineer on this pioneering team, you will play a critical role in our company's transformation.
• Shape the Future of Productivity: Take on a vital role in evolving Grammarly from a cherished writing assistant into an indispensable AI-driven productivity suite for enterprises.
• Build an Innovative AI Agent Platform: Lead the charge in creating a new platform where multiple AI agents work together to address intricate user challenges. You will oversee the core orchestration, routing, and planning systems.
• Own Key ML Systems: Design and implement advanced machine learning models that enhance core product experiences, including search ranking and proactive suggestions that anticipate user needs.
Overview
Pulse is revolutionizing data infrastructure by addressing the critical challenge of extracting accurate, structured information from complex documents on a large scale. Our innovative approach to document understanding integrates intelligent schema mapping with advanced extraction models, outperforming traditional OCR and parsing methods.

As a dynamic and rapidly growing team of engineers based in San Francisco, we empower Fortune 100 companies, Y Combinator startups, public investment firms, and growth-oriented businesses. With the backing of top-tier investors, we are on an exciting growth trajectory.

What sets our technology apart is our cutting-edge multi-stage architecture:
• Layout comprehension with specialized component detection models
• Low-latency OCR models designed for targeted data extraction
• Advanced algorithms for determining reading order in complex formats
• Proprietary table structure recognition and parsing capabilities
• Fine-tuned vision-language models for interpreting charts, tables, and figures

If you are passionate about the convergence of computer vision, natural language processing, and data infrastructure, your contributions at Pulse will directly influence our customers and shape the future of document intelligence.
At Superhuman, we embrace a dynamic hybrid working model, allowing team members to enjoy a balance of focused work time and collaborative in-person interactions that foster trust, innovation, and a vibrant team culture.

About Superhuman
Superhuman, now inclusive of Grammarly, is an innovative AI productivity platform dedicated to unlocking the superhuman potential in individuals. Our suite of applications and agents seamlessly integrates AI into the workflow, connecting with over a million applications and websites. Our products range from Grammarly's writing assistance to Coda's collaborative environments, Mail's inbox management, and Go, a proactive AI assistant that understands context and provides assistance automatically. Since our inception in 2009, Superhuman has empowered over 40 million users across 50,000 organizations and 3,000 educational institutions globally, enabling them to eliminate busywork and concentrate on what truly matters. Discover more at superhuman.com and explore our values here.

The Opportunity
Join our team as we develop a pioneering platform for AI Agents to collaboratively tackle complex tasks using Superhuman's intuitive UI. As a Machine Learning Engineer, you will be a key player in our company's transformation.
• Shape the Future of Productivity: Be instrumental in transitioning Grammarly from a beloved writing companion to a crucial, AI-driven productivity suite for enterprises.
• Build a Groundbreaking AI Agent Platform: Lead the creation of a new platform where AI agents work together to address intricate user challenges. You will manage the essential orchestration, routing, and planning systems.
• Own Critical ML Systems: Design and implement advanced ML models for fundamental product experiences, including search ranking and proactive suggestions that foresee user needs.
• Integrate Cutting-Edge AI: Work at the cutting edge of AI technology, developing ML components that utilize the latest models to craft extraordinary user experiences.
• Thrive in a High-Impact Environment: Join a foundational team where you will enjoy a high level of autonomy and product insight in a fast-paced, evolving atmosphere.
About Krea
Krea is at the forefront of developing advanced AI creative tools designed to enhance and empower human creativity. Our mission is to create intuitive and controllable AI solutions that allow creatives to express themselves across various formats, including text, images, video, sound, and 3D.

About the Position
We are seeking a talented Machine Learning Engineer to lead the design and implementation of Krea's personalization and recommendation systems from the ground up. You will take full ownership of how we comprehend user preferences, curate engaging content, and customize generative models to reflect individual aesthetics. This role sits at the exciting intersection of recommendation systems, representation learning, and generative imaging and video technologies.

Your Responsibilities
• Lead the architecture and development of Krea's personalization and recommendation framework, overseeing the technical direction from inception to deployment.
• Craft algorithms that effectively model user preferences and tastes, enabling our systems to adapt to individual styles and aesthetics.
• Develop high-quality, curated feeds that strike a balance between exploration, personalization, and aesthetic coherence.
• Collaborate closely with our model and research teams to co-create personalization mechanisms that shape how our generative models learn, adapt, and express creative styles.
• Contribute to research in personalized image generation, with a focus on style, taste, and subjective quality.
• Work in tandem with product, design, and research teams to define what "good personalization" means in a creative context.
• Take systems from initial research and prototyping stages through to production, ongoing iteration, and enhancement.
Full-time|$218.4K/yr - $273K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.

At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.

If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
Join Matter Intelligence as a Data and Machine Learning Infrastructure Engineer, where you will play a pivotal role in shaping the future of data-driven decision-making. You will be part of a dynamic team focused on building and optimizing infrastructure that supports innovative machine learning applications. Your expertise will help us enhance our data pipelines and ensure seamless integration of machine learning models into production.
On-site|San Francisco, CA; New York City, NY; Seattle, WA
Join Anthropic as a Machine Learning Systems Engineer within our Encodings and Tokenization team, where you'll play a pivotal role in refining and optimizing our tokenization systems across Pretraining and Finetuning workflows. By bridging the gap between our Pretraining and Finetuning teams, you will help shape the essential infrastructure that enhances how our AI models learn from diverse data. Your contributions will be crucial in ensuring our AI systems remain reliable, interpretable, and steerable, driving forward our mission of developing beneficial AI technologies.
About Us
At Applied Compute, we specialize in creating Specific Intelligence solutions for enterprises, developing agents that learn continuously from an organization's processes, data, expertise, and objectives. We recognize a significant gap between the capabilities of AI models in isolation and their practical applications in real-world business contexts. Our systems often fall short because they lack adaptability to feedback. To address this, we are building a continual learning infrastructure that captures context, memory, and decision-making processes throughout the enterprise, enabling specialized agents to effectively execute real tasks.

What Excites Us
We operate at a unique intersection where our product team constructs the platform that fuels a new generation of digital coworkers. Our research team pushes the boundaries of post-training and reinforcement learning, creating innovative product experiences. Our applied research engineers collaborate closely with clients to deploy models into production. This blend of strong product focus, deep research, and hands-on customer engagement is crucial for integrating AI into the enterprise. We are product-driven, research-informed, and actively engaged with our clients.

Our Team
Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have built RL infrastructure at leading organizations like OpenAI and Scale AI, and developed systems at Together, Two Sigma, and Watershed. We proudly serve Fortune 50 clients alongside companies like DoorDash, Mercor, and Cognition. Our work is supported by renowned investors, including Benchmark, Sequoia, and Lux.

Who Thrives in Our Environment
We seek individuals eager to apply cutting-edge research and complex systems to tackle real-world challenges. You should be adept at quickly adapting to new environments, whether it's a fresh codebase, a client's data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interactions (listening, empathizing, and understanding how tasks are accomplished within their organizations) is essential. Those with entrepreneurial backgrounds, extensive side projects, or demonstrated end-to-end ownership typically excel in our company.
Full-time|$275K/yr - $350K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
About Scale AI
At Scale AI, we are dedicated to propelling the advancement of AI applications. Over the past eight years, we have established ourselves as the premier AI data foundry, supporting groundbreaking innovations in fields such as generative AI, defense technologies, and autonomous vehicles. Following our recent Series F funding round, we are intensifying our efforts to harness frontier data, paving the way toward achieving Artificial General Intelligence (AGI). Our work with enterprise clients and governments has enhanced our model evaluation capabilities, allowing us to expand our offerings for both public and private evaluations.

About the ACE Team
The Agent Capabilities & Environments (ACE) team, a vital part of Scale's Research organization, unites customer-focused Researchers and Applied AI Engineers. Our primary mission is to conduct research on agent environments and reinforcement learning reward signals, benchmark autonomous agent performance in real-world contexts, and develop robust data programs aimed at enhancing the capabilities of Large Language Models (LLMs). We are committed to creating foundational tools and frameworks for evaluating models as agents, focusing on autonomous agents that interact dynamically with a wide range of external environments, including code repositories and GUI interfaces.

About This Role
This position sits at the cutting edge of AI research and its practical applications, concentrating on the data types necessary for the development of state-of-the-art agents, including browser and software engineering agents. The ideal candidate will investigate the data landscape required to propel intelligent and adaptable AI agents, steering the data strategy at Scale to foster innovation. This role demands not only expertise in LLM agents and planning algorithms but also creative problem-solving skills to tackle novel challenges pertaining to data, interaction, and evaluation. You will contribute to influential research publications on agents, collaborate with customer researchers, and partner with the engineering team to transform these advancements into scalable real-world solutions.
At Physical Intelligence, we are pioneering general-purpose AI applications for the physical world. Our innovative approach involves orchestrating thousands of accelerators across a diverse ecosystem of GPU and TPU clusters, which encompass various hardware generations, cloud platforms, and cluster configurations.

Researchers frequently encounter challenges in identifying the optimal cluster for their tasks, understanding resource availability, and configuring their workloads efficiently. This process is not scalable. To enhance productivity, we require an intelligent scheduling and compute system that can automatically determine the best job placements based on availability, hardware compatibility, cost considerations, and priority levels, allowing researchers to concentrate on their scientific endeavors.

This position encompasses complete ownership of this challenge: the development of scheduling systems, placement logic, cluster management frameworks, and operational tools essential for seamless operations. This role is distinct from traditional cloud DevOps; it focuses on resource allocation intelligence, utilization efficiency, fault tolerance, and ensuring a smooth experience for large-scale distributed training.

About the Team
The ML Infrastructure team is dedicated to bolstering and accelerating Physical Intelligence's fundamental modeling initiatives by creating systems that ensure large-scale training is reliable, reproducible, and efficient. You will collaborate closely with the ML Infrastructure, data platform, and research teams to eliminate compute scheduling as a bottleneck.

Key Responsibilities
- Lead Intelligent Job Scheduling and Placement: Design and implement multi-tenant scheduling systems that automatically allocate training jobs to the most suitable cluster based on hardware specifications, topology, availability, cost, and priority. Facilitate equitable resource sharing across teams and projects through quota management, priority tiers, and preemption policies. Abstract away cluster discrepancies so researchers can submit jobs without needing detailed knowledge of cluster specifics.
- Enhance Multi-cluster Orchestration: Develop the control plane responsible for overseeing the job lifecycle across various clusters (including mixed GPU/TPU setups, multi-generational hardware, both on-premises and cloud-based) and enable effortless job migration, failover, and rescheduling.
- Optimize Accelerator Utilization and Performance: Continuously monitor and enhance GPU/TPU usage across the entire fleet. Apply priority, preemption, queuing, and fairness strategies that balance research momentum with cost efficiency.
- Guarantee Scalability and Stability: Implement fault detection, automatic recovery mechanisms, and resilience strategies for long-running multi-node training tasks. Oversee health checks, node management, and scaling strategies to ensure optimal performance.
About Our Team
Join the innovative Sora team at OpenAI, where we are at the forefront of developing multimodal capabilities for our foundation models. Our hybrid research and product team is dedicated to seamlessly integrating multimodal functionalities into our AI solutions, ensuring they are dependable, user-centric, and aligned with our vision of benefiting society at large.

Role Overview
As a Machine Learning Engineer specializing in Distributed Data Systems, you will be instrumental in designing and scaling the infrastructure that facilitates large-scale multimodal training and evaluation at OpenAI. Your role will involve managing complex distributed data pipelines, collaborating closely with researchers to convert their requirements into robust, production-ready systems, and enhancing pipelines that are essential for Sora's rapid iteration cycles. We are seeking detail-oriented engineers with extensive experience in distributed systems who thrive in high-stakes environments and excel in building resilient infrastructure. This position is located in San Francisco, CA, and follows a hybrid work model, requiring three days in the office each week. We also provide relocation assistance for new team members.

Key Responsibilities:
• Design, implement, and maintain data infrastructure systems, including distributed computing, data orchestration, distributed storage, streaming infrastructure, and machine learning systems, with a focus on scalability, reliability, and security.
• Ensure our data platform can scale exponentially while maintaining high reliability and efficiency.
• Collaborate with researchers to gain a deep understanding of their requirements, translating them into production-ready systems.
• Strengthen, optimize, and manage critical data infrastructure systems that support multimodal training and evaluation.

You Will Excel in This Role If You:
• Possess strong experience with distributed systems and large-scale infrastructure, coupled with a keen interest in data.
• Exhibit meticulous attention to detail and a commitment to building and maintaining reliable systems.
• Demonstrate solid software engineering fundamentals and effective organizational skills.
• Thrive in environments characterized by ambiguity and rapid change.

About OpenAI
OpenAI is a trailblazing AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves humanity. We continuously push the boundaries of AI capabilities and strive to create technology that benefits everyone.
Full-time|Remote|San Francisco, CA or remote within the U.S.
At Philo, we are a dedicated team of technology and product enthusiasts committed to reshaping the television landscape. We blend cutting-edge technology with the captivating medium of television to create the ultimate viewing experience. Our mission is to enhance streaming capabilities through innovative cloud delivery and sophisticated machine learning algorithms that personalize content discovery. As a Senior Machine Learning Engineer specializing in Recommendation Systems, you will be at the forefront of our content personalization initiatives, significantly enhancing user engagement and satisfaction. Your expertise will help ensure that every time users open the Philo app, they find something they want to watch. In this pivotal role, you will spearhead the development of advanced algorithms and large-scale systems that drive Philo's recommendation engine. Collaborating closely with data science, product, infrastructure, and backend engineering teams, you will tackle complex machine learning challenges and develop innovative, data-driven solutions that enhance content discovery and foster user retention.
Full-time|$200K/yr - $240K/yr|On-site|San Francisco, CA
Join Us in Building a Safer World.
At TRM Labs, we specialize in blockchain analytics and AI solutions aimed at assisting law enforcement, national security agencies, financial institutions, and cryptocurrency businesses in identifying, investigating, and preventing crypto-related fraud and financial crime. Our innovative platforms leverage blockchain intelligence and AI technology to trace funds, detect illicit activity, and construct comprehensive threat profiles. Trusted by leading organizations worldwide, TRM Labs is committed to enabling a safer and more secure environment for all.

Our AI Engineering Team is dedicated to pioneering next-generation AI applications, particularly in the realm of Large Language Models (LLMs) and agentic systems. Our goal is to develop resilient pipelines and high-performance infrastructure that facilitate the swift, safe, and scalable deployment of AI systems. We manage extensive petabyte-scale pipelines, ensuring model serving with millisecond latency while providing the necessary observability and governance to make AI production-ready. Our team actively evaluates and integrates leading-edge tools in the LLM and agent space, including open-source stacks, vector databases, evaluation frameworks, and orchestration tools to accelerate TRM's innovation pace.

As a Senior or Staff ML Systems Engineer – LLM, you will play a pivotal role in constructing and scaling our technical infrastructure for AI/ML systems. Your responsibilities will include:
• Creating reusable CI/CD workflows for model training, evaluation, and deployment, integrating tools such as Langfuse, GitHub Actions, and experiment tracking.
• Automating model versioning, approval processes, and compliance checks across various environments.
• Developing a modular and scalable AI infrastructure stack that encompasses vector databases, feature stores, model registries, and observability tools.
• Collaborating with engineering and data science teams to embed AI models and agents into real-time applications and workflows.
• Continuously assessing and incorporating state-of-the-art AI tools (e.g., LangChain, LlamaIndex, vLLM, MLflow, BentoML).
• Promoting AI reliability and governance while enabling experimentation, ensuring compliance, security, and continuous uptime.
• Enhancing AI/ML model performance and ensuring data accuracy and consistency, leading to improved model training and inference.
• Implementing infrastructure to facilitate both offline and online evaluation of LLMs and agents.
As a Machine Learning Infrastructure Engineer at Physical Intelligence, you will play a vital role in enhancing and optimizing our training systems and core model code. You will take ownership of critical infrastructure for large-scale training, which includes managing GPU/TPU compute, orchestrating jobs, and developing reusable and efficient JAX training pipelines. Collaborating closely with researchers and model engineers, you will help transform innovative ideas into experiments and subsequently into production training runs. This position is hands-on and offers significant leverage at the intersection of machine learning, software engineering, and scalable infrastructure.

The Team
Our ML Infrastructure team is dedicated to supporting and accelerating Physical Intelligence's core modeling initiatives by building systems that ensure large-scale training is reliable, reproducible, and efficient. The team collaborates with research, data, and platform engineers to guarantee that models can seamlessly transition from prototype to production-grade training runs.

Key Responsibilities
- Manage training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, which includes scheduling, job management, checkpointing, and performance metrics/logging.
- Expand distributed training: Collaborate with researchers to efficiently scale JAX-based training across TPU and GPU clusters.
- Enhance performance: Profile and optimize memory usage, device utilization, throughput, and distributed synchronization to maximize efficiency.
- Facilitate rapid iteration: Develop abstractions for launching, monitoring, debugging, and reproducing experiments.
- Oversee compute resources: Ensure optimal allocation and utilization of cloud-based GPU/TPU compute resources while managing costs effectively.
- Collaborate with researchers: Translate research requirements into infrastructure capabilities and promote best practices for large-scale training.
- Contribute to core training code: Evolve the JAX model and training code to accommodate new architectures, modalities, and evaluation metrics.
Join the Arena Intelligence Team
Arena Intelligence is a cutting-edge platform dedicated to evaluating the performance of AI models in real-world scenarios. Founded by a team of researchers from UC Berkeley's SkyLab, our mission is to push the boundaries of AI through comprehensive measurements and advancements.

Every month, millions turn to Arena Intelligence to gain insights into the performance of pioneering AI systems. Our community-driven feedback loop helps us create transparent, rigorous, and human-centered evaluations. Major enterprises and AI laboratories trust our assessments for their reliability, alignment, and impact. Our leaderboards have become the benchmark for AI performance, influencing the global discourse on model efficacy and innovation.

Our team comprises top researchers, engineers, and builders from prestigious institutions like UC Berkeley, Google, Stanford, DeepMind, and Discord. We prioritize truth, agility, craftsmanship, curiosity, and impactful work over traditional hierarchies, fostering an environment where diverse talents can thrive. Our office is a hub of excellence, energy, and focus.

Your Role as a Machine Learning Scientist
We are looking for a skilled Machine Learning Scientist to enhance our methods for evaluating and understanding AI models. You will design and analyze experiments that reveal the factors contributing to the usefulness, trustworthiness, and capabilities of models based on human preference signals. Your contributions will lay the groundwork for scalable AI understanding. This interdisciplinary role involves close collaboration with engineers, product teams, marketing, and the wider research community to develop innovative methodologies for model comparison, preference data analysis, and the disentangling of performance factors, including style, reasoning, and robustness. Your work will directly impact our public leaderboard and the resources we provide to model developers.

If you are intrigued by open-ended challenges, rigorous evaluations, and impactful research, we invite you to apply. We are looking for candidates with:
• Hands-on experience in training large-scale models, including reward and preference models, as well as fine-tuning LLMs using methodologies such as RLHF, DPO, and contrastive learning.
• A solid foundation in machine learning and statistics, with proven experience in designing innovative training objectives, evaluation schemes, or statistical frameworks to enhance model reliability and alignment.
• Proficiency in the entire experimental pipeline, from dataset design and large-batch training to thorough evaluation and ablation, with an understanding of scalability for production.
Join Ando Technologies as a Machine Learning Engineer specializing in AI-native systems and forecasting. In this role, you will leverage cutting-edge machine learning algorithms to develop predictive models and enhance our AI-driven solutions. Collaborate with cross-functional teams to transform data into actionable insights and drive strategic decisions. Ideal candidates will have a passion for innovation and a strong understanding of AI technologies.
Join Orchard as a Machine Learning Engineer and play a pivotal role in transforming data into actionable insights. In this dynamic position, you will leverage your expertise in machine learning algorithms and data analysis to develop innovative solutions that enhance our products and services.

We are looking for a proactive team player who thrives in a fast-paced environment and possesses strong problem-solving skills. You will collaborate with cross-functional teams, engage with large datasets, and contribute to the design and implementation of machine learning models.
About NomadicML

At NomadicML, we are harnessing the power of artificial intelligence to revolutionize the way machines understand and interpret motion. Our vision-language models (VLMs) transform vast amounts of video data into actionable insights, paving the way for advancements in self-driving technology, robotics, and industrial automation.

Founded by Mustafa Bal and Varun Krishnan, both alumni of Harvard University, our team comprises experts who have previously developed critical AI systems at industry giants like Snowflake, Lyft, Microsoft, Amazon, and IBM Research. With a commitment to innovation, we are dedicated to mining insights from the 5 trillion miles driven by Americans annually, uncovering the next frontier in machine intelligence.

About the Role

We are looking for a passionate Machine Learning Engineer who excels at the intersection of foundational model research and production engineering. In this role, you will play a key part in optimizing how machines learn from motion, focusing on training and refining large-scale vision-language models that analyze complex real-world video data.

You will be responsible for creating multi-modal architectures that accurately perceive, localize, and describe motion events across millions of video frames, transforming these innovations into robust APIs and SDKs for enterprise clients.

Working closely with the founders, your contributions will include:
• Training and assessing VLMs tailored for motion comprehension within autonomous driving and robotics datasets.
• Designing and scaling GPU-accelerated pipelines for training, fine-tuning, and inference on diverse data types (video, language, and sensor metadata).
• Developing evaluation frameworks that benchmark spatiotemporal reasoning and localization precision.
At Causal Labs, we are on a groundbreaking mission to develop general causal intelligence, harnessing AI to (1) forecast future events and (2) pinpoint optimal actions to influence that future.

To realize this vision, we are constructing a Large Physics foundation Model (LPM), since domains governed by physics inherently feature cause-and-effect relationships, in contrast to visual or textual data.

Weather serves as the perfect training environment for our LPM: it is the most extensively observed physical system and provides rapid, objective ground-truth feedback from sensory data at an unprecedented scale, far exceeding what is used to train current large language models (LLMs).

Our team comprises elite researchers and engineers with backgrounds in self-driving technology, drug discovery, and robotics, including talent from Google DeepMind, Cruise, Waymo, Meta, Nabla Bio, and Apple. We believe that achieving general causal intelligence will be a pivotal technological advancement for humanity.

We are searching for infrastructure engineers who are eager to tackle formidable challenges and contribute to our mission. Your expertise in distributed training clusters and performance optimization for large models will be crucial as we address our training and inference challenges. If you have experience developing large-scale ML infrastructure in fields like language models, vision systems, robotics, or biology, we invite you to join us.
Oct 29, 2025