Senior Machine Learning Engineer Applied Research Model Development jobs in San Francisco – Browse 7,798 openings on RoboApply Jobs
Open roles matching “Senior Machine Learning Engineer Applied Research Model Development” with location signals for San Francisco. 7,798 active listings on RoboApply Jobs.
7,798 jobs found
Machine Learning Researcher in Generative Modeling
Your Profile
Expertise in Machine Learning: You possess a strong background in generative modeling and have contributed to influential machine learning projects, as evidenced by your work on widely adopted open-source libraries or impactful publications at prestigious venues such as NeurIPS, ICML, or Nature.
Proficient ML Developer: Your code is robust, well-tested, and maintainable. You are adept at using version control and code review tools, and you excel at developing efficient prototypes as well as polished production code, with experience in cloud computing and model parallelization.
Data Engineering Skills: You have a solid track record in building machine learning data pipelines for training and evaluation of deep learning models, including data analysis and effective dataset construction.
Model Optimization Enthusiast: With a deep understanding of the interaction between ML libraries, hardware, and data, you are passionate about optimizing model performance for both training and inference speeds.
Curious and Mission-Driven: You are dedicated to making a meaningful impact, adapting your methods to achieve your goals, and thriving in fast-paced environments.
About the job
Join latentlabs, a pioneering company at the forefront of biotechnology, as we seek a talented Machine Learning Researcher specializing in generative modeling. You will become part of a dynamic, interdisciplinary team comprising machine learning experts, protein engineers, and biologists, all committed to revolutionizing biological control and disease treatment. In this role, you will design innovative generative models aimed at creating new proteins that exhibit functionality in wet lab assays.
About latentlabs
At latentlabs, we are dedicated to advancing biotechnology through innovative research and development. Our interdisciplinary team leverages cutting-edge machine learning and biological expertise to create solutions that address critical health challenges. We are passionate about harnessing the power of technology to change lives.
About Us
At Applied Compute, we are pioneering the development of Specific Intelligence for enterprises, creating agents that continuously learn from a company’s processes, data, expertise, and objectives. Our mission is to bridge the gap between isolated AI capabilities and their effective application within real business environments. Traditional AI systems often fall short because they lack the ability to adapt based on feedback. Our continual learning layer captures context, memory, and decision-making processes across the enterprise, enabling specialized agents to engage in meaningful work.

What Excites Us: We operate at the intersection of product development and cutting-edge research. Our product team designs the platform that empowers a new generation of digital coworkers, while our research team drives advancements in post-training and reinforcement learning to enhance user experiences. As an applied research engineer, you will work directly with clients to implement models in production, combining robust product development with deep research insights to facilitate AI integration in enterprises.

Meet Our Team: Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have previously built reinforcement learning infrastructure at OpenAI, established data foundations at Scale AI, and contributed to significant systems at companies like Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients as well as companies like DoorDash, Mercor, and Cognition, and are proud to be backed by investors such as Benchmark, Sequoia, and Lux.

Who Thrives Here: We seek individuals who are passionate about applying innovative research and complex systems to solve real-world challenges. You should feel comfortable navigating new environments rapidly, whether it's a fresh codebase, a client’s data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interaction, empathy, and a deep understanding of their operational workflows are essential. Candidates with entrepreneurial backgrounds, extensive side projects, or a proven track record of end-to-end ownership typically excel in our environment.
Join Our Team at Macroscope
At Macroscope, we are dedicated to being the definitive source of truth for any software development company. Our mission is to empower leaders with clarity and provide engineers with the time they need to innovate. We enable leaders to gain insight into the evolution of their products and codebases, tracking changes, understanding team contributions, and identifying progress, all grounded in the ultimate source of truth: the code itself. Founded by experienced entrepreneurs who have built and sold multiple companies and held executive positions in public tech firms, we are backed by top-tier venture capital firms such as Lightspeed Venture Partners, Thrive Capital, Google Ventures, and Adverb.

The Role
We are seeking a Senior Applied Machine Learning Engineer to design, develop, and optimize the ML and AI systems that drive our core offerings. You will have full ownership of these systems, overseeing everything from data collection and evaluation to model experimentation and large-scale production deployment. This cross-functional position entails leading the ML/AI lifecycle for one of our most vital features: AI Code Review. Collaborating closely with our co-founders, you will make pivotal decisions that shape our product's development, from building high-quality datasets to interpreting experimental results and improving model architecture and performance. You will also play a significant role in crafting and implementing the software that integrates our models with our backend applications and user experience, a unique opportunity to influence our product's evolution.

Technology Stack: Typescript/React (frontend), Golang (backend), Temporal, Google Cloud (GCP), Postgres, Terraform, and custom-built AST "code walkers" in several programming languages, including Golang, Typescript, Swift, Python, and Rust.
About Lightfield
At Lightfield, we are pioneering the future of CRM with our AI-native platform that seamlessly integrates with your email, calendar, and meetings. Our solution captures every interaction and transforms it into structured context, including accounts, tasks, follow-ups, and insights, ensuring that nothing is overlooked. We are fundamentally reimagining CRM with a flexible approach that adapts to how teams operate rather than imposing rigid systems. Lightfield continuously learns, automates processes, and surfaces valuable insights that fuel growth. We are dedicated to creating a CRM platform that is not only fast and intelligent but also genuinely helpful. Our team is backed by investors like Greylock, Lightspeed, and Coatue, and has a rich history of building successful products, including Tome, a generative AI presentation tool used by over 25 million users. Our collective experience spans companies such as Llama, Instagram, Facebook Messenger, Pinterest, Google, and Salesforce.

About the Role
Join our AI/ML team at Lightfield, where we are developing the core experiences of our product through cutting-edge applications that amaze our customers. We are currently focused on creating a robust, domain-specific AI that surpasses conventional LLMs. We thrive on the challenge of crafting innovative AI solutions for professionals engaged in significant work, and we're eager to expand our AI/ML team to rise to this challenge.

Your Responsibilities
Design and deliver extraordinary AI experiences that empower sales teams.
Collaborate closely with founders and executives to shape Lightfield's AI/ML strategy.
Lead the training of new models utilizing both historical and synthetic training data.
Develop and prototype innovative LLM-driven experiences, transforming them into robust product features.
Contribute to building a top-tier AI/ML engineering team through recruitment and mentorship.

Your Profile
5+ years of industry experience in Natural Language Processing (NLP) with a strong portfolio of model training.
Solid understanding of deep learning AI/ML frameworks and cloud services.
Hands-on experience in ML Operations (MLOps).
Deep expertise in NLP and model training, particularly with Large Language Models (LLMs).
Demonstrated ability to adapt open-source generative models for specific applications, with a comprehensive understanding of their architecture.
Join David AI
At David AI, we are pioneering the audio data research landscape. Our research and development approach to data ensures that we deliver datasets with the same precision and rigor that leading AI labs apply to their models. Our mission is to seamlessly integrate AI into everyday life, leveraging audio as a key channel. As audio AI advances and new use cases emerge, high-quality training data has become the critical component. This is where David AI steps in. Founded in 2024 by a group of former engineers and operators from Scale AI, we have rapidly established partnerships with major FAANG companies and AI labs. Recently, we secured a $50M Series B funding round from investors including Meritech, NVIDIA, Jack Altman (Alt Capital), Amplify Partners, and First Round Capital. Our team is sharp, humble, and ambitious. We are looking for talented individuals in research, engineering, product management, and operations to join us in redefining the audio AI landscape.

About Our Machine Learning Team
Our Machine Learning team operates at the forefront of innovative research and practical application, transforming raw audio into high-quality data for top AI labs and enterprises. We manage the entire machine learning lifecycle, from exploring novel speech processing algorithms to deploying models that handle terabytes of audio data daily.

Your Role
As an Applied ML Engineer at David AI, you will develop state-of-the-art speech and audio models, establish production inference systems, and create robust pipelines that demonstrate the true potential of high-quality data.

Key Responsibilities
Research and Design: Create solutions using advanced signal processing algorithms and cutting-edge ML models tailored for speech and audio applications.
Development: Build production-grade inference algorithms, pipelines, and APIs in collaboration with cross-functional teams to extract valuable insights for our clients.
Collaboration: Work alongside our Operations team to gather valuable training and evaluation datasets to enhance our model quality.
Architecture: Design systems that ensure durable and resilient inference and evaluations.
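As a toy illustration of the kind of per-frame audio processing a speech pipeline performs, here is a stdlib-only sketch of frame-level RMS energy, a common first step before feature extraction. The frame size and signal values are invented for illustration; real pipelines operate on sampled waveforms at audio rates.

```python
import math

def frame_rms(samples, frame_size):
    """Split a mono signal into non-overlapping frames and return per-frame RMS energy."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    return [math.sqrt(sum(s * s for s in f) / frame_size) for f in frames]

# Illustrative signal: four samples of silence followed by a constant-amplitude segment.
signal = [0.0] * 4 + [0.5] * 4
print(frame_rms(signal, frame_size=4))  # → [0.0, 0.5]
```

Frame energy like this is what simple voice-activity detectors threshold on before heavier models run.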
Full-time|$160K/yr - $300K/yr|On-site|New York City; San Francisco, CA
About Hebbia
Hebbia is an AI platform designed specifically for investors and bankers, empowering them to generate alpha and unlock new opportunities. Founded in 2020 by George Sivulka and backed by industry leaders like Peter Thiel and Andreessen Horowitz, Hebbia supports investment decisions for major firms including BlackRock, KKR, Carlyle, and Centerview, and serves 40% of the world's largest asset managers. Our flagship product, Matrix, is recognized for its accuracy, speed, and transparency in AI-driven analysis, and supports firms managing assets exceeding $30 trillion globally. We provide critical insights that give finance professionals a competitive advantage, revealing signals that are invisible to the human eye, identifying hidden opportunities, and expediting decision-making with remarkable speed and certainty. We aim to revolutionize the way capital is allocated, risk is mitigated, and value is generated across markets. Hebbia is not just a tool; it is a competitive edge that enhances performance, alpha, and market leadership.

The Team
Our Agents team builds sophisticated reasoning, copiloting, and retrieval capabilities that unlock significant insights for real-world applications. We develop everything from foundational document understanding features to co-piloting experiences for Matrix and extensive, multi-source research. Our proprietary agentic frameworks are designed for scalability, utilizing distributed systems. We focus on creating systems that are not only successful but also reliable, explainable, and adaptable to the vast data our clients encounter. Our mission is to unveil the unknowable unknown for customers worldwide. Our goal is to create a product that becomes indispensable to our users, offering an experience as delightful as their favorite consumer products. We prioritize swift innovation and the development of first-of-their-kind systems.
Company Overview
Echo Neurotechnologies is a pioneering startup in the Brain-Computer Interface (BCI) sector, dedicated to revolutionizing the lives of individuals with disabilities through advanced hardware engineering and artificial intelligence solutions. Our vision is to develop innovative technologies that empower users, restoring autonomy and enhancing their quality of life.

Team Culture
We pride ourselves on cultivating an inclusive and dynamic team of skilled professionals who are passionate about their work. Our startup environment encourages ownership of impactful decisions and fosters continuous learning and collaboration, where every contribution is essential to our collective success.

Job Summary
We are looking for a talented Machine Learning Research Engineer specializing in speech modeling to join our team. The successful candidate will leverage ML/AI methodologies to create and refine adaptable speech models for brain-computer interface applications, ultimately making a difference in the lives of patients facing severe disabilities. Candidates should possess significant expertise in speech modeling, feature engineering, time-series analysis, and the development of custom ML models.

Key Responsibilities
Design and evaluate diverse model architectures and strategies to enhance the accuracy and resilience of models for interpreting speech from brain activity.
Investigate and implement cutting-edge speech features and representations within neural-decoding frameworks, informed by speech science and functional neurophysiology.
Create pipelines for generating personalized and naturalistic speech from both text and brain activity inputs.
Develop algorithms to analyze both intact and compromised speech signals, identifying biomarkers linked to various diseases and disabilities.
Collaborate within a tight-knit team to build models, define R&D workflows, and translate scientific discoveries into practical applications.
Contribute to best practices ensuring reliability, observability, reproducibility, and scientific rigor across the R&D landscape.
Maintain well-documented, versioned code, analysis pipelines, and results for maximum interpretability and reproducibility.
Join us at Foxglove, where we are revolutionizing the robotics industry by building robust data infrastructure for real-world applications. As robotics transitions from research environments to practical deployments in factories, warehouses, vehicles, and field operations, data becomes essential for engineers to troubleshoot failures, understand unexpected behaviors, and enhance robotic systems. At Foxglove, we provide the observability, visualization, and data infrastructure that enable robotics and autonomous systems teams to efficiently ingest, store, query, replay, and analyze extensive volumes of multimodal sensor data from live systems and production fleets.

About the Role
We are seeking a talented Applied Machine Learning Engineer with strong infrastructure insights to design, deploy, and scale the machine learning systems that power our data platform. In this impactful role, you will be responsible for optimizing production ML infrastructure, from enhancing inference pipeline throughput to establishing training and evaluation workflows. You will focus on high-priority challenges such as developing retrieval applications for petabyte-scale multimodal robotics data, utilizing cutting-edge models to create high-performance search and data mining products, and fostering an internal ML flywheel for rapid iteration. This is a hands-on, application-driven position rather than a research-focused role.

Key Responsibilities
Deploy and manage inference infrastructure for production ML workloads, focusing on model serving, scalability, and cost efficiency.
Build and oversee vector database integrations and embedding applications to facilitate semantic search across multimodal robotics data types (image, video, point cloud, and time series).
Design and implement evaluation and training infrastructure to enhance model performance rapidly.
Lead cloud architecture decisions and tooling to optimize inference latency, throughput, cost, and reliability at scale.
Collaborate closely with product engineers to deliver application-driven ML features that empower developers at the forefront of robotics and physical AI, steering clear of prototype experiments.
Identify appropriate off-the-shelf solutions for production and determine when to build versus buy.
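The embedding-based semantic search mentioned in the responsibilities ultimately reduces to nearest-neighbor lookup over vectors. A minimal stdlib-only sketch follows; the three-dimensional vectors and document ids are invented for illustration, while production systems use learned embeddings and a vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, corpus, k=2):
    """Return the k corpus ids ranked by cosine similarity to the query embedding."""
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical embeddings for three sensor recordings.
corpus = {
    "lidar_scan": [0.9, 0.1, 0.0],
    "camera_frame": [0.1, 0.9, 0.1],
    "imu_series": [0.0, 0.2, 0.9],
}
print(top_k([0.85, 0.15, 0.05], corpus, k=2))  # → ['lidar_scan', 'camera_frame']
```

A vector database replaces the exhaustive sort with approximate nearest-neighbor indexes so the same query pattern scales to billions of embeddings.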
Join Our Innovative Team
At OpenAI, we are pioneering the field of artificial intelligence, empowering innovation and shaping the future through transformative research. Our mission is to democratize AI, ensuring its benefits are accessible to all. We are looking for forward-thinking Research Engineers to join our Applied Group, where you will convert groundbreaking research into practical applications that can revolutionize industries, enhance human creativity, and tackle complex challenges.

Your Impactful Role
As a Research Engineer within OpenAI's Applied Group, you will collaborate with some of the brightest minds in AI. Your work will involve deploying cutting-edge models in production settings, transforming theoretical breakthroughs into impactful solutions. If you are passionate about making AI technology accessible and effective, this is your opportunity to make a significant impact.

In this role, you will:
Innovate and Deploy: Create and implement advanced machine learning models addressing real-world issues. Translate OpenAI's research from theory to practice, developing AI-driven applications that make a meaningful difference.
Collaborate with Experts: Engage closely with researchers, software engineers, and product managers to understand intricate business challenges and deliver AI-based solutions. Become part of a vibrant team where creativity and ideas flourish.
Optimize and Scale: Develop scalable data pipelines, fine-tune models for peak performance and precision, and ensure readiness for production. Contribute to projects that leverage state-of-the-art technology and innovative methodologies.
Learn and Lead: Stay at the forefront of advancements in machine learning and AI. Participate in code reviews, share insights, and exemplify best practices to maintain high standards in engineering.
Make a Difference: Oversee and maintain deployed models, ensuring they consistently deliver value. Your contributions will directly shape how AI benefits individuals, businesses, and society as a whole.

You may excel in this position if you possess:
A Master's or PhD in Computer Science, Machine Learning, Data Science, or a related discipline.
Proven experience in deep learning and transformer models.
Expertise with frameworks such as PyTorch or TensorFlow.
A robust understanding of data structures, algorithms, and software engineering principles.
Experience with cloud platforms and deploying machine learning models in production.
The Opportunity
Join us at ComfyOrg as a Senior/Staff Applied Machine Learning Engineer! We are looking for a passionate innovator who is enthusiastic about optimizing model inference. You will play a pivotal role in developing the heart of ComfyUI, our cutting-edge visual AI platform. Your expertise will help us push the limits of AI model performance, making models run faster and more efficiently than ever before.

Are You a Match?
You are fascinated by model inference, memory management, and torch optimizations.
You have experience writing production-level PyTorch code that challenges performance standards.
You have a passion for understanding the inner workings of AI models.
You thrive on developing highly optimized code that consistently delivers results.
You believe the current landscape of ML deployment holds significant room for improvement.

Your Responsibilities:
Develop and enhance the core inference engine that drives ComfyUI.
Optimize large models for speed and memory efficiency.
Collaborate with our core team to architect new features.
Tackle complex technical challenges within the visual AI domain.
Contribute to the future direction of our technology.

Experience with diffusion or LLM models, as well as creating custom nodes for ComfyUI, is highly beneficial.
About Sygaldry Technologies
Sygaldry Technologies is at the forefront of innovation, developing quantum-accelerated AI servers designed to significantly enhance the speed of AI training and inference. By merging quantum computing with AI, we are navigating the challenges of increasing compute costs and energy constraints, paving the way toward superintelligence. Our AI servers leverage a diverse range of qubit types in a fault-tolerant architecture, achieving the necessary balance of cost, scalability, and speed for advanced AI applications. We are committed to pioneering new frontiers in physics, engineering, and AI, tackling the most complex challenges with a culture grounded in optimism and rigor. We seek individuals passionate about defining the convergence of quantum and AI and making a meaningful global impact.

About the Role
Generative AI is revolutionizing computational possibilities but reveals the limitations of classical hardware. While diffusion models yield remarkable outcomes, their iterative sampling and high-dimensional score estimation often lead to computational inefficiencies. We are convinced that quantum computing holds the key to overcoming these challenges. As an ML Research Scientist, you will operate at the intersection of generative modeling and quantum acceleration, formulating theoretical foundations and practical applications that merge these domains. Your focus will be on identifying areas where quantum methods can deliver substantial advantages in generative workflows, providing not just incremental enhancements but transformative improvements grounded in mathematical principles.

Your Responsibilities
Generative Model Architecture & Efficiency
Innovate state-of-the-art diffusion and score-based generative models.
Investigate computational bottlenecks in sampling, denoising, and likelihood estimation.
Design and evaluate novel solver techniques for diffusion ODEs/SDEs.

Quantum-Classical Integration
Discover mathematical structures in generative models that are suitable for quantum acceleration.
Prototype hybrid workflows that utilize quantum subroutines to enhance classical processes.
Conduct rigorous benchmarks comparing theoretical advantages against practical benefits in realistic scenarios.

Research to Production
Transform research findings into scalable implementations.
Collaborate with quantum hardware teams to guide architectural specifications.
Facilitate the integration of research insights into production environments.
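As a toy illustration of the solver work mentioned above, here is a fixed-step Euler integrator for an ODE dx/dt = f(x, t), the simplest member of the solver family used for diffusion sampling. The linear drift f(x, t) = -x below stands in for a learned score model and is purely illustrative; practical diffusion solvers use higher-order schemes over far fewer steps.

```python
def euler_solve(f, x0, t0, t1, steps):
    """Fixed-step Euler integration of dx/dt = f(x, t) from t0 to t1."""
    x, t = x0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * f(x, t)  # one explicit Euler step
        t += h
    return x

# Toy drift dx/dt = -x: the exact solution is x0 * exp(-(t1 - t0)) ≈ 0.3679 here.
x_final = euler_solve(lambda x, t: -x, x0=1.0, t0=0.0, t1=1.0, steps=1000)
print(round(x_final, 4))
```

Halving the step count roughly doubles the per-step truncation error, which is exactly the accuracy/compute trade-off that motivates better solvers for diffusion ODEs.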
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
About Scale AI
At Scale AI, we are committed to propelling the advancement of AI technologies. For over eight years, we have been a pioneer in the AI data sector, supporting groundbreaking innovations in areas such as generative AI, defense solutions, and autonomous driving. Following our recent Series F funding round, we are enhancing access to premium data to accelerate the journey towards Artificial General Intelligence (AGI). Building on our legacy of model evaluation for both enterprise and governmental clients, we are expanding our capabilities to establish new benchmarks for evaluations in both public and private domains.

About This Role
This position is at the leading edge of AI research and practical implementation, concentrating on reasoning within large language models (LLMs). The successful candidate will investigate the data types most critical for evolving LLM-based agents, including browser and software engineering agents. You will significantly influence Scale's data strategy by pinpointing optimal data sources and methodologies to enhance LLM reasoning. To excel in this role, you will need a profound understanding of LLMs, planning algorithms, and fresh approaches to agentic reasoning, alongside inventive solutions to challenges in data generation, model interaction, and evaluation. Your contributions will lead to transformative research on language model reasoning, facilitate collaboration with external researchers, and involve close engagement with engineering teams to translate cutting-edge advancements into scalable, real-world applications.
Full-time|$251.7K/yr - $330K/yr|On-site|San Francisco Bay Area, CA
Our Mission
At Altos Labs, we are dedicated to restoring cell health and resilience through innovative cell rejuvenation techniques aimed at reversing diseases, injuries, and disabilities that can arise throughout life. For further insights, please visit our website at altoslabs.com.

Our Value
Our singular Altos Value is: Everyone Owns Achieving Our Inspiring Mission.

Diversity at Altos
We firmly believe that diverse perspectives are crucial for scientific innovation. At Altos, exceptional scientists and industry leaders collaborate globally to further our shared mission. We prioritize Belonging, ensuring all employees feel valued for their unique perspectives, and we hold ourselves accountable for maintaining a diverse and inclusive environment.

Your Contributions to Altos
As a member of our team, you will accelerate and enhance our efforts in developing unified, multi-modal generative foundation models tailored for multiscale biology. You will be a key player on multidisciplinary teams that create the computational platforms essential for Altos to fulfill its mission. In this position, you will collaborate with other scientists and engineers across the Institute of Computation to design, develop, and scale cutting-edge foundation models that address biological inquiries and assist in discovering novel interventions for aging and disease. Your focus will be on synthesizing unstructured multimodal signals with structured relational data and knowledge graphs that depict biological realities. The ideal candidate will excel in a dynamic environment that values teamwork, transparency, scientific excellence, originality, and integrity.
About Us
At Applied Compute, we specialize in creating Specific Intelligence solutions for enterprises, developing agents that learn continuously from an organization’s processes, data, expertise, and objectives. We recognize a significant gap between the capabilities of AI models in isolation and their practical applications in real-world business contexts. Such systems often fall short because they lack adaptability to feedback. To address this, we are building a continual learning infrastructure that captures context, memory, and decision-making processes throughout the enterprise, enabling specialized agents to effectively execute real tasks.

What Excites Us: We operate at a unique intersection where our product team constructs the platform that fuels a new generation of digital coworkers. Our research team pushes the boundaries of post-training and reinforcement learning, creating innovative product experiences. Our applied research engineers collaborate closely with clients to deploy models into production. This blend of strong product focus, deep research, and hands-on customer engagement is crucial for integrating AI into the enterprise. We are product-driven, research-informed, and actively engaged with our clients.

Our Team: Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have built RL infrastructure at leading organizations like OpenAI and Scale AI, and developed systems at Together, Two Sigma, and Watershed. We proudly serve Fortune 50 clients alongside companies like DoorDash, Mercor, and Cognition. Our work is supported by renowned investors, including Benchmark, Sequoia, and Lux.

Who Thrives in Our Environment: We seek individuals eager to apply cutting-edge research and complex systems to tackle real-world challenges. You should be adept at quickly adapting to new environments, whether it's a fresh codebase, a client's data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interactions (listening, empathizing, and understanding how tasks are accomplished within their organizations) is essential. Those with entrepreneurial backgrounds, extensive side projects, or demonstrated end-to-end ownership typically excel in our company.
Full-time|$176K/yr - $220K/yr|On-site|San Francisco, CA; New York, NY
About This Role
Join Scale AI's Applied ML team as a Machine Learning Research Engineer, focusing on the development of advanced data infrastructure for leading agentic large language models (LLMs) such as ChatGPT, Gemini, and Llama. You will be responsible for architecting scalable multi-agent systems aimed at validating agentic reasoning and behaviors, enhancing human expertise, and conducting research to address real-world agent reliability failures, even in the face of strong benchmarks. Your contributions will directly impact the deployment of production fixes. This role is ideal for exceptional engineers who possess deep research rigor and a strong commitment to creating practical, high-impact systems. You will iterate rapidly using data, leverage AI tools for accelerated development, and collaborate closely with engineering, product, and research teams. If you have a knack for transforming cutting-edge agent research into dependable deployed systems, we would love to hear from you.
Full-time|$166K/yr - $210.3K/yr|On-site|San Francisco, California
P-1380
Join Databricks as a Senior Applied AI Engineer, where you will harness machine learning, scheduling, and optimization algorithms to enhance the efficiency and performance of our engineering systems and infrastructure. Our Applied AI team tackles some of the most challenging and fascinating issues in the industry, ensuring that Databricks infrastructure and products operate at peak performance and cost efficiency. This role is critical, as our customers depend on us to deliver the most optimized workloads.

Your Impact:
Develop comprehensive systems from the ground up within a dynamic team of seasoned professionals.
Influence the direction of our applied machine learning investment areas by collaborating with engineering and product teams across the organization.
Lead the design and implementation of advanced AI models and systems that enhance the capabilities and performance of Databricks' products, infrastructure, and services.
Architect and deploy robust, scalable machine learning infrastructure, including data storage, processing, model training, serving components, and monitoring systems, to facilitate seamless integration of AI/ML models into production environments.
Explore innovative modeling techniques in the realm of machine learning for systems.
Contribute to the wider AI community by publishing research, presenting at conferences, and engaging in open-source projects, strengthening Databricks' reputation as an industry leader.
Why Join Achira?
- Become part of an elite team comprising scientists, machine learning researchers, and engineers dedicated to transforming the predictability of the physical microcosm and revolutionizing drug discovery.
- Explore uncharted territories: we are on a mission to innovate next-generation model architectures that merge AI with chemistry.
- Engage in large-scale operations: harness massive computational resources, extensive datasets, and ambitious objectives.
- Take ownership of significant projects from inception to deployment on large-scale infrastructures.
- Thrive in a culture that values precision, speed, execution, and a proactive mindset.

About the Position
At Achira, we are committed to developing state-of-the-art foundation models that tackle the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as world models of the physical microcosm, incorporating machine-learned interatomic potentials (MLIPs), neural network potentials (NNPs), and various generative models.

We are seeking a Machine Learning Research Engineer (MLRE) who excels at the intersection of advanced machine learning and rigorous research methodologies. You will collaborate closely with our research scientists to design and enhance intelligent training systems that propel us beyond contemporary architectures into a new era of ML-driven molecular modeling.

Your mission is clear yet ambitious: to establish the foundational frameworks for training atomistic simulation models at scale. This entails a deep dive into architecture, data, optimizers, losses, training metrics, and representation learning, all while constructing high-performance systems that maximize the potential of our models.
In this role, you will be instrumental in creating a blueprint for pretraining FSMs similar to today's large-scale generative AI systems, making a significant impact on drug discovery. At Achira, you will have the chance to pioneer models that comprehend and simulate the physical world at an atomic level, achieving unprecedented speed and accuracy.
Join the Revolution in Behavioral Intelligence

Amplify Your Influence
You have achieved remarkable success in your career, creating robust behavioral or neuroscience models that have driven significant outcomes. You possess a talent for discerning patterns in user behavior, comprehending motivations, and optimizing end-to-end user experiences.

Now, envision extending your impact across multiple products and organizations, enhancing the entire app ecosystem. Every application at your fingertips becomes smarter, more engaging, and indispensable to its users. Your expertise can empower product teams to innovate more rapidly, delight users, and boost revenue, all thanks to the behavioral intelligence you develop once and deploy universally.

We share this vision: our team has accomplished this repeatedly at industry leaders like Uber, Apple, Google, and Chime, generating tens of billions of dollars in value for products vital to billions globally. We are poised to elevate our impact even further. Does this resonate with the next chapter you're seeking? If so, continue reading.

Palladio: Pioneering Breakthroughs
Palladio AI is an innovative AI platform aimed at transforming product-led growth and enhancing the value our clients provide in users' daily lives. Our initial focus is on mobile gaming, where development is swift, user engagement is high, and experimentation yields immediate results, making it the perfect testing ground for our platform.

Your Contributions
Our team is constructing foundational systems in behavioral modeling, causal inference, forecasting, and agentic platforms. You will play a pivotal role in extending these areas: creating machine learning and AI-driven behavioral models to identify and highlight product opportunities while deploying self-improving learning loops with each iteration.
Your work will analyze user sentiments, thoughts, decisions, and actions, translating behavioral insights into opportunities that enhance product intuitiveness, engagement, and rewards. In essence, you will convert first-principles data science, neuroscience, cognitive science, and machine learning into scalable solutions across various industries.

Your Profile
- User-Focused. You empathize with users' challenges, needs, and goals throughout their journeys, measure success through user outcomes, and convert insights into innovative and engaging product experiences.
- Scientific Innovator. You...
Company Overview:
At Specter, we are pioneering a software-defined control plane for the physical realm, beginning with safeguarding American enterprises through comprehensive monitoring of their physical assets. Our innovative approach leverages a connected hardware-software ecosystem built on advanced multi-modal wireless mesh sensing technology. This breakthrough enables us to reduce the deployment costs and time for sensors by a factor of 10. Our ultimate goal is to establish a perception engine that provides real-time visibility of a company's physical environment and facilitates autonomous operations management.

Co-founders Xerxes and Philip are dedicated to empowering our partners in the rapidly evolving landscape of physical AI and robotics. Join our dynamic and rapidly expanding team comprised of talents from Anduril, Tesla, Uber, and the U.S. Special Forces.

Position Overview:
We are seeking a Perception AI Engineer who will be instrumental in transforming sensor data pipelines into actionable insights for our clients.

Key Responsibilities:
- Implement and deploy a range of deep-learning models, including vision, vision-language, and large language models, within our sophisticated distributed perception system.
- Design and scale a production-ready data collection, labeling, and model retraining platform.
- Lead the design of a multimodal software user interface.
Full-time|$200K/yr - $240K/yr|Hybrid|United States
SentiLink is at the forefront of delivering cutting-edge identity and risk management solutions, providing both individuals and institutions the ability to transact with assurance. We are revolutionizing identity verification within the United States, replacing outdated, inefficient, and costly practices with solutions that are ten times faster, smarter, and more precise.

Our rapid growth is a testament to our innovative approach; our real-time APIs have successfully verified hundreds of millions of identities, initially focusing on the financial sector and quickly expanding into various new markets. SentiLink enjoys the support of prestigious investors including Craft Ventures, Andreessen Horowitz, NYCA, and Max Levchin.

We are proud to have received accolades from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, and American Banker, and we have been featured in the Forbes Fintech 50 list every year since 2023. Notably, we made history as the first company to deploy the eCBSV and have testified before the United States House of Representatives regarding the future of identity verification.

SentiLink accommodates a flexible work environment, ranging from fully remote positions to in-office roles. As a digital-first company, we emphasize collaboration across teams in the U.S. and India. We have physical locations in Austin, San Francisco, New York City, Seattle, Los Angeles, and Chicago in the U.S., alongside offices in Gurugram (Delhi) and Bengaluru in India. For those near our offices, we encourage regular office attendance. Certain roles, such as our engineering team in India, are designed to be primarily in-office.

Role Overview:
As a Senior Applied ML Scientist at SentiLink, you will be instrumental in developing our core products: advanced models aimed at identifying fraudulent activities while enhancing our expanding array of financial risk solutions.
Your expertise as a seasoned researcher will be essential, making you the authoritative figure in your domain. You will frequently engage in high-impact projects that necessitate a profound understanding of the field, critical analytical skills, and robust technical capabilities. Collaboration with various teams across the organization will be key as you investigate new fraud types, innovate product offerings, and conduct analyses to support our sales and marketing efforts.
Join latentlabs, a pioneering company at the forefront of biotechnology, as we seek a talented Machine Learning Researcher specializing in generative modeling. You will become part of a dynamic, interdisciplinary team comprising machine learning experts, protein engineers, and biologists, all committed to revolutionizing biological control and disease treatment. In this role, you will design innovative generative models aimed at creating new proteins that exhibit functionality in wet lab assays.
About Us
At Applied Compute, we are pioneering the development of Specific Intelligence for enterprises, creating agents that continuously learn from a company's processes, data, expertise, and objectives. Our mission is to bridge the gap between isolated AI capabilities and their effective application within real business environments. Traditional AI systems often fall short as they lack the ability to adapt based on feedback. Our innovative continual learning layer captures context, memory, and decision-making processes across the enterprise, enabling specialized agents to engage in meaningful work.

What Excites Us: We operate at the exciting intersection of product development and cutting-edge research. Our product team designs the platform that empowers a new generation of digital coworkers, while our research team drives advancements in post-training and reinforcement learning to enhance user experiences. As an applied research engineer, you will work directly with clients to implement models in production, combining robust product development with deep research insights to facilitate AI integration in enterprises.

Meet Our Team: Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have previously built reinforcement learning infrastructure at OpenAI, established data foundations at Scale AI, and contributed to significant systems at companies like Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients, including DoorDash, Mercor, and Cognition, and are proud to be backed by reputable investors such as Benchmark, Sequoia, and Lux.

Who Thrives Here: We seek individuals who are passionate about applying innovative research and complex systems to solve real-world challenges. You should feel comfortable navigating new environments rapidly, be it a fresh codebase, a client's data architecture, or an unfamiliar problem domain.
Genuine enjoyment of customer interaction, empathy, and a deep understanding of their operational workflows are essential. Candidates with entrepreneurial backgrounds, extensive side projects, or a proven track record of end-to-end ownership typically excel in our environment.
Join Our Team at Macroscope
At Macroscope, we are dedicated to being the definitive source of truth for any software development company. Our mission is to empower leaders with clarity and provide engineers with the time they need to innovate. We enable leaders to gain insights into the evolution of their products and codebases: tracking changes, understanding team contributions, and identifying progress, all grounded in the ultimate source of truth, the code itself.

Founded by experienced entrepreneurs who have successfully built and sold multiple companies, and held executive positions in public tech firms, we are backed by top-tier venture capital firms such as Lightspeed Venture Partners, Thrive Capital, Google Ventures, and Adverb.

The Role
We are seeking a Senior Applied Machine Learning Engineer who will be responsible for designing, developing, and optimizing the ML and AI systems that drive our core offerings. You will have full ownership of the systems, overseeing everything from data collection and evaluation to model experimentation and large-scale production deployment.

This cross-functional position entails leading the ML/AI lifecycle for one of our most vital features: AI Code Review. Collaborating closely with our co-founders, you will make pivotal decisions that shape our product's development, ranging from building high-quality datasets to interpreting experimental results and enhancing model performance architecture. Additionally, you will play a significant role in crafting and implementing software that seamlessly integrates our models with our backend applications and user experience, offering a unique opportunity to influence our product's evolution.

Technology Stack: Typescript/React (frontend), Golang (backend), Temporal, Google Cloud (GCP), Postgres, Terraform, and custom-built AST "code walkers" in several programming languages including Golang, Typescript, Swift, Python, and Rust.
About Lightfield
At Lightfield, we are pioneering the future of CRM with our AI-native platform that seamlessly integrates with your email, calendar, and meetings. Our innovative solution captures every interaction, transforming it into structured context, including accounts, tasks, follow-ups, and insights, ensuring that nothing is overlooked.

We are fundamentally reimagining CRM by employing a flexible approach that adapts to how teams operate, rather than imposing rigid systems. Lightfield continuously learns, automates processes, and surfaces valuable insights that fuel growth. We are dedicated to creating a CRM platform that is not only fast and intelligent but also genuinely helpful.

Our team is backed by prestigious investors like Greylock, Lightspeed, and Coatue, and has a rich history in building successful products, including Tome, a generative AI presentation tool utilized by over 25 million users. Our collective experience spans notable companies such as Llama, Instagram, Facebook Messenger, Pinterest, Google, and Salesforce.

About the Role
Join our dynamic AI/ML team at Lightfield, where we are developing the core experiences of our product through cutting-edge applications that amaze our customers.
We are currently focused on creating a robust, domain-specific AI that surpasses conventional LLMs. We thrive on the challenge of crafting innovative AI solutions for professionals engaged in significant work, and we're eager to expand our AI/ML team to rise to this challenge.

Your Responsibilities
- Design and deliver extraordinary AI experiences that empower sales teams.
- Collaborate closely with founders and executives to shape Lightfield's AI/ML strategy.
- Lead the training of new models utilizing both historical and synthetic training data.
- Develop and prototype innovative LLM-driven experiences, transforming them into robust product features.
- Contribute to building a top-tier AI/ML engineering team through recruitment and mentorship.

Your Profile
- 5+ years of industry experience in Natural Language Processing (NLP) with a strong portfolio of model training.
- Solid understanding of deep learning AI/ML frameworks and cloud services.
- Hands-on experience in ML Operations (MLOps).
- Deep expertise in NLP and model training, particularly with Large Language Models (LLMs).
- Demonstrated ability to adapt open-source generative models for specific applications, with a comprehensive understanding of their architecture.
Join David AI
At David AI, we are pioneering the audio data research landscape. Our research and development approach to data ensures that we deliver datasets with the same precision and rigor that leading AI labs apply to their models. Our mission is to seamlessly integrate AI into everyday life, leveraging audio as a key channel. As we witness advancements in audio AI and the emergence of new use cases, we recognize that high-quality training data is the critical component. This is where David AI steps in.

Founded in 2024 by a group of former engineers and operators from Scale AI, we have rapidly established partnerships with major FAANG companies and AI labs. Recently, we secured a $50M Series B funding round from prominent investors including Meritech, NVIDIA, Jack Altman (Alt Capital), Amplify Partners, and First Round Capital.

Our team is sharp, humble, and ambitious. We are on the lookout for talented individuals in research, engineering, product management, and operations to join us in our mission to redefine the audio AI landscape.

About Our Machine Learning Team
Our Machine Learning team operates at the forefront of innovative research and practical application, transforming raw audio into high-quality data for top AI labs and enterprises.
We manage the entire machine learning lifecycle, from exploring novel speech processing algorithms to deploying models that handle terabytes of audio data daily.

Your Role
As an Applied ML Engineer at David AI, you will develop state-of-the-art speech and audio models, establish production inference systems, and create robust pipelines that demonstrate the true potential of high-quality data.

Key Responsibilities
- Research and Design: Create solutions using advanced signal processing algorithms and cutting-edge ML models tailored for speech and audio applications.
- Development: Build production-grade inference algorithms, pipelines, and APIs in collaboration with cross-functional teams to extract valuable insights for our clients.
- Collaboration: Work alongside our Operations team to gather valuable training and evaluation datasets to enhance our model quality.
- Architecture: Design systems that ensure durable and resilient inference and evaluations.
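As a purely illustrative aside (this is not David AI's pipeline), the kind of low-level speech processing mentioned above typically begins by splitting a waveform into overlapping frames and computing per-frame features. A minimal NumPy sketch, where the frame length, hop size, and synthetic test signal are all assumptions chosen for the demo:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames (trailing samples dropped)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def log_energy(frames, eps=1e-10):
    """Per-frame log energy, a classic low-level speech feature."""
    return np.log(np.sum(frames ** 2, axis=1) + eps)

# Synthetic 1 s "recording": half a second of silence, then a 440 Hz tone.
sr = 16_000
t = np.arange(sr) / sr
x = np.where(t < 0.5, 0.0, np.sin(2 * np.pi * 440 * t))

frames = frame_signal(x, frame_len=400, hop=160)  # 25 ms windows, 10 ms hop
e = log_energy(frames)
# The log energy jumps once the tone starts, roughly at frame 50.
```

Real pipelines layer far more on top (windowing, mel filterbanks, normalization), but the frame/feature structure above is the common starting point.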
Full-time|$160K/yr - $300K/yr|On-site|New York City; San Francisco, CA
About Hebbia
Hebbia is an innovative AI platform designed specifically for investors and bankers, empowering them to generate alpha and unlock new opportunities. Founded in 2020 by George Sivulka and backed by industry leaders like Peter Thiel and Andreessen Horowitz, Hebbia supports investment decisions for major firms including BlackRock, KKR, Carlyle, and Centerview, and for 40% of the world's largest asset managers. Our flagship product, Matrix, is recognized for its unparalleled accuracy, speed, and transparency in AI-driven analysis, supporting clients who manage assets exceeding $30 trillion globally.

We provide critical insights that give finance professionals a competitive advantage by revealing signals that are invisible to the human eye and identifying hidden opportunities while expediting decision-making with remarkable speed and certainty. We aim to revolutionize the way capital is allocated, risk is mitigated, and value is generated across markets. Hebbia is not just a tool; it is the competitive edge that enhances performance, alpha, and market leadership.

The Team
Our Agents team is dedicated to building sophisticated reasoning, copiloting, and retrieval capabilities that unlock significant insights for real-world applications. We develop everything from foundational document understanding features to co-piloting experiences for Matrix and extensive, multi-source research. Our proprietary agentic frameworks are designed for scalability, utilizing distributed systems.

We focus on creating systems that are not only successful but also reliable, explainable, and adaptable for the vast data our clients encounter. Our mission is to unveil the unknowable unknown for customers worldwide. Our goal is to create a product that becomes indispensable to our users, offering an experience as delightful as their favorite consumer products. We prioritize swift innovation and the development of first-of-their-kind systems.
Company Overview
Echo Neurotechnologies is a pioneering startup in the Brain-Computer Interface (BCI) sector, dedicated to revolutionizing the lives of individuals with disabilities through advanced hardware engineering and artificial intelligence solutions. Our vision is to develop innovative technologies that empower users, restoring autonomy and enhancing their quality of life.

Team Culture
We pride ourselves on cultivating an inclusive and dynamic team of skilled professionals who are passionate about their work. Our startup environment encourages ownership of impactful decisions and fosters continuous learning and collaboration, where every contribution is essential to our collective success.

Job Summary
We are on the lookout for a talented Machine Learning Research Engineer specialized in speech modeling to join our innovative team. The successful candidate will leverage ML/AI methodologies to create and refine adaptable speech models aimed at brain-computer interface applications, ultimately making a difference in the lives of patients facing severe disabilities.
Candidates should possess significant expertise in speech modeling, feature engineering, time-series analysis, and the development of custom ML models.

Key Responsibilities
- Design and evaluate diverse model architectures and strategies to enhance the accuracy and resilience of models for interpreting speech from brain activity.
- Investigate and implement cutting-edge speech features and representations within neural-decoding frameworks, informed by speech science and functional neurophysiology.
- Create pipelines for generating personalized and naturalistic speech from both text and brain activity inputs.
- Develop algorithms to analyze both intact and compromised speech signals, identifying biomarkers linked to various diseases and disabilities.
- Collaborate within a tight-knit team to build models, define R&D workflows, and translate scientific discoveries into practical applications.
- Contribute to best practices ensuring reliability, observability, reproducibility, and scientific rigor across the R&D landscape.
- Maintain well-documented, versioned code, analysis pipelines, and results for maximum interpretability and reproducibility.
Join us at Foxglove, where we are revolutionizing the robotics industry by building robust data infrastructure for real-world applications. As robotics transitions from research environments to practical implementations in factories, warehouses, vehicles, and field operations, data becomes essential for engineers to troubleshoot failures, understand unexpected behaviors, and enhance robotic systems. At Foxglove, we provide the observability, visualization, and data infrastructure that enable robotics and autonomous systems teams to efficiently ingest, store, query, replay, and analyze extensive volumes of multimodal sensor data from live systems and production fleets.

About the Role
We are seeking a talented Applied Machine Learning Engineer with strong infrastructure insights to design, deploy, and scale the machine learning systems that power our data platform. In this impactful role, you will be responsible for optimizing production ML infrastructure, from enhancing inference pipeline throughput to establishing training and evaluation workflows. You will focus on high-priority challenges, such as developing retrieval applications for petabyte-scale multimodal robotics data, utilizing cutting-edge models to create high-performance search and data mining products, and fostering an internal ML flywheel for rapid iteration.
This is a hands-on, application-driven position rather than a research-focused role.

Key Responsibilities
- Deploy and manage inference infrastructure for production ML workloads, focusing on model serving, scalability, and cost efficiency.
- Build and oversee vector database integrations and embedding applications to facilitate semantic search across various multimodal robotics data types (image, video, point cloud, and time series).
- Design and implement evaluation and training infrastructure to enhance model performance rapidly.
- Lead cloud architecture decisions and tools to optimize inference latency, throughput, cost, and reliability at scale.
- Collaborate closely with product engineers to deliver application-driven ML features that empower developers at the forefront of robotics and physical AI, steering clear of prototype experiments.
- Identify appropriate off-the-shelf solutions for production and determine when to build versus buy.
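For context on the embedding-and-semantic-search responsibility above: at its core, a vector index ranks stored embeddings by cosine similarity to a query embedding. A toy sketch (illustrative only; the vectors below are made up, and a production system would use a learned embedding model and an approximate-nearest-neighbor index rather than this brute-force scan):

```python
import numpy as np

def cosine_top_k(query, index, k=3):
    """Return the indices of the k index vectors most similar to the query,
    plus the full cosine-similarity scores."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity against every row
    return np.argsort(-sims)[:k], sims

# Toy "embedding index": 5 vectors in a 4-D space.
index = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0, 0.0])
top, sims = cosine_top_k(query, index, k=2)
# → nearest neighbours are rows 0 and 1
```

At petabyte scale the same ranking runs inside a vector database (with sharding and ANN search), but the similarity computation is the same idea.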
Join Our Innovative Team
At OpenAI, we are pioneering the field of artificial intelligence, empowering innovation and shaping the future through transformative research. Our mission is to democratize AI, ensuring its benefits are accessible to all. We are on the lookout for forward-thinking Research Engineers to join our Applied Group, where you will convert groundbreaking research into practical applications that can revolutionize industries, enhance human creativity, and tackle complex challenges.

Your Impactful Role
As a Research Engineer within OpenAI's Applied Group, you will collaborate with some of the brightest minds in AI. Your work will involve deploying cutting-edge models in production settings, transforming theoretical breakthroughs into impactful solutions. If you are passionate about making AI technology accessible and effective, this is your opportunity to leave a significant impact.

In this role, you will:
- Innovate and Deploy: Create and implement advanced machine learning models addressing real-world issues. Translate OpenAI's research from theory to practice, developing AI-driven applications that make a meaningful difference.
- Collaborate with Experts: Engage closely with researchers, software engineers, and product managers to comprehend intricate business challenges and deliver AI-based solutions. Become part of a vibrant team where creativity and ideas flourish.
- Optimize and Scale: Develop scalable data pipelines, fine-tune models for peak performance and precision, and ensure readiness for production. Contribute to projects that leverage state-of-the-art technology and innovative methodologies.
- Learn and Lead: Stay at the forefront of advancements in machine learning and AI. Participate in code reviews, share insights, and exemplify best practices to maintain high standards in engineering.
- Make a Difference: Oversee and maintain deployed models, ensuring they consistently deliver value.
Your contributions will directly shape how AI benefits individuals, businesses, and society as a whole.

You may excel in this position if you possess:
- A Master's or PhD in Computer Science, Machine Learning, Data Science, or a related discipline.
- Proven experience in deep learning and transformer models.
- Expertise with frameworks such as PyTorch or TensorFlow.
- A robust understanding of data structures, algorithms, and software engineering principles.
- Experience with cloud platforms and deploying machine learning models in production.
The Opportunity
Join us at ComfyOrg as a Senior/Staff Applied Machine Learning Engineer! We are on the hunt for a passionate innovator who is enthusiastic about optimizing model inference. You will play a pivotal role in developing the heart of ComfyUI, our cutting-edge visual AI platform. Your expertise will help us push the limits of AI model performance, making them run faster and more efficiently than ever before.

Are You a Match?
- You are fascinated by model inference, memory management, and torch optimizations.
- You possess experience in writing production-level PyTorch code that challenges performance standards.
- You have a passion for understanding the inner workings of AI models.
- You thrive on developing highly optimized code that consistently delivers results.
- You believe that the current landscape of ML deployment holds significant room for improvement.

Your Responsibilities:
- Develop and enhance the core inference engine that drives ComfyUI.
- Optimize large models for speed and memory efficiency.
- Collaborate with our core team to architect new features.
- Tackle complex technical challenges within the visual AI domain.
- Contribute to the future direction of our technology.

Experience with diffusion or LLM models, as well as creating custom nodes for ComfyUI, is highly beneficial.
About Sygaldry Technologies
Sygaldry Technologies is at the forefront of innovation, developing quantum-accelerated AI servers designed to significantly enhance the speed of AI training and inference. By merging quantum computing with AI, we are navigating the challenges of increasing compute costs and energy constraints, paving the way towards superintelligence. Our AI servers leverage a diverse range of qubit types in a fault-tolerant architecture, achieving the necessary balance of cost, scalability, and speed for advanced AI applications. We are committed to pioneering new frontiers in physics, engineering, and AI, tackling the most complex challenges with a culture grounded in optimism and rigor. We seek individuals passionate about defining the convergence of quantum and AI and making a meaningful global impact.

About the Role
Generative AI is revolutionizing computational possibilities but reveals the limitations of classical hardware. While diffusion models yield remarkable outcomes, their iterative sampling and high-dimensional score estimation often lead to computational inefficiencies. We are convinced that quantum computing holds the key to overcoming these challenges. As an ML Research Scientist, you will operate at the intersection of generative modeling and quantum acceleration, formulating theoretical foundations and practical applications that merge these domains.
Your focus will be on identifying areas where quantum methods can deliver substantial advantages in generative workflows, providing not just incremental enhancements but transformative improvements grounded in mathematical principles.

Your Responsibilities
Generative Model Architecture & Efficiency
- Innovate state-of-the-art diffusion and score-based generative models.
- Investigate computational bottlenecks in sampling, denoising, and likelihood estimation.
- Design and evaluate novel solver techniques for diffusion ODEs/SDEs.

Quantum-Classical Integration
- Discover mathematical structures in generative models that are suitable for quantum acceleration.
- Prototype hybrid workflows that utilize quantum subroutines to enhance classical processes.
- Conduct rigorous benchmarks comparing theoretical advantages against practical benefits in realistic scenarios.

Research to Production
- Transform research findings into scalable implementations.
- Collaborate with quantum hardware teams to guide architectural specifications.
- Facilitate the integration of research insights into production environments.
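To make the "solvers for diffusion ODEs/SDEs" responsibility concrete: the simplest numerical scheme for an SDE is Euler-Maruyama. Below is a minimal illustrative sketch (not Sygaldry's code) applied to the constant-beta variance-preserving forward SDE dx = -½βx dt + √β dW that underlies many diffusion models; the value of beta, the time grid, and the starting point are all assumptions chosen for the demo:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, t_grid, rng):
    """Integrate dx = drift(x, t) dt + diffusion(t) dW with Euler-Maruyama."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for i in range(len(t_grid) - 1):
        dt = t_grid[i + 1] - t_grid[i]
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[i + 1] = x[i] + drift(x[i], t_grid[i]) * dt + diffusion(t_grid[i]) * dw
    return x

# Forward noising SDE of a variance-preserving diffusion process with
# constant beta: dx = -0.5 * beta * x dt + sqrt(beta) dW.
beta = 2.0
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 5.0, 1001)
path = euler_maruyama(
    x0=3.0,
    drift=lambda x, t: -0.5 * beta * x,
    diffusion=lambda t: np.sqrt(beta),
    t_grid=t_grid,
    rng=rng,
)
# The data point is driven toward a unit-variance Gaussian; its mean
# component decays as exp(-0.5 * beta * t).
```

Higher-order and specialized diffusion solvers refine exactly this loop, trading step count against discretization error, which is where much of the computational bottleneck work described above lives.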
Full-time|$252K/yr - $315K/yr|On-site|San Francisco, CA; Seattle, WA; New York, NY
About Scale AI
At Scale AI, we are committed to propelling the advancement of AI technologies. For over eight years, we have been a pioneer in the AI data sector, supporting groundbreaking innovations in areas such as generative AI, defense solutions, and autonomous driving. Following our recent Series F funding round, we are enhancing access to premium data to accelerate the journey towards Artificial General Intelligence (AGI). Building on our legacy of model evaluation for both enterprise and governmental clients, we are expanding our capabilities to establish new benchmarks for evaluations in both public and private domains.

About This Role
This position is at the leading edge of AI research and practical implementation, concentrating on reasoning within large language models (LLMs). The successful candidate will investigate critical data types vital for evolving LLM-based agents, including browser and software engineering agents. You will significantly influence Scale's data strategy by pinpointing optimal data sources and methodologies to enhance LLM reasoning. To excel in this role, you will require a profound understanding of LLMs, planning algorithms, and fresh approaches to agentic reasoning, alongside inventive solutions to challenges in data generation, model interaction, and evaluation. Your contributions will lead to transformative research on language model reasoning, facilitate collaboration with external researchers, and engage closely with engineering teams to translate cutting-edge advancements into scalable, real-world applications.
Full-time|$251.7K/yr - $330K/yr|On-site|San Francisco Bay Area, CA
Our Mission
At Altos Labs, we are dedicated to restoring cell health and resilience through innovative cell rejuvenation techniques aimed at reversing diseases, injuries, and disabilities that can arise throughout life. For further insights, please visit our website at altoslabs.com.

Our Value
Our singular Altos Value is: Everyone Owns Achieving Our Inspiring Mission.

Diversity at Altos
We firmly believe that diverse perspectives are crucial for scientific innovation. At Altos, exceptional scientists and industry leaders collaborate globally to further our shared mission. We prioritize Belonging, ensuring all employees feel valued for their unique perspectives, and we hold ourselves accountable for maintaining a diverse and inclusive environment.

Your Contributions to Altos
As a member of our team, you will accelerate and enhance our efforts in developing unified, multi-modal generative foundation models tailored for multiscale biology. You will be a key player in multidisciplinary teams that create the computational platforms essential for Altos to fulfill its mission. In this position, you will collaborate with other scientists and engineers across the Institute of Computation to design, develop, and scale cutting-edge foundation models that address biological inquiries and assist in discovering novel interventions for aging and disease. Your focus will be on synthesizing unstructured multimodal signals with structured relational data and knowledge graphs that depict biological realities. The ideal candidate will excel in a dynamic environment that values teamwork, transparency, scientific excellence, originality, and integrity.
About Us
At Applied Compute, we specialize in creating Specific Intelligence solutions for enterprises, developing agents that learn continuously from an organization’s processes, data, expertise, and objectives. We recognize a significant gap between the capabilities of AI models in isolation and their practical applications in real-world business contexts: these systems often fall short because they lack adaptability to feedback. To address this, we are building a continual learning infrastructure that captures context, memory, and decision-making processes throughout the enterprise, enabling specialized agents to effectively execute real tasks.

What Excites Us
We operate at a unique intersection: our product team constructs the platform that fuels a new generation of digital coworkers, our research team pushes the boundaries of post-training and reinforcement learning to create innovative product experiences, and our applied research engineers collaborate closely with clients to deploy models into production. This blend of strong product focus, deep research, and hands-on customer engagement is crucial for integrating AI into the enterprise. We are product-driven, research-informed, and actively engaged with our clients.

Our Team
Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have built RL infrastructure at leading organizations like OpenAI and Scale AI, and developed systems at Together, Two Sigma, and Watershed. We proudly serve Fortune 50 clients alongside companies like DoorDash, Mercor, and Cognition. Our work is supported by renowned investors, including Benchmark, Sequoia, and Lux.

Who Thrives in Our Environment
We seek individuals eager to apply cutting-edge research and complex systems to tackle real-world challenges. You should be adept at quickly adapting to new environments, whether it’s a fresh codebase, a client’s data architecture, or an unfamiliar problem domain.
A genuine enjoyment of customer interactions—listening, empathizing, and understanding how tasks are accomplished within their organizations—is essential. Those with entrepreneurial backgrounds, extensive side projects, or demonstrated end-to-end ownership typically excel in our company.
Full-time|$176K/yr - $220K/yr|On-site|San Francisco, CA; New York, NY
About This Role
Join Scale AI's Applied ML team as a Machine Learning Research Engineer, focusing on the development of advanced data infrastructure for leading agentic large language models (LLMs) such as ChatGPT, Gemini, and Llama. You will be responsible for architecting scalable multi-agent systems aimed at validating agentic reasoning and behaviors, enhancing human expertise, and conducting research to address real-world agent reliability failures, even in the face of strong benchmarks. Your contributions will directly impact the deployment of production fixes. This role is ideal for exceptional engineers who possess deep research rigor and a strong commitment to creating practical, high-impact systems. You will iterate rapidly using data, leverage AI tools for accelerated development, and collaborate closely with engineering, product, and research teams. If you have a knack for transforming cutting-edge agent research into dependable deployed systems, we would love to hear from you.
Full-time|$166K/yr - $210.3K/yr|On-site|San Francisco, California
P-1380
Join Databricks as a Senior Applied AI Engineer, where you will harness the power of machine learning, scheduling, and optimization algorithms to enhance the efficiency and performance of our engineering systems and infrastructure. Our Applied AI team tackles some of the most challenging and fascinating issues in the industry, ensuring that Databricks infrastructure and products operate at peak performance and cost efficiency. This role is critical, as our customers depend on us to deliver the most optimized workloads.

Your Impact:
- Develop comprehensive systems from the ground up within a dynamic team of seasoned professionals.
- Influence the direction of our applied machine learning investment areas by collaborating with engineering and product teams across the organization.
- Lead the design and implementation of advanced AI models and systems that enhance the capabilities and performance of Databricks' products, infrastructure, and services.
- Architect and deploy robust, scalable machine learning infrastructure, including data storage, processing, model training, serving components, and monitoring systems to facilitate seamless integration of AI/ML models into production environments.
- Explore innovative modeling techniques in the realm of machine learning for systems.
- Contribute to the wider AI community by publishing research, presenting at conferences, and actively engaging in open-source projects, thereby strengthening Databricks' reputation as an industry leader.
Why Join Achira?
- Become part of an elite team comprising scientists, machine learning researchers, and engineers dedicated to transforming the predictability of the physical microcosm and revolutionizing drug discovery.
- Explore uncharted territories: we are on a mission to innovate next-generation model architectures that merge AI with chemistry.
- Engage in large-scale operations: harness massive computational resources, extensive datasets, and ambitious objectives.
- Take ownership of significant projects from inception to deployment on large-scale infrastructures.
- Thrive in a culture that values precision, speed, execution, and a proactive mindset.

About the Position
At Achira, we are committed to developing state-of-the-art foundation models that tackle the most complex challenges in simulation for drug discovery and beyond. Our atomistic foundation simulation models (FSMs) serve as world models of the physical microcosm, incorporating machine learning interatomic potentials (MLIPs), neural network potentials (NNPs), and various generative models. We are seeking a Machine Learning Research Engineer (MLRE) who excels at the intersection of advanced machine learning and rigorous research methodologies. You will collaborate closely with our research scientists to design and enhance intelligent training systems that propel us beyond contemporary architectures into a new era of ML-driven molecular modeling. Your mission is clear yet ambitious: to establish the foundational frameworks for training atomistic simulation models at scale. This entails a deep dive into architecture, data, optimizers, losses, training metrics, and representation learning, all while constructing high-performance systems that maximize the potential of our models.
In this role, you will be instrumental in creating a blueprint for pretraining FSMs similar to today’s large-scale generative AI systems, making a significant impact on drug discovery.At Achira, you will have the chance to pioneer models that comprehend and simulate the physical world at an atomic level, achieving unprecedented speed and accuracy.
Join the Revolution in Behavioral Intelligence

Amplify Your Influence
You have achieved remarkable success in your career, creating robust behavioral or neuroscience models that have driven significant outcomes. You possess a talent for discerning patterns in user behavior, comprehending motivations, and optimizing end-to-end user experiences. Now, envision extending your impact across multiple products and organizations, enhancing the entire app ecosystem. Every application at your fingertips becomes smarter, more engaging, and indispensable to its users. Your expertise can empower product teams to innovate more rapidly, delight users, and boost revenue, all thanks to the behavioral intelligence you develop once and deploy universally. We share this vision: our team has accomplished this repeatedly at industry leaders like Uber, Apple, Google, and Chime, generating tens of billions of dollars in value for products vital to billions globally. We are poised to elevate our impact even further. Does this resonate with the next chapter you're seeking? If so, continue reading.

Palladio: Pioneering Breakthroughs
Palladio AI is an innovative AI platform aimed at transforming product-led growth and enhancing the value our clients provide in users’ daily lives. Our initial focus is on mobile gaming, where development is swift, user engagement is high, and experimentation yields immediate results, making it the perfect testing ground for our platform.

Your Contributions
Our team is constructing foundational systems in behavioral modeling, causal inference, forecasting, and agentic platforms. You will play a pivotal role in extending these areas: creating machine learning and AI-driven behavioral models to identify and highlight product opportunities while deploying self-improving learning loops with each iteration.
Your work will analyze user sentiments, thoughts, decisions, and actions, translating behavioral insights into opportunities that enhance product intuitiveness, engagement, and rewards. In essence, you will convert first-principles data science, neuroscience, cognitive science, and machine learning into scalable solutions across various industries.

Your Profile
User-Focused. You empathize with users' challenges, needs, and goals throughout their journeys, measure success through user outcomes, and convert insights into innovative and engaging product experiences.
Scientific Innovator. You...
Company Overview:
At Specter, we are pioneering a software-defined control plane for the physical realm, beginning with safeguarding American enterprises through comprehensive monitoring of their physical assets. Our innovative approach leverages a connected hardware-software ecosystem built on advanced multi-modal wireless mesh sensing technology. This breakthrough enables us to reduce the deployment costs and time for sensors by a factor of 10. Our ultimate goal is to establish a perception engine that provides real-time visibility of a company’s physical environment and facilitates autonomous operations management. Co-founders Xerxes and Philip are dedicated to empowering our partners in the rapidly evolving landscape of physical AI and robotics. Join our dynamic and rapidly expanding team composed of talents from Anduril, Tesla, Uber, and the U.S. Special Forces.

Position Overview:
We are seeking a Perception AI Engineer who will be instrumental in transforming sensor data pipelines into actionable insights for our clients.

Key Responsibilities:
- Implement and deploy a range of deep-learning models, including vision, vision-language, and large language models, within our sophisticated distributed perception system.
- Design and scale a production-ready data collection, labeling, and model retraining platform.
- Lead the design of a multimodal software user interface.
Full-time|$200K/yr - $240K/yr|Hybrid|United States
SentiLink is at the forefront of delivering cutting-edge identity and risk management solutions, providing both individuals and institutions the ability to transact with assurance. We are revolutionizing identity verification within the United States, replacing outdated, inefficient, and costly practices with solutions that are ten times faster, smarter, and more precise. Our rapid growth is a testament to our innovative approach; our real-time APIs have successfully verified hundreds of millions of identities, initially focusing on the financial sector and quickly expanding into various new markets. SentiLink enjoys the support of prestigious investors including Craft Ventures, Andreessen Horowitz, NYCA, and Max Levchin.

We are proud to have received accolades from TechCrunch, CNBC, Bloomberg, Forbes, Business Insider, PYMNTS, and American Banker, and we have been featured in the Forbes Fintech 50 list every year since 2023. Notably, we made history as the first company to deploy the eCBSV and have testified before the United States House of Representatives regarding the future of identity verification.

SentiLink offers a flexible work environment, ranging from fully remote positions to in-office roles. As a digital-first company, we emphasize collaboration across teams in the U.S. and India. We have physical locations in Austin, San Francisco, New York City, Seattle, Los Angeles, and Chicago in the U.S., alongside offices in Gurugram (Delhi) and Bengaluru in India. For those near our offices, we encourage regular office attendance. Certain roles, such as our engineering team in India, are designed to be primarily in-office.

Role Overview:
As a Senior Applied ML Scientist at SentiLink, you will be instrumental in developing our core products: advanced models aimed at identifying fraudulent activities while enhancing our expanding array of financial risk solutions.
Your expertise as a seasoned researcher will be essential, making you the authoritative figure in your domain. You will frequently engage in high-impact projects that necessitate a profound understanding of the field, critical analytical skills, and robust technical capabilities. Collaboration with various teams across the organization will be key as you investigate new fraud types, innovate product offerings, and conduct analyses to support our sales and marketing efforts.
Mar 13, 2026