Experience Level
Mid to Senior
Qualifications
The ideal candidate should possess a strong background in machine learning and software engineering, with experience in designing and implementing offline infrastructure solutions. You should have a proven track record of working with large datasets, proficiency in programming languages such as Python or C++, and familiarity with cloud-based technologies. A Master's degree in a related field is preferred, along with excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
About the job
Unity Technologies is looking for a Staff Machine Learning Engineer with a focus on offline infrastructure. Based in San Francisco, this position centers on building and refining systems that underpin the performance and scalability of machine learning workflows.
What you will do
Design and develop offline infrastructure to support machine learning projects
Work closely with a team to improve system scalability and reliability
Lead efforts to advance machine learning capabilities within Unity
The team
This group combines technical skill with creative problem-solving to expand what machine learning can accomplish at Unity.
About Unity Technologies
Unity Technologies is a leading platform for creating and operating interactive, real-time 3D content. Our mission is to empower creators to build and connect immersive experiences across various industries, including gaming, film, automotive, and architecture. Join us as we continue to shape the future of interactive entertainment.
Similar jobs
Join us in creating the backbone of data infrastructure for real-world robotic operations. As robotics transitions from research labs to real-world applications across factories, warehouses, vehicles, and field deployments, understanding the intricacies of robotic performance becomes critical. When robots encounter failures or unexpected behaviors, data analysis is key to deciphering the underlying issues.
At Foxglove, we are at the forefront of building tools for observability, visualization, and data infrastructure that empower robotics and autonomous systems teams to manage, analyze, and derive insights from vast amounts of multimodal sensor data collected from operational systems and production fleets.
Role overview
We are seeking a passionate ML Platform Engineer with robust infrastructure expertise to design, deploy, and scale our data platform systems. This platform-centric role will put you in charge of the infrastructure layer that facilitates machine learning in production environments, going beyond just the models themselves. Your responsibilities will encompass ensuring the reliability, scalability, and performance of the ML platform, including inference serving, pipeline orchestration, training infrastructure, and evaluation frameworks. You will tackle substantial challenges such as managing petabyte-scale multimodal robotics data and optimizing high-throughput retrieval and embedding pipelines in a hands-on infrastructure capacity.
Key responsibilities
Design and operationalize production inference infrastructure, focusing on model serving, autoscaling, load balancing, and cost efficiency across cloud environments.
Own the platform architecture for embedding and retrieval pipelines that enable semantic search across multimodal robotics data (image, video, point cloud, and time series).
Develop and sustain the training and evaluation infrastructure that supports rapid iteration on model performance, including job orchestration, experiment tracking, and dataset versioning.
Lead decisions on cloud infrastructure (AWS/GCP) that affect latency, throughput, reliability, and scalability.
Establish platform abstractions and internal tools that empower product engineers to deliver ML-enhanced features without managing infrastructure directly.
Assess, integrate, and operationalize third-party ML infrastructure components while establishing clear build-vs-buy frameworks for the team.
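The embedding-and-retrieval pipelines described above reduce, at their core, to embedding items as vectors and ranking them by similarity to a query. A minimal sketch in plain Python, with toy hand-made "embeddings" standing in for a real model (the event names and vectors are hypothetical, not Foxglove's data):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, index, top_k=2):
    # Rank indexed items by similarity to the query embedding.
    scored = [(cosine(query_vec, vec), item) for item, vec in index.items()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_k]]

# Toy embeddings for three logged robot events (illustrative only).
index = {
    "lidar_dropout": [0.9, 0.1, 0.0],
    "wheel_slip":    [0.1, 0.8, 0.1],
    "camera_glare":  [0.7, 0.0, 0.3],
}
```

Calling `search([1.0, 0.0, 0.1], index)` returns the two events whose embeddings lie closest to the query; production systems replace the linear scan with an approximate nearest-neighbor index to stay fast at petabyte scale.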
Be a Part of the Revolution in E-Commerce with Whatnot!
Whatnot stands as the leading live shopping platform across North America and Europe, where you can buy, sell, and explore the items you cherish. We are transforming the landscape of e-commerce by merging community engagement, shopping, and entertainment into a unique experience tailored just for you. As a remote-first team, we are driven by innovation and firmly rooted in our core values. With operational hubs in the US, UK, Germany, Ireland, and Poland, we are collaboratively crafting the future of online marketplaces. From fashion and beauty to electronics and collectibles like trading cards, comic books, and live plants, our live auctions cater to a diverse audience. And this is just the beginning! As one of the fastest-growing marketplaces, we are on the lookout for innovative, forward-thinking problem solvers in all areas of our business. Stay updated with the latest from Whatnot through our news and engineering blogs, and join us in empowering individuals to transform their passions into successful ventures while fostering community through commerce.
The role
We are seeking passionate builders: intellectually curious, entrepreneurial engineers who are ready to pioneer the future of AI and ML at Whatnot. You will be responsible for designing and scaling the foundational infrastructure that supports machine learning and self-hosted large language model applications throughout the organization. Collaborating closely with machine learning scientists, you will facilitate the deployment of cutting-edge models into production, creating entirely new product experiences. Your work will involve constructing systems that ensure advanced machine learning is reliable and efficient at scale, from low-latency model serving to distributed training and high-throughput GPU inference.
Your responsibilities
Lead the infrastructure that powers AI and ML models across vital business domains: growth, trust and safety, fraud detection, seller tools, and more.
Prototype, deploy, and operationalize innovative ML architectures that significantly influence user experience and marketplace dynamics.
Design and scale inference infrastructure capable of serving large models with minimal latency and maximal throughput.
Construct distributed training and inference pipelines utilizing GPUs, as well as model and data parallelism.
Push the boundaries of your expertise and explore new technologies and methodologies.
Company overview
At Specter, we are pioneering a software-defined "control plane" designed to enhance the real-world perception of physical assets. Our mission begins with safeguarding American businesses by providing them with comprehensive insight into their physical environments. To achieve this, we are developing a robust hardware-software ecosystem leveraging multi-modal wireless mesh sensing technology. This innovation allows us to reduce the cost and time of sensor deployment by a factor of ten. Ultimately, our platform aims to serve as the perception engine for businesses, facilitating real-time visibility and autonomous management of their operational perimeters. Our co-founders, Xerxes and Philip, are deeply committed to empowering our partners in the rapidly evolving landscape of physical AI and robotics. We are a dynamic, rapidly expanding team with talent from Anduril, Tesla, Uber, and the U.S. Special Forces.
Position overview
Specter is seeking a dedicated Machine Learning Infrastructure Engineer to construct and optimize the ML systems that drive real-time perception and inference capabilities across our edge-cloud platform. This position involves overseeing the training, deployment, and enhancement of computer vision and sensor fusion models that enable autonomous monitoring and decision-making for our clients' physical assets.
Key responsibilities
Design and implement scalable ML training pipelines for computer vision applications, including object detection, tracking, classification, and segmentation.
Develop efficient model-serving infrastructure to facilitate real-time inference on edge devices with limited computational and power resources.
Optimize models for deployment on embedded hardware, employing techniques such as quantization, pruning, TensorRT, ONNX, and CoreML.
Create continuous training and evaluation systems to enhance model performance through feedback loops derived from production data.
Establish data pipelines for the ingestion, labeling, versioning, and management of extensive multi-modal sensor datasets, including video, radar, lidar, and thermal data.
Implement model monitoring frameworks, A/B testing methodologies, and performance analytics for deployed perception systems.
Collaborate with perception researchers to transition models from research environments to scalable production across thousands of edge nodes.
Construct tools and infrastructure for distributed training, hyperparameter optimization, and experiment tracking.
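The quantization technique mentioned above (typically int8, applied before TensorRT or ONNX deployment) maps floating-point weights onto a small integer range using a scale and zero point. A conceptual sketch of the affine scheme in plain Python; this illustrates the idea only and is not Specter's pipeline or any specific toolkit's API:

```python
def quantize(values, num_bits=8):
    # Affine (asymmetric) quantization: map floats onto ints in [0, 2^bits - 1].
    qmax = (1 << num_bits) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax if hi != lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo  # lo acts as the zero point offset

def dequantize(q, scale, zero_point):
    # Recover approximate floats; error is bounded by roughly scale/2 per value.
    return [v * scale + zero_point for v in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
```

An 8-bit representation shrinks storage 4x versus float32 and, on hardware with int8 arithmetic, speeds up inference, at the cost of the small reconstruction error visible in `recovered`.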
Full-time|$123.7K/yr - $254.7K/yr|Remote|San Francisco, CA, US; Remote, US
tvScientific, powered by Pinterest, develops a connected TV (CTV) advertising platform designed for performance marketers. The platform combines media buying, optimization, measurement, and attribution to automate and improve TV advertising. Built by professionals in programmatic advertising, digital media, and ad verification, tvScientific aims to deliver measurable results for advertisers.
Role overview
As a Machine Learning Platform Engineer, you will join a team that operates where Site Reliability Engineering meets low-latency distributed systems. This team advances Pinterest’s real-time machine learning and measurement infrastructure, focusing on sub-millisecond decision-making and high-throughput data access. Seamless integration with Pinterest’s core stack is central to the work.
What you will do
Design and build systems to keep queries and RPCs fast and reliable, even during periods of heavy demand.
Develop and enhance the foundation of the machine learning training and serving stack.
Address challenges in storage, indexing, streaming, fan-out, and managing backpressure and failures across services and regions.
Collaborate with software engineering, data infrastructure, and SRE teams to ensure systems are observable, debuggable, and ready for production.
Key areas of focus
I/O scheduling and batching
Lock-free or low-contention data structures
Connection pooling and query planning
Kernel and network tuning
On-disk layout and indexing strategies
Circuit-breaking and autoscaling
Incident response and failure management
NixOS
Defining and maintaining SLIs and SLOs
This position is a strong fit for engineers interested in building and operating large-scale infrastructure, particularly those who enjoy working on real-time systems, observability, and reliability.
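Circuit-breaking, one of the focus areas listed above, stops a failing downstream service from dragging the whole request path down: after enough consecutive failures the breaker "opens" and sheds load, then allows a probe once a cooldown elapses. A minimal sketch, assuming a simple consecutive-failure policy (real implementations often add half-open probing and sliding-window error rates):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    then allow traffic again once a cooldown has elapsed."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.opened_at = None       # None means the circuit is closed

    def allow(self):
        # Closed circuit: requests pass. Open circuit: pass only after cooldown.
        if self.opened_at is None:
            return True
        return self.clock() - self.opened_at >= self.reset_after

    def record_success(self):
        # Any success closes the circuit and clears the failure streak.
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = self.clock()
```

Callers check `allow()` before issuing an RPC and report the outcome via `record_success()` / `record_failure()`; the injectable clock makes the cooldown behavior unit-testable without sleeping.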
Mach9’s Machine Learning Infrastructure Engineers create and maintain the backbone for production AI models used in civil engineering and surveying. The team manages a machine learning pipeline that processes over 10,000 miles of labeled survey data, supports image segmentation networks, and runs 3D prediction models. These systems deliver real-time inference capabilities directly to surveyors and engineers working in the field.
Role overview
This position is designed for mid-career engineers with a strong background in both the training and inference sides of machine learning infrastructure. The work involves handling large-scale data and ensuring reliable performance for demanding, real-world applications.
What you will do
Build and improve training pipelines for deep transformer models using hundreds of terabytes of 3D point cloud and image data.
Design and implement inference infrastructure to support both offline detection algorithms and responsive, real-time inference integrated with CAD software.
Location
Based in San Francisco.
At Physical Intelligence, we are pioneering general-purpose AI applications for the physical world. Our approach involves orchestrating thousands of accelerators across a diverse ecosystem of GPU and TPU clusters, encompassing various hardware generations, cloud platforms, and cluster configurations. Researchers frequently encounter challenges in identifying the optimal cluster for their tasks, understanding resource availability, and configuring their workloads efficiently. This process is not scalable. To enhance productivity, we require an intelligent scheduling and compute system that can automatically determine the best job placements based on availability, hardware compatibility, cost, and priority, allowing researchers to concentrate on their scientific endeavors.
This position encompasses complete ownership of this challenge: the development of scheduling systems, placement logic, cluster management frameworks, and the operational tools essential for seamless operations. The role is distinct from traditional cloud DevOps; it focuses on resource allocation intelligence, utilization efficiency, fault tolerance, and ensuring a smooth experience for large-scale distributed training.
About the team
The ML Infrastructure team is dedicated to bolstering and accelerating Physical Intelligence’s fundamental modeling initiatives by creating systems that ensure large-scale training is reliable, reproducible, and efficient. You will collaborate closely with the ML Infrastructure, data platform, and research teams to eliminate compute scheduling as a bottleneck.
Key responsibilities
- Lead intelligent job scheduling and placement: Design and implement multi-tenant scheduling systems that automatically allocate training jobs to the most suitable cluster based on hardware specifications, topology, availability, cost, and priority. Facilitate equitable resource sharing across teams and projects through quota management, priority tiers, and preemption policies. Abstract away cluster discrepancies so researchers can submit jobs without needing detailed knowledge of cluster specifics.
- Enhance multi-cluster orchestration: Develop the control plane responsible for overseeing the job lifecycle across clusters (including mixed GPU/TPU setups, multi-generational hardware, and both on-premises and cloud deployments) and enable effortless job migration, failover, and rescheduling.
- Optimize accelerator utilization and performance: Continuously monitor and enhance GPU/TPU usage across the entire fleet. Apply priority, preemption, queuing, and fairness strategies that balance research momentum with cost efficiency.
- Guarantee scalability and stability: Implement fault detection, automatic recovery mechanisms, and resilience strategies for long-running multi-node training tasks. Oversee health checks, node management, and scaling strategies to ensure optimal performance.
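The placement decision described above, matching a job to a cluster by hardware compatibility, availability, and cost, is at heart a filter-then-score problem. A toy sketch under assumed field names (`accelerator`, `free_chips`, `cost_per_chip_hour` are invented for illustration, not a real scheduler API):

```python
def place_job(job, clusters):
    """Pick the cheapest cluster that satisfies the job's hardware type
    and capacity requirements; return None if nothing fits."""
    feasible = [
        c for c in clusters
        if job["accelerator"] in c["accelerators"]
        and c["free_chips"] >= job["chips"]
    ]
    if not feasible:
        return None  # caller would queue the job or trigger preemption
    return min(feasible, key=lambda c: c["cost_per_chip_hour"])

# Hypothetical fleet state.
clusters = [
    {"name": "gcp-tpu-v5",  "accelerators": {"tpu"}, "free_chips": 256, "cost_per_chip_hour": 1.2},
    {"name": "aws-h100",    "accelerators": {"gpu"}, "free_chips": 64,  "cost_per_chip_hour": 3.5},
    {"name": "onprem-a100", "accelerators": {"gpu"}, "free_chips": 16,  "cost_per_chip_hour": 0.8},
]
```

A small GPU job lands on the cheap on-prem cluster, while a 32-chip job skips it for the larger cloud pool; a real multi-tenant scheduler would extend the score with topology, priority tiers, and preemption rather than cost alone.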
Role overview Whatnot seeks a Software Engineer specializing in Machine Learning Infrastructure to develop and maintain the systems powering its machine learning applications. This position is based in San Francisco, CA and centers on building the technical backbone that supports machine learning efforts across the company. What you will do Develop and improve frameworks that enable machine learning throughout Whatnot’s platforms. Collaborate with teams from multiple disciplines to design infrastructure that can scale as needs grow. Support seamless integration of machine learning models into existing products.
As a Machine Learning Infrastructure Engineer at Physical Intelligence, you will play a vital role in enhancing and optimizing our training systems and core model code. You will take ownership of critical infrastructure for large-scale training, including managing GPU/TPU compute, orchestrating jobs, and developing reusable, efficient JAX training pipelines. Collaborating closely with researchers and model engineers, you will help transform innovative ideas into experiments and subsequently into production training runs. This position is hands-on and offers significant leverage at the intersection of machine learning, software engineering, and scalable infrastructure.
The team
Our ML Infrastructure team is dedicated to supporting and accelerating Physical Intelligence's core modeling initiatives by building systems that ensure large-scale training is reliable, reproducible, and efficient. The team collaborates with research, data, and platform engineers to guarantee that models can seamlessly transition from prototype to production-grade training runs.
Key responsibilities
- Manage training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and performance metrics/logging.
- Expand distributed training: Collaborate with researchers to efficiently scale JAX-based training across TPU and GPU clusters.
- Enhance performance: Profile and optimize memory usage, device utilization, throughput, and distributed synchronization to maximize efficiency.
- Facilitate rapid iteration: Develop abstractions for launching, monitoring, debugging, and reproducing experiments.
- Oversee compute resources: Ensure optimal allocation and utilization of cloud-based GPU/TPU compute resources while managing costs effectively.
- Collaborate with researchers: Translate research requirements into infrastructure capabilities and promote best practices for large-scale training.
- Contribute to core training code: Evolve the JAX model and training code to accommodate new architectures, modalities, and evaluation metrics.
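The checkpointing responsibility above follows a simple save/restore contract: persist the step and training state durably, and resume from the last checkpoint (or from scratch) after a failure. A framework-agnostic sketch using JSON and an atomic rename; real JAX pipelines serialize device arrays with dedicated libraries, which this deliberately omits:

```python
import json
import os
import tempfile

def save_checkpoint(path, step, state):
    # Write atomically: dump to a temp file in the same directory, then
    # rename over the target, so a crash mid-write never leaves a
    # corrupt checkpoint behind.
    payload = {"step": step, "state": state}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    # Resume from the last saved step, or start fresh if none exists.
    if not os.path.exists(path):
        return 0, {}
    with open(path) as f:
        payload = json.load(f)
    return payload["step"], payload["state"]
```

The atomic-rename pattern matters for long multi-node runs: a partially written checkpoint that parses as truncated JSON would otherwise poison every subsequent restart.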
Innovate Boldly. Shape Tomorrow.
Our Vision
Crafting everyday AGI: reliable, consumer-friendly agents that transform human-AI synergy for millions. Our software is designed to act as a collaborator, enhancing your daily capabilities.
Why Choose AGI, Inc.?
We are a discreet collective of exceptional founders and AI pioneers whose expertise spans Stanford, OpenAI, and DeepMind. Our team leads the way in mobile and computer-based agents, scaling these innovations for consumer use. With a foundation rooted in extensive research on agents, our AI prioritizes trustworthiness and reliability as fundamental principles. Backed by top-tier investors who previously supported the first wave of AI leaders, we are now positioned to create the next generation: everyday AGI. (Check out the demo.) If you envision possibilities where others perceive restrictions, continue reading.
Your Role
Training automation: Design and execute robust CI/CD pipelines tailored for machine learning workflows. Automate nightly and on-demand training sessions encompassing data ingestion, job orchestration, checkpointing, and artifact management, with a focus on reliability.
Evaluation infrastructure: Develop scalable evaluation frameworks that automatically benchmark models with each merge. Enhance latency and resource efficiency to ensure quick experimentation and immediate detection of performance regressions.
Research tooling: Create internal SDKs, CLIs, and lightweight UIs (e.g., Streamlit, Retool) empowering researchers to:
Examine trajectories and traces
Visualize model failures
Organize and oversee datasets
Iterate seamlessly
You'll facilitate a user-friendly experimentation process.
Observability & performance: Enforce comprehensive tracking for:
Model latency, throughput, and error rates
GPU utilization, and more.
Full-time|$292K/yr - $417.2K/yr|Hybrid|San Francisco, CA; Los Angeles, CA; New York, NY (Hybrid); USA - Remote
About the role
The Machine Learning team at Tubi is at the forefront of transforming user experiences through cutting-edge technology. With the industry's largest inventory and a vast audience of millions, we are dedicated to solving complex challenges in recommendations, search, content understanding, and ad optimization, shaping the future of streaming.
We are looking for a Director of Machine Learning Engineering and Infrastructure to spearhead a hybrid team that merges advanced ML engineering with exceptional infrastructure design. In this pivotal role, you will define the strategic vision and implementation for scaling our machine learning capabilities, ensuring our distributed systems and infrastructure can foster innovation on a grand scale. You will blend technical expertise with outstanding leadership to guide teams in delivering robust ML systems and high-performance distributed services.
At Causal Labs, we are on a groundbreaking mission to develop general causal intelligence, harnessing AI to (1) forecast future events and (2) pinpoint optimal actions to influence that future. To realize this vision, we are constructing a Large Physics foundation Model (LPM), as the domains governed by physics inherently feature cause-and-effect relationships, which is distinct from visual or textual data. Weather serves as the perfect training environment for our LPM: it is the most extensively observed physical system and provides rapid, objective ground-truth feedback from sensory data at an unprecedented scale, far exceeding what is used for current large language models (LLMs).
Our team comprises elite researchers and engineers with backgrounds in self-driving technology, drug discovery, and robotics, including talent from Google DeepMind, Cruise, Waymo, Meta, Nabla Bio, and Apple. We believe that achieving general causal intelligence will be a pivotal technological advancement for humanity.
We are searching for infrastructure engineers eager to tackle formidable challenges and contribute to our mission. Your expertise in distributed training clusters and performance optimization for large models will be crucial as we address our training and inference challenges. If you have experience developing large-scale ML infrastructure in fields like language models, vision systems, robotics, or biology, we invite you to join us.
About Abridge
Abridge, established in 2018, is dedicated to enhancing the understanding of healthcare through advanced AI technology. Our platform is specifically designed for medical conversations, streamlining clinical documentation processes and allowing healthcare professionals to prioritize patient care. Our technology converts patient-clinician dialogues into structured clinical notes in real time, integrating seamlessly with electronic medical records (EMR). With our unique Linked Evidence approach and auditable AI framework, we are the sole entity that aligns AI-generated summaries with verified ground truths, fostering trust among healthcare providers. As leaders in generative AI within the healthcare sector, we are committed to setting benchmarks for the ethical implementation of AI across health systems. Our team comprises practicing MDs, AI researchers, PhDs, creative thinkers, technologists, and engineers, all collaborating to empower individuals and enhance the healthcare experience. We have offices in San Francisco's Mission District, New York's SoHo, and Pittsburgh's East Liberty.
The role
As a Senior Machine Learning Infrastructure Engineer at Abridge, you will be essential in constructing and refining the core infrastructure that supports our machine learning models. Your contributions will be crucial in boosting the scalability, efficiency, and performance of our AI solutions. You will collaborate with the Infrastructure and Research teams to build, deploy, optimize, and orchestrate our AI models.
What you'll do
Design, deploy, and maintain scalable Kubernetes clusters for AI model training and inference.
Develop, optimize, and maintain high-performance ML serving and training infrastructure, ensuring minimal latency.
Work alongside ML and product teams to enhance backend infrastructure for AI-driven applications, focusing on model deployment and efficiency.
Improve compute-intensive workflows and maximize GPU utilization for ML tasks.
Create a robust orchestration system for model APIs.
Partner with leadership to formulate and execute strategies for scaling infrastructure as the company expands, guaranteeing sustained efficiency and performance.
Company overview
Echo Neurotechnologies is a dynamic startup revolutionizing the Brain-Computer Interface (BCI) sector. We are committed to creating innovative hardware solutions powered by artificial intelligence, with the goal of enhancing the lives of individuals with disabilities and promoting independence through advanced technology.
Team culture
Become part of a close-knit group of passionate professionals in a fast-paced environment. As part of our early-stage team, you will have the chance to influence important decisions that yield substantial, lasting results. We prioritize continuous learning and collaboration, ensuring your contributions are integral to our collective success.
Job summary
We are looking for a Senior Machine Learning Infrastructure Engineer to join our talented team. In this pivotal role, you will design, build, and scale infrastructure that supports large-scale data processing, modeling, and analysis. You will play an essential part in developing a high-performance, production-ready ML ecosystem that facilitates swift experimentation across diverse datasets, including neural signals and behavioral data. You'll have substantial ownership of our ML R&D platform, collaborating closely with domain experts to develop new cloud infrastructure, data pipelines, and modeling workflows, ultimately leading to state-of-the-art models for neuroscientific breakthroughs and neural decoding that improve the lives of patients with severe neurological disorders.
Key responsibilities
Create adaptable and efficient ML infrastructure:
Design and implement ML cloud infrastructure for extensive modeling and analytics.
Facilitate diverse model exploration, hyperparameter tuning, pretraining, fine-tuning, and evaluation.
Develop and refine scalable distributed training pipelines, incorporating model sharding, cross-GPU communication, and real-time training monitoring.
Manage and sustain robust ML platforms and services throughout the model lifecycle.
Make strategic architecture decisions balancing performance, cost, reliability, and scalability.
Build flexible and scalable data platforms:
Design and optimize large-scale databases and data pipelines to ensure reliable data access.
Full-time|$220K/yr - $220K/yr|On-site|San Francisco, CA
Join Us in Building a Safer Financial System.
At TRM Labs, we are at the forefront of blockchain analytics and AI technology, dedicated to empowering law enforcement, national security, financial institutions, and cryptocurrency businesses in the fight against crypto-related fraud and financial crime. Our advanced platforms leverage blockchain intelligence and AI to trace the flow of funds, identify illicit activities, build robust cases, and provide a comprehensive understanding of threats. Trusted globally, TRM Labs is committed to creating a safer and more secure environment for everyone.
Our mission is to develop an innovative financial system that benefits billions around the globe. By integrating threat intelligence with machine learning, our next-generation platform enables institutions and governments to detect cryptocurrency fraud and financial crimes at an unmatched scale.
As a Machine Learning Infrastructure Engineer at TRM Labs, you will collaborate with a talented team of data scientists, engineers, and product managers. Your role will involve designing and maintaining scalable GPU-powered infrastructure that supports our AI systems. You will work at the intersection of distributed systems, cloud infrastructure, and applied machine learning, laying the groundwork for high-throughput, production-level ML workloads.
Join Matter Intelligence as a Data and Machine Learning Infrastructure Engineer, where you will play a pivotal role in shaping the future of data-driven decision-making. You will be part of a dynamic team focused on building and optimizing infrastructure that supports innovative machine learning applications. Your expertise will help us enhance our data pipelines and ensure seamless integration of machine learning models into production.
About us
At Applied Compute, we specialize in creating Specific Intelligence solutions for enterprises, developing agents that learn continuously from an organization’s processes, data, expertise, and objectives. We recognize a significant gap between the capabilities of AI models in isolation and their practical applications in real-world business contexts. These systems often fall short because they lack adaptability to feedback. To address this, we are building continual learning infrastructure that captures context, memory, and decision-making processes throughout the enterprise, enabling specialized agents to effectively execute real tasks.
What excites us: We operate at a unique intersection where our product team constructs the platform that fuels a new generation of digital coworkers. Our research team pushes the boundaries of post-training and reinforcement learning, creating innovative product experiences. Our applied research engineers collaborate closely with clients to deploy models into production. This blend of strong product focus, deep research, and hands-on customer engagement is crucial for integrating AI into the enterprise. We are product-driven, research-informed, and actively engaged with our clients.
Our team: Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have built RL infrastructure at leading organizations like OpenAI and Scale AI, and developed systems at Together, Two Sigma, and Watershed. We proudly serve Fortune 50 clients alongside companies like DoorDash, Mercor, and Cognition. Our work is supported by renowned investors, including Benchmark, Sequoia, and Lux.
Who thrives in our environment: We seek individuals eager to apply cutting-edge research and complex systems to tackle real-world challenges. You should be adept at quickly adapting to new environments, whether it’s a fresh codebase, a client’s data architecture, or an unfamiliar problem domain.
A genuine enjoyment of customer interactions—listening, empathizing, and understanding how tasks are accomplished within their organizations—is essential. Those with entrepreneurial backgrounds, extensive side projects, or demonstrated end-to-end ownership typically excel in our company.
Full-time|$268K/yr - $368.5K/yr|On-site|San Francisco, CA
About Faire
Faire is a transformative online wholesale marketplace, driven by the conviction that local businesses are the future. Independent retailers around the globe generate more revenue than massive corporations like Walmart and Amazon combined, yet individually they remain small. At Faire, we harness technology, data, and machine learning to connect this vibrant community of entrepreneurs. Think of your favorite local boutique: we empower them to discover and sell the best products from around the world. With our innovative tools and insights, we aim to level the playing field, enabling small businesses to thrive against larger competitors. By championing the growth of independent businesses, Faire positively impacts local economies on a global scale. We’re in search of intelligent, resourceful, and passionate individuals to join us in fueling the shop-local movement. If you value community, we invite you to be part of ours.
About this role
As the Senior Staff Machine Learning Platform Engineer, you will spearhead the technical vision and evolution of Faire's ML platform. You will establish standards, influence organization-wide architecture, and lead intricate, cross-functional initiatives that enhance data science velocity at scale. This position is crucial for adapting ML workflows to leverage modern AI productivity tools. You will not only develop models but also design the systems that enable those models to empower tens of thousands of small retailers to compete and grow their local businesses.
Join Decagon as a Staff Software Engineer specializing in Machine Learning Infrastructure. In this role, you will play a crucial part in enhancing and optimizing our machine learning systems. You will collaborate with a talented team of engineers to build scalable and efficient infrastructure that supports our AI-driven initiatives. As a key contributor, you will leverage your expertise in software engineering and machine learning to solve complex challenges and drive innovation. Your work will impact various projects and help shape the future of our technology.
About Gridware
Gridware is an innovative technology firm based in San Francisco, committed to safeguarding and enhancing the electrical grid. We have pioneered an advanced class of grid management known as Active Grid Response (AGR), which focuses on monitoring the electrical, physical, and environmental aspects of the grid to improve reliability and safety. Our cutting-edge AGR platform utilizes high-precision sensors to identify potential issues early, enabling proactive maintenance and fault mitigation. This all-encompassing strategy enhances safety, minimizes outages, and promotes efficient grid operations. Supported by climate-tech and Silicon Valley investors, we are at the forefront of transforming grid management. For further details, visit www.Gridware.io.
Role overview
In the role of Senior Machine Learning Infrastructure Engineer, you will collaborate closely with the Automation organization and the core ML, Operations, and Analytics teams to enhance and develop the infrastructure surrounding model deployment and monitoring. This position is crucial for amplifying the time-saving benefits that Gridware provides to its customers.
Apr 24, 2026