Experience Level
Entry Level
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in performance modeling and analysis, preferably in AI or machine learning contexts.
Strong programming skills in Python, C++, or similar languages.
Familiarity with machine learning frameworks and performance optimization techniques.
Excellent problem-solving abilities and a keen analytical mindset.
About the job
The Performance Modeling Engineer II position at OpenAI centers on building and applying performance models to enhance the efficiency of advanced AI systems. Based in San Francisco, this role contributes to the reliability and speed of OpenAI’s technologies.
What you will do
Develop and implement performance models for AI systems
Collaborate with data scientists and engineers to refine performance metrics
Support the efficiency and rigorous standards of OpenAI’s technologies
About OpenAI
OpenAI is at the forefront of artificial intelligence research and deployment, dedicated to ensuring that AI benefits humanity as a whole. With a mission to advance digital intelligence in a manner that is safe and beneficial, we foster an inclusive environment where innovative ideas thrive.
OpenAI is seeking a Performance Modeling Engineer based in San Francisco. This role centers on building and improving models that enhance the performance and efficiency of AI systems. The work directly supports the technical backbone of OpenAI’s products.
Key responsibilities
Develop and refine models aimed at optimizing the performance of AI systems.
Collaborate with engineers and data scientists to tackle technical challenges as they arise.
Contribute to projects that improve the efficiency of large-scale AI infrastructure.
Role overview
This position offers the chance to work on foundational technology that underpins OpenAI’s products. The focus is on practical improvements and close teamwork with technical colleagues to advance the capabilities and efficiency of AI at scale.
OpenAI is seeking a Software Engineer in San Francisco to focus on improving productivity by optimizing model performance. This position centers on developing solutions that make machine learning models more efficient and effective.
Role overview
This role involves working closely with teams across different functions to identify and address areas where model performance can be improved. The aim is to deliver changes that have a measurable impact on both systems and workflows.
What you will do
Collaborate with engineers and other specialists to enhance model efficiency
Develop and implement solutions that improve the effectiveness of machine learning systems
Contribute to projects that streamline processes and drive productivity gains
Impact
Your work will help shape improvements in how models operate and how teams at OpenAI achieve their goals. The changes you help deliver will support more effective use of resources and better outcomes for the organization.
Role overview
The Performance Modeling Lead at OpenAI works from San Francisco and takes on both technical and leadership responsibilities. This position centers on developing new modeling methods that enhance performance across a variety of applications. Alongside direct technical contributions, the role involves guiding a team and shaping project direction.
What you will do
Develop and improve modeling strategies to raise performance metrics for multiple projects.
Use expertise in data analysis, machine learning, and optimization to address complex problems.
Lead and mentor a team, supporting their technical development and ensuring strong project outcomes.
ABOUT BASETEN
At Baseten, we empower the most innovative AI companies, such as Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer, by providing a robust platform for mission-critical inference. Our unique combination of applied AI research, adaptable infrastructure, and cutting-edge developer tools allows companies at the forefront of AI to deploy state-of-the-art models seamlessly. Having recently secured a $300M Series E funding round from notable investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we are poised for rapid growth. Join us in creating the essential platform for engineers to launch AI products.
THE ROLE
Are you driven to push the boundaries of artificial intelligence while leading a team of talented engineers? We are seeking a Technical Lead Manager with a focus on machine learning performance and inference. This position is perfect for an individual with a strong engineering foundation who is eager to guide and mentor a team while remaining actively engaged in hands-on technology work. If you excel in a dynamic startup atmosphere and are excited to tackle both leadership and technical challenges, we invite you to apply.
EXAMPLE INITIATIVES
As a member of our Model Performance team, you will work on projects such as:
Baseten Embeddings Inference: the fastest embeddings solution available
The Baseten Inference Stack
Driving model performance optimization
RESPONSIBILITIES
Lead, mentor, and manage a team of engineers dedicated to developing and optimizing ML model inference and performance.
Oversee technical strategy and architectural decisions, fostering improvements across our engineering organization.
Collaborate with cross-functional teams to ensure the seamless integration and scalability of ML models in production settings.
Drive innovation in model performance and advocate for best practices within the team.
ABOUT BASETEN
Baseten is at the forefront of AI technology, empowering leading-edge companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer to seamlessly integrate advanced AI models into their operations. Our unique blend of applied AI research, adaptable infrastructure, and intuitive developer tools enables innovators to bring their most ambitious AI products to life. With our recent $300M Series E funding from top-tier investors such as BOND, IVP, Spark Capital, Greylock, and Conviction, we are poised for rapid growth. Join us in shaping the platform that engineers rely on to deploy transformative AI solutions.
THE ROLE
Are you driven by a passion for enhancing artificial intelligence applications? We are seeking a proactive Software Engineer specializing in ML performance to join our energetic team. This position is perfect for backend engineers who thrive in a fast-paced startup environment and are eager to make substantial contributions to the realm of Large Language Model (LLM) Inference. If you're enthusiastic about optimizing open-source ML models, we can't wait to hear from you!
EXAMPLE INITIATIVES
As a member of our Model Performance team, you will have the opportunity to work on exciting projects, including:
Baseten Embeddings Inference: the quickest embeddings solution available
The Baseten Inference Stack
Driving model performance optimization
RESPONSIBILITIES
Develop, refine, and implement advanced techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.
Conduct thorough investigations into the codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to troubleshoot and resolve ML performance issues.
Scale and apply optimization techniques across a diverse array of ML models, with a focus on large language models.
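Of the inference techniques the posting above names, quantization is the simplest to illustrate. Here is a toy sketch of symmetric int8 weight quantization, assuming a single shared scale per tensor; function names are illustrative, not part of any library mentioned in the posting:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: one shared scale per tensor.

    Clamps to [-127, 127] so positive and negative ranges are symmetric.
    """
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid a zero scale
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

q, scale = quantize_int8([0.5, -1.27, 0.003, 1.0])
approx = dequantize(q, scale)  # each value within scale/2 of the original
```

Rounding to the nearest integer bounds the per-weight reconstruction error by half the scale, which is why a well-chosen (often per-channel) scale matters in practice.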
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.
The Opportunity
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.
Your Responsibilities
Conduct large-scale audio training operations
Optimize the performance of our training infrastructure
Collect, process, and evaluate audio datasets
Implement architectural and methodological improvements through rigorous testing
What We Seek
A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
Effective collaboration skills in a fast-paced research environment.
A quick learner who is eager to embrace and implement new concepts.
Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.
Preferred Qualifications
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
Experience with training audio autoencoders.
Solid understanding of signal processing, particularly in audio.
Familiarity with diffusion models, consistency models, or GANs.
Experience with large-scale (multi-node) GPU training environments.
Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
Interest in large-scale, parallel data processing pipelines.
Competence in PyTorch and Python programming.
Experience contributing to large, established codebases with rapid adaptation.
Full-time|$160K/yr - $230K/yr|On-site|San Francisco
About Meter
At Meter, we believe that networking is at the heart of technological advancement. We have innovatively unified the entire networking stack and are now on a mission to make it autonomous.
Our team is developing a cutting-edge neural network-driven system designed to analyze raw computer networks, enabling us to address all networking challenges. As outlined on Meter.ai, we are creating models within a closed-loop system that utilizes real-time telemetry, logs, and network events to autonomously troubleshoot issues, enhance performance, and resolve challenges.
To achieve this, we require not only exceptional models but also robust infrastructure that ensures our models have clean, versioned, and low-latency access to the necessary data throughout training, evaluation, and deployment phases.
Why This Role Is Essential
Each Meter network deployed in the field serves as a valuable data source for our Models team. However, without meticulous infrastructure design, this data risks becoming fragmented, outdated, or inconsistent. In this role, you will ensure that such pitfalls are avoided. You will be responsible for the core data interface that drives our model development, experimentation, evaluation, and real-time inference.
This position is fundamental and offers a significant impact. Your contributions will shape the speed at which we can train new models, the reliability of their evaluations, and their seamless operation across hundreds of real-world networks.
You will collaborate closely with modelers to deliver systems that are elegant, scalable, and robust.
Your Responsibilities
Design and implement the Models API: a unified interface for accessing training, evaluation, and deployment data across raw, transformed, and feature-engineered layers.
Ensure backward compatibility and feature versioning across continually evolving schemas.
Develop scalable pipelines to ingest, transform, and serve petabytes of data across Kafka, Postgres, and ClickHouse.
Create CI/CD workflows that evolve the API in tandem with changes to the underlying data schema.
Facilitate fine-grained querying of historical and real-time data for any network, at any point in time.
Help establish and promote the principle of 'smart data, dumb functions': maximizing operations in the data layer to minimize downstream code complexity.
Collaborate with modelers to co-design training frameworks that optimize performance.
Join Crusoe as a Senior Systems Performance Engineer, where you will play a crucial role in optimizing and enhancing our systems for superior performance. You will be responsible for diagnosing performance bottlenecks, implementing solutions, and ensuring that our infrastructure can scale efficiently. You will work in a dynamic environment that encourages innovation and professional growth.
Role Overview
At Mariana Minerals, we are on a mission to revolutionize refining processes for critical minerals, playing a pivotal role in the global energy transition. We are in search of a dynamic and driven Process Modeling Engineer who will be integral to this endeavor.
In this position, you will take charge of developing, validating, and optimizing heat and material balance models utilizing advanced software such as ASPEN Plus/HYSYS, SysCAD, OLI Studio, or METSIM. You will collaborate closely with R&D, pilot operations, and project execution teams to transform lab and pilot data into robust, scalable process models that are essential for the design of groundbreaking mineral refining facilities.
Key Responsibilities
Create both steady-state and dynamic process models to determine heat and material balances for integrated mineral refinery systems using ASPEN, SysCAD, OLI, or METSIM.
Automate the sizing of equipment and processes (including reactors, heat exchangers, filters, crystallizers, evaporators, and separators) based on model outputs, linking models to datasheets and other engineering tools.
Develop and maintain comprehensive process simulation databases to ensure consistency and traceability among modeling assumptions, test data, and engineering outputs.
Calibrate and reconcile models using operational data from pilot plants to ensure model accuracy and predictive validity.
Conduct optimization studies to enhance energy recovery, recycling strategies, and material efficiency.
Develop dynamic models for validating PLC and DCS programming while assessing buffer sizing throughout the design process.
Integrate process models with CAPEX and OPEX estimation tools to streamline techno-economic model development.
Document modeling methodologies and results, ensuring clear technical communication for design reviews, techno-economic assessments, and regulatory submissions.
About Us
At Lemurian Labs, we are dedicated to democratizing AI technology while prioritizing sustainability. Our mission is to create solutions that minimize environmental impact, ensuring that artificial intelligence serves humanity positively. We are committed to responsible innovation and the sustainable growth of AI.
We are in the process of developing a state-of-the-art, portable compiler that empowers developers to 'build once, deploy anywhere.' This technology ensures seamless cross-platform integration, allowing for model training in the cloud and deployment at the edge, all while maximizing resource efficiency and scalability.
If you are passionate about scaling AI sustainably and are eager to make AI development more powerful and accessible, we invite you to join our team at Lemurian Labs. Together, we can build a future that is innovative and responsible.
The Role
We are seeking a Senior ML Performance Engineer to take charge of designing and leading our Performance Testing Platform from inception. In this pivotal role, you will be the technical expert in measuring, validating, and enhancing the performance of large language models (including Llama 3.2 70B, DeepSeek, and others) before and after compiler optimization on cutting-edge GPU architectures.
This is a critical position that will significantly impact our product quality and customer success. You will work at the intersection of machine learning systems, GPU architecture, and performance engineering, constructing the infrastructure that substantiates the value of our compiler.
Role Overview
OpenAI is hiring a ChatGPT Performance Engineer in San Francisco. This role focuses on improving the performance and efficiency of ChatGPT’s advanced AI models. The position works closely with cross-functional teams to identify and implement solutions that make ChatGPT faster and more reliable for users around the world.
What You Will Do
Optimize the speed, reliability, and scalability of ChatGPT’s platforms.
Collaborate with engineers and other teams to solve technical challenges.
Develop and refine systems to support a seamless user experience globally.
Impact
This work directly shapes the future of AI at OpenAI, helping deliver a dependable and efficient ChatGPT experience to millions of users.
We are seeking a talented Performance Engineer to join our dynamic team at usm2. This is an exciting opportunity for local professionals who are passionate about optimizing system performance and enhancing user experience. As a Performance Engineer, you will play a crucial role in analyzing performance metrics, identifying bottlenecks, and implementing solutions to ensure our applications run smoothly and efficiently.
ABOUT BASETEN
At Baseten, we are at the forefront of AI innovation, providing critical inference solutions for leading AI companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines advanced AI research, adaptable infrastructure, and intuitive developer tools, empowering organizations to deploy state-of-the-art models effectively. With rapid growth and a recent $300M Series E funding round backed by top-tier investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we invite you to join our mission in building the platform of choice for engineers delivering AI products.
THE ROLE
As a member of Baseten’s Model Performance (MP) team, you will play a pivotal role in ensuring our platform’s model APIs are not only fast and reliable but also cost-effective. Your primary focus will be on developing and optimizing the infrastructure that supports our hosted API endpoints for cutting-edge open-source models. This role involves working with distributed systems, model serving, and enhancing the developer experience.
You will collaborate with a small, dynamic team at the intersection of product development, model performance, and infrastructure, defining how developers interact with AI models on a large scale.
RESPONSIBILITIES
Design, develop, and maintain the Model APIs surface, focusing on advanced inference features such as structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, create custom CUDA operators, and enhance memory allocation patterns for maximum efficiency across multi-GPU setups.
Implement performance improvements across various runtimes based on a deep understanding of their internals, including speculative decoding, guided generation for structured outputs, and custom scheduling algorithms for high-performance serving.
Develop robust benchmarking frameworks to evaluate real-world performance across diverse model architectures, batch sizes, sequence lengths, and hardware configurations.
Enhance performance across runtimes (e.g., TensorRT, TensorRT-LLM) through techniques such as speculative decoding, quantization, batching, and KV-cache reuse.
Integrate deep observability mechanisms (metrics, traces, logs) and establish repeatable benchmarks to assess speed, reliability, and quality.
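The repeatable benchmarks called for above typically warm up the workload, then report latency percentiles rather than a single average. A minimal stdlib-only sketch of that shape (the function names and percentile choices are placeholders, not Baseten's actual framework):

```python
import statistics
import time

def benchmark(fn, warmup=3, iters=50):
    """Time fn() repeatedly and report latency percentiles in milliseconds."""
    for _ in range(warmup):
        fn()  # discard warmup iterations (caches, JIT, lazy initialization)
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
        "mean_ms": statistics.fmean(samples),
    }

stats = benchmark(lambda: sum(range(10_000)))
```

Reporting p50 and p95 separately matters for serving workloads: tail latency, not the mean, is what users of a hosted endpoint experience under load.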
About Our Team
Join the Inference team at OpenAI, where we leverage cutting-edge research and technology to deliver exceptional AI products to consumers, enterprises, and developers. Our mission is to empower users to harness the full potential of our advanced AI models, enabling unprecedented capabilities. We prioritize efficient and high-performance model inference while accelerating research advancements.
About the Role
We are seeking a passionate Software Engineer to optimize some of the world's largest and most sophisticated AI models for deployment in high-volume, low-latency, and highly available production and research environments.
Key Responsibilities
Collaborate with machine learning researchers, engineers, and product managers to transition our latest technologies into production.
Work closely with researchers to enable advanced research initiatives through innovative engineering solutions.
Implement new techniques, tools, and architectures that enhance the performance, latency, throughput, and effectiveness of our model inference stack.
Develop tools to identify bottlenecks and instability sources, designing and implementing solutions for priority issues.
Optimize our code and Azure VM fleet to maximize every FLOP and GB of GPU RAM available.
You Will Excel in This Role If You:
Possess a solid understanding of modern machine learning architectures and an intuitive grasp of performance optimization strategies, especially for inference.
Take ownership of problems end-to-end, demonstrating a willingness to acquire any necessary knowledge to achieve results.
Bring at least 5 years of professional software engineering experience.
Have or can quickly develop expertise in PyTorch, NVIDIA GPUs, and relevant optimization software stacks (such as NCCL, CUDA), along with HPC technologies like InfiniBand, MPI, and NVLink.
Have experience in architecting, building, monitoring, and debugging production distributed systems, with bonus points for working on performance-critical systems.
Have successfully rebuilt or significantly refactored production systems multiple times to accommodate rapid scaling.
Are self-driven, enjoying the challenge of identifying and addressing the most critical problems.
Full-time|Remote-Friendly (Travel Required)|San Francisco, CA|New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.
Key responsibilities
Design and implement evaluations for Anthropic's AI models
Collaborate with team members to enhance model performance
Contribute to research that pushes the boundaries of AI systems
Location
Remote-friendly (travel required): San Francisco, CA or New York City, NY
At Genmo, we are at the forefront of advancing artificial intelligence through innovative research in video generation. Our mission is to construct open, cutting-edge models that will ultimately contribute to the realization of Artificial General Intelligence (AGI). As part of our dynamic team, you will play a pivotal role in redefining the future of AI and expanding the horizons of video creation.
We are looking for a skilled GPU Performance Engineer who can extract maximum performance from our H100 infrastructure and fine-tune our model serving stack to achieve unparalleled efficiency. If you are passionate about optimizing performance, particularly at the microsecond level, and thrive on pushing hardware to its limits, this is the perfect opportunity for you.
Key Responsibilities
Utilize advanced profiling tools such as Nsight Systems and nvprof to analyze and enhance GPU workloads.
Develop high-performance CUDA and Triton kernels to optimize essential model functions.
Reduce cold-start latency from seconds to mere milliseconds in our serving infrastructure.
Optimize memory access patterns, implement kernel fusion, and maximize GPU utilization.
Collaborate closely with machine learning engineers to optimize model implementations.
Diagnose and resolve performance issues throughout the application and hardware stack.
Implement custom memory pooling and allocation strategies to enhance performance.
Promote performance optimization techniques and foster a culture of excellence across teams.
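The memory-pooling responsibility above rests on a simple idea: pre-allocate buffers once and recycle them, so the hot path never pays allocation cost. A toy CPU-side sketch of that idea follows; the `BufferPool` class is hypothetical, and real GPU pools (such as CUDA caching allocators) are far more involved:

```python
class BufferPool:
    """Pre-allocate fixed-size buffers and recycle them across requests."""

    def __init__(self, buf_size, count):
        # All allocation happens up front, outside the latency-critical path.
        self._free = [bytearray(buf_size) for _ in range(count)]

    def acquire(self):
        # Pop a recycled buffer; None signals the caller to queue or shed load.
        return self._free.pop() if self._free else None

    def release(self, buf):
        # Return the buffer for reuse instead of freeing it.
        self._free.append(buf)

pool = BufferPool(buf_size=4096, count=8)
buf = pool.acquire()   # no allocation on the hot path
pool.release(buf)      # hand it back for the next request
```

The same pattern is what cuts cold-start and per-request latency in serving stacks: the expensive work (allocation, pinning, device placement) is amortized at startup.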
About Our Team
The Training Runtime team is at the forefront of developing a cutting-edge distributed machine learning training runtime, enabling everything from pioneering research to large-scale model deployments. Our mission is to empower researchers while facilitating growth into frontier-scale operations. We are crafting a cohesive, modular runtime that adapts to researchers’ evolving needs as they progress along the scaling curve.
Our focus is anchored in three key areas: optimizing high-performance, asynchronous data movement that is aware of tensor and optimizer states; building robust, fault-tolerant training frameworks that incorporate comprehensive state management, resilient checkpointing, deterministic orchestration, and advanced observability; and managing distributed processes for enduring, job-specific, and user-defined workflows.
We aim to seamlessly integrate proven large-scale capabilities into a developer-friendly runtime, enabling teams to iterate rapidly and operate reliably across various scales. Our success is gauged by improvements in both training throughput (the speed of model training) and researcher throughput (the pace at which ideas transform into experiments and products).
About the Role
As a Training Performance Engineer, you will be instrumental in driving efficiency enhancements throughout our distributed training architecture. Your responsibilities will include analyzing extensive training runs, pinpointing utilization gaps, and engineering optimizations that maximize throughput and system uptime.
This position merges a deep understanding of systems with practical performance engineering: analyzing GPU kernel performance and collective communication throughput, investigating I/O bottlenecks, and implementing model sharding techniques for large-scale training. Your efforts will ensure our clusters operate at peak performance, enabling OpenAI to develop larger and more sophisticated models within existing compute budgets.
This position is located in San Francisco, CA, with a hybrid work model of three days in the office each week; relocation assistance is offered for new hires.
Key Responsibilities
Analyze end-to-end training runs to detect performance bottlenecks across computation, communication, and storage.
Enhance GPU utilization and throughput for large-scale distributed model training.
Collaborate with runtime and systems engineers to boost kernel efficiency, scheduling, and collective communication performance.
Implement model graph transformations to enhance overall throughput.
Develop tools for monitoring and visualizing metrics such as MFU, throughput, and uptime across clusters.
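One of the metrics named above, MFU (model FLOPs utilization), compares the FLOPs a training run actually achieves against the hardware's theoretical peak. A hedged sketch using the widely used ~6 FLOPs per parameter per token estimate for dense transformer training (the parameter count, token rate, and peak figure below are placeholders, not OpenAI numbers):

```python
def mfu(params, tokens_per_sec, peak_flops_per_sec):
    """Model FLOPs utilization: achieved training FLOPs / hardware peak.

    Uses the common ~6 FLOPs per parameter per token approximation for
    the combined forward and backward pass of dense transformer training.
    """
    achieved_flops_per_sec = 6 * params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# A 7B-parameter model training at 4,000 tokens/s on hardware with a
# (placeholder) 989 TFLOP/s peak runs at roughly 17% MFU:
utilization = mfu(7e9, 4_000, 989e12)
```

Because the numerator is fixed by the model and token throughput, raising MFU means attacking exactly the bottlenecks the posting lists: kernel efficiency, collective communication, and I/O stalls.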
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world, from realizing the future of transportation to fast-tracking medical innovations. We accomplish this by developing and operating the premier data and AI infrastructure platform, enabling our customers to harness profound data insights for business enhancement.
Our Model Serving product equips organizations with a cohesive, scalable, and governed solution for deploying and managing AI/ML models, ranging from traditional machine learning to intricate proprietary large language models. It ensures real-time, low-latency inference, governance, monitoring, and lineage. As the adoption of AI surges, Model Serving stands as a fundamental component of the Databricks platform, allowing customers to operationalize models at scale with robust SLAs and cost efficiency.
In the role of Staff Engineer, you will significantly influence both the product experience and the core infrastructure of Model Serving. Your responsibilities will include designing and constructing systems that facilitate high-throughput, low-latency inference across CPU and GPU workloads, steering architectural strategies, and collaborating extensively with platform, product, infrastructure, and research teams to create an exceptional serving platform.
ABOUT BASETEN
At Baseten, we are at the forefront of enabling transformative AI solutions for some of the world's leading companies, including Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our innovative platform combines cutting-edge AI research, adaptable infrastructure, and developer-friendly tools to facilitate the production of advanced models. Recently, we celebrated our rapid growth with a successful $300M Series E funding round from notable investors like BOND, IVP, Spark Capital, Greylock, and Conviction. We invite you to join our dynamic team and contribute to the evolution of AI product deployment.
THE ROLE
As a Senior Software Engineer specializing in Model Training at Baseten, you will play a pivotal role in constructing the infrastructure essential for the large-scale training and fine-tuning of foundational AI models. Your responsibilities will include designing and implementing distributed training systems, optimizing GPU utilization, and establishing scalable pipelines that empower Baseten and our clientele to adapt models with efficiency and reliability.
This role demands a high level of technical expertise and hands-on involvement: you will be responsible for critical components of our training stack, collaborate with product and infrastructure teams to identify customer needs, and drive advancements in scalable training infrastructure.
EXAMPLE WORK
Training open-source models that surpass GPT-5 capabilities for a leading digital insurer
Exploring specialized, continuously learning models as the future of AI
Overview of our training documentation
Research initiatives we've undertaken
RESPONSIBILITIES
Design, construct, and sustain distributed training infrastructure for large foundation models
Develop scalable pipelines for fine-tuning and training across diverse GPU/accelerator clusters
Enhance training performance through optimization of algorithms and infrastructure
Collaborate closely with cross-functional teams to align technical solutions with business objectives
Stay abreast of advancements in machine learning and AI to continually improve our training processes
Role overview The Performance Modeling Engineer II position at OpenAI centers on building and applying performance models to enhance the efficiency of advanced AI systems. Based in San Francisco, this role contributes to the reliability and speed of OpenAI’s technologies. What you will do Develop and implement performance models for AI systems Collaborate with data scientists and engineers to refine performance metrics Support the efficiency and rigorous standards of OpenAI’s technologies
OpenAI is seeking a Performance Modeling Engineer based in San Francisco. This role centers on building and improving models that enhance the performance and efficiency of AI systems. The work directly supports the technical backbone of OpenAI’s products. Key responsibilities Develop and refine models aimed at optimizing the performance of AI systems. Collaborate with engineers and data scientists to tackle technical challenges as they arise. Contribute to projects that improve the efficiency of large-scale AI infrastructure. Role overview This position offers the chance to work on foundational technology that underpins OpenAI’s products. The focus is on practical improvements and close teamwork with technical colleagues to advance the capabilities and efficiency of AI at scale.
OpenAI is seeking a Software Engineer in San Francisco to focus on improving productivity by optimizing model performance. This position centers on developing solutions that make machine learning models more efficient and effective. Role overview This role involves working closely with teams across different functions to identify and address areas where model performance can be improved. The aim is to deliver changes that have a measurable impact on both systems and workflows. What you will do Collaborate with engineers and other specialists to enhance model efficiency Develop and implement solutions that improve the effectiveness of machine learning systems Contribute to projects that streamline processes and drive productivity gains Impact Your work will help shape improvements in how models operate and how teams at OpenAI achieve their goals. The changes you help deliver will support more effective use of resources and better outcomes for the organization.
Role overview
The Performance Modeling Lead at OpenAI works from San Francisco and takes on both technical and leadership responsibilities. This position centers on developing new modeling methods that enhance performance across a variety of applications. Alongside direct technical contributions, the role involves guiding a team and shaping project direction.

What you will do
- Develop and improve modeling strategies to raise performance metrics for multiple projects.
- Use expertise in data analysis, machine learning, and optimization to address complex problems.
- Lead and mentor a team, supporting their technical development and ensuring strong project outcomes.
ABOUT BASETEN
At Baseten, we empower the most innovative AI companies—such as Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer—by providing a robust platform for mission-critical inference. Our unique combination of applied AI research, adaptable infrastructure, and cutting-edge developer tools allows companies at the forefront of AI to deploy state-of-the-art models seamlessly. Having recently secured a $300M Series E funding round from notable investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we are poised for rapid growth. Join us in creating the essential platform for engineers to launch AI products.

THE ROLE
Are you driven to push the boundaries of artificial intelligence while leading a team of talented engineers? We are seeking a Technical Lead Manager with a focus on machine learning performance and inference. This position is perfect for an individual with a strong engineering foundation who is eager to guide and mentor a team while remaining actively engaged in hands-on technology work. If you excel in a dynamic startup atmosphere and are excited to tackle both leadership and technical challenges, we invite you to apply.

EXAMPLE INITIATIVES
As a member of our Model Performance team, you will work on projects such as:
- Baseten Embeddings Inference: the fastest embeddings solution available
- The Baseten Inference Stack
- Driving model performance optimization

RESPONSIBILITIES
- Lead, mentor, and manage a team of engineers dedicated to developing and optimizing ML model inference and performance.
- Oversee technical strategy and architectural decisions, fostering improvements across our engineering organization.
- Collaborate with cross-functional teams to ensure the seamless integration and scalability of ML models in production settings.
- Drive innovation in model performance and advocate for best practices within the team.
ABOUT BASETEN
Baseten is at the forefront of AI technology, empowering leading-edge companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer to seamlessly integrate advanced AI models into their operations. Our unique blend of applied AI research, adaptable infrastructure, and intuitive developer tools enables innovators to bring their most ambitious AI products to life. With our recent $300M Series E funding from top-tier investors such as BOND, IVP, Spark Capital, Greylock, and Conviction, we are poised for rapid growth. Join us in shaping the platform that engineers rely on to deploy transformative AI solutions.

THE ROLE
Are you driven by a passion for enhancing artificial intelligence applications? We are seeking a proactive Software Engineer specializing in ML performance to join our energetic team. This position is perfect for backend engineers who thrive in a fast-paced startup environment and are eager to make substantial contributions to the realm of Large Language Model (LLM) Inference. If you're enthusiastic about optimizing open-source ML models, we can't wait to hear from you!

EXAMPLE INITIATIVES
As a member of our Model Performance team, you will have the opportunity to work on exciting projects, including:
- Baseten Embeddings Inference: the quickest embeddings solution available
- The Baseten Inference Stack
- Driving model performance optimization

RESPONSIBILITIES
- Develop, refine, and implement advanced techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.
- Conduct thorough investigations into the codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to troubleshoot and resolve ML performance issues.
- Scale and apply optimization techniques across a diverse array of ML models, with a focus on large language models.
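Speculative decoding, one of the techniques this posting names, can be sketched in a few lines: a cheap draft model proposes a block of tokens, and the expensive target model verifies them, keeping the longest agreed prefix so several tokens can be accepted per target step. The functions below are deterministic toy stand-ins, purely illustrative and not Baseten's implementation:

```python
def speculative_decode(target, draft, prompt, n_tokens, k=4):
    """Toy speculative decoding: the draft proposes k tokens, the target
    verifies them and keeps the matching prefix, then always contributes
    one corrected token itself so decoding makes progress."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        proposal = draft(out, k)                # k cheap guesses
        accepted = []
        for tok in proposal:
            if target(out + accepted) == tok:   # target agrees: accept
                accepted.append(tok)
            else:
                break                           # first disagreement: stop
        accepted.append(target(out + accepted)) # target emits next token
        out.extend(accepted)
    return out[len(prompt):len(prompt) + n_tokens]

# Deterministic stand-ins: both continue an arithmetic sequence,
# but the draft guesses wrong on every 3rd proposed token.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx, k: [ctx[-1] + i + 1 + (i == 2) for i in range(k)]

print(speculative_decode(target, draft, [0], 6))  # [1, 2, 3, 4, 5, 6]
```

The output matches what the target model alone would produce; the speedup in real systems comes from the target verifying a whole proposed block in one forward pass.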
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity:
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities:
- Conduct large-scale audio training operations
- Optimize the performance of our training infrastructure
- Collect, process, and evaluate audio datasets
- Implement architectural and methodological improvements through rigorous testing

What We Seek:
- A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
- Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
- Effective collaboration skills in a fast-paced research environment.
- A quick learner who is eager to embrace and implement new concepts.
- Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications:
- Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
- Experience with training audio autoencoders.
- Solid understanding of signal processing, particularly in audio.
- Familiarity with diffusion models, consistency models, or GANs.
- Experience with large-scale (multi-node) GPU training environments.
- Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
- Interest in large-scale, parallel data processing pipelines.
- Competence in PyTorch and Python programming.
- Experience contributing to large, established codebases with rapid adaptation.
Full-time|$160K/yr - $230K/yr|On-site|San Francisco
About Meter
At Meter, we believe that networking is at the heart of technological advancement. We have innovatively unified the entire networking stack and are now on a mission to make it autonomous.

Our team is developing a cutting-edge neural-network-driven system designed to analyze raw computer networks, enabling us to address all networking challenges. As outlined on Meter.ai, we are creating models within a closed-loop system that utilizes real-time telemetry, logs, and network events to autonomously troubleshoot issues, enhance performance, and resolve challenges.

To achieve this, we require not only exceptional models but also robust infrastructure that ensures our models have clean, versioned, and low-latency access to the necessary data throughout training, evaluation, and deployment phases.

Why This Role Is Essential
Each Meter network deployed in the field serves as a valuable data source for our Models team. However, without meticulous infrastructure design, this data risks becoming fragmented, outdated, or inconsistent. In this role, you will ensure that such pitfalls are avoided. You will be responsible for the core data interface that drives our model development, experimentation, evaluation, and real-time inference.

This position is fundamental and offers a significant impact. Your contributions will shape the speed at which we can train new models, the reliability of their evaluations, and their seamless operation across hundreds of real-world networks.
You will collaborate closely with modelers to deliver systems that are elegant, scalable, and robust.

Your Responsibilities
- Design and implement the Models API: a unified interface for accessing training, evaluation, and deployment data across raw, transformed, and feature-engineered layers.
- Ensure backward compatibility and feature versioning across continually evolving schemas.
- Develop scalable pipelines to ingest, transform, and serve petabytes of data across Kafka, Postgres, and ClickHouse.
- Create CI/CD workflows that evolve the API in tandem with changes to the underlying data schema.
- Facilitate fine-grained querying of historical and real-time data for any network, at any point in time.
- Help establish and promote the principle of "smart data, dumb functions": maximizing operations in the data layer to minimize downstream code complexity.
- Collaborate with modelers to co-design training frameworks that optimize performance.
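The backward-compatibility and feature-versioning responsibility above boils down to keeping every published version of a feature callable while schemas evolve, so old training jobs keep working. A minimal sketch of the idea; the names here (`FeatureRegistry`, the latency feature) are hypothetical illustrations, not Meter's actual Models API:

```python
from typing import Callable, Dict


class FeatureRegistry:
    """Keep every registered version of each feature; callers may pin a
    version so evolving schemas stay backward compatible."""

    def __init__(self):
        self._features: Dict[str, Dict[int, Callable]] = {}

    def register(self, name: str, version: int, fn: Callable) -> None:
        self._features.setdefault(name, {})[version] = fn

    def get(self, name: str, version=None) -> Callable:
        versions = self._features[name]
        if version is None:          # unpinned callers get the newest version
            version = max(versions)
        return versions[version]


registry = FeatureRegistry()
# v1 reported latency in milliseconds; v2 changed the contract to seconds.
registry.register("latency", 1, lambda row: row["latency_ms"])
registry.register("latency", 2, lambda row: row["latency_ms"] / 1000.0)

row = {"latency_ms": 250}
print(registry.get("latency", 1)(row))  # 250   (old consumers unaffected)
print(registry.get("latency")(row))     # 0.25  (latest version by default)
```

In a real pipeline the registry would sit in front of Kafka/Postgres/ClickHouse readers, but the versioning contract is the same.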
Join Crusoe as a Senior Systems Performance Engineer, where you will play a crucial role in optimizing and enhancing our systems for superior performance. You will be responsible for diagnosing performance bottlenecks, implementing solutions, and ensuring that our infrastructure can scale efficiently. Work in a dynamic environment that encourages innovation and professional growth.
Role Overview
At Mariana Minerals, we are on a mission to revolutionize refining processes for critical minerals, playing a pivotal role in the global energy transition. We are in search of a dynamic and driven Process Modeling Engineer who will be integral to this endeavor.

In this position, you will take charge of developing, validating, and optimizing heat and material balance models utilizing advanced software such as ASPEN Plus/HYSYS, SysCAD, OLI Studio, or METSIM. You will collaborate closely with R&D, pilot operations, and project execution teams to transform lab and pilot data into robust, scalable process models that are essential for the design of groundbreaking mineral refining facilities.

Key Responsibilities
- Create both steady-state and dynamic process models to determine heat and material balances for integrated mineral refinery systems using ASPEN, SysCAD, OLI, or METSIM.
- Automate the sizing of equipment and processes (including reactors, heat exchangers, filters, crystallizers, evaporators, and separators) based on model outputs, linking models to datasheets and other engineering tools.
- Develop and maintain comprehensive process simulation databases to ensure consistency and traceability among modeling assumptions, test data, and engineering outputs.
- Calibrate and reconcile models using operational data from pilot plants to ensure model accuracy and predictive validity.
- Conduct optimization studies to enhance energy recovery, recycling strategies, and material efficiency.
- Develop dynamic models for validating PLC and DCS programming while assessing buffer sizing throughout the design process.
- Integrate process models with CAPEX and OPEX estimation tools to streamline techno-economic model development.
- Document modeling methodologies and results, ensuring clear technical communication for design reviews, techno-economic assessments, and regulatory submissions.
About Us
At Lemurian Labs, we are dedicated to democratizing AI technology while prioritizing sustainability. Our mission is to create solutions that minimize environmental impact, ensuring that artificial intelligence serves humanity positively. We are committed to responsible innovation and the sustainable growth of AI.

We are in the process of developing a state-of-the-art, portable compiler that empowers developers to 'build once, deploy anywhere.' This technology ensures seamless cross-platform integration, allowing for model training in the cloud and deployment at the edge, all while maximizing resource efficiency and scalability.

If you are passionate about scaling AI sustainably and are eager to make AI development more powerful and accessible, we invite you to join our team at Lemurian Labs. Together, we can build a future that is innovative and responsible.

The Role
We are seeking a Senior ML Performance Engineer to take charge of designing and leading our Performance Testing Platform from inception. In this pivotal role, you will be recognized as the technical expert in measuring, validating, and enhancing the performance of large language models (including Llama 3.2 70B, DeepSeek, and others) prior to and following compiler optimization on cutting-edge GPU architectures.

This is a critical position that will significantly impact our product quality and customer success. You will work at the intersection of Machine Learning systems, GPU architecture, and performance engineering, constructing the infrastructure that substantiates the value of our compiler.
Role Overview
OpenAI is hiring a ChatGPT Performance Engineer in San Francisco. This role focuses on improving the performance and efficiency of ChatGPT’s advanced AI models. The position works closely with cross-functional teams to identify and implement solutions that make ChatGPT faster and more reliable for users around the world.

What You Will Do
- Optimize the speed, reliability, and scalability of ChatGPT’s platforms.
- Collaborate with engineers and other teams to solve technical challenges.
- Develop and refine systems to support a seamless user experience globally.

Impact
This work directly shapes the future of AI at OpenAI, helping deliver a dependable and efficient ChatGPT experience to millions of users.
We are seeking a talented Performance Engineer to join our dynamic team at usm2. This is an exciting opportunity for local professionals who are passionate about optimizing system performance and enhancing user experience. As a Performance Engineer, you will play a crucial role in analyzing performance metrics, identifying bottlenecks, and implementing solutions to ensure our applications run smoothly and efficiently.
ABOUT BASETEN
At Baseten, we are at the forefront of AI innovation, providing critical inference solutions for leading AI companies like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our platform combines advanced AI research, adaptable infrastructure, and intuitive developer tools, empowering organizations to deploy state-of-the-art models effectively. With rapid growth and a recent $300M Series E funding round backed by top-tier investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we invite you to join our mission in building the platform of choice for engineers delivering AI products.

THE ROLE
As a member of Baseten’s Model Performance (MP) team, you will play a pivotal role in ensuring our platform’s model APIs are not only fast and reliable but also cost-effective. Your primary focus will be on developing and optimizing the infrastructure that supports our hosted API endpoints for cutting-edge open-source models. This role involves working with distributed systems, model serving, and enhancing the developer experience.
You will collaborate with a small, dynamic team at the intersection of product development, model performance, and infrastructure, defining how developers interact with AI models on a large scale.

RESPONSIBILITIES
- Design, develop, and maintain the Model APIs surface, focusing on advanced inference features such as structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
- Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, create custom CUDA operators, and enhance memory allocation patterns for maximum efficiency across multi-GPU setups.
- Implement performance improvements across various runtimes based on a deep understanding of their internals, including speculative decoding, guided generation for structured outputs, and custom scheduling algorithms for high-performance serving.
- Develop robust benchmarking frameworks to evaluate real-world performance across diverse model architectures, batch sizes, sequence lengths, and hardware configurations.
- Enhance performance across runtimes (e.g., TensorRT, TensorRT-LLM) through techniques such as speculative decoding, quantization, batching, and KV-cache reuse.
- Integrate deep observability mechanisms (metrics, traces, logs) and establish repeatable benchmarks to assess speed, reliability, and quality.
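A benchmarking framework of the kind described here, at its core, times a generation callable across batch sizes and reports tokens per second from the median of repeated runs. A minimal sketch; the stub "model" is a stand-in for a real inference call, not Baseten's harness:

```python
import statistics
import time


def benchmark(generate, batch_sizes, seq_len, runs=3):
    """Time generate(batch, seq_len) across batch sizes and report the
    median-based throughput in tokens per second for each batch size."""
    results = {}
    for batch in batch_sizes:
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            generate(batch, seq_len)
            timings.append(time.perf_counter() - start)
        median = statistics.median(timings)     # robust to warmup outliers
        results[batch] = batch * seq_len / median
    return results


def fake_generate(batch, seq_len):
    """Stub model: work proportional to the number of tokens generated."""
    total = 0
    for _ in range(batch * seq_len * 100):
        total += 1
    return total


for batch, tps in benchmark(fake_generate, [1, 4, 16], seq_len=32).items():
    print(f"batch={batch:>2}  {tps:,.0f} tok/s")
```

Real harnesses add warmup iterations, latency percentiles, and per-configuration sweeps over sequence length and hardware, but the timing skeleton is the same.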
About Our Team
Join the Inference team at OpenAI, where we leverage cutting-edge research and technology to deliver exceptional AI products to consumers, enterprises, and developers. Our mission is to empower users to harness the full potential of our advanced AI models, enabling unprecedented capabilities. We prioritize efficient and high-performance model inference while accelerating research advancements.

About the Role
We are seeking a passionate Software Engineer to optimize some of the world's largest and most sophisticated AI models for deployment in high-volume, low-latency, and highly available production and research environments.

Key Responsibilities
- Collaborate with machine learning researchers, engineers, and product managers to transition our latest technologies into production.
- Work closely with researchers to enable advanced research initiatives through innovative engineering solutions.
- Implement new techniques, tools, and architectures that enhance the performance, latency, throughput, and effectiveness of our model inference stack.
- Develop tools to identify bottlenecks and instability sources, designing and implementing solutions for priority issues.
- Optimize our code and Azure VM fleet to maximize every FLOP and GB of GPU RAM available.

You Will Excel in This Role If You:
- Possess a solid understanding of modern machine learning architectures and an intuitive grasp of performance optimization strategies, especially for inference.
- Take ownership of problems end-to-end, demonstrating a willingness to acquire any necessary knowledge to achieve results.
- Bring at least 5 years of professional software engineering experience.
- Have or can quickly develop expertise in PyTorch, NVIDIA GPUs, and relevant optimization software stacks (such as NCCL, CUDA), along with HPC technologies like InfiniBand, MPI, and NVLink.
- Have experience in architecting, building, monitoring, and debugging production distributed systems, with bonus points for working on performance-critical systems.
- Have successfully rebuilt or significantly refactored production systems multiple times to accommodate rapid scaling.
- Are self-driven, enjoying the challenge of identifying and addressing the most critical problems.
Full-time|Remote|Remote-Friendly (Travel Required)|San Francisco, CA|New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.

Key responsibilities
- Design and implement evaluations for Anthropic's AI models
- Collaborate with team members to enhance model performance
- Contribute to research that pushes the boundaries of AI systems

Location
Remote-friendly (travel required); San Francisco, CA; New York City, NY
At Genmo, we are at the forefront of advancing artificial intelligence through innovative research in video generation. Our mission is to construct open, cutting-edge models that will ultimately contribute to the realization of Artificial General Intelligence (AGI). As part of our dynamic team, you will play a pivotal role in redefining the future of AI and expanding the horizons of video creation.

We are looking for a skilled GPU Performance Engineer who can extract maximum performance from our H100 infrastructure and fine-tune our model serving stack to achieve unparalleled efficiency. If you are passionate about optimizing performance, particularly at the microsecond level, and thrive on pushing hardware to its limits, this is the perfect opportunity for you.

Key Responsibilities
- Utilize advanced profiling tools such as Nsight Systems and nvprof to analyze and enhance GPU workloads.
- Develop high-performance CUDA and Triton kernels to optimize essential model functions.
- Reduce cold-start latency from seconds to mere milliseconds in our serving infrastructure.
- Optimize memory access patterns, implement kernel fusion, and maximize GPU utilization.
- Collaborate closely with machine learning engineers to optimize model implementations.
- Diagnose and resolve performance issues throughout the application and hardware stack.
- Implement custom memory pooling and allocation strategies to enhance performance.
- Promote performance optimization techniques and foster a culture of excellence across teams.
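Custom memory pooling, one of the responsibilities listed above, comes down to reusing pre-allocated buffers instead of paying an allocation on every request. Real implementations pool GPU memory through CUDA APIs; the plain-Python class below is purely an illustration of the pattern:

```python
class BufferPool:
    """Illustrative fixed-size buffer pool: allocate a set of buffers once,
    then hand them out and take them back instead of allocating per request."""

    def __init__(self, size: int, count: int):
        self._size = size
        self._free = [bytearray(size) for _ in range(count)]
        self.allocations = count          # counts only real allocations

    def acquire(self) -> bytearray:
        if self._free:
            return self._free.pop()       # reuse: no new allocation
        self.allocations += 1             # pool exhausted: grow
        return bytearray(self._size)

    def release(self, buf: bytearray) -> None:
        buf[:] = bytes(self._size)        # scrub contents before reuse
        self._free.append(buf)


pool = BufferPool(size=1024, count=2)
a, b = pool.acquire(), pool.acquire()
pool.release(a)
c = pool.acquire()          # hands back a's buffer rather than allocating
print(pool.allocations)     # 2 -> three acquires cost only two allocations
```

The same accounting argument is why serving stacks pool KV-cache and activation buffers: steady-state request traffic then triggers no allocator calls at all.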
About Our Team
The Training Runtime team is at the forefront of developing a cutting-edge distributed machine learning training runtime, enabling everything from pioneering research to large-scale model deployments. Our mission is to empower researchers while facilitating growth into frontier-scale operations. We are crafting a cohesive, modular runtime that adapts to researchers’ evolving needs as they progress along the scaling curve.

Our focus is anchored in three key areas: optimizing high-performance, asynchronous data movement that is aware of tensor and optimizer states; building robust, fault-tolerant training frameworks that incorporate comprehensive state management, resilient checkpointing, deterministic orchestration, and advanced observability; and managing distributed processes for enduring, job-specific, and user-defined workflows.

We aim to seamlessly integrate proven large-scale capabilities into a developer-friendly runtime, enabling teams to iterate rapidly and operate reliably across various scales. Our success is gauged by both the enhancement of training throughput (the speed of model training) and researcher throughput (the pace at which ideas transform into experiments and products).

About the Role
As a Training Performance Engineer, you will be instrumental in driving efficiency enhancements throughout our distributed training architecture. Your responsibilities will include analyzing extensive training runs, pinpointing utilization gaps, and engineering optimizations that maximize throughput and system uptime.
This position merges a profound understanding of systems with practical performance engineering: analyzing GPU kernel performance and collective communication throughput, investigating I/O bottlenecks, and implementing model sharding techniques for large-scale training. Your efforts will ensure our clusters operate at peak performance, enabling OpenAI to develop larger and more sophisticated models within existing compute budgets.

This position is located in San Francisco, CA, utilizing a hybrid work model with three days in the office each week, and we offer relocation assistance for new hires.

Key Responsibilities:
- Analyze end-to-end training runs to detect performance bottlenecks across computation, communication, and storage.
- Enhance GPU utilization and throughput for large-scale distributed model training.
- Collaborate with runtime and systems engineers to boost kernel efficiency, scheduling, and collective communication performance.
- Implement model graph transformations to enhance overall throughput.
- Develop tools for monitoring and visualizing metrics such as MFU, throughput, and uptime across clusters.
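MFU (Model FLOPs Utilization), one of the metrics named above, compares the FLOPs a training run actually achieves against the hardware's theoretical peak. A small sketch using the common ~6 × parameters FLOPs-per-token approximation for a dense transformer (forward plus backward pass); the GPU peak figure in the example is an illustrative BF16 number, not an OpenAI spec:

```python
def mfu(tokens_per_sec, n_params, n_gpus, peak_flops_per_gpu):
    """Model FLOPs Utilization: achieved FLOPs / hardware peak FLOPs.
    Uses the ~6 * params FLOPs-per-token estimate for dense transformers,
    an approximation rather than an exact operation count."""
    achieved = 6 * n_params * tokens_per_sec
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak


# Example: a 7B-parameter model training at 2M tokens/sec on 256 GPUs,
# each with an (illustrative) ~989 TFLOP/s BF16 peak.
print(f"{mfu(2_000_000, 7e9, 256, 989e12):.1%}")  # 33.2%
```

Because the numerator depends only on tokens per second and parameter count, MFU lets teams compare utilization across cluster sizes and model scales on one axis.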
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world — from realizing the future of transportation to fast-tracking medical innovations. We accomplish this by developing and operating the premier data and AI infrastructure platform, enabling our customers to harness profound data insights for business enhancement.

Our Model Serving product equips organizations with a cohesive, scalable, and governed solution for deploying and managing AI/ML models — ranging from traditional machine learning to intricate proprietary large language models. It ensures real-time, low-latency inference, governance, monitoring, and lineage. As the adoption of AI surges, Model Serving stands as a fundamental component of the Databricks platform, allowing customers to operationalize models at scale with robust SLAs and cost efficiency.

In the role of Staff Engineer, you will significantly influence both the product experience and the core infrastructure of Model Serving. Your responsibilities will include designing and constructing systems that facilitate high-throughput, low-latency inference across CPU and GPU workloads, steering architectural strategies, and collaborating extensively with platform, product, infrastructure, and research teams to create an exceptional serving platform.
ABOUT BASETEN
At Baseten, we are at the forefront of enabling transformative AI solutions for some of the world's leading companies, including Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. Our innovative platform combines cutting-edge AI research, adaptable infrastructure, and developer-friendly tools to facilitate the production of advanced models. Recently, we celebrated our rapid growth with a successful $300M Series E funding round from notable investors like BOND, IVP, Spark Capital, Greylock, and Conviction. We invite you to join our dynamic team and contribute to the evolution of AI product deployment.

THE ROLE
As a Senior Software Engineer specializing in Model Training at Baseten, you will play a pivotal role in constructing the infrastructure essential for the large-scale training and fine-tuning of foundational AI models. Your responsibilities will include designing and implementing distributed training systems, optimizing GPU utilization, and establishing scalable pipelines that empower Baseten and our clientele to adapt models with efficiency and reliability.
This role demands a high level of technical expertise and hands-on involvement: you will be responsible for critical components of our training stack, collaborate with product and infrastructure teams to identify customer needs, and drive advancements in scalable training infrastructure.

EXAMPLE WORK
- Training open-source models that surpass GPT-5 capabilities for a leading digital insurer
- Exploring specialized, continuously learning models as the future of AI
- Overview of our training documentation
- Research initiatives we've undertaken

RESPONSIBILITIES
- Design, construct, and sustain distributed training infrastructures for large foundation models
- Develop scalable pipelines for fine-tuning and training across diverse GPU/accelerator clusters
- Enhance training performance through optimization of algorithms and infrastructure
- Collaborate closely with cross-functional teams to align technical solutions with business objectives
- Stay abreast of advancements in the field of machine learning and AI to continually improve our training processes
Aug 29, 2025