Experience Level
Entry Level
Qualifications
We are looking for candidates with a strong background in performance modeling, data analysis, and software engineering. Ideal candidates will have:
Proficiency in programming languages such as Python and C++.
Experience with machine learning frameworks and performance optimization techniques.
A degree in Computer Science, Engineering, or a related field.
Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
About the job
OpenAI is seeking a Performance Modeling Engineer based in San Francisco. This role centers on building and improving models that enhance the performance and efficiency of AI systems. The work directly supports the technical backbone of OpenAI’s products.
Key responsibilities
Develop and refine models aimed at optimizing the performance of AI systems.
Collaborate with engineers and data scientists to tackle technical challenges as they arise.
Contribute to projects that improve the efficiency of large-scale AI infrastructure.
Role overview
This position offers the chance to work on foundational technology that underpins OpenAI’s products. The focus is on practical improvements and close teamwork with technical colleagues to advance the capabilities and efficiency of AI at scale.
About OpenAI
OpenAI is a leading research organization dedicated to advancing artificial intelligence in a safe and beneficial manner. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. Join us to work at the forefront of AI technology and contribute to projects that make a difference.
Full-time|$166K/yr - $225K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging issues of our time—from realizing the future of transportation to speeding up medical innovations. We achieve this by developing and maintaining the premier data and AI infrastructure platform, allowing our clients to leverage profound data insights to enhance their operations.

Our Model Serving product equips organizations with a cohesive, scalable, and governed platform for deploying and overseeing AI/ML models, spanning traditional ML to specialized large language models. It provides real-time, low-latency inference, governance, monitoring, and lineage capabilities. With the rapid rise of AI adoption, Model Serving stands as a fundamental component of the Databricks platform, enabling clients to operationalize models efficiently and cost-effectively at scale.

As a Senior Engineer, your role will be pivotal in transforming both the product experience and the underlying infrastructure of Model Serving. You will design and create systems enabling high-throughput, low-latency inference across CPU and GPU workloads, influence architectural strategies, and work closely with platform, product, infrastructure, and research teams to deliver an exceptional serving platform.
Full-time|$217K/yr - $312.2K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging global issues—whether it's transforming transportation or speeding up medical advancements. We achieve this by constructing and managing the world's leading data and AI infrastructure platform, enabling our clients to leverage deep data insights for business enhancement.

The Model Serving product at Databricks offers enterprises a cohesive, scalable, and governed platform for deploying and managing AI/ML models—from conventional ML to sophisticated, proprietary large language models. It facilitates real-time, low-latency inference while providing governance, monitoring, and lineage capabilities. As AI adoption surges, Model Serving becomes a central component of the Databricks platform, allowing customers to operationalize models efficiently and cost-effectively.

As a Senior Engineering Manager, you will lead a team responsible for both the product experience and the underlying infrastructure of Model Serving. This role involves shaping user-facing features while architecting for scalability, extensibility, and performance across CPU and GPU inference. You will collaborate closely with teams across the platform, product, infrastructure, and research domains.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world — from realizing the future of transportation to fast-tracking medical innovations. We accomplish this by developing and operating the premier data and AI infrastructure platform, enabling our customers to harness profound data insights for business enhancement.

Our Model Serving product equips organizations with a cohesive, scalable, and governed solution for deploying and managing AI/ML models — ranging from traditional machine learning to intricate proprietary large language models. It ensures real-time, low-latency inference, governance, monitoring, and lineage. As the adoption of AI surges, Model Serving stands as a fundamental component of the Databricks platform, allowing customers to operationalize models at scale with robust SLAs and cost efficiency.

In the role of Staff Engineer, you will significantly influence both the product experience and the core infrastructure of Model Serving. Your responsibilities will include designing and constructing systems that facilitate high-throughput, low-latency inference across CPU and GPU workloads, steering architectural strategies, and collaborating extensively with platform, product, infrastructure, and research teams to create an exceptional serving platform.
Full-time|$192K/yr - $260K/yr|On-site|San Francisco, California
At Databricks, we are driven by our commitment to empower data teams in tackling the world's most challenging problems — from transforming transportation solutions to accelerating medical advancements. Our mission revolves around constructing and maintaining the world's premier data and AI infrastructure platform, enabling our clients to harness deep data insights for enhanced business outcomes.

Foundation Model Serving is the API product for hosting and serving advanced AI model inference, catering to both open-source models like Llama, Qwen, and GPT OSS, as well as proprietary models such as Claude and OpenAI GPT. We welcome engineers who have experience managing high-scale operational systems, including customer-facing APIs, edge gateways, or ML inference services, even if they do not have a background in ML or AI. A passion for developing LLM APIs and runtimes at scale is essential.

As a Staff Engineer, you will play a pivotal role in defining both the product experience and the underlying infrastructure. You will be tasked with designing and building systems that facilitate high-throughput, low-latency inference on GPU workloads with cutting-edge models. Your influence will extend to architectural direction, working closely with platform, product, infrastructure, and research teams to deliver an exceptional foundation model API product.

The impact you will have:
Design and implement core systems and APIs that drive Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.
Collaborate with product and engineering leaders to outline the technical roadmap and long-term architecture for workload serving.
Make architectural decisions to enhance performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.
Contribute directly to critical components within the serving infrastructure, from systems like vLLM and SGLang to token-based rate limiters and optimizers, ensuring seamless and efficient operation at scale (see the sketch after this listing).
Work cross-functionally with product, platform, and research teams to transform customer requirements into dependable, high-performing systems.
Establish best practices for code quality, testing, and operational readiness while mentoring fellow engineers through design reviews and technical support.
Represent the team in inter-departmental technical discussions, influencing Databricks' wider AI platform strategy.
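The token-based rate limiters mentioned above cap usage by LLM tokens consumed rather than by request count, since a single API call can cost anywhere from a handful of tokens to tens of thousands. Below is a minimal token-bucket sketch under that assumption; the class, numbers, and per-tenant framing are illustrative, not Databricks code.

```python
import time

class TokenBucket:
    """Per-tenant budget of LLM tokens, refilled at a fixed rate."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity          # max tokens the bucket can hold
        self.refill_per_sec = refill_per_sec
        self.available = float(capacity)  # current token budget
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.available = min(
            self.capacity,
            self.available + (now - self.last_refill) * self.refill_per_sec,
        )
        self.last_refill = now

    def try_consume(self, n_tokens: int) -> bool:
        """Admit a request expected to cost ~n_tokens; False means throttle."""
        self._refill()
        if self.available >= n_tokens:
            self.available -= n_tokens
            return True
        return False

# Illustrative budget: roughly 10k inference tokens per minute per tenant
bucket = TokenBucket(capacity=10_000, refill_per_sec=10_000 / 60)
if bucket.try_consume(n_tokens=1_500):  # estimated prompt + completion size
    pass  # forward the request to the serving engine
else:
    pass  # reject (e.g., HTTP 429) or queue the request
```

In practice the estimated cost would be reconciled against the actual prompt-plus-completion token count once the response finishes streaming.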
About Sesame
At Sesame, we envision a transformative future where technology is seamlessly integrated into our lives, enabling computers to perceive, interact, and collaborate in ways that feel genuinely human. Our mission is to create innovative voice agents that become an integral part of daily experiences. Our talented team comprises pioneers from Oculus and Ubiquity6, alongside industry leaders from Meta, Google, and Apple, all bringing extensive expertise in both hardware and software. Join us in pioneering a world where computers are truly alive.

Key Responsibilities:
Enhance our model serving infrastructure, integrating a diverse range of LLM, speech, and vision models.
Collaborate with ML infrastructure and training engineers to develop a fast, cost-efficient, and reliable serving layer for our groundbreaking consumer product.
Adapt and extend existing LLM serving frameworks such as vLLM and SGLang, leveraging cutting-edge techniques for high-performance model serving.
Partner with the training team to uncover opportunities for accelerating model performance without compromising quality.
Implement strategies like in-flight batching, caching, and custom kernels to optimize inference speed (see the sketch after this listing).
Discover methods to minimize model initialization times while maintaining excellence in quality.
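In-flight (continuous) batching, named in the responsibilities above, lets new requests join a running batch between decode steps instead of waiting for the current batch to drain. The toy control loop below assumes a hypothetical `model.step` and sequence objects; production engines such as vLLM and SGLang add paged KV caches, scheduling policies, and far more machinery.

```python
from collections import deque

def decode_loop(model, incoming: deque, max_batch: int = 32):
    """Toy continuous-batching loop: admit work between decode steps."""
    active = []  # sequences currently being decoded
    while incoming or active:
        # New requests join the batch as soon as a slot is free,
        # rather than waiting for the whole batch to finish.
        while incoming and len(active) < max_batch:
            active.append(incoming.popleft())
        # One decode step produces the next token for every active sequence.
        next_tokens = model.step([seq.state for seq in active])
        for seq, token in zip(active, next_tokens):
            seq.append(token)
        # Finished sequences exit immediately, freeing their slots.
        active = [seq for seq in active if not seq.done()]
```

The payoff is GPU utilization: short requests no longer hold the batch hostage, and latency for newly arriving requests drops to roughly one decode step.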
Full-time|$216.2K/yr - $270.3K/yr|On-site|San Francisco, CA; New York, NY
Join our dynamic Machine Learning Infrastructure team as a Senior AI Infrastructure Engineer, where you will play a pivotal role in designing and constructing platforms that ensure the scalable, reliable, and efficient serving of Large Language Models (LLMs). Our innovative platform supports a range of cutting-edge research and production systems, catering to both internal and external applications across diverse environments.

The ideal candidate will possess a solid foundation in machine learning principles coupled with extensive experience in backend system architecture. You will thrive in a collaborative environment that bridges research and engineering, working diligently to provide seamless experiences for our customers and accelerating innovation across the organization.
Full-time|$208.6K/yr - $429.5K/yr|Remote|San Francisco, CA, US; Remote, US
About Pinterest:
At Pinterest, our platform inspires millions of people around the globe to explore creative ideas, envision new possibilities, and create lasting memories. We are dedicated to providing the inspiration needed to build a fulfilling life, starting with the talented individuals who drive our product development. Join us in a career that sparks innovation for millions, transforms passion into opportunities for growth, and celebrates the diverse experiences of our team members, all while enjoying the flexibility to perform at your best. Building a career you love is within reach.

Position Overview:
We are looking for a Senior Engineering Manager to spearhead our AI/ML Serving Platform team, which develops the core tools and infrastructure utilized by numerous AI/ML engineers across Pinterest. This includes systems for recommendations, advertisements, visual search, notifications, and trust and safety. Our goal is to enhance the efficiency, quality, and speed of AI/ML systems, ensuring they are production-ready and reliable for iterative model development.

Key Responsibilities:
Lead the team in driving continuous improvements in advanced model architectures, optimizing resource usage, and boosting AI/ML developer productivity.
Establish the technical vision for the team, aligned with company and organizational priorities.
Mentor and cultivate talent within the team.

Qualifications:
Proven experience managing engineering teams with diverse cross-organizational clients.
Expertise in developing large-scale distributed serving systems.
Familiarity with AI/ML inference technologies (e.g., PyTorch, TensorFlow) for web-scale online serving.
Bachelor's degree in Computer Science or a related field, or equivalent professional experience.
Full-time|$172.2K/yr - $258.4K/yr|On-site|San Francisco, CA, USA
About the Opportunity
At Unity, we are dedicated to fostering a culture of collaboration and innovation. Our dynamic environment allows us to tackle intricate challenges that create significant value for creators and users within our ecosystem.

The Vector team is at the forefront of this mission, creating cutting-edge conversion rate (CVR) prediction and market price models that enhance our ad ranking and recommendation systems. These models enable advertisers to engage the right users at optimal moments by accurately assessing engagement and conversion probabilities. By harnessing extensive behavioral data, creative features, and contextual signals, we continually refine the relevance and accuracy of our predictions. This leads to crucial outcomes such as increased user engagement, improved conversion rates, and better return on ad spend, empowering advertisers to meet their objectives while enhancing user experience.

We are looking for an experienced Senior Machine Learning Engineer to spearhead advanced bidding optimization systems that facilitate efficient budget management, goal-driven automated strategies, ongoing enhancement through experimentation, and sustainable growth for Unity Ads.
Full-time|$185K/yr - $222K/yr|On-site|San Francisco, CA
Lyft’s Self-Serve Intelligence team builds the systems that help riders and drivers resolve issues on their own. Part of the Safety & Customer Care organization, this group focuses on backend services, APIs, and AI-powered products that let customers get help without waiting for an agent. The team’s work includes AI Assist (such as AI Agents), automations, and self-service workflows, all designed to make support fast and reliable.

Role overview
As a Senior Software Engineer on this team, the main responsibility is to design, build, deploy, and maintain backend systems and AI-driven tools that handle customer problems automatically. These solutions use Generative AI and automation to deliver scalable, dependable self-service experiences for millions of Lyft riders and drivers.

What you will do
Design and develop backend services and APIs for AI-powered self-service products.
Build and maintain AI Agents and automation tools that resolve customer issues without agent involvement.
Oversee the full development lifecycle: system design, prototyping, deployment, and ongoing operations.
Work closely with product managers, designers, data scientists, and operations teams to deliver robust solutions.
Focus on reliability, scalability, and operational excellence in all systems.

Location
This role is based in San Francisco, CA.
At Runway ML, we are revolutionizing the intersection of art and science through innovative AI technology. Our mission is to build sophisticated world models that transcend traditional artificial intelligence limitations. We believe that to tackle the most pressing challenges—such as robotics, disease, and scientific breakthroughs—we need systems that can learn from experience just as humans do. By simulating these experiences, we can expedite progress in ways that were previously unimaginable.

Our diverse and driven team consists of creative thinkers who are passionate about pushing boundaries and achieving the extraordinary. If you share this ambition and are eager to contribute to our groundbreaking work, we invite you to join us.

About the Role
We are open to hiring remotely across North America. We also have offices in NYC, San Francisco, and Seattle.

We are looking for a highly skilled and intellectually inquisitive Technical Accounting Manager to be our go-to authority on intricate accounting issues. This position offers significant visibility and is ideal for a professional adept at interpreting complex accounting guidelines, formulating sound conclusions, and translating technical insights into practical accounting practices.
Join Our Team at Air Apps
At Air Apps, we are on a mission to revolutionize resource management through innovative technology. Founded in 2018 in Lisbon, Portugal, we have expanded our reach with offices in both Lisbon and San Francisco, boasting over 100 million downloads globally. Our vision is to create the world’s first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we are looking for passionate individuals to help us achieve this goal.

Our commitment to challenging the status quo drives us to push the boundaries of AI-driven solutions that make a real impact. Here, you will have the opportunity to be a creative force, developing products that empower individuals worldwide. Join us as we embark on this journey to redefine how people plan, work, and live.
About Sygaldry Technologies
Sygaldry Technologies develops quantum-accelerated AI servers in San Francisco, focusing on faster AI training and inference. By combining quantum technology with artificial intelligence, the team addresses challenges in computing costs and energy efficiency. Their AI servers integrate multiple qubit types within a fault-tolerant system, aiming for a balance of cost, scalability, and speed. The company values optimism, rigor, and a drive to solve complex problems in physics, engineering, and AI.

Role Overview: ML Infrastructure Engineer
The ML Infrastructure Engineer joins the AI & Algorithms team, which includes research scientists, applied mathematicians, and quantum algorithm specialists. This role centers on building and maintaining the compute infrastructure that powers advanced research. The systems you build will support reliable GPU access, reproducible experiments, and scalable workloads, so researchers can focus on their core work without needing deep cloud expertise. Expect to design and manage compute platforms for a range of tasks, including quantum circuit simulation, large-scale numerical optimization, model training, tensor network contractions, and high-throughput data generation. These workloads span multiple cloud providers and on-premises GPU servers.

Key Responsibilities
Develop compute abstractions for diverse workloads, such as GPU-accelerated simulations, distributed training, high-throughput CPU jobs, and interactive analyses using frameworks like PyTorch and JAX.
Set up infrastructure to support experiment tracking and reproducibility.
Create developer tools that make cloud computing feel local, streamlining environment setup, job submission, monitoring, and artifact management.
Scale experiments from single-GPU prototypes to large, multi-node production runs.

Multi-Cloud GPU Orchestration
Design orchestration strategies for workloads across multiple cloud providers, optimizing job routing for cost, availability, and capability (see the sketch after this listing).
Monitor and improve cloud spending, keeping track of credit balances, burn rates, and expiration dates.
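As a rough illustration of the multi-cloud routing problem described above, the sketch below picks the cheapest provider that satisfies a job's capacity and hardware requirements. The provider names, prices, and capability flag are placeholders; a real scheduler would also weigh queue depth, data locality, preemption risk, and credit expiry.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    usd_per_gpu_hour: float  # current on-demand or spot price
    free_gpus: int           # capacity available right now
    has_a100: bool           # capability flag for this workload class

def route(job_gpus: int, needs_a100: bool, providers: list[Provider]) -> Provider:
    """Pick the cheapest provider that can actually run the job."""
    eligible = [
        p for p in providers
        if p.free_gpus >= job_gpus and (p.has_a100 or not needs_a100)
    ]
    if not eligible:
        raise RuntimeError("no provider has capacity; queue or fall back on-prem")
    return min(eligible, key=lambda p: p.usd_per_gpu_hour)

# Placeholder fleet: two clouds plus an on-prem pool
fleet = [
    Provider("cloud-a", 2.40, free_gpus=16, has_a100=True),
    Provider("cloud-b", 1.90, free_gpus=4, has_a100=True),
    Provider("on-prem", 0.00, free_gpus=8, has_a100=False),
]
print(route(job_gpus=8, needs_a100=True, providers=fleet).name)  # -> cloud-a
```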
At Sciforium, we are at the forefront of AI infrastructure, dedicated to the development of advanced multimodal AI models and an innovative serving platform that emphasizes high efficiency. With substantial funding and direct collaboration with AMD, our team is rapidly expanding to create the complete stack for pioneering AI models and dynamic real-time applications.

Role Overview
This position provides a distinct opportunity to engage with the fundamental systems that drive Sciforium's multimodal AI models. You will play a crucial role in constructing the model serving platform, working with C++, Python, runtime execution, and distributed infrastructure to design a swift, dependable engine for real-time AI applications. You will acquire practical experience in performance engineering, discover how large AI models are optimized and deployed at scale, and collaborate closely with ML researchers and seasoned systems engineers. If you thrive in low-level programming and are passionate about performance, this role offers both impactful contributions and significant growth opportunities.
Zyphra is an innovative artificial intelligence company located in the heart of San Francisco, California.

The Opportunity:
Join our dynamic team as a Research Engineer - Audio & Speech Models, where you will play a pivotal role in advancing Zyphra’s Audio Team. You will be instrumental in developing cutting-edge open-source text-to-speech and audio models. Your contributions will span the full spectrum of the model training process, from data collection and processing to the design of innovative architectures and training approaches.

Your Responsibilities:
Conduct large-scale audio training operations.
Optimize the performance of our training infrastructure.
Collect, process, and evaluate audio datasets.
Implement architectural and methodological improvements through rigorous testing.

What We Seek:
A strong research mindset with the ability to navigate projects from ideation to implementation and documentation.
Proficiency in rapid prototyping and implementation, allowing for swift experimentation.
Effective collaboration skills in a fast-paced research environment.
A quick learner who is eager to embrace and implement new concepts.
Excellent communication abilities, enabling you to contribute to both research and engineering tasks at scale.

Preferred Qualifications:
Expertise in training audio models, such as text-to-speech, ASR, speech-to-speech, or emotion recognition.
Experience with training audio autoencoders.
Solid understanding of signal processing, particularly for audio.
Familiarity with diffusion models, consistency models, or GANs.
Experience with large-scale (multi-node) GPU training environments.
Strong understanding of experimental methodologies for conducting rigorous tests and ablations.
Interest in large-scale, parallel data processing pipelines.
Competence in PyTorch and Python programming.
Experience contributing to large, established codebases with rapid adaptation.
Full-time|$155.6K/yr - $320.3K/yr|Remote|San Francisco, CA, US; Remote, US
About tvScientific
tvScientific is the premier CTV advertising platform exclusively tailored for performance marketers. Our innovative approach harnesses vast data and state-of-the-art science to automate and enhance TV advertising, ultimately driving impactful business results. Our platform seamlessly integrates media buying, optimization, measurement, and attribution into one powerful, efficient solution. Developed by industry veterans with extensive backgrounds in programmatic advertising, digital media, and ad verification, our CTV performance platform is designed to help advertisers confidently scale their business.

We are currently seeking a Senior MLOps Engineer to join our dynamic, distributed engineering team focused on our Connected TV ad-buying platform as we expand our Machine Learning capabilities. Having successfully optimized TV ad campaigns, we are poised for massive growth, and we need your expertise to ensure our scalability is both sustainable and effective.

As a proud member of Idealab, tvScientific was co-founded by leaders deeply rooted in programmatic advertising and digital media. We empower our clients to purchase ads across the expansive CTV landscape, including platforms such as Hulu, PlutoTV, and the ad-supported tiers of Disney+ and HBO Max. Following our acquisition by Pinterest, we are intensifying our focus on CTV to enhance the performance of search and social advertising.
About Us
At Lemurian Labs, we are dedicated to democratizing AI technology while prioritizing sustainability. Our mission is to create solutions that minimize environmental impact, ensuring that artificial intelligence serves humanity positively. We are committed to responsible innovation and the sustainable growth of AI.

We are developing a state-of-the-art, portable compiler that empowers developers to 'build once, deploy anywhere.' This technology ensures seamless cross-platform integration, allowing for model training in the cloud and deployment at the edge, all while maximizing resource efficiency and scalability. If you are passionate about scaling AI sustainably and eager to make AI development more powerful and accessible, we invite you to join our team at Lemurian Labs. Together, we can build a future that is innovative and responsible.

The Role
We are seeking a Senior ML Performance Engineer to design and lead our Performance Testing Platform from inception. In this pivotal role, you will be the technical expert in measuring, validating, and enhancing the performance of large language models (including Llama 3.2 70B, DeepSeek, and others) before and after compiler optimization on cutting-edge GPU architectures. This is a critical position that will significantly impact our product quality and customer success. You will work at the intersection of machine learning systems, GPU architecture, and performance engineering, constructing the infrastructure that substantiates the value of our compiler.
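Since the role centers on measuring model performance before and after compiler optimization, here is a minimal benchmarking harness of the kind such a platform might start from. It assumes an opaque `run_inference(prompt_tokens, gen_tokens)` callable standing in for the model build under test; this is a sketch, not Lemurian Labs' actual tooling.

```python
import statistics
import time

def benchmark(run_inference, prompt_tokens: int, gen_tokens: int, iters: int = 20):
    """Measure latency and token throughput for one model configuration.

    `run_inference` is an assumed stand-in for the baseline or
    compiler-optimized build being validated.
    """
    # Warm-up runs so kernel compilation and caches don't skew results.
    for _ in range(3):
        run_inference(prompt_tokens, gen_tokens)

    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference(prompt_tokens, gen_tokens)
        latencies.append(time.perf_counter() - t0)

    p50 = statistics.median(latencies)
    p95 = sorted(latencies)[int(0.95 * (len(latencies) - 1))]
    return {
        "p50_s": p50,
        "p95_s": p95,
        "tokens_per_s": gen_tokens / p50,  # throughput at the median
    }
```

Running it once against the baseline build and once against the compiled build yields the latency and tokens-per-second deltas the listing refers to.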
Full-time|$308K/yr - $423.5K/yr|On-site|San Francisco, CA
About Faire
Faire is a cutting-edge online wholesale marketplace driven by the belief that the future is local. Independent retailers around the world generate more revenue than giants like Walmart and Amazon combined, yet individually they often struggle against these behemoths. At Faire, we harness the power of technology, data, and machine learning to connect this vibrant community of entrepreneurs globally. Imagine your favorite local boutique: we empower them to discover and sell exceptional products from around the world. With the right tools and insights, we aim to level the playing field, allowing small businesses to compete effectively with large retail chains and e-commerce platforms. By fostering the growth of independent businesses, Faire is making a positive economic impact in local communities worldwide. We’re in search of intelligent, resourceful, and passionate individuals to join us in driving the shop-local movement. If you share our belief in community, we would love to welcome you to ours.

About this Role:
We are looking for a Principal ML / AI Engineer to serve as a company-wide technical thought leader and practitioner in shaping the future of Data and AI at Faire. This unique opportunity allows you to influence broad technical strategies across data, engineering, and product while engaging directly with pioneering AI research and applications. This role reports directly to the CTO of Faire.

Your Responsibilities:
Shape the AI Vision – Collaborate with product, design, strategy & analytics, machine learning, and the wider engineering leadership to define how AI can unlock transformational value for Faire’s retailers and brands. Provide thought leadership to guide company-wide priorities, particularly focusing on product strategy and key investment areas.
Prototype and Unblock – Lead the development and implementation of AI systems (such as LLM fine-tuning, RLHF, agent frameworks, etc.) that illustrate what’s achievable and promote adoption across teams. Act as a “super individual contributor” who can delve deeply into technical challenges, enabling the engineering organization to advance quickly with AI and amplify both development and impact.
Architect the AI-Ready Stack – Design Faire’s technical ecosystem, encompassing event logging, data warehouses, feature stores, and model serving, to ensure our infrastructure is AI-ready, scalable, and optimized for rapid experimentation.
Full-time|$160K/yr - $230K/yr|On-site|San Francisco
About Meter
At Meter, we believe that networking is at the heart of technological advancement. We have innovatively unified the entire networking stack and are now on a mission to make it autonomous. Our team is developing a cutting-edge neural network-driven system designed to analyze raw computer networks, enabling us to address all networking challenges. As outlined on Meter.ai, we are creating models within a closed-loop system that utilizes real-time telemetry, logs, and network events to autonomously troubleshoot issues, enhance performance, and resolve challenges. To achieve this, we require not only exceptional models but also robust infrastructure that gives our models clean, versioned, and low-latency access to the necessary data throughout the training, evaluation, and deployment phases.

Why this Role is Essential
Each Meter network deployed in the field is a valuable data source for our Models team. Without careful infrastructure design, however, this data risks becoming fragmented, outdated, or inconsistent. In this role, you will ensure those pitfalls are avoided. You will own the core data interface that drives our model development, experimentation, evaluation, and real-time inference. This position is foundational and offers significant impact: your contributions will shape how quickly we can train new models, how reliably we can evaluate them, and how seamlessly they operate across hundreds of real-world networks. You will collaborate closely with modelers to deliver systems that are elegant, scalable, and robust.

Your Responsibilities
Design and implement the Models API: a unified interface for accessing training, evaluation, and deployment data across raw, transformed, and feature-engineered layers (see the sketch after this listing).
Ensure backward compatibility and feature versioning across continually evolving schemas.
Develop scalable pipelines to ingest, transform, and serve petabytes of data across Kafka, Postgres, and ClickHouse.
Create CI/CD workflows that evolve the API in tandem with changes to the underlying data schema.
Facilitate fine-grained querying of historical and real-time data for any network, at any point in time.
Help establish and promote the principle of 'smart data, dumb functions': maximizing operations in the data layer to minimize downstream code complexity.
Collaborate with modelers to co-design training frameworks that optimize performance.
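One way a Models API with feature versioning and point-in-time queries could be shaped is sketched below; every name here (`FeatureRef`, `ModelsAPI`, the backing store's `read` method) is invented for illustration and is not Meter's actual interface.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class FeatureRef:
    name: str      # e.g. a hypothetical "wifi_retry_rate" feature
    version: int   # schema version the model was trained against

class ModelsAPI:
    """Illustrative facade over raw / transformed / feature layers."""

    def __init__(self, store):
        self.store = store  # assumed backing store (e.g. ClickHouse)

    def features(self, network_id: str, refs: list[FeatureRef],
                 as_of: datetime) -> dict[str, float]:
        """Point-in-time lookup: values as they existed at `as_of`,
        under the pinned feature versions, so training, evaluation,
        and live inference all read consistent data."""
        return {
            ref.name: self.store.read(network_id, ref.name, ref.version, as_of)
            for ref in refs
        }
```

Pinning both the feature version and the `as_of` timestamp is the design choice that prevents training/serving skew as schemas evolve underneath the models.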
Role Overview
At Mariana Minerals, we are on a mission to revolutionize refining processes for critical minerals, playing a pivotal role in the global energy transition. We are searching for a dynamic and driven Process Modeling Engineer who will be integral to this endeavor. In this position, you will take charge of developing, validating, and optimizing heat and material balance models using software such as ASPEN Plus/HYSYS, SysCAD, OLI Studio, or METSIM. You will collaborate closely with R&D, pilot operations, and project execution teams to transform lab and pilot data into robust, scalable process models that are essential for the design of groundbreaking mineral refining facilities.

Key Responsibilities
Create both steady-state and dynamic process models to determine heat and material balances for integrated mineral refinery systems using ASPEN, SysCAD, OLI, or METSIM (see the worked sketch after this listing).
Automate the sizing of equipment and processes (including reactors, heat exchangers, filters, crystallizers, evaporators, and separators) based on model outputs, linking models to datasheets and other engineering tools.
Develop and maintain comprehensive process simulation databases to ensure consistency and traceability among modeling assumptions, test data, and engineering outputs.
Calibrate and reconcile models using operational data from pilot plants to ensure model accuracy and predictive validity.
Conduct optimization studies to enhance energy recovery, recycling strategies, and material efficiency.
Develop dynamic models for validating PLC and DCS programming while assessing buffer sizing throughout the design process.
Integrate process models with CAPEX and OPEX estimation tools to streamline techno-economic model development.
Document modeling methodologies and results, ensuring clear technical communication for design reviews, techno-economic assessments, and regulatory submissions.
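At its core, a steady-state material balance is conservation of mass around each unit: feed in equals products out, both overall and per component. A hand-sized two-product split around a single separator, with made-up numbers and independent of any particular simulator:

```python
def two_product_balance(feed_tph: float, x_feed: float,
                        x_conc: float, x_tail: float):
    """Steady-state mass balance around one separator.

    feed_tph : feed rate (t/h); x_* : mass fraction of the valuable
    component in feed, concentrate, and tailings. All numbers used
    below are illustrative, not real plant data.
    """
    # Overall balance: F = C + T
    # Component balance: F*x_F = C*x_C + T*x_T
    # Solving the pair gives the classic two-product formula:
    conc_tph = feed_tph * (x_feed - x_tail) / (x_conc - x_tail)
    tail_tph = feed_tph - conc_tph
    recovery = conc_tph * x_conc / (feed_tph * x_feed)
    return conc_tph, tail_tph, recovery

c, t, r = two_product_balance(feed_tph=100.0, x_feed=0.05,
                              x_conc=0.30, x_tail=0.01)
print(f"concentrate {c:.1f} t/h, tailings {t:.1f} t/h, recovery {r:.1%}")
# -> concentrate 13.8 t/h, tailings 86.2 t/h, recovery 82.8%
```

Full heat and material balance models chain many such unit balances together and add energy terms, which is where tools like ASPEN or METSIM take over.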