Experience Level
Entry Level
Qualifications
Candidates should possess strong analytical and programming skills, ideally with a background in economics or a related field. Proficiency in data analysis tools and a passion for research are essential.
About the job
Join Anthropic as a Research Engineer focusing on Economic Research. In this role, you will leverage your analytical skills to conduct in-depth economic analysis and contribute to innovative projects aimed at enhancing our understanding of economic models and their implications.
About Anthropic
Anthropic is a leading research organization committed to advancing AI in a safe and beneficial manner. Our team is dedicated to fostering a collaborative environment where innovative ideas thrive.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.

About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work sits at the intersection of reinforcement learning and product development, aiming to create cutting-edge solutions.
We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.
This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.

In this role, you will:
- Lead and execute a research agenda aimed at enhancing model capabilities and performance.
- Work collaboratively with research and product teams to empower customers to optimize their models.
- Develop robust evaluation frameworks to monitor and assess modeling advancements.
- Design, implement, test, and debug code across our research stack.

You may excel in this role if you:
- Possess a deep understanding of machine learning and its applications.
- Have experience with relevant models and methodologies for evaluating model improvements.
- Are adept at navigating large ML codebases for debugging purposes.
- Thrive in a fast-paced and technically intricate environment.

About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our Team
Join the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.

About the Role
As a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models.
We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.
This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.

You may excel in this role if:
- You are enthusiastic about being at the cutting edge of RL and language model research.
- You take initiative, owning ideas and driving them to fruition.
- You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.
- You thrive in a fast-paced, complex technical environment where rapid iteration is essential.
- You are adept at navigating extensive ML codebases to troubleshoot and enhance them.
- You possess a profound understanding of machine learning and its applications.

About OpenAI
OpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.
We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Join Gridware as a Mechanical Research Engineer, where your innovative spirit and engineering expertise will contribute to groundbreaking projects in the energy sector. You will be responsible for conducting research, developing prototypes, and collaborating with a team of skilled engineers to advance our technology solutions.
Join our innovative team at Gridware as an Electrical Research Engineer, where you will play a crucial role in advancing our cutting-edge technology. In this position, you will be responsible for conducting research, developing new electrical systems, and optimizing current technologies to enhance our product offerings.
OpenAI's research infrastructure group creates and maintains the backbone systems for advanced machine learning model training. This team often goes beyond conventional training methods, developing new infrastructure to support novel research at scale. Their work closely connects systems engineering with research progress, making it possible to run experiments that would otherwise be too slow or complex.

Role overview
The Research Infrastructure Engineer for Training Systems designs and improves the platforms that power large-scale ML training. This role bridges research concepts and the practical systems that make large model training possible. The work has a direct impact on model release timelines and requires building systems that perform reliably in demanding, real-world scenarios.

What you will do
- Build and maintain infrastructure for large-scale model training and experimentation
- Design APIs and interfaces to simplify complex training workflows and prevent misuse
- Enhance reliability, debuggability, and performance across training and data pipelines
- Troubleshoot issues involving Python, PyTorch, distributed systems, GPUs, networking, and storage
- Create tests, benchmarks, and diagnostic tools to catch regressions early

Requirements
- Interest in building systems that support new training methods, not just optimizing existing ones
- Strong instincts in systems engineering, especially regarding performance, reliability, and clean abstractions
- Experience designing APIs and interfaces for researchers and engineers
- Ability to work across ML research code and production infrastructure
- Comfort with evidence-based debugging using profiles, traces, logs, tests, and reproducible cases
Join our dynamic team at Cognition as a Research Engineer specializing in Infrastructure. In this role, you will be at the forefront of cutting-edge research, contributing to innovative solutions that shape the future of our infrastructure projects.
Your responsibilities will include conducting thorough research, analyzing data, and collaborating with cross-functional teams to implement effective strategies. We are looking for an individual who is passionate about technology and infrastructure, eager to solve complex problems, and ready to drive impactful results.
About Our Team
Join the Foundations Research team, where we tackle ambitious and innovative projects that could redefine the future of AI. Our mission is to enhance the science behind our training and scaling initiatives, focusing on pioneering frontier models. We are dedicated to advancing data utilization, scaling methodologies, optimization strategies, model architectures, and efficiency enhancements to accelerate our scientific breakthroughs.

About the Position
We are on the lookout for a dynamic technical research lead to spearhead our embeddings-focused retrieval initiatives. You will oversee a talented team of research scientists and engineers committed to developing foundational technologies that enable models to access and utilize the right information precisely when needed. This includes crafting innovative embedding training objectives, architecting scalable vector storage, and implementing adaptive indexing techniques.
This pivotal role will contribute to various OpenAI products and internal research initiatives, offering opportunities for scientific publication and significant technical influence.
This position is located in San Francisco, CA, where we embrace a hybrid work model, requiring three days in the office weekly, and we provide relocation assistance for new hires.

Your Responsibilities
- Lead cutting-edge research on embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.
- Supervise a team of researchers and engineers in building an end-to-end infrastructure for training, evaluating, and integrating embeddings into advanced models.
- Drive advancements in dense, sparse, and hybrid representation techniques, metric learning, and retrieval systems.
- Work collaboratively with Pretraining, Inference, and other Research teams to seamlessly integrate retrieval throughout the model lifecycle.
- Contribute to OpenAI's ambitious vision of developing AI systems with robust memory and knowledge access capabilities rooted in learned representations.

You Will Excel in This Role If You Possess
- A proven track record of leading high-performance teams of researchers or engineers within ML infrastructure or foundational research.
- In-depth technical knowledge in representation learning, embedding models, or vector retrieval systems.
- Familiarity with transformer-based large language models and their interaction with embedding spaces and objectives.
- Research experience in areas such as contrastive learning and retrieval-augmented generation.
Full-time | $200K/yr - $250K/yr | On-site | San Francisco, California, United States
Join fuku as an Applied Research Engineer in San Francisco, CA, where you will be at the forefront of AI video data research. As a crucial member of our team, your mission will involve building robust, high-performance frameworks and extensive pipelines to process and decode video data with exceptional accuracy. You will tackle complex research challenges, refine machine learning models and APIs, and deliver comprehensive solutions across computer vision, audio, and text processing domains. This role is designed for engineers who thrive in both research and production environments and are eager to spearhead the evolution of video understanding from research to deployment.
Full-time | Remote-Friendly (Travel Required) | San Francisco, CA | New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.

Key responsibilities
- Design and implement evaluations for Anthropic's AI models
- Collaborate with team members to enhance model performance
- Contribute to research that pushes the boundaries of AI systems

Location
- Remote-friendly (travel required)
- San Francisco, CA
- New York City, NY
Join us at OpenAI as a Research Engineer, where your innovative ideas will shape the future of artificial intelligence.

About the Role
In this pivotal position, you will be instrumental in developing cutting-edge AI systems that tackle challenges previously deemed insurmountable. We are seeking individuals with exceptional engineering capabilities, particularly in designing and enhancing large-scale distributed machine learning systems, writing efficient machine learning code, and advancing the scientific foundations of our algorithms.
The most remarkable outcomes in deep learning are increasingly achieved at scale, necessitating engineers who thrive in expansive distributed systems. Your engineering expertise will be vital to driving significant advancements in AI technology.

Key Responsibilities:
- Demonstrate strong programming and coding proficiency
- Possess experience in managing and optimizing large distributed systems
- Express enthusiasm for OpenAI's innovative research methodologies

Preferred Qualifications:
- Exhibit a thoughtful perspective on the societal impacts of AI technology
- Bring prior experience in developing high-performance implementations of deep learning algorithms

About OpenAI
OpenAI is at the forefront of AI research and application, committed to ensuring that general-purpose artificial intelligence serves the greater good of humanity. We strive to extend the limits of AI capabilities while prioritizing safety and human-centric design in our products. Our mission is to embrace diverse perspectives and experiences that enrich our understanding of humanity in the pursuit of our goals.
We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination. For more information, please refer to OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks will be conducted in accordance with applicable laws.
Full-time | $225K/yr - $275K/yr | Hybrid | London, England, United Kingdom; New York, New York, United States; San Francisco, California, United States
Who We Are
Lightning AI, the innovative force behind PyTorch Lightning, was established in 2019 to create a seamless end-to-end platform for developing, training, and deploying artificial intelligence systems. Our mission is to facilitate the transition from research to production effortlessly.
In partnership with Voltage Park, a leading neocloud and AI Factory, Lightning AI merges developer-centric software with optimized, large-scale computing solutions. We empower teams with the necessary tools for experimentation, training, and production inference while ensuring built-in security, observability, and control.
We cater to individual researchers, emerging startups, and large enterprises alike. With a global presence, our offices are located in New York City, San Francisco, Seattle, and London, backed by top-tier investors including Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
About Our Team
Join the innovative Frontier Evaluations & Environments team at OpenAI, where we are dedicated to building transformative model environments that pave the way for safe artificial general intelligence (AGI) and artificial superintelligence (ASI). Our team constructs ambitious evaluation environments that not only measure but also enhance the capabilities of our models, creating self-improvement loops that inform our training, safety, and deployment strategies. Some of our notable open-source evaluations include GDPval, SWE-bench Verified, MLE-bench, PaperBench, and SWE-Lancer. We have also executed frontier evaluations for groundbreaking models like GPT-4o, o1, o3, GPT-4.5, ChatGPT Agent, and GPT-5. If you are passionate about experiencing firsthand the rapid advancements of our models and guiding them toward a positive impact, this is the opportunity for you.

Your Role
We are in search of exceptional research engineers who are eager to push the limits of our frontier models. Our ideal candidates will play a vital role in shaping our empirical understanding of AI capabilities across a broad spectrum and will take ownership of specific projects from conception to execution.

Key Responsibilities:
- Design and implement ambitious reinforcement learning environments to maximize our models' potential.
- Conduct assessments of frontier model capabilities, skills, and behaviors.
- Create innovative methodologies for the automated exploration of model behaviors.
- Guide training processes for our most extensive model training initiatives, gaining insights into the future of AI.
- Collaborate with cross-functional teams to align model evaluations with organizational objectives.
Join Composio as we revolutionize the way agents communicate with the tools you rely on, such as GitHub, Gmail, Notion, Salesforce, and more. As part of our dynamic team of engineers, you will tackle challenges from context to search, creating a seamless connection between agents and their essential tools.
We've successfully raised a $25M Series A from Lightspeed, backed by visionary investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram. This year, we have tripled our ARR, serving a diverse clientele that ranges from Y Combinator startups to established companies like Wabi, Glean, and Zoom.

Your Responsibilities
- Develop large evaluations using real tool-calling data to assess model performance in long-term tool execution.
- Address search challenges by identifying semantically similar tools and optimizing cached tool execution paths and plans.
- Train expansive agentic harness systems to enhance session accuracy using millions of real tool calls as baseline data.

Essential Qualifications
If you're exceptionally skilled, nothing is a strict requirement.
- Research expertise: able to independently advance research objectives; skilled in rapid prototyping and testing of experiments; collaborates with product and engineering teams to transition research concepts into production swiftly.
- Strong writing skills: capable of documenting effectively and articulating complex ideas clearly.
- Interpersonal skills: fosters trust and acknowledges areas for growth.
Sieve is a 15-person AI research lab in San Francisco focused on video data. The team builds exabyte-scale video infrastructure and develops new approaches for video understanding, drawing from diverse data sources to create advanced datasets. With video now accounting for most internet traffic, Sieve aims to solve the challenge of delivering high-quality training data for applications in creativity, communication, gaming, AR/VR, and robotics. The company partners with leading AI labs and has achieved strong financial results, backed by Series A funding from Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.

Internship overview
The Applied Research Engineering Intern will help build high-performance components and large-scale pipelines to advance video understanding at internet scale. This role involves tackling ambiguous research problems and turning them into practical solutions. Projects often cover computer vision, audio processing, and text processing.

What you will do
- Develop and optimize models and APIs for video, audio, and text data
- Improve performance through pre- and post-processing, parallelism, pipelining, and inference optimization
- Occasionally fine-tune models for specific tasks
- Work through open-ended research challenges with a small, focused team

Who succeeds here
- Comfortable working with machine learning models and APIs
- Skilled at optimizing systems for speed and accuracy
- Enjoys solving ambiguous technical problems across computer vision, audio, and text domains
About Our Team
Join the Privacy Engineering Team at OpenAI, where we are dedicated to embedding privacy as a core principle within our mission to develop Artificial General Intelligence (AGI). We focus on ensuring that all OpenAI products and systems that process user data adhere to the highest standards of privacy and security.
Our team engineers essential production solutions, innovates privacy-preserving methodologies, and provides cross-functional engineering and research teams with the tools necessary for responsible data management. Our commitment to ethical data utilization is a cornerstone of OpenAI's vision for safely advancing AGI for the benefit of everyone.

About the Position
As a valued member of the Privacy Engineering Team, you will be instrumental in protecting user data while enhancing the usability and effectiveness of our AI systems. You will engage with cutting-edge research on privacy-enhancing technologies, including differential privacy, federated learning, and data memorization techniques. Your role will also entail exploring the intersection of privacy and machine learning, innovating methods for better data anonymization, and mitigating risks associated with model inversion and membership inference attacks.
This position is based in San Francisco, and we offer relocation assistance.

Key Responsibilities:
- Design and prototype scalable privacy-preserving machine learning algorithms (e.g., differential privacy, secure aggregation, federated learning) for deployment at OpenAI.
- Evaluate and enhance model resilience against privacy threats such as membership inference, model inversion, and data memorization leaks, ensuring a balance between utility and security assurances.
- Create internal libraries, evaluation frameworks, and documentation to make advanced privacy techniques accessible to engineering and research teams.
- Conduct comprehensive investigations into the privacy-performance trade-offs of large models, sharing findings that guide model training and product safety protocols.
- Establish and document privacy standards, threat models, and audit procedures to govern the entire machine learning lifecycle, from dataset curation to post-deployment oversight.
- Work collaboratively with Security, Policy, Product, and Legal teams to translate evolving regulatory frameworks into actionable technical safeguards and tools.
About Our Team
The Codex team at OpenAI is at the forefront of creating cutting-edge AI systems designed to empower users by writing code, understanding software logic, and functioning as intelligent agents for both developers and non-developers. Our mission is to redefine the landscape of code generation and intelligent reasoning, deploying these innovations into real-world applications such as ChatGPT and our API, as well as future tools tailored for intelligent coding. We engage deeply in research, engineering, product development, and infrastructure management, overseeing the entire lifecycle of experimentation, deployment, and iterative improvements on advanced coding functionalities.

About the Position
As a key member of the Codex team, you will enhance the capabilities, performance, and reliability of AI coding models through rigorous research, innovative experimentation, and systematic optimization. Collaborating with top-tier researchers and engineers, you will develop and deploy robust systems that enable millions to code more efficiently and effectively, ensuring these systems are not only powerful but also cost-efficient and ready for production.
We seek individuals who possess a blend of deep curiosity, strong technical skills, and a commitment to impactful work. Whether your expertise lies in machine learning research, systems engineering, or performance optimization, you will be instrumental in advancing the state-of-the-art and translating these breakthroughs into user-friendly applications.
This position is located in San Francisco, CA, and follows a hybrid work model requiring three days per week in the office. We also provide relocation assistance for new hires.

In This Role, You Will:
- Design and conduct experiments aimed at enhancing code generation, reasoning, and agentic behaviors in Codex models.
- Generate research insights to improve model training, alignment, and evaluation processes.
- Identify and rectify inefficiencies throughout the Codex system stack, from agent behavior to large language model inference to container orchestration, paving the way for significant performance enhancements.
- Develop tools to measure, profile, and optimize system performance on a large scale.
- Collaborate across the technical stack to prototype new features, troubleshoot complex issues, and deliver improvements to production environments.

You Will Excel in This Role If You:
- Are enthusiastic about exploring and advancing the capabilities of large language models, particularly in software reasoning and code generation.
- Possess robust software engineering skills and enjoy rapidly transforming concepts into functional prototypes.
- Take a holistic view of performance, effectively balancing speed, efficiency, and quality.
Exa is at the forefront of technology, developing an innovative search engine tailored for AI applications. Our team is dedicated to creating robust infrastructure that enables us to crawl the web, build cutting-edge embedding models for indexing, and develop high-performance vector databases in Rust. We proudly operate a $5M H200 GPU cluster capable of powering tens of thousands of machines.
As a Generalist Research Engineer, you will collaborate across our search and retrieval stack, focusing on crawling, parsing, machine learning performance, and retrieval algorithms. Your contributions will directly enhance the quality of search endpoints for our users.
OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.

Role overview
Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.

Collaboration
Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.

Location
This position is based in San Francisco.
Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.
We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
- Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
- Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, the NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.
- Real-World Networking: Develop systems that simulate and manage real-world network conditions, such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.
Join Anthropic as a Research Engineer focusing on Economic Research. In this role, you will leverage your analytical skills to conduct in-depth economic analysis and contribute to innovative projects aimed at enhancing our understanding of economic models and their implications.
About the TeamJoin the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.About the RoleAs a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions.We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research.This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.In this role, you will:Lead and execute a research agenda aimed at enhancing model capabilities and performance.Work collaboratively with research and product teams to empower customers to optimize their models.Develop robust evaluation frameworks to monitor and assess modeling advancements.Design, implement, test, and debug code across our research stack.You may excel in this role if you:Possess a deep understanding of machine learning and its applications.Have experience with relevant models and methodologies for evaluating model improvements.Are adept at navigating large ML codebases for debugging purposes.Thrive in a fast-paced and technically intricate environment.About OpenAIOpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
About Our TeamJoin the forefront of AI innovation with the RL and Reasoning team at OpenAI. Our team is dedicated to advancing reinforcement learning research and has pioneered transformative projects, including o1 and o3. We are committed to pushing the limits of generative models while ensuring their scalable deployment.About the RoleAs a Research Engineer/Research Scientist at OpenAI, you will play a pivotal role in enhancing AI alignment and capabilities through state-of-the-art reinforcement learning techniques. Your contributions will be essential in training intelligent, aligned, and versatile agents that power various AI models.We seek individuals with a solid foundation in reinforcement learning research, agile coding skills, and a passion for rapid iteration.This position is located in San Francisco, CA, and follows a hybrid work model of three days in the office per week. We also provide relocation assistance for new hires.You may excel in this role if:You are enthusiastic about being at the cutting edge of RL and language model research.You take initiative, owning ideas and driving them to fruition.You value principled methodologies, conducting simple experiments in controlled environments to draw trustworthy conclusions.You thrive in a fast-paced, complex technical environment where rapid iteration is essential.You are adept at navigating extensive ML codebases to troubleshoot and enhance them.You possess a profound understanding of machine learning and its applications.About OpenAIOpenAI is a pioneering AI research and deployment organization committed to ensuring that general-purpose artificial intelligence serves the greater good for humanity. We strive to push the boundaries of AI system capabilities while prioritizing safe deployment through our innovative products. 
We recognize AI as a powerful tool that must be developed with safety and human-centric principles, embracing diverse perspectives to reflect the full spectrum of humanity.

We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination based on race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
Join Gridware as a Mechanical Research Engineer, where your innovative spirit and engineering expertise will contribute to groundbreaking projects in the energy sector. You will be responsible for conducting research, developing prototypes, and collaborating with a team of skilled engineers to advance our technology solutions.
Join our innovative team at Gridware as an Electrical Research Engineer, where you will play a crucial role in advancing our cutting-edge technology. In this position, you will be responsible for conducting research, developing new electrical systems, and optimizing current technologies to enhance our product offerings.
OpenAI's research infrastructure group creates and maintains the backbone systems for advanced machine learning model training. This team often goes beyond conventional training methods, developing new infrastructure to support novel research at scale. Their work closely connects systems engineering with research progress, making it possible to run experiments that would otherwise be too slow or complex.

Role overview
The Research Infrastructure Engineer for Training Systems designs and improves the platforms that power large-scale ML training. This role bridges research concepts and the practical systems that make large model training possible. The work has a direct impact on model release timelines and requires building systems that perform reliably in demanding, real-world scenarios.

What you will do
Build and maintain infrastructure for large-scale model training and experimentation
Design APIs and interfaces to simplify complex training workflows and prevent misuse
Enhance reliability, debuggability, and performance across training and data pipelines
Troubleshoot issues involving Python, PyTorch, distributed systems, GPUs, networking, and storage
Create tests, benchmarks, and diagnostic tools to catch regressions early

Requirements
Interest in building systems that support new training methods, not just optimizing existing ones
Strong instincts in systems engineering, especially regarding performance, reliability, and clean abstractions
Experience designing APIs and interfaces for researchers and engineers
Ability to work across ML research code and production infrastructure
Enjoys evidence-based debugging using profiles, traces, logs, tests, and reproducible cases
Join our dynamic team at Cognition as a Research Engineer specializing in Infrastructure. In this role, you will be at the forefront of cutting-edge research, contributing to innovative solutions that shape the future of our infrastructure projects.

Your responsibilities will include conducting thorough research, analyzing data, and collaborating with cross-functional teams to implement effective strategies. We are looking for an individual who is passionate about technology and infrastructure, eager to solve complex problems, and ready to drive impactful results.
About Our Team
Join the Foundations Research team, where we tackle ambitious and innovative projects that could redefine the future of AI. Our mission is to enhance the science behind our training and scaling initiatives, focusing on pioneering frontier models. We are dedicated to advancing data utilization, scaling methodologies, optimization strategies, model architectures, and efficiency enhancements to accelerate our scientific breakthroughs.

About the Position
We are on the lookout for a dynamic technical research lead to spearhead our embeddings-focused retrieval initiatives. You will oversee a talented team of research scientists and engineers committed to developing foundational technologies that enable models to access and utilize the right information precisely when needed. This includes crafting innovative embedding training objectives, architecting scalable vector storage, and implementing adaptive indexing techniques.

This pivotal role will contribute to various OpenAI products and internal research initiatives, offering opportunities for scientific publication and significant technical influence.

This position is located in San Francisco, CA, where we embrace a hybrid work model, requiring three days in the office weekly, and we provide relocation assistance for new hires.

Your Responsibilities
Lead cutting-edge research on embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.
Supervise a team of researchers and engineers in building an end-to-end infrastructure for training, evaluating, and integrating embeddings into advanced models.
Drive advancements in dense, sparse, and hybrid representation techniques, metric learning, and retrieval systems.
Work collaboratively with Pretraining, Inference, and other Research teams to seamlessly integrate retrieval throughout the model lifecycle.
Contribute to OpenAI's ambitious vision of developing AI systems with robust memory and knowledge access capabilities rooted in learned representations.

You Will Excel in This Role If You Possess
A proven track record of leading high-performance teams of researchers or engineers within ML infrastructure or foundational research.
In-depth technical knowledge in representation learning, embedding models, or vector retrieval systems.
Familiarity with transformer-based large language models and their interaction with embedding spaces and objectives.
Research experience in areas such as contrastive learning and retrieval-augmented generation.
Full-time|$200K/yr - $250K/yr|On-site|San Francisco, California, United States
Join fuku as an Applied Research Engineer in San Francisco, CA, where you will be at the forefront of AI video data research. As a crucial member of our team, your mission will involve building robust, high-performance frameworks and extensive pipelines to process and decode video data with exceptional accuracy. You will tackle complex research challenges, refine machine learning models and APIs, and deliver comprehensive solutions across computer vision, audio, and text processing domains. This role is designed for engineers who thrive in both research and production environments and are eager to spearhead the evolution of video understanding from research to deployment.
Full-time|Remote|Remote-Friendly (Travel Required)|San Francisco, CA|New York City, NY
Anthropic is looking for a Research Engineer focused on model evaluations. This position involves research and development to assess and strengthen the performance of AI models. Teams are based in San Francisco and New York City, and the role supports remote work with required travel.

Key responsibilities
Design and implement evaluations for Anthropic's AI models
Collaborate with team members to enhance model performance
Contribute to research that pushes the boundaries of AI systems

Location
Remote-friendly (travel required)
San Francisco, CA
New York City, NY
Join us at OpenAI as a Research Engineer, where your innovative ideas will shape the future of artificial intelligence.

About the Role
In this pivotal position, you will be instrumental in developing cutting-edge AI systems that tackle challenges previously deemed insurmountable. We are seeking individuals with exceptional engineering capabilities, particularly in designing and enhancing large-scale distributed machine learning systems, writing efficient machine learning code, and advancing the scientific foundations of our algorithms.

The most remarkable outcomes in deep learning are increasingly achieved at scale, necessitating engineers who thrive in expansive distributed systems. Your engineering expertise will be vital to driving significant advancements in AI technology.

Key Qualifications:
Strong programming and coding proficiency
Experience managing and optimizing large distributed systems
Enthusiasm for OpenAI's innovative research methodologies

Preferred Qualifications:
A thoughtful perspective on the societal impacts of AI technology
Prior experience developing high-performance implementations of deep learning algorithms

About OpenAI
OpenAI is at the forefront of AI research and application, committed to ensuring that general-purpose artificial intelligence serves the greater good of humanity. We strive to extend the limits of AI capabilities while prioritizing safety and human-centric design in our products. Our mission is to embrace diverse perspectives and experiences that enrich our understanding of humanity in the pursuit of our goals.

We are proud to be an equal opportunity employer, welcoming applicants from all backgrounds without discrimination. For more information, please refer to OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks will be conducted in accordance with applicable laws.
Full-time|$225K/yr - $275K/yr|Hybrid|London, England, United Kingdom; New York, New York, United States; San Francisco, California, United States
Who We Are
Lightning AI, the innovative force behind PyTorch Lightning, was established in 2019 to create a seamless end-to-end platform for developing, training, and deploying artificial intelligence systems. Our mission is to facilitate the transition from research to production effortlessly.

In partnership with Voltage Park, a leading neocloud and AI Factory, Lightning AI merges developer-centric software with optimized, large-scale computing solutions. We empower teams with the necessary tools for experimentation, training, and production inference while ensuring built-in security, observability, and control.

We cater to individual researchers, emerging startups, and large enterprises alike. With a global presence, our offices are located in New York City, San Francisco, Seattle, and London, backed by top-tier investors including Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
About Our Team
Join the innovative Frontier Evaluations & Environments team at OpenAI, where we are dedicated to building transformative model environments that pave the way for safe artificial general intelligence (AGI) and artificial superintelligence (ASI). Our team constructs ambitious evaluation environments that not only measure but also enhance the capabilities of our models, creating self-improvement loops that inform our training, safety, and deployment strategies. Some of our notable open-source evaluations include GDPval, SWE-bench Verified, MLE-bench, PaperBench, and SWE-Lancer. We have also executed frontier evaluations for groundbreaking models like GPT-4o, o1, o3, GPT-4.5, ChatGPT Agent, and GPT-5. If you are passionate about experiencing firsthand the rapid advancements of our models and guiding them toward a positive impact, this is the opportunity for you.

Your Role
We are in search of exceptional research engineers who are eager to push the limits of our frontier models. Our ideal candidates will play a vital role in shaping our empirical understanding of AI capabilities across a broad spectrum and will take ownership of specific projects from conception to execution.

Key Responsibilities:
Design and implement ambitious reinforcement learning environments to maximize our models' potential.
Conduct assessments of frontier model capabilities, skills, and behaviors.
Create innovative methodologies for the automated exploration of model behaviors.
Guide training processes for our most extensive model training initiatives, gaining insights into the future of AI.
Collaborate with cross-functional teams to align model evaluations with organizational objectives.
Join Composio as we revolutionize the way agents communicate with the tools you rely on, such as GitHub, Gmail, Notion, Salesforce, and more. As part of our dynamic team of engineers, you will tackle challenges from context to search, creating a seamless connection between agents and their essential tools.

We've successfully raised a $25M Series A from Lightspeed, backed by visionary investors like Guillermo Rauch (CEO of Vercel), Dharmesh Shah (CTO of HubSpot), and Gokul Rajaram. This year, we have tripled our ARR, serving a diverse clientele that includes startups from Y Combinator to established companies like Wabi, Glean, and Zoom.

Your Responsibilities
Develop large evaluations using real tool-calling data to assess model performance in long-term tool execution.
Address search challenges by identifying semantically similar tools and optimizing cached tool execution paths and plans.
Train expansive agentic harness systems to enhance session accuracy using millions of real tool calls as baseline data.

Essential Qualifications
If you're exceptionally skilled, nothing is a strict requirement.

Research Expertise
Ability to independently advance research objectives.
Skilled in rapid prototyping and testing of experiments.
Able to collaborate with product and engineering teams to transition research concepts into production swiftly.

Strong Writing Skills: capable of documenting effectively and articulating complex ideas clearly.
Interpersonal Skills: able to foster trust and acknowledge areas for growth.
Sieve is a 15-person AI research lab in San Francisco focused on video data. The team builds exabyte-scale video infrastructure and develops new approaches for video understanding, drawing from diverse data sources to create advanced datasets. With video now accounting for most internet traffic, Sieve aims to solve the challenge of delivering high-quality training data for applications in creativity, communication, gaming, AR/VR, and robotics. The company partners with leading AI labs and has achieved strong financial results, backed by Series A funding from Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.

Internship overview
The Applied Research Engineering Intern will help build high-performance components and large-scale pipelines to advance video understanding at internet scale. This role involves tackling ambiguous research problems and turning them into practical solutions. Projects often cover computer vision, audio processing, and text processing.

What you will do
Develop and optimize models and APIs for video, audio, and text data
Improve performance through pre- and post-processing, parallelism, pipelining, and inference optimization
Occasionally fine-tune models for specific tasks
Work through open-ended research challenges with a small, focused team

Who succeeds here
Comfortable working with machine learning models and APIs
Skilled at optimizing systems for speed and accuracy
Enjoys solving ambiguous technical problems across computer vision, audio, and text domains
About Our Team
Join the Privacy Engineering Team at OpenAI, where we are dedicated to embedding privacy as a core principle within our mission to develop Artificial General Intelligence (AGI). We focus on ensuring that all OpenAI products and systems that process user data adhere to the highest standards of privacy and security.

Our team engineers essential production solutions, innovates privacy-preserving methodologies, and provides cross-functional engineering and research teams with the tools necessary for responsible data management. Our commitment to ethical data utilization is a cornerstone of OpenAI's vision for safely advancing AGI for the benefit of everyone.

About the Position
As a valued member of the Privacy Engineering Team, you will be instrumental in protecting user data while enhancing the usability and effectiveness of our AI systems. You will engage with cutting-edge research on privacy-enhancing technologies, including differential privacy, federated learning, and data memorization techniques.
Your role will also entail exploring the intersection of privacy and machine learning, innovating methods for better data anonymization, and mitigating risks associated with model inversion and membership inference attacks.

This position is based in San Francisco, and we offer relocation assistance.

Key Responsibilities:
Design and prototype scalable privacy-preserving machine learning algorithms (e.g., differential privacy, secure aggregation, federated learning) for deployment at OpenAI.
Evaluate and enhance model resilience against privacy threats such as membership inference, model inversion, and data memorization leaks, ensuring a balance between utility and security assurances.
Create internal libraries, evaluation frameworks, and documentation to make advanced privacy techniques accessible to engineering and research teams.
Conduct comprehensive investigations into the privacy-performance trade-offs of large models, sharing findings that guide model training and product safety protocols.
Establish and document privacy standards, threat models, and audit procedures to govern the entire machine learning lifecycle, from dataset curation to post-deployment oversight.
Work collaboratively with Security, Policy, Product, and Legal teams to translate evolving regulatory frameworks into actionable technical safeguards and tools.
About Our Team
The Codex team at OpenAI is at the forefront of creating cutting-edge AI systems designed to empower users by writing code, understanding software logic, and functioning as intelligent agents for both developers and non-developers. Our mission is to redefine the landscape of code generation and intelligent reasoning, deploying these innovations into real-world applications such as ChatGPT and our API, as well as future tools tailored for intelligent coding. We engage deeply in research, engineering, product development, and infrastructure management, overseeing the entire lifecycle of experimentation, deployment, and iterative improvements on advanced coding functionalities.

About the Position
As a key member of the Codex team, you will enhance the capabilities, performance, and reliability of AI coding models through rigorous research, innovative experimentation, and systematic optimization. Collaborating with top-tier researchers and engineers, you will develop and deploy robust systems that enable millions to code more efficiently and effectively, ensuring these systems are not only powerful but also cost-efficient and ready for production.

We seek individuals who possess a blend of deep curiosity, strong technical skills, and a commitment to impactful work. Whether your expertise lies in machine learning research, systems engineering, or performance optimization, you will be instrumental in advancing the state-of-the-art and translating these breakthroughs into user-friendly applications.

This position is located in San Francisco, CA, and follows a hybrid work model requiring three days per week in the office.
We also provide relocation assistance for new hires.

In This Role, You Will:
Design and conduct experiments aimed at enhancing code generation, reasoning, and agentic behaviors in Codex models.
Generate research insights to improve model training, alignment, and evaluation processes.
Identify and rectify inefficiencies throughout the Codex system stack, from agent behavior to large language model inference to container orchestration, paving the way for significant performance enhancements.
Develop tools to measure, profile, and optimize system performance on a large scale.
Collaborate across the technical stack to prototype new features, troubleshoot complex issues, and deliver improvements to production environments.

You Will Excel in This Role If You:
Are enthusiastic about exploring and advancing the capabilities of large language models, particularly in software reasoning and code generation.
Possess robust software engineering skills and enjoy rapidly transforming concepts into functional prototypes.
Take a holistic view of performance, effectively balancing speed, efficiency, and quality.
Exa is at the forefront of technology, developing an innovative search engine tailored for AI applications. Our team is dedicated to creating robust infrastructure that enables us to crawl the web, build cutting-edge embedding models for indexing, and develop high-performance vector databases in Rust. We proudly operate a $5M H200 GPU cluster capable of powering tens of thousands of machines.As a Generalist Research Engineer, you will collaborate across our search and retrieval stack, focusing on crawling, parsing, machine learning performance, and retrieval algorithms. Your contributions will directly enhance the quality of search endpoints for our users.
OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase.

Role overview
Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete.

Collaboration
Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role.

Location
This position is based in San Francisco.
Overview
Pluralis Research is at the forefront of Protocol Learning, innovating a decentralized approach to train and deploy AI models that democratizes access beyond just well-funded corporations. By aggregating computational resources from diverse participants, we incentivize collaboration while safeguarding against centralized control of model weights, paving the way for a truly open and cooperative environment for advanced AI.

We are seeking a talented Machine Learning Training Platform Engineer to design, develop, and scale the core infrastructure that powers our decentralized ML training platform. In this role, you will have ownership over essential systems including infrastructure orchestration, distributed computing, and service integration, facilitating ongoing experimentation and large-scale model training.

Responsibilities
Multi-Cloud Infrastructure: Create resource management systems that provision and orchestrate computing resources across AWS, GCP, and Azure using infrastructure-as-code tools like Pulumi or Terraform. Manage dynamic scaling, state synchronization, and concurrent operations across hundreds of diverse nodes.
Distributed Training Systems: Design fault-tolerant infrastructure for distributed machine learning, including GPU clusters, NVIDIA runtime, S3 checkpointing, large dataset management and streaming, health monitoring, and resilient retry strategies.
Real-World Networking: Develop systems that simulate and manage real-world network conditions, such as bandwidth shaping, latency injection, and packet loss, while accommodating dynamic node churn and ensuring efficient data flow across workers with varying connectivity, as our training occurs on consumer nodes and non-co-located infrastructure.