Machine Learning Research Scientist / Research Engineer - Post-Training
Scale AI
San Francisco, CA; Seattle, WA; New York, NY
On-site Full-time $252K/yr - $315K/yr
Experience Level
Mid to Senior
Qualifications
Your responsibilities will include:
Researching and developing cutting-edge post-training methodologies such as SFT, RLHF, and reward modeling to amplify LLM capabilities in text and multimodal contexts.
Designing and experimenting with innovative approaches to optimize preferences.
Analyzing model behaviors, identifying weaknesses, and proposing solutions for bias mitigation and enhancing model robustness.
Publishing your research findings in premier AI conferences.
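The reward modeling named in the responsibilities above typically reduces to a pairwise preference objective over a chosen and a rejected response. As a rough illustration only (not Scale AI's actual method), a minimal, numerically stable Bradley-Terry loss over two scalar reward scores might look like:

```python
import math

def bradley_terry_loss(r_chosen, r_rejected):
    """Pairwise preference loss used to train reward models:
    -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the
    model scores the human-preferred response higher."""
    margin = r_chosen - r_rejected
    # log(1 + exp(-margin)), computed stably for large |margin|
    return math.log1p(math.exp(-abs(margin))) + max(-margin, 0.0)

# A larger margin in favor of the chosen response yields a smaller loss.
print(bradley_terry_loss(2.0, 0.0))  # model agrees with the label: low loss
print(bradley_terry_loss(0.0, 2.0))  # model disagrees: high loss
```

In practice the scores come from a learned reward head and the loss is averaged over a batch of labeled comparison pairs; this sketch only shows the per-pair objective.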
Preferred qualifications:
Ph.D. or Master’s degree in Computer Science, Machine Learning, AI, or a related discipline.
In-depth knowledge of deep learning, reinforcement learning, and large-scale model fine-tuning.
Experience with post-training strategies like RLHF, preference modeling, or instruction tuning.
Exceptional written and verbal communication skills.
Published work in machine learning at notable conferences (NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR, etc.) and/or journals.
Prior experience in a customer-facing role.
About the job
At Scale AI, we collaborate with leading AI laboratories to supply high-quality data and foster advancements in Generative AI research. We seek innovative Research Scientists and Research Engineers with a strong focus on post-training techniques for Large Language Models (LLMs), including Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and reward modeling. This position emphasizes optimizing data curation and evaluation processes to boost LLM performance across text and multimodal formats.
In this pivotal role, you will pioneer new methods to enhance the alignment and generalization of extensive generative models. You will work closely with fellow researchers and engineers to establish best practices in data-driven AI development. Additionally, you will collaborate with top foundation model labs, providing critical technical and strategic insights for the evolution of next-generation generative AI models.
About Scale AI
Scale AI is at the forefront of artificial intelligence, partnering with elite AI labs to provide top-tier data solutions that accelerate the progress of Generative AI research and development.
Join Baseten as a Post-Training Applied Researcher, where you will be at the forefront of innovative research applications. Your expertise will help bridge the gap between training and real-world applications, making a tangible impact in the industry.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We strive to build a future where everyone has access to the knowledge and tools essential for making AI work effectively for their unique objectives. Our team comprises scientists, engineers, and innovators who have contributed to some of the most widely adopted AI products, including ChatGPT and Character.ai, as well as notable open-weight models like Mistral and popular open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The Post-Training Researcher position is pivotal to our roadmap. It serves as a crucial connection between raw model intelligence and a system that is genuinely beneficial, safe, and collaborative for human users. This role uniquely combines fundamental research with practical engineering, as we do not differentiate between these functions internally. Candidates will be expected to produce high-performance code and analyze technical reports. This position is ideal for individuals who relish both deep theoretical inquiry and hands-on experimentation, aiming to influence the foundational aspects of AI learning.

Note: This position is classified as an 'evergreen role', meaning we continuously accept applications in this research domain. Given the high volume of applications, an immediate match for your skills and experience may not always be available. However, we encourage you to apply; we regularly review submissions and reach out as new opportunities arise. You are welcome to apply again after gaining more experience, but we ask that you refrain from applying more than once every six months. Additionally, specific postings for singular roles may be available for distinct projects or team needs, in which case you are welcome to apply directly in conjunction with this evergreen role.

What You’ll Do
Develop and Optimize Recipes: Refine post-training recipes, encompassing various datasets, training stages, and hyperparameters, while assessing their impact on multiple performance metrics.
Iterate on Evaluations: Engage in a continuous process of defining evaluation metrics, optimizing them, and recognizing their limitations. You will be accountable for enhancing performance metrics and ensuring they are meaningful.
Debug and Analyze: During the fine-tuning of training configurations, you may encounter results that appear inconsistent. You will be responsible for troubleshooting and cultivating a deeper understanding to apply to subsequent challenges.
Scale and Investigate: Assess and expand the capabilities of our models while exploring potential improvements.
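The recipe-oriented workflow described above (datasets, training stages, hyperparameters, evaluation metrics) can be pictured as a small configuration object that each experiment varies. This is a hypothetical sketch; all names (`Stage`, `Recipe`, the dataset identifiers) are invented for illustration and are not Thinking Machines Lab's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One post-training stage, e.g. SFT followed by preference optimization."""
    name: str
    dataset: str
    lr: float
    epochs: int

@dataclass
class Recipe:
    """A post-training recipe: an ordered list of stages plus the
    metrics used to judge each variant of the recipe."""
    stages: list = field(default_factory=list)
    eval_metrics: tuple = ("helpfulness", "safety", "reasoning")

# One candidate recipe; an experiment sweep would vary stages and hyperparameters.
recipe = Recipe(stages=[
    Stage("sft", dataset="instruct_mix_v1", lr=2e-5, epochs=2),
    Stage("preference_opt", dataset="pairwise_prefs_v3", lr=5e-7, epochs=1),
])
for stage in recipe.stages:
    print(stage.name, stage.dataset, stage.lr)
```

Framing recipes as data like this makes it straightforward to sweep variants and attribute metric changes to a specific stage or hyperparameter.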
Role overview
OpenAI is looking for a Researcher focused on Agentic Post-Training, based in San Francisco. This role centers on analyzing and improving how AI systems behave after their initial training. The goal is to broaden the capabilities of AI and refine how models respond in complex situations.

What you will do
Study and assess agentic behaviors in trained AI models
Create new approaches to strengthen these behaviors after training
Collaborate with a talented team on projects that shape the future of artificial intelligence research

Collaboration and impact
This position involves hands-on research with other specialists at OpenAI. The work directly supports the advancement of AI capabilities and helps define new benchmarks for agentic performance in artificial intelligence.
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our mission is to empower humanity by advancing collaborative general intelligence. We envision a future where everyone can harness the knowledge and tools necessary for AI to serve their unique needs and aspirations. Our team comprises scientists, engineers, and builders who have developed some of the most widely utilized AI products, such as ChatGPT and Character.ai, as well as open-weight models like Mistral and popular open-source projects including PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
The role of a Post-Training Researcher is pivotal to our strategic vision. This position serves as the essential link between raw model intelligence and a practical, safe, and collaborative system for human users. Our research in post-training data sits at the intersection of human insights and machine learning. By integrating human and synthetic data techniques alongside innovative methodologies, we capture the subtleties of human behavior to inform and guide our models. We investigate and model the mechanisms that derive value for individuals, enabling us to articulate, predict, and enhance human preferences, behaviors, and satisfaction. Our objective is to translate research concepts into actionable data through meticulously planned data labeling and collection initiatives, while also understanding the science behind high-quality data that effectively trains our models. Additionally, we develop and assess quantitative metrics to evaluate the success and impact of our data and training strategies.

Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with optimal ways for humans to supervise, guide, and collaborate with models. This interdisciplinary role merges research, data operations, and technical implementation, pushing the boundaries of aligned, human-centered AI systems. The position combines foundational research and practical engineering, as we do not differentiate between these roles internally. You will be expected to write high-performance code and comprehend technical reports. This role is perfect for individuals who thrive on deep theoretical exploration and hands-on experimentation, eager to shape the foundational aspects of AI learning.

Note: This is an evergreen role that we maintain continuously to express interest in this research area. We receive a high volume of applications, and while there may not always be an immediate fit for your skills and experience, we encourage you to apply. We regularly review applications and reach out to candidates as new opportunities arise. You are welcome to reapply after gaining more experience, but please limit applications to once every six months. You may also notice separate postings for specific, targeted roles.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
About Distyl AI
Distyl AI is at the forefront of developing production-grade AI systems that enhance core operational workflows for Fortune 500 companies. Our innovative solutions, powered by a strategic alliance with OpenAI and bolstered by in-house software accelerators, deliver AI systems with rapid time-to-value, often within just a quarter.

Our cutting-edge products have successfully transformed the operations of Fortune 500 clients across a multitude of sectors, including insurance, consumer packaged goods, and non-profit organizations. As a member of our team, you will be instrumental in assisting companies to identify, construct, and unlock the potential of their GenAI investments, frequently for the first time. We pride ourselves on being customer-centric, addressing client challenges directly, and holding ourselves accountable for generating financial impact while enhancing the experiences of end-users.

Led by distinguished leaders from prestigious organizations such as Palantir and Apple, Distyl is also supported by renowned investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (former CEO of GitHub), and Brad Gerstner (Founder and CEO of Altimeter), alongside board members from over a dozen Fortune 500 companies.
OpenAI is hiring a Software Engineer for Post-Training Research in San Francisco. This position centers on improving the performance and capabilities of advanced machine learning models after their initial training phase. Role overview Work closely with a skilled team to explore new ways of strengthening AI systems. The focus is on researching and developing methods that push the boundaries of what these models can achieve once training is complete. Collaboration Expect to contribute to ongoing research efforts and share insights with colleagues who are passionate about advancing AI. Teamwork and knowledge exchange are key parts of this role. Location This position is based in San Francisco.
Full-time|On-site|San Francisco Bay Area (San Mateo) or Boston (Somerville)
About the Role
In the realm of machine learning, pretraining lays the foundation for a general model, while post-training refines that model, enhancing its utility, controllability, safety, and performance in real-world applications. As a Post-Training Research Scientist, you will transform large pretrained robot models into production-ready systems through methodologies such as fine-tuning, reinforcement learning, steering, human feedback, task specialization, evaluation, and on-robot validation at scale. This position offers a unique opportunity for individuals from diverse backgrounds to evolve into full-stack ML roboticists, adept at swiftly identifying challenges across machine learning and control domains. This is where innovative research converges with practical implementation.

Your Responsibilities Include:
Crafting fine-tuning and adaptation strategies tailored for specific robotic tasks and embodiments.
Developing methodologies to enhance reliability, robustness, and controllability of robotic systems.
Establishing evaluation frameworks to assess real-world robot performance beyond just offline metrics.
Collaborating with ML infrastructure teams to optimize inference-time performance, including latency, stability, and memory usage.
Utilizing advanced techniques such as imitation learning, reinforcement learning, distillation, synthetic data, and curriculum learning.
Bridging the gap between model outputs and tangible outcomes in the physical world.

You Might Excel in This Role If You:
Possess experience in fine-tuning large models for downstream applications, including RLHF, imitation learning, reinforcement learning, distillation, and domain adaptation.
Have a background in embodied AI, robotics, or real-world machine learning systems.
Demonstrate a strong commitment to evaluation, benchmarking, and failure analysis.
Are comfortable troubleshooting and debugging across the entire ML stack, from analyzing loss curves to understanding robot behavior.
Enjoy rapid iteration and thrive on real-world feedback loops.
Aspire to connect foundational models with practical deployment scenarios.

About Generalist
At Generalist, we are dedicated to realizing the vision of general-purpose robots. We envision a future where industries and homes benefit from collaborative interactions between humans and machines, enabling us to achieve more than ever before. Our focus is on building embodied foundation models, starting with dexterity, and advancing the frontiers of data, models, and hardware to empower robots to intelligently engage with their environments.
Join Baseten as a Post-Training Research Engineer and contribute to groundbreaking advancements in machine learning and AI. In this role, you will leverage your engineering skills to analyze and enhance models post-training, ensuring optimal performance and efficiency.
Join Baseten as a Post-Training Research Scientist, where you will play a vital role in advancing our machine learning capabilities. In this position, you will have the opportunity to conduct innovative research, analyze data, and contribute to the development of cutting-edge technologies. Your work will directly impact our projects and enhance the performance of our models.
Advancing Self-Improving Superintelligence
At Letta, we are on a mission to revolutionize artificial intelligence by creating self-improving agents that learn and adapt like humans. Unlike current AI systems that are often rigid and brittle, our innovative approach aims to build adaptable AI that continually evolves through experience.

Founded by the visionaries behind MemGPT at UC Berkeley's Sky Computing Lab, the birthplace of Spark and Ray, we are backed by notable figures in AI infrastructure, including Jeff Dean and Clem Delangue. Our agents are already enhancing production systems for industry leaders such as 11x and Bilt Rewards, continually learning and improving in real-time.

Join our elite team of researchers and engineers dedicated to tackling AI's most significant challenges: creating machines that can reason, remember, and learn as humans do. This position requires in-person attendance (no hybrid options) at our downtown San Francisco office, five days a week.
Join Cartesia: Pioneering AI Innovation
At Cartesia, we are on a mission to redefine the landscape of artificial intelligence. Our goal is to create the next generation of AI that is interactive, ubiquitous, and capable of continuous reasoning across vast streams of audio, video, and text data. With an impressive foundation built on our pioneering work in State Space Models (SSMs) at the Stanford AI Lab, our team is uniquely positioned to advance model architectures that will make on-device reasoning a reality. Backed by prominent investors like Index Ventures and Lightspeed Venture Partners, along with a network of 90+ advisors, including top experts in AI, we are committed to pushing the boundaries of model innovation and systems engineering.

About the Role
We believe that the next significant advancement in model intelligence will stem from enhanced post-training methods and alignment strategies. As a Post-Training Researcher, you will be at the forefront of developing systems and methodologies that ensure our multimodal models are not just adaptive, but also aligned with human intentions. In this role, you will collaborate across machine learning research, alignment, and infrastructure, crafting innovative techniques for preference optimization, model evaluation, and feedback-driven learning. You will investigate how feedback signals can enhance reasoning capabilities across various modalities while establishing the necessary infrastructure to scale and improve these processes. Your contributions will be pivotal in shaping the learning and improvement trajectory of Cartesia’s foundational models, ultimately enhancing their connection with users.

Your Impact
Lead research initiatives aimed at enhancing the capabilities and alignment of multimodal models.
Create cutting-edge post-training methods and evaluation frameworks to assess model advancements.
Collaborate closely with research, product, and platform teams to establish best practices for specialized model development.
Design, debug, and scale experimental systems to ensure reliability and reproducibility throughout training cycles.
Convert research insights into production-ready systems that enhance model reasoning, consistency, and alignment with human values.
Full-time|$250K/yr - $450K/yr|On-site|San Francisco
About AfterQuery
AfterQuery builds training data and evaluation frameworks used by leading AI labs around the world. The team partners with advanced research groups to create high-quality datasets and run detailed evaluations that go beyond standard benchmarks. As a small, post-Series A company based in San Francisco, every team member plays a key role in shaping how future AI models learn and improve.

Role Overview
The Post-Training Research Scientist focuses on proving the impact of AfterQuery's datasets. This work involves designing and running training experiments to isolate how specific data influences model performance. Projects span Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) post-training, with an emphasis on measuring effects on capability, generalization, and alignment. Working closely with partner labs, the scientist turns data into clear, verifiable results: showing exactly how a dataset leads to measurable improvements under defined conditions. The work is experimental and directly shapes the value of AfterQuery's products.

What You Will Do
Run controlled SFT and RL experiments to measure how datasets affect model outcomes.
Quantify gains in areas like reasoning, tool use, long-horizon tasks, and specialized workflows.
Share findings with partner labs to support sales and demonstrate value.
Work with internal subject matter experts to improve data quality based on experimental results.

What We Look For
Strong background in LLM training and evaluation methods.
Curiosity about how data structure, selection, and quality shape model behavior.
Skill in designing experiments, executing quickly, and drawing practical insights from complex results.
Comfort working across fields such as finance, software engineering, and policy.
Focus on real-world implementation, not just theory.
Research experience at the undergraduate or master's level is preferred; a PhD is not required.

Compensation
$250,000 - $450,000 total compensation plus equity
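A controlled dataset ablation of the kind described above reduces to comparing per-task eval scores between a run trained with the candidate dataset (treatment) and an otherwise-identical run trained without it (control). A minimal sketch, with entirely hypothetical scores:

```python
def uplift(control_scores, treatment_scores):
    """Mean per-task score difference between a model trained with the
    candidate dataset (treatment) and one trained without it (control).
    Pairing scores by task controls for per-task difficulty."""
    assert len(control_scores) == len(treatment_scores)
    diffs = [t - c for c, t in zip(control_scores, treatment_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical per-task eval scores from two otherwise-identical SFT runs.
control   = [0.41, 0.55, 0.38, 0.60]
treatment = [0.48, 0.57, 0.45, 0.62]
print(f"mean uplift: {uplift(control, treatment):+.3f}")
```

A real experiment would additionally test whether the uplift is statistically distinguishable from run-to-run noise (for example via a paired test across seeds), but the paired-difference structure is the core of the design.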
About Us
At Applied Compute, we are pioneering the development of Specific Intelligence for enterprises, creating agents that continuously learn from a company’s processes, data, expertise, and objectives. Our mission is to bridge the gap between isolated AI capabilities and their effective application within real business environments. Traditional AI systems often fall short as they lack the ability to adapt based on feedback. Our innovative continual learning layer captures context, memory, and decision-making processes across the enterprise, enabling specialized agents to engage in meaningful work.

What Excites Us: We operate at the exciting intersection of product development and cutting-edge research. Our product team designs the platform that empowers a new generation of digital coworkers, while our research team drives advancements in post-training and reinforcement learning to enhance user experiences. As an applied research engineer, you will work directly with clients to implement models in production, combining robust product development with deep research insights to facilitate AI integration in enterprises.

Meet Our Team: Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have previously built reinforcement learning infrastructure at OpenAI, established data foundations at Scale AI, and contributed to significant systems at companies like Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients, including DoorDash, Mercor, and Cognition, and are proud to be backed by reputable investors such as Benchmark, Sequoia, and Lux.

Who Thrives Here: We seek individuals who are passionate about applying innovative research and complex systems to solve real-world challenges. You should feel comfortable navigating new environments rapidly, be it a fresh codebase, a client’s data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interaction, empathy, and a deep understanding of clients' operational workflows are essential. Candidates with entrepreneurial backgrounds, extensive side projects, or a proven track record of end-to-end ownership typically excel in our environment.
Full-time|$116K/yr - $170K/yr|Hybrid|Cambridge, MA USA; San Francisco, CA USA
Your Role at Lila Sciences
We are in search of a talented Machine Learning Research Engineer with a focus on LLM post-training. In this pivotal role, you will architect and oversee large-scale training systems, enhance the performance of extensive models, and incorporate state-of-the-art methodologies to boost efficiency and throughput.

Key Responsibilities
Develop Ray-based distributed training infrastructure for LLMs and multi-modal models.
Implement performance optimizations for large-scale model training, including training and optimization workflows such as SFT, MoE, and long-context scaling.
Manage the orchestration of leading-edge and open-source LLMs alongside intricate compute-intensive tools.
Create scalable pipelines for data preprocessing and experiment orchestration, utilizing tools for efficient data loading, pipeline parallelism, and optimizer tuning.
Establish system-level performance benchmarks and debugging utilities.
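The pipeline parallelism and efficient data loading mentioned in the responsibilities boil down to overlapping stages, so downstream work starts before upstream work finishes. A minimal stdlib producer/consumer sketch of that idea (deliberately not Ray-based and not Lila's actual infrastructure, which the posting says is built on Ray):

```python
import queue
import threading

def producer(q, n):
    """Stage 1: load/preprocess examples and hand them downstream."""
    for i in range(n):
        q.put(i * i)          # stand-in for a preprocessed batch
    q.put(None)               # sentinel: no more data

def consumer(q, out):
    """Stage 2: consume batches as soon as they are ready, overlapping
    with preprocessing instead of waiting for the whole dataset."""
    while (item := q.get()) is not None:
        out.append(item)

# The bounded queue provides backpressure: the producer blocks once
# it is more than 4 batches ahead of the consumer.
q, out = queue.Queue(maxsize=4), []
threads = [threading.Thread(target=producer, args=(q, 8)),
           threading.Thread(target=consumer, args=(q, out))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)
```

Real training pipelines apply the same pattern across processes and machines, with GPU compute as the final consumer stage.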
Full-time|$200K/yr - $250K/yr|On-site|San Francisco, California, United States
Join fuku as an Applied Research Engineer in San Francisco, CA, where you will be at the forefront of AI video data research. As a crucial member of our team, your mission will involve building robust, high-performance frameworks and extensive pipelines to process and decode video data with exceptional accuracy. You will tackle complex research challenges, refine machine learning models and APIs, and deliver comprehensive solutions across computer vision, audio, and text processing domains. This role is designed for engineers who thrive in both research and production environments and are eager to spearhead the evolution of video understanding from research to deployment.
About the Team
Join the innovative Post-Training team at OpenAI, where we focus on refining and elevating pre-trained models for deployment in ChatGPT, our API, and future products. Collaborating closely with various research and product teams, we conduct crucial research that prepares our models for real-world deployment to millions of users, ensuring they are safe, efficient, and reliable.

About the Role
As a Research Engineer / Scientist, you will spearhead the research and development of enhancements to our models. Our work intersects reinforcement learning and product development, aiming to create cutting-edge solutions. We seek passionate individuals with robust machine learning engineering skills and research experience, particularly with innovative and powerful models. The ideal candidate will be driven by a commitment to product-oriented research. This position is located in San Francisco, CA, and follows a hybrid work model requiring three days in the office each week. Relocation assistance is available for new employees.

In this role, you will:
Lead and execute a research agenda aimed at enhancing model capabilities and performance.
Work collaboratively with research and product teams to empower customers to optimize their models.
Develop robust evaluation frameworks to monitor and assess modeling advancements.
Design, implement, test, and debug code across our research stack.

You may excel in this role if you:
Possess a deep understanding of machine learning and its applications.
Have experience with relevant models and methodologies for evaluating model improvements.
Are adept at navigating large ML codebases for debugging purposes.
Thrive in a fast-paced and technically intricate environment.

About OpenAI
OpenAI is a pioneering AI research and deployment organization dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
We are committed to pushing the boundaries of AI capabilities while prioritizing safety and human-centric values in our products. Our mission is to embrace diverse perspectives, voices, and experiences that represent the full spectrum of humanity, as we strive for a future where AI is a powerful ally for everyone.
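An evaluation framework of the kind the role above calls for, at its simplest, scores a model against a fixed set of cases and tracks an aggregate metric across model versions. A deliberately tiny sketch, with a hypothetical lookup-table "model" standing in for real inference:

```python
def pass_rate(model, cases):
    """Fraction of eval cases where the model's output matches the
    expected answer exactly; a simple stand-in for the richer graded
    evaluations a real framework would use."""
    hits = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return hits / len(cases)

# Hypothetical toy "model" and eval set, for illustration only.
toy_model = {"2+2": "4", "capital of France": "Paris"}.get
cases = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
print(f"pass rate: {pass_rate(toy_model, cases):.2f}")
```

Tracking a metric like this on a frozen eval set before and after each post-training change is what lets a team attribute regressions or gains to a specific intervention.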
Sieve is a 15-person AI research lab in San Francisco focused on video data. The team builds exabyte-scale video infrastructure and develops new approaches for video understanding, drawing from diverse data sources to create advanced datasets. With video now accounting for most internet traffic, Sieve aims to solve the challenge of delivering high-quality training data for applications in creativity, communication, gaming, AR/VR, and robotics. The company partners with leading AI labs and has achieved strong financial results, backed by Series A funding from Matrix Partners, Swift Ventures, Y Combinator, and AI Grant.

Internship overview
The Applied Research Engineering Intern will help build high-performance components and large-scale pipelines to advance video understanding at internet scale. This role involves tackling ambiguous research problems and turning them into practical solutions. Projects often cover computer vision, audio processing, and text processing.

What you will do
Develop and optimize models and APIs for video, audio, and text data
Improve performance through pre- and post-processing, parallelism, pipelining, and inference optimization
Occasionally fine-tune models for specific tasks
Work through open-ended research challenges with a small, focused team

Who succeeds here
Comfortable working with machine learning models and APIs
Skilled at optimizing systems for speed and accuracy
Enjoys solving ambiguous technical problems across computer vision, audio, and text domains
Join Harvey: Innovating Legal Services
At Harvey, we're revolutionizing the landscape of legal and professional services from the ground up. By integrating cutting-edge agentic AI, a robust enterprise platform, and extensive domain expertise, we're redefining how essential knowledge work is performed for generations to come.

This is a unique opportunity to contribute to a transformative company at a pivotal moment. With over 1,000 clients across more than 58 countries, strong product-market alignment, and backing from prestigious investors, we are rapidly scaling and shaping a new industry category in real-time. Our ambitions are vast, our standards are high, and the potential for personal, professional, and financial growth is unparalleled.

Our team is composed of sharp, driven individuals who are passionately aligned with our mission. We operate with agility and intensity, taking full ownership of the challenges we face, from initial concepts to long-term results. We maintain close relationships with our clients, from executives to engineers, collaborating to address real issues with urgency and care. If you thrive in a dynamic environment, strive for excellence, and want to help shape the future of work alongside high-achieving peers, we invite you to join us in building something extraordinary. At Harvey, we're writing the future of professional services today, and this is just the beginning.

Role Overview
We are seeking an insightful legal researcher with a comprehensive understanding of how law firms, financial institutions, corporations, and various organizations deliver professional services and manage complex legal and knowledge work. We have exceptional product-market fit and demand spanning diverse customer profiles. Meeting this demand requires a deep comprehension of our customers' workflows and the ways in which AI can enhance those workflows.

Your Responsibilities
In this role, you will:
Contribute subject-matter expertise to support AI research initiatives.
Collaborate closely with our engineering, product, and design teams to conceptualize and develop AI systems.
Enhance AI systems through prompt engineering, fine-tuning, and other innovative techniques.
Create proprietary benchmarks and datasets to assess models and model systems.
Engage directly with clients to grasp their workflows, identify challenges, and translate intricate business and legal requirements into technical solutions.

Your Qualifications
3-7 years of experience in legal research or related fields.
Proficiency in understanding legal workflows and professional services delivery.
Experience in AI systems development and implementation is a plus.
Strong analytical skills and attention to detail.
Excellent communication skills, both written and verbal.
About Granica
Granica is an innovative AI research and infrastructure firm dedicated to creating reliable, steerable representations of enterprise data. We establish trust through Crunch, a policy-driven health layer optimizing large tabular datasets for efficiency, reliability, and reversibility. Utilizing this foundation, we are developing Large Tabular Models: systems designed to learn cross-column and relational structures, delivering trustworthy answers and automation with integrated provenance and governance.

Our Mission
Current AI capabilities are hindered not only by model design but also by the inefficiencies of the data that supports it. At scale, each redundant byte, poorly organized dataset, and inefficient data pathway contributes to significant costs, latency, and energy waste. Granica's mission is to eliminate these inefficiencies. We leverage groundbreaking research in information theory, probabilistic modeling, and distributed systems to craft self-optimizing data infrastructure: systems that continually enhance how information is represented and utilized by AI.

Led by Prof. Andrea Montanari from Stanford, Granica's Research group merges advances in information theory with learning efficiency in large-scale distributed systems. We collectively believe that the next significant leap in AI will originate from innovations in efficient systems, rather than merely larger models. Granica is at the forefront of developing a new category of structured AI models: foundational models designed to learn and reason from the relational, tabular, and structured data that drives the global economy. While many focus on unstructured text or media, we are venturing into the next frontier: systems capable of comprehending and reasoning over structured information.

Your Contributions
Create and prototype algorithms that form the core of structured AI, enhancing representation learning and efficient information modeling for enterprise and tabular data at petabyte scale.
Develop adaptive learners merging statistical learning theory with systems optimization at scale, contributing to a new generation of foundational models for structured information.
Design architectures that unify symbolic, relational, and neural components, enabling AI systems to reason directly over structured enterprise data.
Construct cost models and optimization frameworks that enhance the efficiency of structured learning, both computationally and economically.
Nov 13, 2025