Key Responsibilities
Collaborate with Scale’s Operations team and enterprise clients to convert ambiguity into structured evaluation data, facilitating the development and upkeep of gold-standard human-rated datasets and expert rubrics that form the basis of AI evaluation systems.
Examine feedback and gathered data to discover patterns, enhance evaluation frameworks, and establish iterative improvement cycles that elevate the quality and relevance of human-curated assessments.
Design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including models that critique, grade, and explain agent outputs (e.g., RLAIF, model-judging-model configurations), along with scalable evaluation pipelines and diagnostic tools.
Engage in research projects that investigate new methodologies for the automatic analysis, evaluation, and enhancement of enterprise agent behavior, striving to advance how AI systems are assessed and optimized in practical applications.
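The LLM-as-a-Judge pattern named in the responsibilities above can be sketched roughly as follows. This is an illustrative outline only, not Scale's actual stack: the rubric, the 1-5 scale, and `call_judge_model` (a stub standing in for a real LLM API call) are all assumptions made for the example.

```python
# Minimal LLM-as-a-Judge sketch. `call_judge_model` is a hypothetical
# placeholder; a production system would call a real LLM API here.

RUBRIC = {
    "faithfulness": "Does the answer stay grounded in the provided context?",
    "completeness": "Does the answer address every part of the question?",
}

def build_judge_prompt(question: str, answer: str, criterion: str) -> str:
    """Assemble a grading prompt for one rubric criterion."""
    return (
        f"Criterion: {RUBRIC[criterion]}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Respond with an integer score from 1 to 5."
    )

def call_judge_model(prompt: str) -> int:
    """Stubbed judge: returns a fixed score for demonstration purposes."""
    return 4

def evaluate(question: str, answer: str) -> dict:
    """Score an answer against every rubric criterion via the judge model."""
    return {
        criterion: call_judge_model(build_judge_prompt(question, answer, criterion))
        for criterion in RUBRIC
    }

scores = evaluate("What is RLAIF?", "RL from AI feedback uses model-generated labels.")
print(scores)
```

In a real pipeline the per-criterion scores would be aggregated across many samples and compared against human ratings to calibrate the judge.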
Basic Qualifications
Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent practical experience.
Over 2 years of experience in Machine Learning or Applied Research, with a focus on applied ML systems or evaluation infrastructure.
Hands-on experience with Large Language Models (LLMs) and Generative AI in professional or research settings.
Strong comprehension of cutting-edge model evaluation methodologies and the current research landscape.
Proficiency in Python and major ML frameworks (e.g., PyTorch, TensorFlow).
Solid engineering...
About the job
Join Scale AI as a passionate and technically adept AI Research Engineer on our Enterprise Evaluations team. This role is central to our goal of providing the industry's leading Generative AI Evaluation Suite. You will contribute to the foundational systems that ensure the safety, reliability, and ongoing improvement of LLM-driven workflows and agents for enterprise clients.
The ideal candidate will have a strong understanding of large language models, a passion for tackling complex evaluation problems, and the ability to thrive in a fast-evolving research environment. We seek an engineer who can innovate, stays current with the latest research in AI evaluation, and is enthusiastic about incorporating cutting-edge research ideas into our workflows to build top-tier evaluation systems.
About Scale AI
Scale AI is at the forefront of AI-driven solutions, dedicated to streamlining operations and enhancing business intelligence through innovative technologies. With a commitment to excellence, we aim to empower enterprises with robust evaluation systems and insights that drive informed decision-making.
About Distyl AI
Distyl AI specializes in creating high-performance AI systems that enhance the fundamental operational processes of Fortune 500 companies. Through a strategic alliance with OpenAI, proprietary software accelerators, and extensive expertise in enterprise AI, we deliver effective AI solutions with swift time-to-value, often within a quarter.
Our innovations have empowered Fortune 500 clients in various sectors, including insurance, consumer packaged goods, and non-profit organizations. Joining our team means you will assist organizations in recognizing, developing, and extracting value from their Generative AI investments, frequently for the first time. We prioritize customer needs, working backward from the client's challenges and ensuring we generate financial benefits while enhancing the experiences of end-users.
Distyl is guided by seasoned leaders from top-tier companies like Palantir and Apple and enjoys backing from prominent investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (former CEO of GitHub), and Brad Gerstner (Founder and CEO of Altimeter), along with board members from numerous Fortune 500 firms.
What We Are Looking For
At Distyl, we are at the forefront of leveraging AI within enterprises. We seek imaginative researchers who aspire to go beyond incremental enhancements on benchmarks and are eager to redefine the application of software in innovative ways.
Our researchers hail from diverse academic disciplines but possess a robust research background, operate in an AI-centric manner, and would find conventional research environments unfulfilling.
Key Responsibilities
The AI Systems team is dedicated to architecting complex, comprehensive solutions that integrate perception, reasoning, planning, and execution. Researchers amalgamate various components (LLMs, retrievers, evaluators, memory systems, and execution agents) into resilient, scalable systems that deliver consistent performance across dynamic enterprise workflows.
Researchers in AI Systems examine the principles governing intricate system interactions. They analyze coordination, information flow, and emergent behavior across multiple agents and models. Their research reveals the foundational mechanics of robustness, composability, and alignment, ultimately establishing the design paradigm for constructing intelligent systems.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
About Distyl AI
Distyl AI specializes in crafting production-grade AI systems that enhance the core operational workflows of Fortune 500 companies. Through a strategic alliance with OpenAI, proprietary software accelerators, and profound enterprise AI knowledge, we deliver efficient AI solutions that ensure rapid value realization, often within a quarter.
Our innovative products have empowered Fortune 500 clients across various sectors, including insurance, consumer packaged goods (CPG), and non-profit organizations. Joining our team means you will play a vital role in helping businesses identify, develop, and extract value from their investments in Generative AI for the first time. We prioritize a customer-centric approach, starting from the customer's challenges and holding ourselves accountable for delivering financial impact while enhancing the lives of end-users.
Distyl is spearheaded by seasoned leaders from prestigious firms such as Palantir and Apple, and is supported by notable investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (former CEO of GitHub), and Brad Gerstner (Founder and CEO of Altimeter), along with board members from numerous Fortune 500 companies.
What We Are Seeking
At Distyl, we are redefining the application of AI within enterprises. We need innovative researchers who aspire to go beyond mere incremental enhancements on benchmarks or optimizing existing processes; we seek individuals eager to creatively transform how software is utilized.
Our research team is composed of individuals from diverse academic disciplines, all with impressive research backgrounds. They thrive in an AI-native environment and would find traditional research organizations unchallenging.
Key Responsibilities
The System Self-Improvement team develops architectures that continually assess and enhance their own performance. Researchers are tasked with designing feedback-driven systems that can identify weaknesses, formulate corrective hypotheses, and autonomously implement improvements through self-reflection, retraining, or workflow adjustments. The ultimate aim is to create AI systems that compound in capability as they are used.
Researchers in the Self-Improvement domain investigate how systems can construct self-models, gaining insight into when, why, and how they succeed or fail. They delve into reflective reasoning, reward modeling, and self-evaluation strategies to facilitate autonomous evolution. This research area merges reinforcement learning, interpretability, and meta-optimization to build continuously learning enterprise systems.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
Join Distyl AI as an Applied AI Researcher in System Self-Construction
Distyl AI is at the forefront of developing high-performance AI systems tailored for the operational needs of Fortune 500 companies. With the backing of OpenAI and a wealth of enterprise AI knowledge, we are committed to delivering AI solutions that generate tangible results in record time.
Our innovative products have empowered various Fortune 500 clients across sectors, including insurance, consumer packaged goods, and non-profits. By joining our dynamic team, you will play a vital role in helping organizations harness the full potential of their Generative AI investments, often for the first time. We prioritize a customer-first approach, focusing on problem-solving and ensuring we contribute positively to both financial outcomes and user experiences.
Led by industry veterans from renowned companies such as Palantir and Apple, and supported by esteemed investors like Lightspeed and Khosla, Distyl AI is positioned for substantial growth and impact.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
Join Our Team at Distyl AI
Distyl AI is at the forefront of developing advanced, production-grade AI systems that enhance operational workflows for Fortune 500 companies. Through our strategic alliance with OpenAI, proprietary software accelerators, and extensive enterprise AI expertise, we deliver effective AI solutions with rapid results, often in a matter of weeks.
Our innovative products have transformed operations for clients across various sectors, including insurance, consumer packaged goods, and non-profit organizations. By joining our dynamic team, you will play a crucial role in helping clients discover, build, and leverage value from their Generative AI investments, frequently for the first time. We prioritize a customer-first approach, focusing on their challenges and holding ourselves accountable for generating financial impact while enhancing user experiences.
Distyl is led by accomplished leaders from prestigious companies like Palantir and Apple, with backing from top-tier investors including Lightspeed, Khosla, Coatue, and Dell Technologies Capital, among others.
Your Role
At Distyl, we are redefining the application of AI in enterprise environments. We seek imaginative researchers who aspire to go beyond mere incremental advancements and, instead, aim to innovatively reshape how software is utilized.
Our research team comprises individuals from diverse academic backgrounds with robust research accomplishments. We operate in an AI-native manner and thrive in an environment that encourages unconventional thinking.
Key Responsibilities
Similar to generating hypotheses in AI for scientific exploration, the System Discovery team focuses on identifying new classes of AI systems. Researchers will investigate various architectures, modalities, and combinations to discover how AI can fundamentally transform work processes.
Make connections across different domains to unveil new system archetypes. Engage in experimental problem definitions and develop prototypes that facilitate novel forms of human–AI collaboration.
Who You Are
Broad Systems Development Experience: You possess a diverse range of experience, having built systems in various paradigms, including retrieval pipelines, reasoning agents, evaluation harnesses, multimodal integrations, or workflow graph systems, and can distill these experiences into innovative solutions.
About Granica
Granica is an innovative AI research and infrastructure firm dedicated to creating reliable, steerable representations of enterprise data.
We establish trust through Crunch, a policy-driven health layer optimizing large tabular datasets for efficiency, reliability, and reversibility. Utilizing this foundation, we are developing Large Tabular Models: systems designed to learn cross-column and relational structures, delivering trustworthy answers and automation with integrated provenance and governance.
Our Mission
Current AI capabilities are hindered not only by model design but also by the inefficiencies of the data that supports it. At scale, each redundant byte, poorly organized dataset, and inefficient data pathway contributes to significant costs, latency, and energy waste.
Granica's mission is to eliminate these inefficiencies. We leverage groundbreaking research in information theory, probabilistic modeling, and distributed systems to craft self-optimizing data infrastructure: systems that continually enhance how information is represented and utilized by AI.
Led by Prof. Andrea Montanari from Stanford, Granica's Research group merges advances in information theory with learning efficiency in large-scale distributed systems. We collectively believe that the next significant leap in AI will originate from innovations in efficient systems, rather than merely larger models.
Granica is at the forefront of developing a new category of structured AI models: foundational models designed to learn and reason from the relational, tabular, and structured data that drives the global economy. While many focus on unstructured text or media, we are venturing into the next frontier: systems capable of comprehending and reasoning over structured information.
Your Contributions
Create and prototype algorithms that form the core of structured AI, enhancing representation learning and efficient information modeling for enterprise and tabular data at petabyte scale.
Develop adaptive learners merging statistical learning theory with systems optimization at scale, contributing to a new generation of foundational models for structured information.
Design architectures that unify symbolic, relational, and neural components, enabling AI systems to reason directly over structured enterprise data.
Construct cost models and optimization frameworks that enhance the efficiency of structured learning, both computationally and economically.
About Us
At Applied Compute, we are pioneering the development of Specific Intelligence for enterprises, creating agents that continuously learn from a company's processes, data, expertise, and objectives. Our mission is to bridge the gap between isolated AI capabilities and their effective application within real business environments. Traditional AI systems often fall short because they lack the ability to adapt based on feedback. Our continual learning layer captures context, memory, and decision-making processes across the enterprise, enabling specialized agents to engage in meaningful work.
What Excites Us
We operate at the intersection of product development and cutting-edge research. Our product team designs the platform that empowers a new generation of digital coworkers, while our research team drives advancements in post-training and reinforcement learning to enhance user experiences. As an applied research engineer, you will work directly with clients to implement models in production, combining robust product development with deep research insights to facilitate AI integration in enterprises.
Meet Our Team
Our diverse team consists of engineers, researchers, and operators, many of whom are former founders. We have previously built reinforcement learning infrastructure at OpenAI, established data foundations at Scale AI, and contributed to significant systems at companies like Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients, including DoorDash, Mercor, and Cognition, and are proud to be backed by reputable investors such as Benchmark, Sequoia, and Lux.
Who Thrives Here
We seek individuals who are passionate about applying innovative research and complex systems to solve real-world challenges. You should feel comfortable navigating new environments rapidly, be it a fresh codebase, a client's data architecture, or an unfamiliar problem domain. A genuine enjoyment of customer interaction, empathy, and a deep understanding of clients' operational workflows are essential. Candidates with entrepreneurial backgrounds, extensive side projects, or a proven track record of end-to-end ownership typically excel in our environment.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
About Distyl AI
Distyl AI is at the forefront of developing production-grade AI systems that enhance core operational workflows for Fortune 500 companies. Our innovative solutions, powered by a strategic alliance with OpenAI and bolstered by in-house software accelerators, deliver AI systems with rapid time-to-value, often within just a quarter.
Our cutting-edge products have successfully transformed the operations of Fortune 500 clients across a multitude of sectors, including insurance, consumer packaged goods, and non-profit organizations. As a member of our team, you will be instrumental in assisting companies to identify, construct, and unlock the potential of their GenAI investments, frequently for the first time. We pride ourselves on being customer-centric, addressing client challenges directly, and holding ourselves accountable for generating financial impact while enhancing the experiences of end-users.
Led by distinguished leaders from prestigious organizations such as Palantir and Apple, Distyl is also supported by renowned investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (former CEO of GitHub), and Brad Gerstner (Founder and CEO of Altimeter), alongside board members from over a dozen Fortune 500 companies.
Join Our Innovative Team at Arcade
At Arcade, we are revolutionizing the way physical products are created with our cutting-edge AI platform. We empower individuals to turn their creative ideas into tangible products seamlessly, utilizing natural language and generative AI. Our mission is to democratize product design, making it as effortless as sharing a post online.
Backed by a remarkable $42M in funding from industry-leading investors including Reid Hoffman and Ashton Kutcher, our company is a rising star in the tech landscape. Guided by our founder Mariam Naficy and a team steeped in AI and design expertise, we are at the forefront of a new frontier that merges AI, personal expression, and on-demand manufacturing.
Your Role as an Applied AI Engineer
We are on the lookout for an Applied AI Engineer to enhance our generative AI capabilities. This position combines hands-on model development with the integration of advanced AI techniques into our production systems. You will collaborate with diverse teams to conduct research, experiment with models, and implement AI-driven products.
At Netic, we are revolutionizing the essential services sector with our advanced AI-driven revenue engine, which supports the backbone of the American economy.
Backed by $43M in funding from illustrious investors such as Founders Fund, Greylock, Hanabi, and Dylan Field, who spearheaded our Series B, we have empowered our clients to secure hundreds of thousands of jobs across various service industries throughout North America. Our platform has enabled companies to operate with an AI-first approach.
Join our innovative team of relentless builders hailing from renowned organizations like Scale, Databricks, HRT, Meta, MIT, Stanford, and Harvard. Together, we are applying frontier AI to solve complex challenges in the physical economy, where data is intricate and the results are both immediate and impactful.
As an Applied AI Research Engineer, you will immerse yourself in pioneering research, gain a thorough understanding of the business functions we automate, and lead targeted machine learning projects that yield remarkable outcomes.
Full-time|$130K/yr - $250K/yr|On-site|San Francisco
About Distyl AI
Distyl AI specializes in developing robust AI systems that enhance core operational processes for Fortune 500 companies. Our strategic alliance with OpenAI, alongside proprietary software accelerators and extensive enterprise AI knowledge, enables us to deliver effective AI solutions swiftly, often within a quarter.
Our innovative products cater to a variety of Fortune 500 clients across sectors, including insurance, consumer packaged goods, and non-profit organizations. As a member of our team, you will play a vital role in helping companies discover, build, and leverage value from their Generative AI investments, often for the first time. We prioritize a customer-centric approach, focusing on solving real challenges and ensuring both financial impact and enhanced user experiences.
Distyl is spearheaded by experienced leaders from top firms such as Palantir and Apple, and is supported by notable investors including Lightspeed, Khosla, Coatue, Dell Technologies Capital, Nat Friedman (former GitHub CEO), Brad Gerstner (Founder and CEO of Altimeter), and board members from numerous Fortune 500 companies.
Join Our Innovative Team
At OpenAI, we are pioneering the field of artificial intelligence, empowering innovation and shaping the future through transformative research. Our mission is to democratize AI, ensuring its benefits are accessible to all. We are on the lookout for forward-thinking Research Engineers to join our Applied Group, where you will convert groundbreaking research into practical applications that can revolutionize industries, enhance human creativity, and tackle complex challenges.
Your Impactful Role
As a Research Engineer within OpenAI's Applied Group, you will collaborate with some of the brightest minds in AI. Your work will involve deploying cutting-edge models in production settings, transforming theoretical breakthroughs into impactful solutions. If you are passionate about making AI technology accessible and effective, this is your opportunity to leave a significant impact.
In this role, you will:
Innovate and Deploy: Create and implement advanced machine learning models addressing real-world issues. Translate OpenAI's research from theory to practice, developing AI-driven applications that make a meaningful difference.
Collaborate with Experts: Engage closely with researchers, software engineers, and product managers to comprehend intricate business challenges and deliver AI-based solutions. Become part of a vibrant team where creativity and ideas flourish.
Optimize and Scale: Develop scalable data pipelines, fine-tune models for peak performance and precision, and ensure readiness for production. Contribute to projects that leverage state-of-the-art technology and innovative methodologies.
Learn and Lead: Stay at the forefront of advancements in machine learning and AI. Participate in code reviews, share insights, and exemplify best practices to maintain high standards in engineering.
Make a Difference: Oversee and maintain deployed models, ensuring they consistently deliver value. Your contributions will directly shape how AI benefits individuals, businesses, and society as a whole.
You may excel in this position if you possess:
A Master's or PhD in Computer Science, Machine Learning, Data Science, or a related discipline.
Proven experience in deep learning and transformer models.
Expertise with frameworks such as PyTorch or TensorFlow.
A robust understanding of data structures, algorithms, and software engineering principles.
Experience with cloud platforms and deploying machine learning models in production.
About Us
At Applied Compute, we are pioneering Specific Intelligence for enterprises through advanced AI agents that learn continuously from organizational processes, data, and objectives. We recognize the significant gap between what AI models can achieve in isolation and their performance within actual business contexts, where they often fail to adapt to feedback. Our mission is to build a continual learning layer that captures context, memory, and decision traces across enterprises, creating environments where specialized agents excel at real tasks.
Why Join Us?
We operate at a unique intersection of product development and research. Our product team is developing the platform that empowers a new generation of digital coworkers, while our research team is advancing post-training and reinforcement learning to enhance product experiences. As applied research engineers, we work closely with customers to deploy models into production effectively. This blend of robust product focus, deep research, and customer engagement is our strategy for successfully integrating AI into enterprise operations. We are product-led, research-enabled, and strategically deployed.
Meet Our Team
Our team consists of engineers, researchers, and operators, many of whom are former founders. We have established RL infrastructure at OpenAI, developed data foundations at Scale AI, and built systems at Together, Two Sigma, and Watershed. We collaborate with Fortune 50 clients, including DoorDash, Mercor, and Cognition, and are backed by esteemed investors such as Benchmark, Sequoia, and Lux.
Who Excels Here
We seek individuals passionate about applying innovative research and complex systems to overcome real-world challenges. Candidates should thrive in unfamiliar environments, whether navigating new codebases, understanding new customer data architectures, or tackling unfamiliar problem domains. A genuine enjoyment of customer interactions (listening, empathizing, and comprehending how work is accomplished within organizations) is essential. Those with prior entrepreneurial experience, extensive side projects, or a proven ability to manage initiatives from start to finish will thrive in our culture.
Your Role
As a Research Systems Engineer, you will be responsible for training cutting-edge models and developing methodologies that facilitate continual learning within enterprise settings. You will design and execute large-scale experiments, delve into advanced reinforcement learning techniques, and create tools that enhance our understanding of the training process. This role positions you at the crossroads of research and systems engineering, where you will develop new algorithms in collaboration with researchers and work alongside infrastructure engineers to deploy them on GPUs.
Full-time|$179.4K/yr - $224.3K/yr|On-site|San Francisco, CA; New York, NY
Job Overview
Join our team at Eragon as an Applied AI Intern, where you will play a crucial role in developing and deploying advanced AI systems. Collaborating closely with engineers and researchers, you'll contribute to transforming AI models from theoretical concepts into impactful real-world applications.
This internship offers a unique opportunity for hands-on experience across various aspects of modeling, data management, and systems integration, working on projects that directly reach users.
Main Responsibilities
Model Development: Assist in refining, assessing, and implementing machine learning models to address practical challenges.
System Implementation: Collaborate in building and integrating AI-driven features into live production systems.
Data & Pipelines: Engage with datasets to facilitate training, evaluation, and iterative improvements.
Experimentation: Conduct experiments, analyze outcomes, and enhance model performance through iterative testing.
Evaluation & Monitoring: Contribute to the development of evaluation frameworks and assist in monitoring system performance metrics.
Cross-Functional Collaboration: Work alongside engineering and product teams to aid in the development of new features.
Qualifications
Education: Currently pursuing a Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
Technical Skills: Proficient in Python with familiarity in machine learning frameworks such as PyTorch or TensorFlow.
ML Fundamentals: Strong understanding of fundamental machine learning concepts and workflows.
Problem-Solving Skills: Demonstrated ability to dissect problems and contribute to effective solutions.
Curiosity & Initiative: A keen desire to learn and make meaningful contributions in a fast-paced setting.
Preferred Qualifications
Experience in machine learning projects, internships, or research roles.
Familiarity with large language models, AI agents, or data pipelines.
Demonstrated experience in building projects beyond academic coursework.
Genuine interest in real-world AI applications.
Role Overview
Paraform is hiring an Applied AI Engineer in San Francisco. This role focuses on building and deploying AI systems that directly serve real users. The ideal candidate brings 2-5 years of experience, a strong grasp of modern LLM-based technologies, and a track record of turning advanced models into reliable product features. Success in this role depends on sound product sense and the ability to weigh trade-offs between LLM and traditional machine learning approaches. Experience with LLM-powered applications, retrieval systems, agentic workflows, or automation is valuable. Familiarity with classic ML techniques, such as ranking, recommendation, or classification, will help in designing hybrid systems that balance performance, cost, and reliability.
What You Will Do
Design and build AI systems to improve matchmaking, ranking, and automation in the Paraform marketplace.
Develop LLM-driven features, including retrieval pipelines and agentic workflows, to streamline recruiter and company interactions.
Own systems end-to-end: from data pipelines and model design to deployment, monitoring, and iteration in production.
Work closely with product managers, ML engineers, and full-stack teams to deliver AI capabilities that shape marketplace outcomes.
Create evaluation frameworks to measure real-world performance, reliability, and business impact, not just offline metrics.
Set best practices for building and maintaining production AI systems, balancing model quality, cost, latency, and maintainability.
Advance the integration of AI into product experiences across the platform.
What We Look For
2-5 years of experience at an AI-focused startup (Series A through D).
Background working on products with a broad user base, beyond single-enterprise deployments.
Proficient in Python and TypeScript.
Experience developing agentic systems that drive measurable business or user outcomes.
Comfort with ambiguity and building in 0-to-1 environments.
Ability to communicate technical trade-offs clearly to non-technical stakeholders.
About incident.io
incident.io is a pioneering AI-driven incident response platform designed to help teams significantly reduce their incident response times and enhance system reliability. Our comprehensive platform integrates on-call management, incident response, AI SRE capabilities, and status updates, providing teams with the tools to respond swiftly, minimize downtime, and keep their customers informed.
Since our inception in 2021, we have successfully assisted over 1,500 organizations, including industry giants like Netflix, Airbnb, and Block, managing upwards of 500,000 incidents. Each month, tens of thousands of professionals across Engineering, Product, and Support leverage incident.io to accelerate service restoration, maintain alignment under pressure, and concentrate on what truly matters.
We are a rapidly growing, ambitious team that places immense value on our clients, product excellence, and creating exceptional experiences. With $100M in funding from esteemed investors like Index Ventures, Insight Partners, and Point Nine, together with founders and executives from top-tier tech companies, we are poised for continued growth.
The Team
The Revenue Operations team at incident.io is a compact, high-impact group that collaborates closely with Sales, Marketing, and Customer Success to stimulate revenue growth and elevate GTM productivity.
As the Head of Go-To-Market Systems & Applied AI, you will steer the transformation of our go-to-market infrastructure from a traditional systems function into a dynamic AI-powered operational engine. We treat our GTM stack as a product, our revenue teams as clients, and automation and AI as the standard approach to challenges previously addressed through manual effort.
Your role will involve collaborating with GTM leaders and Applied AI engineers to craft the systems, workflows, and intelligence layer essential for driving pipeline generation and revenue scaling.
Your success will shape the future landscape of GTM systems at incident.io.
Join Our Innovative Team at David AI
David AI is pioneering the audio data research landscape. We apply a rigorous R&D methodology to dataset development that parallels the standards of leading AI laboratories. Our vision is to seamlessly integrate AI into everyday experiences, with audio as the natural conduit. Audio AI is evolving rapidly, yet the availability of high-quality training data remains a critical bottleneck. This is where David AI comes in.
Founded in 2024 by a talented group of former engineers and operators from Scale AI, we have quickly become a trusted partner to numerous FAANG companies and AI research labs. We recently secured $50 million in Series B funding from notable investors, including Meritech, NVIDIA, and Alt Capital.
Our culture is built on sharp intellect, humility, ambition, and a close-knit community. We invite exceptional minds in research, engineering, product, and operations to join us as we advance the field of audio AI.
Research Team Overview
At David AI, we believe that superior model capabilities stem from high-quality, differentiated data. Our research team conducts ambitious, long-term studies in audio technology while collaborating with internal and external partners to put cutting-edge research insights into practice.
Your Role as a Founding Audio AI Research Engineer
In this position, you will establish the research framework that shapes how premier AI labs develop their audio models. You will have access to a top-tier team of human AI trainers, robust computing resources, and the autonomy to shape your research agenda.
Key Responsibilities
Create and implement comprehensive evaluation frameworks for assessing audio AI capabilities in areas such as speech, emotion detection, conversational dynamics, and acoustic patterns.
Investigate and prototype innovative methodologies for audio quality assessment, automated labeling, and data collection optimization.
Design focused data collection pipelines that capture novel, high-value audio capabilities.
Develop automated systems for ongoing classifier improvement and prompt engineering evaluation.
Assess cutting-edge models and formulate actionable research strategies.
Publish your findings at leading conferences.
Full-time | $240K/yr - $315K/yr | On-site | San Francisco, CA | New York City, NY
About Anthropic
Anthropic builds AI systems with a focus on reliability, interpretability, and steerability. The team includes researchers, engineers, policy experts, and business leaders, all working together to ensure AI benefits both users and society.
Role Overview
The Commercial Solutions Architect, Applied AI, joins Anthropic’s Applied AI team as a pre-sales architect. This role centers on demonstrating the value of Claude and helping customers integrate and deploy it within their technology stacks. The position combines technical expertise with customer engagement to design LLM solutions for complex business needs, always maintaining Anthropic’s standards for safety and reliability.
As a Commercial Solutions Architect, expect to work closely with key accounts, building reusable solution blueprints, demos, and enablement materials that support the wider adoption of Claude across Anthropic’s commercial clients. Collaboration is central: you will work alongside Sales, Product, and Engineering teams to guide clients from early technical discovery through to deployment, using your knowledge to help customers understand Claude’s capabilities, develop evaluation strategies, and design scalable architectures that unlock the full potential of Anthropic’s AI systems.
Location
San Francisco, CA or New York City, NY
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in creating versatile AI systems designed for optimal performance across various deployment platforms, from data center accelerators to on-device hardware. Our technology emphasizes low latency, minimal memory consumption, privacy, and dependability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we grow rapidly, we are looking for exceptional talent to join our team.
The Opportunity
The Data team at Liquid AI drives the development of our Liquid Foundation Models, focusing on pre-training, vision, audio, and emerging modalities. With public data sources plateauing, the effectiveness of our models increasingly relies on specially curated datasets. We are seeking engineers with a machine learning mindset who can efficiently gather, filter, and synthesize high-quality data at scale.
At Liquid AI, we regard data as a research challenge rather than an infrastructure problem. Our engineers run experiments, design ablations, and assess how data decisions affect model quality. We will place you on a team where you can grow quickly and make a significant impact, whether in pre-training, post-training reinforcement learning, vision-language, audio, or multimodal applications.
While we prefer candidates in San Francisco and Boston, we are open to considering other locations.
What We're Looking For
We are looking for a candidate who:
Thinks like a researcher and executes like an engineer: You can formulate hypotheses, run experiments, and evaluate results. Our engineers produce research-level code, and our researchers ship production systems.
Learns quickly and adapts: You will work in rapidly evolving modalities, so the ability to quickly grasp new domains and thrive in ambiguity is essential.
Prioritizes data quality: We hold data quality in high regard; filtering, deduplication, augmentation, and evaluation are key responsibilities, not afterthoughts.
Solves problems autonomously: Data engineers operate within training groups (pre-training and multimodal). While collaboration is crucial, we expect ownership and self-direction.
The Work
Develop and maintain data processing, filtering, and selection pipelines at scale.
Build pipelines for pretraining, midtraining, supervised fine-tuning, and preference optimization datasets.
Design synthetic data generation systems using large language models (LLMs), structured prompting, and domain-specific generative techniques.
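To give a sense of the pipeline work described above: one of the simplest building blocks is exact-match deduplication of a text corpus. The sketch below is a hypothetical illustration only (not Liquid AI's actual stack), hashing whitespace-normalized documents with Python's standard `hashlib`; production pipelines typically layer fuzzy methods such as MinHash on top.

```python
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivially different copies collide.
    return " ".join(text.lower().split())

def dedup(docs: list[str]) -> list[str]:
    # Keep the first occurrence of each normalized document, preserving order.
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        digest = hashlib.md5(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["Hello  World", "hello world", "Something else"]
print(dedup(corpus))  # ['Hello  World', 'Something else']
```

Hashing digests rather than storing full normalized strings keeps memory roughly constant per document, which matters at pretraining-corpus scale.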
Join Saris AI as an AI Systems Engineer, where you will play a pivotal role in designing and implementing innovative solutions that leverage artificial intelligence technologies. You will collaborate closely with cross-functional teams to develop AI systems that enhance our products and services, driving impactful results for our clients.
Mar 30, 2026