Experience Level
Entry Level
Qualifications
We are looking for candidates with a strong foundation in machine learning and a passion for applied research. Ideal candidates should possess:
Experience with machine learning frameworks and tools.
Knowledge of AI principles and their application in real-world scenarios.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
A degree in Computer Science, Engineering, or a related field.
About the job
About the Role
Join Thorin as an AI Researcher and play a pivotal role in shaping the core research initiatives that drive our AI innovations. In this position, you will operate at the crossroads of machine learning research, practical model development, and product application, enhancing our understanding and automation of enterprise workflows.
This position merges theoretical research with hands-on implementation, transitioning ideas from conceptual stages through experimentation into functional components that enrich Thorin’s offerings.
Your Responsibilities
Research & Innovation
Conduct innovative machine learning research aligned with real-world product demands.
Investigate new model architectures, training methods, and evaluation techniques specifically designed for understanding and automating organizational workflows.
Model Development & Evaluation
Create, implement, and assess ML/AI methodologies that enhance model efficacy for essential tasks.
Collaborate closely with cross-functional teams to integrate research outcomes into tangible products that meet user needs.
About Thorin
Thorin, incubated by 8VC, is an innovative applied AI company that is transforming productivity within organizations. Our mission is to build proactive, long-term AI agents that continuously monitor workplace interactions across platforms such as Slack, email, and meetings, executing tasks seamlessly without constant human oversight. Unlike conventional AI solutions that merely respond to input, Thorin provides a dynamic digital representation of business operations, facilitating automation, coordination, and strategic insights over extended periods. With visionary leadership from Joe Lonsdale and backing from 8VC, we are laying the groundwork for AI-enhanced productivity across industries.
Similar jobs
Machine Learning Research Engineer, Data at Liquid AI, San Francisco
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in creating versatile AI systems designed for optimal performance across various deployment platforms, including data center accelerators and on-device hardware. Our technology emphasizes low latency, minimal memory consumption, privacy, and dependability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are on the lookout for exceptional talent to join our team.

The Opportunity
The Data team at Liquid AI drives the development of our Liquid Foundation Models, focusing on pre-training, vision, audio, and emerging modalities. With the stagnation of public data sources, the effectiveness of our models increasingly relies on specially curated datasets. We are seeking engineers with a machine learning mindset who can efficiently gather, filter, and synthesize high-quality data at scale.
At Liquid AI, we regard data as a research challenge rather than an infrastructural issue. Our engineers conduct experiments, design ablations, and assess how data-related decisions impact model quality. We will align you with a team where you can experience rapid growth and make a significant impact, be it in pre-training, post-training reinforcement learning, vision-language, audio, or multimodal applications.
While we prefer candidates in San Francisco and Boston, we are open to considering other locations.

What We're Looking For
We are in search of a candidate who:
Thinks like a researcher and executes like an engineer: You should be able to formulate hypotheses, conduct experiments, and evaluate results. Our engineers produce research-level code while our researchers implement production systems.
Learns quickly and adapts: You will be working in rapidly evolving modalities, so the ability to quickly grasp new domains and thrive in ambiguity is essential.
Prioritizes data quality: We hold data quality in high regard; tasks such as filtering, deduplication, augmentation, and evaluation are key responsibilities, not afterthoughts.
Solves problems autonomously: Data engineers operate within training groups (pre-training and multimodal). While collaboration is crucial, we expect ownership and self-direction.

The Work
Develop and maintain data processing, filtering, and selection pipelines at scale.
Establish pipelines for pretraining, midtraining, supervised fine-tuning, and preference optimization datasets.
Design synthetic data generation systems utilizing large language models (LLMs), structured prompting, and domain-specific generative techniques.
About Liquid Labs
At Liquid AI, research has always been at the forefront of our mission. Liquid Labs serves as a dedicated internal research accelerator, facilitating groundbreaking advancements in the development of intelligent, personalized, and adaptive machines.
Our roots extend back to MIT CSAIL, where pioneering work on Liquid Neural Networks established a new category of efficient sequence-processing architectures. This research laid the groundwork for our Liquid Foundation Models (LFMs), which are scalable, multimodal models designed for real-world applications in resource-constrained settings.
In Liquid Labs, we continue this legacy by advancing the realm of efficient, adaptive intelligence through both fundamental research and practical engineering efforts.
We collaborate closely with Liquid’s core foundation model and systems teams to turn theoretical concepts into deployable capabilities, setting the stage for a new era of powerful and efficient intelligent systems.

About the Role
As a Research Engineer at Liquid Labs, you will be part of a dynamic, high-impact team pushing the boundaries of adaptive intelligence. You will be responsible for designing and implementing innovative architectures, training methodologies, and inference strategies to expand the potential of efficient AI.
Your work will blend research and engineering, as you translate scientific concepts into functional systems, publish findings that advance the field, and deploy solutions that redefine what is achievable.
While we prefer candidates from San Francisco and Boston, we welcome applications from other locations within the United States.
About Liquid AI
Founded as a spin-off from MIT CSAIL, Liquid AI specializes in the development of versatile artificial intelligence systems optimized for performance across various deployment environments, ranging from data center accelerators to on-device hardware. Our focus on low latency, minimal memory consumption, privacy, and reliability allows us to partner effectively with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome talented individuals who can contribute to our mission.

The Opportunity
This unique position places you at the forefront of advanced foundation models and their practical applications. You will oversee post-training projects from start to finish for some of the world’s leading enterprises, while also playing a vital role in the ongoing development of Liquid’s core models.
In this role, you will not have to choose between impactful customer work and foundational development; instead, you will enjoy deep involvement in both. You will have significant influence over how models are adapted, assessed, and deployed, directly contributing to the enhancement of Liquid’s post-training capabilities.
If you are passionate about data integrity, evaluation processes, and ensuring that models perform effectively in real-world scenarios, this is your chance to redefine the standards of applied AI at a foundation-model company.

What We're Looking For
We seek an individual who:
Takes ownership: You will lead post-training initiatives from customer requirements to delivery and evaluation.
Thinks end-to-end: You will connect the dots across data generation, training, alignment, and evaluation as a cohesive system.
Is pragmatic: You prioritize model quality and customer satisfaction over theoretical publications.
Communicates clearly: You can interpret customer needs and effectively communicate with internal technical teams, providing constructive feedback when necessary.

The Work
Serve as the technical lead for post-training engagements with enterprise clients.
Translate client requirements into actionable post-training specifications and workflows.
Design and implement data generation, filtering, and quality assessment methodologies.
Conduct supervised fine-tuning, preference alignment, and reinforcement learning processes.
Create task-specific evaluations, analyze outcomes, and integrate insights back into core post-training workflows.
About Liquid AI
Founded as a spinoff of MIT CSAIL, Liquid AI specializes in developing versatile AI systems designed for optimized performance across various deployment platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory consumption, privacy, and reliability sets us apart. We collaborate with enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we seek remarkable talent to join our journey.

The Opportunity
As we establish our solutions architecture function from the ground up, you will play a pivotal role as one of our inaugural Solutions Architects. Collaborating closely with the Head of Solutions Architecture and the go-to-market organization, you will manage customer engagements from inception to completion.
Our models are specifically engineered for environments constrained by memory, latency, and power, encompassing edge devices, mobile applications, embedded systems, and on-premises infrastructure where traditional models cannot operate. You will engage with this boundary daily.
Our clientele ranges from AI-native startups to established enterprises venturing into AI for the first time. Your mission is to bridge the gap between our models' capabilities and customers' expectations, delivering on that promise from technical validation through to go-live.
About Liquid AI
Born from the innovative environment of MIT CSAIL, Liquid AI develops cutting-edge general-purpose AI systems that operate seamlessly across various deployment environments, from data center accelerators to on-device hardware. Our solutions prioritize low latency, minimal memory consumption, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are eager to welcome exceptional talent to our dynamic team.

The Opportunity
Join us in establishing the product function that will transform Liquid AI's technological advancements into scalable, repeatable solutions for enterprise clients. As a key member of our Product team, you will work closely with technical leaders to define, package, and launch AI solutions that meet market needs. This role requires daily collaboration with ML engineers, GTM leaders, and enterprise customers to understand the value of our technology and effectively deliver it. This position offers significant ownership, allowing you to treat your solution area like your own startup within our organization.

What We're Looking For
We are seeking an individual who embodies the following qualities:
Customer Obsession: Prioritizes understanding customer needs through direct feedback rather than assumptions.
Self-Direction: Takes initiative to dive deep into problems without prompting, and is comfortable navigating uncertainty to propose effective solutions.
Technical Fluency: Engages confidently with ML engineers and researchers, understanding the complexities of deploying AI systems in real-world applications.
Founder Mentality: Treats their solution area as a startup, owning outcomes across various functions, from technical architecture to go-to-market strategies.

The Work
Oversee one or more go-to-market ready solutions from inception to scalable customer deployment.
Analyze customer interactions to extract insights and identify productization opportunities.
Collaborate with ML and inference teams to develop tools that streamline implementation.
Define Ideal Customer Profiles (ICPs), pricing strategies, and packaging for scalable solutions.
Partner with GTM teams to enhance outbound sales efforts around productized offerings.
Liquid AI, a spin-off from MIT's CSAIL, develops AI systems designed to run efficiently on standard CPUs. The team emphasizes low latency, minimal memory consumption, and strong reliability. Liquid AI works with major players in consumer electronics, automotive, life sciences, and financial services, and is expanding its team as the company grows.

Role overview
The Executive Assistant supports the C-suite and executive leadership at Liquid AI's San Francisco office. This position serves as a central point for keeping leaders aligned and informed across Go-To-Market, Product, and Engineering. The Executive Assistant ensures smooth information flow between meetings, decisions, and teams, and also coordinates events and partner visits at the office.

What makes a strong candidate
Information hub: Tracks project status, decisions, and commitments across different functions. Responds to inquiries directly or knows how to find accurate answers.
Proactive planning: Reviews upcoming meetings, identifies preparation gaps, flags conflicts, and ensures materials are ready ahead of time.
Clear communication: Liaises with external partners, delivers concise updates to technical leaders, and manages follow-ups across teams.
Adaptability: Navigates shifting priorities, changing schedules, and new workstreams with resilience and flexibility.

Key responsibilities
Serves as the operational backbone for senior leaders by tracking action items, maintaining continuity between meetings, and keeping details organized.
Manages complex and dynamic calendars, evaluates meeting requests, aligns schedules with strategic priorities, and protects time for high-impact work.
Maintains visibility into cross-functional workstreams, providing leadership with relevant context without requiring them to seek updates.
Oversees logistics and preparation for partner and customer visits, ensuring a welcoming and effective experience at the San Francisco office.
Join the Innovative Team at Liquid AI
Founded as a spin-off from MIT’s CSAIL, Liquid AI is at the forefront of developing cutting-edge AI systems that operate seamlessly across various platforms, including data center accelerators and on-device hardware. Our technology is designed to ensure low latency, efficient memory usage, privacy, and reliability. We collaborate with leading enterprises in sectors such as consumer electronics, automotive, life sciences, and financial services as we rapidly scale our operations. We are seeking talented individuals who are passionate about technology and innovation.

Your Role in Our Team
As a GPU Performance Engineer, your expertise will be critical in enhancing our models and workflows beyond the capabilities of standard frameworks. You will be responsible for designing and deploying custom CUDA kernels, conducting hardware-level profiling, and transforming research concepts into production code that yields tangible improvements in our pipelines (training, post-training, and inference). Our dynamic team values initiative and ownership, and we are looking for a candidate who thrives on tackling complex challenges related to memory hierarchies, tensor cores, and profiling outputs.
While San Francisco and Boston are preferred, we welcome applications from other locations.
Liquid AI, a company spun out of MIT CSAIL, develops general-purpose AI systems designed for efficiency, privacy, and reliability across a wide range of platforms. The team partners with enterprises in fields such as consumer electronics, automotive, life sciences, and financial services. As Liquid AI expands, the company is seeking new team members to help shape the direction of AI technology.

Role overview
This Product Marketing Manager position is the first of its kind at Liquid AI and reports directly to the VP of Marketing in San Francisco. The role bridges product development, communications, and go-to-market planning. The Product Marketing Manager will play a key part in ensuring Liquid AI’s innovations reach the right audiences and will help establish the foundation for future marketing initiatives. Success in this position requires a strategic mindset, the ability to understand complex technical products, and an understanding of both enterprise clients and technical users. Adaptability and a willingness to build new processes from the ground up are important for this role.

What matters at Liquid AI
Builder: Uses modern AI tools (including Claude Code) for content creation, campaign management, and prototyping. Able to create demos or proofs-of-concept for launches and sales, and guide others in these areas.
Translator: Communicates complex technical details in clear, credible ways for the market. Knows when to consult engineering for more information before a launch and balances immediate needs with long-term planning.
Operator: Establishes repeatable systems such as kickoffs, briefs, and retrospectives, rather than focusing only on one-off projects.
Cross-functional collaborator: Works effectively with product, go-to-market, communications, and sales teams to drive campaigns across multiple channels.
Networker: Brings recommendations for tools, contractors, and agencies to help scale marketing and communications. Comfortable working with AI tools and freelancers.

Main responsibilities
Lead competitive analysis and market research to refine Liquid AI’s positioning and identify new growth opportunities.
Collaborate with cross-functional teams to develop and execute product marketing strategies.
Join Our Team as a Machine Learning Engineer
Saris-AI is a pioneering applied AI startup, based in San Francisco and Montreal, focused on revolutionizing the banking sector. Our mission is to address a colossal $100 billion/year challenge that is rapidly expanding, innovating the limits of what can be achieved with advanced multi-turn AI systems.
We aim to automate complex workflows that necessitate long-context reasoning, orchestration of tools across legacy systems, and rigorous compliance processes, solving problems that currently lack definitive solutions.
Our team has successfully deployed AI agents that manage real customer workflows effectively in production. As we expand our customer base and accelerate our growth, we are in search of highly skilled technical builders who aspire to make a significant impact in the early stages of our journey.
As a foundational Machine Learning Engineer, you will own our entire ML stack and bring custom agents to life.
At Exa, we are revolutionizing the way AI applications access information by building a cutting-edge search engine from the ground up. Our team is dedicated to developing a robust infrastructure capable of crawling the web, training advanced embedding models, and creating high-performance vector databases using Rust to facilitate seamless searches.
As part of our ML team, you'll be instrumental in training foundational models that refine search capabilities. Our mission? To deliver precise answers to even the most complex queries, effectively transforming the web into an incredibly powerful knowledge database.
We are seeking a talented Machine Learning Research Engineer who is passionate about crafting embedding models that enhance web search efficiency. Your responsibilities will include innovating novel transformer-based architectures, curating extensive datasets, conducting evaluations, and continuously improving our state-of-the-art models.
About Mercor
Mercor sits at the forefront of labor markets and artificial intelligence research, collaborating with premier AI laboratories and enterprises to harness the human intelligence crucial for AI evolution.
Our expansive talent network empowers the training of cutting-edge AI models, akin to how educators impart knowledge to students, sharing insights, experiences, and contexts that transcend mere code. Currently, our network comprises over 30,000 experts, generating collective earnings exceeding $2 million daily.
At Mercor, we are pioneering a unique category of work where expertise fuels AI progress. Realizing this vision necessitates a bold, fast-paced, and deeply dedicated team. You will collaborate with researchers, operators, and AI firms that are at the vanguard of transforming systems that redefine society.
As a profitable Series C company, Mercor is valued at $10 billion and maintains an in-office presence five days a week at our new headquarters in San Francisco.

About the Role
In your capacity as a Research Engineer at Mercor, you will operate at the intersection of engineering and applied AI research. You will play a pivotal role in post-training and reinforcement learning with verifiable rewards (RLVR), synthetic data generation, and large-scale evaluation workflows essential for advancing frontier language models.
Your contributions will help train large language models to adeptly utilize tools, exhibit agentic behavior, and engage in real-world reasoning within production environments. You will be instrumental in shaping rewards, conducting post-training experiments, and constructing scalable systems to enhance model performance. Your responsibilities will also include designing and evaluating datasets, creating scalable data augmentation pipelines, and developing rubrics and evaluators that expand the learning potential of LLMs.
About Handshake
Handshake connects over 20 million knowledge workers with 1,600 educational institutions and 1 million employers, including every Fortune 50 company. The platform supports career growth and upskilling by bridging students, educators, and employers. Handshake has seen rapid expansion, tripling its Annual Recurring Revenue (ARR) by 2025.

Why Work at Handshake?
Shape the future of careers in the AI economy at a global level.
Collaborate with top AI labs, Fortune 500 companies, and leading educational institutions.
Work alongside experienced professionals from organizations such as Scale AI, Meta, xAI, Notion, Coinbase, and Palantir.
Contribute to a company with significant growth and revenue potential.

Role Overview: AI Research Engineer
Design and build advanced post-training systems in partnership with research scientists and domain experts.
Develop and maintain infrastructure for large-scale model training and specialized data processing.
Create frameworks to verify the quality and integrity of domain-specific datasets.
Develop benchmarks for large language models to improve evaluation and capability assessments.
Optimize software and hardware performance to accelerate post-training experiments and deployment.
Work across teams to ensure thorough validation of model improvements.

Location
San Francisco, CA
Full-time|$350K/yr - $475K/yr|On-site|San Francisco
At Thinking Machines Lab, our ambition is to enhance human potential by advancing collaborative general intelligence. We envision a future where individuals have the tools and knowledge to harness AI for their distinct requirements and aspirations.
Our team comprises dedicated scientists, engineers, and innovators who have contributed to some of the most renowned AI products, including ChatGPT and Character.ai, along with open-weight models like Mistral, and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role
We are seeking an Infrastructure Research Engineer to architect, optimize, and sustain the computational frameworks that facilitate large-scale language model training. You will create high-performance machine learning kernels (e.g., CUDA, CuTe, Triton), enable effective low-precision arithmetic operations, and enhance the distributed computing infrastructure essential for training expansive models.
This position is ideal for an engineer who thrives in close collaboration with hardware and research disciplines. You will partner with researchers and systems architects to merge algorithmic design with hardware efficiency. Your responsibilities will include prototyping new kernel implementations, evaluating performance across various hardware generations, and helping to establish the numerical and parallelism strategies crucial for scaling next-generation AI systems.

Note: This is an evergreen role that remains open continuously for expressions of interest. We receive numerous applications, and there may not always be an immediate opportunity that aligns with your qualifications. However, we encourage you to apply, as we regularly assess applications and will reach out as new positions become available. You are also welcome to reapply after gaining additional experience, but please refrain from applying more than once every six months. Additionally, you may notice postings for specific roles catering to particular projects or team needs. In such cases, you are encouraged to apply directly alongside this evergreen listing.

What You’ll Do
Design and develop custom ML kernels (e.g., CUDA, CuTe, Triton) for key LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for contemporary GPU and accelerator architectures.
Conceptualize compute primitives aimed at alleviating memory bandwidth bottlenecks and enhancing kernel compute efficiency.
Collaborate with research teams to synchronize kernel-level optimizations with model architecture and algorithmic objectives.
Create and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.
Contribute to the stability and scalability of our infrastructure, ensuring it meets the growing demands of AI development.
Full-time|Remote|San Francisco, CA, US; Remote, CA, US
Join Pinterest as a Senior Machine Learning Engineer specializing in Responsible AI. In this role, you will leverage your expertise to drive innovative applied research in machine learning, ensuring the responsible and ethical use of AI technologies. Collaborate with cross-functional teams to develop advanced algorithms and contribute to large-scale projects that impact millions of users around the globe.
About Liquid AI
Liquid AI, a pioneering company spun out of MIT CSAIL, is at the forefront of developing general-purpose AI systems that operate efficiently across various platforms, from data center accelerators to on-device hardware. Our commitment to low latency, minimal memory usage, privacy, and reliability allows us to partner with some of the most esteemed enterprises in consumer electronics, automotive, life sciences, and financial services. As we experience rapid growth, we are seeking exceptional talent to join our innovative journey.

The Opportunity
Join our cutting-edge Audio team, where we are developing advanced speech-language models capable of handling Speech-to-Text (STT), Text-to-Speech (TTS), and speech-to-speech tasks within a unified architecture. This pivotal role supports applied audio model development, directly collaborating with the technical lead to deliver production systems that operate on-device under real-time constraints. You will take ownership of key workstreams encompassing data pipelines, evaluation systems, and customer deployments. If you are eager to tackle unique technical challenges within a small, elite team where your contributions are impactful, this is the role for you.

What We're Looking For
We are seeking an individual who:
Builds first, theorizes later: You prioritize shipping working systems over theoretical models; production-grade code is your default.
Owns outcomes end-to-end: You take full responsibility for everything from data pipelines to customer deployments and don't shy away from challenges.
Thrives under constraints: On-device, low-latency, memory-constrained environments motivate you. You view constraints as opportunities for innovative design.
Ramps quickly on new territory: You are comfortable closing knowledge gaps swiftly and actively seek feedback to drive results.

The Work
Develop and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale.
Design, implement, and maintain evaluation systems that assess multimodal performance across both internal and public benchmarks.
Fine-tune and adapt audio models to cater to customer-specific use cases, taking charge from requirement gathering through to deployment.
Contribute production code to the core audio repository while collaborating closely with infrastructure and research teams.
Facilitate experimentation under real hardware constraints, transitioning smoothly between customer-focused projects and core development initiatives.
About Plaid
Plaid builds tools that help developers create new financial products and experiences. Since 2013, Plaid has connected millions of users to over 12,000 financial institutions across the US, Canada, the UK, and Europe. The company partners with organizations like Venmo, SoFi, Fortune 500 firms, and major banks to make linking financial accounts to apps and services easier. Headquarters are in San Francisco, with offices in New York, Washington D.C., London, and Amsterdam.

Team: Data Foundation & AI
The Data Foundation and AI team designs and maintains the machine learning and AI infrastructure that supports Plaid’s products. This group transforms Plaid’s financial network data into flexible formats used by teams across the company. Responsibilities span the entire system lifecycle: data curation for pretraining, model development, deployment, serving, and monitoring in production.

Role Overview: Senior Machine Learning Engineer (Research Scientist)
This position focuses on applied research for Plaid’s foundation model. The Senior Research Scientist leads efforts to design model architectures, set pretraining objectives, and implement fine-tuning strategies that work across a range of product needs. The role also involves building and maintaining production machine learning systems, including training pipelines, model serving, feature engineering, and performance monitoring.

Key Responsibilities
Design model architectures and define pretraining objectives for Plaid’s foundation model
Develop and apply fine-tuning methods for diverse product use cases
Build and maintain end-to-end machine learning systems, from data pipelines to model serving
Engineer features and monitor system performance in production
Create evaluation frameworks to measure model quality across multiple tasks and metrics

Location
This role is based in San Francisco.
Join a pioneering team of former Google engineers who have developed ground-breaking defensive technologies, such as Safe Browsing and reCAPTCHA. We are on a mission to confront an urgent challenge: combating the rising tide of adversarial AI attacks that threaten organizations globally.Operating in stealth mode, we are targeting a lucrative $5B+ market that is primed for innovation. Conventional detection methodologies are proving inadequate against the speed and sophistication of AI-driven assaults. Current adversaries are leveraging AI to engineer tailored, high-evasion attacks, leaving traditional systems vulnerable.Your Role:You will design a network of AI agents that are rapid, cost-effective, and precise, collaborating to identify and neutralize emerging threats. Your work will dive deep into real-time threat data, continuously evolving your agents in a fast-paced environment. These agents will function under an orchestration layer that fosters quick adaptation and learning.The Excitement of the ChallengeRapidly Evolving Models: The landscape changes daily; solutions that worked yesterday may be outdated today.Intelligent Adversaries: We are engaged in a real-time arms race against cunning, AI-enhanced attackers crafting sophisticated payloads.No Existing Playbook: We are forging new detection paradigms as swiftly as threats evolve. 
This high-stakes work places you in the heart of the action from day one. If you thrive on solving challenging problems with rapid feedback, this is your opportunity.

Why We Are Positioned to Succeed
Expansive Market: The market is vast at $5B and expanding quickly, while established players struggle to adapt.
Proven Track Record: Our team previously developed the foundational technology for Safe Browsing (serving over 5B users) and reCAPTCHA (protecting more than 5M websites) during our time at Google.
Experienced Team: This is our third endeavor in creating a category-defining security enterprise, and we know how to scale both our technology and our organization effectively.
Deeply Integrated AI and Security: We embed AI from the outset rather than layering it on top.
Top Talent: We hire only the highest achievers; many on our team were in the top 1% of engineers at Google. If you excelled in your previous role, you will fit right in.
Agility: We prioritize speed and efficiency in everything we do.
Full-time|$150K/yr - $150K/yr|On-site|San Francisco Office
Join Reducto as a Machine Learning Engineer
At Reducto, we empower AI teams to harness real-world enterprise data with unparalleled precision. Much of enterprise data, ranging from financial documents to healthcare records, remains trapped in unstructured formats such as PDFs and spreadsheets. Our vision models are designed to interpret these documents in a human-like manner, enabling the development of innovative products, the training of machine learning models, and the automation of processes at scale.

Our rapid growth is a testament to our success: we have achieved a 7x year-over-year revenue increase, working with companies ranging from prominent AI teams like Harvey, Vanta, and Scale to major enterprises including FAANG companies and leading trading firms. With over $100 million raised from esteemed investors such as A16z, Benchmark, and First Round Capital, we are looking for a talented Machine Learning Engineer to help train and deploy the models crucial to our core product’s success.
Full-time|$215K/yr - $290K/yr|On-site|San Francisco Bay Area
Join Retell AI as a Senior Machine Learning Engineer
Retell AI is at the forefront of revolutionizing the call center industry with groundbreaking voice AI technology. Within just 18 months of our inception, we have empowered thousands of businesses with AI voice agents capable of managing the sales, support, and logistics calls that traditionally required extensive human teams.

Supported by renowned investors, including Y Combinator and Alt Capital, we have scaled from $5M to an impressive $36M ARR with a dedicated team of 20. Our ambition for 2026 is to build a state-of-the-art customer experience platform that transforms entire contact centers with AI. We are building intelligent AI “workers” that will serve as frontline agents, quality assurance analysts, and managers, constantly executing, monitoring, and enhancing customer interactions.

We are rapidly expanding and seeking passionate innovators eager to solve complex technical challenges and make a tangible impact at one of the fastest-growing voice AI startups. Together, let’s shape the future of customer interactions.
Jan 15, 2026