Scale AI, Inc.
San Francisco, CA; Seattle, WA; New York, NY
Full-time | On-site | $218.4K/yr - $273K/yr
Key Responsibilities:
- Design, profile, and enhance our training and inference framework.
- Work collaboratively with ML teams to expedite their research and development processes, empowering them to create next-generation models and data curation strategies.
- Investigate and incorporate cutting-edge technologies to refine our ML system.

Preferred Qualifications:
- A strong enthusiasm for system optimization.
- Hands-on experience with multi-node LLM training and inference.
- Proven experience in developing large-scale distributed ML systems.
- Robust software engineering capabilities, with proficiency in frameworks and tools such as CUDA, PyTorch, Transformers, and Flash Attention.
- Excellent written and verbal communication skills and the ability to thrive in a cross-functional team environment.

Desirable Skills:
- Demonstrated expertise in post-training methodologies and/or innovative use cases for large language models, including instruction tuning, RLHF (Reinforcement Learning from Human Feedback), tool usage, reasoning, agents, and multimodal applications.
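Profiling work of the kind the first responsibility describes often begins with simple stage-level timing of the training loop. As an illustrative sketch only (this is not Scale's tooling; the stage names and timings are hypothetical):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class StageTimer:
    """Accumulates wall-clock time per named stage of a training step."""

    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def stage(self, name):
        # Time the enclosed block and add it to the stage's running total.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start

    def summary(self):
        # Fraction of measured wall-clock time spent in each stage.
        total = sum(self.totals.values()) or 1.0
        return {name: t / total for name, t in self.totals.items()}

# Usage: wrap the phases of a (stand-in) training loop.
timer = StageTimer()
for _ in range(3):
    with timer.stage("data_loading"):
        time.sleep(0.001)   # placeholder for batch loading
    with timer.stage("forward_backward"):
        time.sleep(0.002)   # placeholder for compute
```

In a real framework the sleeps would be replaced by data loading and model execution, and the summary would point at which stage to optimize first.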
About the job
Join Scale AI's ML platform team (RLXF) as a Machine Learning Research Engineer, where you will play a pivotal role in developing our advanced distributed framework for training and inference of large language models. This platform is vital for enabling machine learning engineers, researchers, data scientists, and operators to conduct rapid and automated training, as well as evaluation of LLMs and data quality.
At Scale, we occupy a unique position in the AI landscape, serving as an essential provider of training and evaluation data along with comprehensive solutions for the entire ML lifecycle. You will collaborate closely with Scale's ML teams and researchers to enhance the foundational platform that underpins our ML research and development initiatives. Your contributions will be crucial in optimizing the platform to support the next generation of LLM training, inference, and data curation.
If you are passionate about driving the future of AI through groundbreaking innovations, we want to hear from you!
About Scale AI, Inc.
Scale AI is a leader in the AI sector, providing indispensable training and evaluation data as well as comprehensive end-to-end solutions for the machine learning lifecycle. Our platform empowers researchers and engineers to push the boundaries of AI technology.
Similar jobs
About Sesame
At Sesame, we envision a future where technology feels alive, capable of seeing, hearing, and interacting with us in a manner that is both intuitive and human. We are pioneering a new generation of computers that seamlessly integrate voice agents into everyday life. Our talented team comprises industry trailblazers from Oculus, Ubiquity6, Meta, Google, and Apple, bringing a wealth of expertise in both hardware and software development. Join us to redefine the interaction between humans and machines.

About the Role
We are on the lookout for an innovative Embedded Machine Learning Engineer specializing in gesture recognition to facilitate rich, dependable interactions on wearable devices. The ideal candidate will thrive in a dynamic environment and be adept at transforming concepts from ideation to tangible products that users can experience. Collaborating closely with our hardware, firmware, and product teams, you will ensure that user interactions remain fluid and consistent across various settings.

Key Responsibilities:
- Design, implement, and deploy algorithms for gesture detection optimized for ultra-low-power embedded hardware.
- Adapt and refine larger machine learning models for deployment on mobile-class devices.
- Oversee the entire development lifecycle: system design, data gathering and curation, synthetic data generation, model training and assessment, and on-device optimization.
- Collaborate with electrical, mechanical, and product teams to integrate algorithms into evolving hardware frameworks.
- Research and select promising methodologies from existing literature, and innovate new approaches when necessary to achieve our distinctive objectives.

Essential Qualifications:
- A minimum of 10 years of experience in Software Engineering, Machine Learning Research, or a related field.
- Demonstrated ability to operate with high autonomy in ambiguous environments.
- Proven track record of developing and deploying machine learning algorithms on embedded or resource-constrained devices.
- Strong proficiency in Python and C/C++, with experience in frameworks such as PyTorch or TensorFlow.
- Hands-on experience with end-to-end machine learning workflows, from data acquisition to on-device deployment.
- A solid understanding of signal processing and/or time-series analysis methods for sensor data.
- Exceptional communication skills, with the ability to articulate complex ideas clearly.
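As a toy illustration of the kind of signal processing the qualifications mention (this is not Sesame's pipeline; the data, window size, and threshold are made up), a smoothed-threshold detector over accelerometer magnitudes:

```python
def moving_average(xs, k=3):
    """Simple k-sample moving average to suppress sensor noise."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def detect_tap(accel_mag, threshold=2.0, k=3):
    """Flags samples where the smoothed acceleration magnitude
    exceeds a threshold: a stand-in for a tap/shake gesture."""
    smoothed = moving_average(accel_mag, k)
    return [i for i, v in enumerate(smoothed) if v > threshold]

# Synthetic accelerometer magnitudes (g): baseline ~1.0 with a spike.
signal = [1.0, 1.1, 0.9, 1.0, 6.0, 5.5, 1.0, 0.9]
print(detect_tap(signal))  # -> [4, 5, 6, 7]
```

A production gesture model would replace the fixed threshold with a learned classifier, but the smoothing-then-decision structure is the same shape as many on-device pipelines.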
Senior Product Engineer (Full-Stack)

About embedding-vc
OpenArt, a pioneering platform at embedding-vc, utilizes AI to revolutionize storytelling and visual creation, serving millions across the globe. We are on a mission to develop the future of creative tools enhanced by advanced AI technology, enabling users to craft videos, visuals, characters, and narratives with unprecedented speed and ease.

Why Choose to Work with Us?
- Join a small, dynamic team where senior engineers take full ownership of significant systems from start to finish.
- Experience rapid deployment at scale: your contributions will impact millions.
- Be part of a founder-led engineering culture that values deep involvement in product and architectural decisions.
- Engage with an AI-native product that is at the forefront of transforming advanced models into tangible user experiences.
- Thrive in a low-process, high-trust environment where your judgment, clarity, and speed are paramount.

Your Role
We are seeking a Senior Product Engineer (Full-Stack) to take charge of intricate, user-facing systems from conception to execution. This multifaceted role requires proficiency across frontend UX, backend systems, APIs, and data models, allowing you to balance product speed, scalability, and long-term sustainability effectively. You will collaborate closely with the founders to design, construct, and refine core AI-driven workflows, acting both as a strong technical leader and a thoughtful product strategist.

Your Contributions
- Guide product features from initial concept and UX design through backend integration and successful deployment.
- Architect scalable backend systems, APIs, asynchronous pipelines, and AI inference workflows.
- Lead system design initiatives with a focus on performance, reliability, and scalability.
- Create clean, scalable data models across both SQL and NoSQL databases.
- Develop seamless frontend experiences utilizing React / Next.js.
- Make informed technical trade-offs and communicate them effectively.
- Work closely with Product, Design, and Go-to-Market teams.
- Mentor fellow engineers and elevate the overall engineering standards.
- Shape architecture, tooling, and engineering best practices.

Desired Qualifications
- Proven full-stack experience with a track record of shipping scalable production systems.
- Capacity to independently design and deliver complete features.
- Demonstrated ability to make pragmatic design decisions that balance speed and quality.
- Excellent communication and collaboration skills to work effectively with diverse teams.
- Strong problem-solving abilities and a passion for innovation in engineering.
About Runway ML
At Runway ML, we are revolutionizing the intersection of art and science through innovative AI technology. Our mission is to build sophisticated world models that transcend traditional artificial intelligence limitations. We believe that to tackle the most pressing challenges, such as robotics, disease, and scientific breakthroughs, we need systems that can learn from experiences just like humans do. By simulating these experiences, we can expedite progress in ways that were previously unimaginable.

Our diverse and driven team consists of creative thinkers who are passionate about pushing boundaries and achieving the extraordinary. If you share this ambition and are eager to contribute to our groundbreaking work, we invite you to join us.

About the Role
We are open to hiring remotely across North America. We also have offices in NYC, San Francisco, and Seattle.

We are on the lookout for a highly skilled and intellectually inquisitive Technical Accounting Manager to be our go-to authority on intricate accounting issues. This position offers significant visibility and is ideal for a professional adept at interpreting complex accounting guidelines, formulating sound conclusions, and translating technical insights into practical accounting practices.
Join Our Team at Air Apps
At Air Apps, we are on a mission to revolutionize resource management through innovative technology. Founded in 2018 in Lisbon, Portugal, we have expanded our reach with offices in both Lisbon and San Francisco, boasting over 100 million downloads globally. Our vision is to create the world's first AI-powered Personal & Entrepreneurial Resource Planner (PRP), and we are looking for passionate individuals to help us achieve this goal.

Our commitment to challenging the status quo drives us to push the boundaries of AI-driven solutions that make a real impact. Here, you will have the opportunity to be a creative force, developing products that empower individuals worldwide.

Join us as we embark on this journey to redefine how people plan, work, and live.
Role Overview
stok is looking for an Embedded Structural Engineer in San Francisco, CA. This role focuses on designing and developing structural solutions that prioritize both safety and sustainability. Work closely with a skilled team to deliver creative, effective designs that align with client requirements.

What You Will Do
- Contribute engineering expertise to structural design projects from concept through completion
- Collaborate with team members to develop solutions that meet project goals
- Ensure all designs uphold safety standards and support sustainability objectives
- Help deliver innovative results tailored to client needs
About Sygaldry Technologies
Sygaldry Technologies develops quantum-accelerated AI servers in San Francisco, focusing on faster AI training and inference. By combining quantum technology with artificial intelligence, the team addresses challenges in computing costs and energy efficiency. Their AI servers integrate multiple qubit types within a fault-tolerant system, aiming for a balance of cost, scalability, and speed. The company values optimism, rigor, and a drive to solve complex problems in physics, engineering, and AI.

Role Overview: ML Infrastructure Engineer
The ML Infrastructure Engineer joins the AI & Algorithms team, which includes research scientists, applied mathematicians, and quantum algorithm specialists. This role centers on building and maintaining the compute infrastructure that powers advanced research. The systems you build will support reliable GPU access, reproducible experiments, and scalable workloads, so researchers can focus on their core work without needing deep cloud expertise. Expect to design and manage compute platforms for a range of tasks, including quantum circuit simulation, large-scale numerical optimization, model training, tensor network contractions, and high-throughput data generation. These workloads span multiple cloud providers and on-premises GPU servers.

Key Responsibilities
- Develop compute abstractions for diverse workloads, such as GPU-accelerated simulations, distributed training, high-throughput CPU jobs, and interactive analyses using frameworks like PyTorch and JAX.
- Set up infrastructure to support experiment tracking and reproducibility.
- Create developer tools that make cloud computing feel local, streamlining environment setup, job submission, monitoring, and artifact management.
- Scale experiments from single-GPU prototypes to large, multi-node production runs.

Multi-Cloud GPU Orchestration
- Design orchestration strategies for workloads across multiple cloud providers, optimizing job routing for cost, availability, and capability.
- Monitor and improve cloud spending, keeping track of credit balances, burn rates, and expiration dates.
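Cost- and availability-aware job routing of the kind described above can be sketched minimally. The provider records and field names below are hypothetical, not Sygaldry's systems:

```python
def route_job(job_gpus, providers):
    """Pick the cheapest provider that has enough free GPUs for the
    job. Returns the provider name, or None if nothing has capacity."""
    candidates = [p for p in providers if p["available_gpus"] >= job_gpus]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["usd_per_gpu_hour"])["name"]

# Hypothetical provider inventory.
providers = [
    {"name": "cloud_a", "available_gpus": 8,  "usd_per_gpu_hour": 2.5},
    {"name": "cloud_b", "available_gpus": 64, "usd_per_gpu_hour": 1.9},
    {"name": "on_prem", "available_gpus": 4,  "usd_per_gpu_hour": 0.6},
]
print(route_job(2, providers))   # -> on_prem (cheapest with capacity)
print(route_job(16, providers))  # -> cloud_b (only one with 16 free GPUs)
```

A real orchestrator would also weigh interconnect capability, data locality, spot-preemption risk, and expiring credit balances, but the core decision is this kind of constrained cost minimization.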
About Sesame
At Sesame, we envision a future where computers exhibit lifelike characteristics, capable of seeing, hearing, and interacting with us in a manner that feels intuitive and human. We are at the forefront of this vision, developing innovative computing solutions that integrate voice agents into everyday life. Our team is composed of visionaries from Oculus and Ubiquity6, along with seasoned professionals from Meta, Google, and Apple, each bringing extensive expertise in hardware and software. Join us on our journey to create a world where technology truly comes alive.

About the Role
We are looking for a skilled Embedded Engineer who thrives in dynamic environments to help launch an advanced consumer electronics product. This role demands rapid firmware development across diverse hardware settings that are intricately connected to various system components. You will leverage your experience shipping wearable consumer products from the initial prototype phase through product launch, while effectively coordinating a distributed team to assemble, utilize, and troubleshoot custom embedded systems.

Responsibilities:
- Architect, design, implement, and validate embedded software for a range of platforms, from energy-efficient MCUs to mobile System on Chips (SoCs).
- Collaborate on hardware design; facilitate bring-up and debugging through component selection, interface definition, driver creation, and tool development.
- Define, enhance, and maintain firmware development, debugging, and Continuous Integration (CI) workflows and environments.
- Develop software for interfacing embedded systems with PCs or network devices for data collection, testing, and calibration.
- Understand the requirements and architecture of higher-level software components, leading their integration and optimization for embedded systems interaction.
- Lead and guide external partners to enhance embedded systems development capabilities.

Required Qualifications:
- Proven ability to work autonomously in high-ambiguity settings.
- Over 10 years of experience developing and delivering software for intricate embedded systems.
- Expertise in C/C++, Python, and firmware build and debugging tools.
- Track record of shipping and maintaining a complex product, including custom sensor integration.
About Sciforium
At Sciforium, we are at the forefront of AI infrastructure, dedicated to developing advanced multimodal AI models and an innovative serving platform that emphasizes high efficiency. With substantial funding and direct collaboration with AMD, our team is rapidly expanding to build the complete stack for pioneering AI models and dynamic real-time applications.

Role Overview
This position provides a distinct opportunity to engage with the fundamental systems that drive Sciforium's multimodal AI models. You will play a crucial role in constructing the model serving platform, working with C++, Python, runtime execution, and distributed infrastructure to design a fast, dependable engine for real-time AI applications.

You will acquire practical experience in performance engineering, discover how large AI models are optimized and deployed at scale, and collaborate closely with ML researchers and seasoned systems engineers. If you thrive in low-level programming and are passionate about performance, this role offers both impactful contributions and significant growth opportunities.
Full-time|$155.6K/yr - $320.3K/yr|Remote|San Francisco, CA, US; Remote, US
About tvScientific
tvScientific is the premier CTV advertising platform exclusively tailored for performance marketers. Our innovative approach harnesses vast data and state-of-the-art science to automate and enhance TV advertising, ultimately driving impactful business results. Our platform seamlessly integrates media buying, optimization, measurement, and attribution into one powerful, efficient solution. Developed by industry veterans with extensive backgrounds in programmatic advertising, digital media, and ad verification, our CTV performance platform is designed to help advertisers confidently scale their business.

We are currently seeking a Senior MLOps Engineer to join our dynamic, distributed engineering team focused on our Connected TV ad-buying platform, as we expand our Machine Learning capabilities. Having successfully optimized TV ad campaigns, we are poised for massive growth, and we need your expertise to ensure our scalability is both sustainable and effective.

As a proud member of Idealab, tvScientific was co-founded by leaders deeply rooted in programmatic advertising and digital media. We empower our clients to purchase ads across the expansive CTV landscape, including platforms such as Hulu, PlutoTV, and the ad-supported tiers of Disney+ and HBO Max. Following our acquisition by Pinterest, we are intensifying our focus on CTV to enhance the performance of search and social advertising.
About Us
At Speak, our mission is to revolutionize language learning. Learning a new language can transform lives by unlocking opportunities in diverse cultures, careers, and communities. With over two billion individuals around the globe striving to learn a language, we recognize that traditional one-on-one tutoring remains difficult to access at scale and has seen little innovation over recent decades. Speak is pioneering an AI-driven, human-level tutor accessible right from your pocket, providing a conversation-first experience where learners can practice speaking, receive immediate feedback, and progress through meticulously crafted lessons. Our goal is to facilitate a comprehensive journey from beginner to proficient speaker across various languages.

Launched in South Korea in 2019, Speak has quickly become the leading language learning app in the region, now reaching learners across numerous markets and offering instruction in 15+ languages. Supported by over $150 million in venture capital from prestigious investors such as OpenAI, Accel, Founders Fund, and Khosla Ventures, our team is distributed across San Francisco, Seoul, Tokyo, Taipei, and Ljubljana.

Role Overview
We are seeking a skilled Machine Learning Engineer specializing in speech to join our innovative team. In this role, you will take charge of the entire modeling pipeline for speech recognition, encompassing training, experimentation, deployment, and ongoing monitoring. Collaborating closely with Product teams, you will design cutting-edge learning experiences and assess the effectiveness of production models on our users. As part of a nimble and dynamic team, you'll contribute as both a developer and a thought partner on projects related to ASR, assessments, pronunciation improvements, content personalization, and more. This is an exhilarating opportunity to be part of an ML team focused on crafting personalized learning experiences that will transform language education for millions worldwide.
Full-time|$118K/yr - $148K/yr|Remote|San Francisco, California, USA
At New Relic, we are a global collective of innovators and pioneers committed to transforming the landscape of observability. We develop an intelligent platform that equips businesses to excel in an AI-driven world by providing them with unmatched insights into their intricate systems. As we broaden our international presence, we seek passionate individuals to join our mission. If you're eager to assist the world's leading organizations in optimizing their digital applications, we invite you to consider a career with us!

Your Opportunity
Are you seeking a high-impact finance position that offers genuine flexibility? We are on the lookout for a Lead Revenue Accountant to become a vital part of our team in a fully remote capacity. In this essential role, you won't just be managing spreadsheets; you will partner closely with our Revenue Accounting Senior Manager to establish and enhance our revenue processes from the ground up. This is your opportunity to leverage your technical expertise in a dynamic environment where your efforts directly impact our financial reporting and growth. If you thrive on independence and prefer to focus on meaningful work without the daily commute, this role is perfect for you.

What You'll Do:
- Lead the Process: Assume full responsibility for the revenue-related month-end close, ensuring our financial reports are accurate, timely, and transparent.
- Technical Consultation: Analyze customer contracts and non-standard terms to determine appropriate revenue treatment, delivering clear and concise accounting documentation.
- Financial Integrity: Prepare and reconcile vital revenue schedules, including deferred revenue and contract assets/liabilities, while upholding a high standard of accuracy.
- Process Innovation: Promote continuous improvement by optimizing our revenue recognition processes.
About Us
At Lemurian Labs, we are dedicated to democratizing AI technology while prioritizing sustainability. Our mission is to create solutions that minimize environmental impact, ensuring that artificial intelligence serves humanity positively. We are committed to responsible innovation and the sustainable growth of AI.

We are developing a state-of-the-art, portable compiler that empowers developers to 'build once, deploy anywhere.' This technology ensures seamless cross-platform integration, allowing for model training in the cloud and deployment at the edge, all while maximizing resource efficiency and scalability.

If you are passionate about scaling AI sustainably and are eager to make AI development more powerful and accessible, we invite you to join our team at Lemurian Labs. Together, we can build a future that is innovative and responsible.

The Role
We are seeking a Senior ML Performance Engineer to take charge of designing and leading our Performance Testing Platform from inception. In this pivotal role, you will be recognized as the technical expert in measuring, validating, and enhancing the performance of large language models (including Llama 3.2 70B, DeepSeek, and others) before and after compiler optimization on cutting-edge GPU architectures.

This is a critical position that will significantly impact our product quality and customer success. You will work at the intersection of machine learning systems, GPU architecture, and performance engineering, constructing the infrastructure that substantiates the value of our compiler.
Embedded Software Engineer - Embedded Systems & Firmware

Company Overview:
At Specter, we are pioneering a software-centric control plane for tangible assets, focusing initially on safeguarding American enterprises by providing comprehensive oversight of their physical resources.

We are developing an integrated hardware-software ecosystem leveraging cutting-edge multi-modal wireless mesh sensing technology, which reduces the cost and timeline of sensor deployment by 10-fold. Our platform aspires to be the perception engine for a company's physical presence, facilitating real-time visibility of perimeters, autonomous management of operations, and the creation of digital twins for physical processes.

Led by our passionate co-founders Xerxes and Philip, our small but dynamic team hails from notable organizations such as Anduril, Tesla, Uber, and the U.S. Special Forces, committed to empowering partners in the rapidly evolving fields of physical AI and robotics.

Role Overview:
We are looking for an Embedded Software Engineer to take charge of the complete on-device software stack for our distributed wireless mesh sensor nodes, including the integration of RF modules, cameras, and multi-modal sensors.

Key Responsibilities:
- Design, implement, and maintain high-performance, reliable firmware and software for Specter's existing and future edge devices, encompassing a variety of embedded platforms (embedded Linux on SoCs, RTOS, bare-metal on microcontrollers).
- Lead the integration of RF modules, cameras, and multi-modal sensors (e.g., environmental, motion, acoustic) within the embedded software stack, including driver development, data pipelines, and hardware enablement.
- Conduct board bring-up, interpret datasheets/schematics, and troubleshoot complex hardware/software interactions using oscilloscopes, logic analyzers, JTAG/SWD, and other diagnostic tools.
- Work closely with Hardware Engineering (EE, RF, ME), Product Engineering, and backend software teams to collaboratively design interfaces, support new hardware platforms, and facilitate rapid prototyping and iteration from concept to production deployment.
- Build and maintain documentation for the embedded software development lifecycle.
Full-time|$308K/yr - $423.5K/yr|On-site|San Francisco, CA
About Faire
Faire is a cutting-edge online wholesale marketplace driven by the belief that the future is local. Independent retailers around the world generate more revenue than giants like Walmart and Amazon combined, yet individually they often struggle against these behemoths. At Faire, we harness the power of technology, data, and machine learning to connect this vibrant community of entrepreneurs globally. Imagine your favorite local boutique: we empower them to discover and sell exceptional products from around the world. With the right tools and insights, we aim to level the playing field, allowing small businesses to compete effectively with large retail chains and e-commerce platforms.

By fostering the growth of independent businesses, Faire is making a positive economic impact in local communities worldwide. We're in search of intelligent, resourceful, and passionate individuals to join us in driving the shop-local movement. If you share our belief in community, we would love to welcome you to ours.

About this Role:
We are looking for a Principal ML / AI Engineer to serve as a company-wide technical thought leader and practitioner in shaping the future of Data and AI at Faire. This unique opportunity allows you to influence broad technical strategy across data, engineering, and product while engaging directly with pioneering AI research and applications. This role reports directly to the CTO of Faire.

Your Responsibilities:
- Shape the AI Vision: Collaborate with product, design, strategy & analytics, machine learning, and the wider engineering leadership to define how AI can unlock transformational value for Faire's retailers and brands. Provide thought leadership to guide company-wide priorities, particularly product strategy and key investment areas.
- Prototype and Unblock: Lead the development and implementation of AI systems (such as LLM fine-tuning, RLHF, and agent frameworks) that illustrate what's achievable and promote adoption across teams. Act as a "super individual contributor" who can delve deeply into technical challenges, enabling the engineering organization to advance quickly with AI and amplify both development and impact.
- Architect the AI-Ready Stack: Design Faire's technical ecosystem, encompassing event logging, data warehouses, feature stores, and model serving, to ensure our infrastructure is AI-ready, scalable, and optimized for rapid experimentation.
Full-time|$159.8K/yr - $235K/yr|On-site|San Francisco, CA
Join DoorDash Labs, the innovation center driving automation and robotics for last-mile logistics. As a Senior/Staff Embedded Software Engineer, you'll spearhead the development of ARM-based microcontroller platforms for our cutting-edge robotics products. This role is essential for crafting the low-level firmware that powers our systems, focusing on motion control, sensor integration, communication, power management, and safety-critical functionalities. Expect to engage in hands-on board bring-up and low-level debugging while collaborating closely with electrical and systems engineers to create robust solutions that enhance efficiency for Dashers, merchants, and consumers.
About Our Team
At OpenAI, our Hardware organization is pioneering the development of cutting-edge silicon and system-level solutions tailored to meet the distinctive needs of advanced AI workloads. We are dedicated to building the next generation of AI silicon, collaborating closely with software engineers and research partners to co-design hardware that integrates seamlessly with our AI models. Our mission includes not only delivering high-quality, production-grade silicon for OpenAI's supercomputing infrastructure but also creating custom design tools and methodologies that foster innovation and enable hardware optimized specifically for AI applications.

About the Role
We are looking for a talented Research Hardware Co-Design Engineer to operate at the intersection of model research and silicon/system architecture. In this role, you will play a critical part in shaping the numerics, architecture, and technology strategy for the future of OpenAI's silicon in collaboration with both Research and Hardware teams.

Your responsibilities will include diagnosing discrepancies between theoretical performance and real-world measurements, writing quantization kernels, assessing the risks associated with numerics through model evaluations, quantifying system architecture trade-offs, and implementing innovative numeric RTL. This is a hands-on position for individuals who are passionate about tackling challenging problems, seeking practical solutions, and driving them to production. Strong prioritization and transparent communication skills are vital for success in this role.

Location: San Francisco, CA (Hybrid: 3 days/week onsite). Relocation assistance available.

Key Responsibilities:
- Enhance our roofline simulator to monitor evolving workloads and deliver analyses that quantify the impact of architectural decisions, supporting technology exploration.
- Identify and resolve discrepancies between performance simulations and actual measurements; effectively communicate root causes, bottlenecks, and incorrect assumptions.
- Develop emulation kernels for low-precision numerics and lossy compression techniques, equipping Research with the insights needed to balance efficiency with model quality.
- Prototype numeric modules by advancing RTL through synthesis; either hand off innovative numeric solutions cleanly or occasionally take ownership of an RTL module from start to finish.
- Proactively engage with new ML workloads, prototype them using rooflines and/or functional simulations, and initiate evaluations of new opportunities or risks.
- Gain a holistic understanding of the transition from ML science to hardware optimization, breaking this comprehensive objective into actionable short-term deliverables.
- Foster collaborative relationships across diverse teams with varying goals and expertise, ensuring that progress remains unimpeded.
- Clearly articulate design trade-offs with explicit assumptions and rationale.
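The roofline analyses mentioned above rest on a one-line model: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch with illustrative numbers only (not any specific chip):

```python
def roofline_gflops(intensity_flops_per_byte, peak_gflops, bw_gb_per_s):
    """Attainable throughput under the roofline model: limited by memory
    bandwidth below the ridge point, by peak compute above it."""
    return min(peak_gflops, bw_gb_per_s * intensity_flops_per_byte)

# Illustrative machine: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
peak, bw = 1000.0, 100.0
ridge = peak / bw  # intensity (FLOPs/byte) where the two limits meet: 10.0

print(roofline_gflops(2.0, peak, bw))   # memory-bound kernel -> 200.0 GFLOP/s
print(roofline_gflops(50.0, peak, bw))  # compute-bound kernel -> 1000.0 GFLOP/s
```

A production roofline simulator layers real cache hierarchies, low-precision compute units, and measured bandwidths on top of this, but discrepancies between simulation and measurement are usually diagnosed against exactly this ceiling.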
Empowering Innovation in Software Engineering
At TierZero, our mission is to revolutionize software engineering by enabling developers to focus on what they do best: building. Currently, engineers devote 50–70% of their time to operational tasks such as handling alerts, resolving incidents, answering queries, and monitoring deployments. We are developing cutting-edge AI agents designed to alleviate these burdens, unleashing creativity and accelerating development speed. The potential is enormous: a $1.5 trillion productivity market ripe for the taking.

Why Join Us?
- Innovative Team: Our founding team comprises trailblazers in AI technology, seasoned engineers who played pivotal roles in shaping foundational systems at Facebook, Databricks, LangChain, and Brex. Our CEO has two successful ventures behind him and previously managed cloud infrastructure for Pokémon GO, while our CTO was instrumental in Databricks' AI model serving and agent initiatives that generated over $300 million in revenue.
- Transformative Clients: We collaborate with numerous enterprise engineering teams that have experienced significant improvements in reliability and productivity, many of whom eagerly serve as testimonials for our impact.
- Groundbreaking Product: We've developed a versatile agentic platform that integrates code, infrastructure, telemetry, and institutional knowledge. This holistic approach equips specialized AI agents for a variety of tasks, from incident analysis to engineering support. We take pride in offering the industry's only 95% accuracy guarantee on agent outputs.
- Strong Financial Backing: Our endeavors are supported by esteemed investors such as Accel and SV Angel, early backers of prominent companies like Facebook, Atlassian, Slack, and Stripe.

About Our Solutions
While modern AI tools have simplified coding, overall engineering timelines haven't shortened correspondingly.
This lag is primarily due to alerts, incidents, support requests, deployments, performance challenges, and the increasing operational complexity of contemporary systems. TierZero addresses these issues by implementing self-learning AI agents that autonomously manage operational workflows from start to finish, reclaiming invaluable time for engineers and enhancing delivery efficiency.

- Incident Management: TierZero autonomously conducts impact assessments and root cause analyses utilizing existing telemetry, infrastructure, codebases, and documentation, proposing and executing code changes, rollbacks, and other remedial actions.
- Engineering Support: Beyond merely addressing common queries, TierZero delves deeper by investigating the root causes...
About Us:
At Parafin, our mission is to empower small businesses to thrive in today's competitive landscape. We understand that small businesses form the backbone of our economy, yet they often face challenges in accessing essential financial resources. Our innovative technology streamlines access to vital financial tools directly on the platforms they already utilize for sales. Partnering with industry leaders such as DoorDash, Amazon, Worldpay, and Mindbody, we provide small businesses with fast, flexible funding, efficient spend management, and effective savings solutions through simple integrations. Parafin manages the complexities of capital markets, underwriting, servicing, compliance, and customer support to ensure seamless experiences for our partners and their small business clients.

We are a dynamic team of innovators with backgrounds from top firms like Stripe, Square, Plaid, Coinbase, Robinhood, and CERN, all driven by a passion for developing tools that facilitate small business success. Backed by esteemed venture capitalists including GIC, Notable Capital, Redpoint Ventures, Ribbit Capital, and Thrive Capital, Parafin stands as a Series C company with over $194M raised in equity and $340M in debt facilities. Join us in shaping a future where every small business has access to the financial tools it needs.

About The Position
We are looking for a skilled Software Engineer to join our Infrastructure team and spearhead the advancement of our Machine Learning (ML) Platform. This pivotal role is essential for constructing reliable, scalable, and developer-centric systems for model experimentation, training, evaluation, inference, and retraining that drive underwriting and other ML-powered products for small businesses.

As a Software Engineer, you will design, build, and maintain the core frameworks and platforms that empower data scientists to deploy high-quality models into production efficiently and safely.
You'll work closely with Data Science and Platform Engineering, taking ownership of the ML platform end to end, and develop both batch and real-time underwriting infrastructure.

What You'll Do
- Transform notebooks into reliable software. Break down data scientists' training and inference notebooks into reusable, well-tested components (libraries, pipelines, templates) with clear interfaces and documentation.
- Develop user-friendly ML abstractions. Create SDKs, CLIs, and templates that simplify the definition of features, model training and evaluation, and deployment to batch or real-time targets with minimal boilerplate.
- Construct our real-time ML inference platform. Establish and scale low-latency model serving capabilities.
- Enhance batch ML inference processes. Optimize scheduling, parallelism, cost controls, and observability to improve efficiency.
Role Overview
Join BrightAI as a Senior/Staff Embedded Linux Engineer, where you will play a critical role in enhancing, maintaining, and evolving our Yocto-based embedded Linux distribution used in production on bespoke hardware. This hands-on position emphasizes improving platform reliability, maintainability, and scalability as our products and company expand. You will collaborate closely with hardware, firmware, and application teams to support new hardware revisions, enhance system performance, and troubleshoot intricate system-level challenges. Additionally, you will provide technical leadership and influence the ongoing development of our embedded Linux platform.
Feb 11, 2026