About the job
About Stand Insurance
Stand Insurance is rethinking how property risks are understood and managed. By combining advanced physics with artificial intelligence, the team models catastrophic risks at the asset level and automates underwriting and risk mitigation before losses happen. Rather than simply selling insurance, Stand builds a scalable risk engine designed to deliver real-world impact and remain in markets others have exited.
Traditional property insurance often relies on outdated data and manual workflows, accepting damage as a given. Stand takes a different path: simulating real-world catastrophes for individual properties, turning those simulations into actionable steps, and automating operations around those insights. The result is a platform that can underwrite risks others avoid, while reducing operational friction.
Role Overview: Machine Learning Engineer – Data Pipeline
This role centers on building and maintaining the tools behind Stand’s data annotation pipeline. Areas of focus include computer vision, human-in-the-loop management, quality assurance, and economic optimization. The main goal: increase automation and lower cost-per-policy, while keeping quality high.
Early on, the work will involve hands-on management of the pipeline, quality checks, and close coordination with the annotation team. As the role matures, the focus will shift to developing advanced data science and machine learning systems, especially around quality instrumentation, automated QA, predictive labeling, and computer vision models. Over time, the role will grow into shaping a systems-driven, automation-focused framework for the entire annotation lifecycle.
Key Responsibilities
Pipeline Operations and Reliability
- Monitor and maintain the daily health of the annotation pipeline
- Set up escalation protocols and frameworks for categorizing failures
- Lead the transition from manual to automated operations
Quality Instrumentation
- Design validation systems that align with downstream model metrics
- Develop anomaly detection models for annotation workflows
- Automate tasks to cut down on manual QA effort
Vendor and Annotator Performance
- Define and track performance metrics for vendors and annotators
Location
San Francisco