About the job
This is a hybrid position based in our Bay Area (SF or Palo Alto) or Chicago offices, requiring in-office attendance on Tuesdays and Thursdays.
Why Join Us?
At Grindr, we believe AI has the potential to transform the dating landscape. As a Staff MLOps Engineer, you will be pivotal in developing and managing the infrastructure, tools, and scalable systems that power impactful AI initiatives. You will architect and maintain the platforms for data ingestion, feature computation, model training, automated evaluation, deployment, and continuous monitoring that serve the machine learning teams behind our recommendations, LLM-powered experiences, advertising, visual search, and growth, trust, and safety systems. You will design the foundational systems that enable our ML engineers to innovate swiftly, deploy models reliably, and operate them confidently in production.
What You Will Do:
We are seeking an exceptional MLOps engineer who is passionate about enabling ML at scale: our platform serves more than 6 million daily active users and hundreds of millions of daily interactions. You will thrive in building robust, automated pipelines, developing reliable production training and inference systems, and establishing the infrastructure and processes that accelerate ML product development across the organization.
In this role, you will define and implement the strategy for Grindr’s ML platform and oversee the end-to-end model lifecycle.
Key Responsibilities:
- Build and operate end-to-end ML pipelines for data ingestion, feature computation, model training, validation, deployment, and inference at substantial data scale.
- Establish and oversee a feature store, ensuring feature consistency, lineage, and reuse across teams.
- Evaluate and adopt best-in-class tools for deployment, scheduling, and environment management across the ML infrastructure stack.
- Develop automated model deployment workflows that incorporate CI/CD, safe rollout strategies (such as canary and staged rollouts), and reproducibility guarantees.
- Implement monitoring and observability solutions for ML systems, covering data quality checks, drift detection, performance metrics, and alerting mechanisms.
- Build and support training environments featuring experiment tracking, distributed training, hyperparameter tuning, and artifact and environment management.
- Collaborate with ML engineers and data engineers to streamline workflows, enhance model iteration speed, and enforce MLOps best practices.

