About the job
About Handshake
Handshake is the career network for the AI economy, connecting 20 million knowledge workers with 1,600 educational institutions and 1 million employers, including every Fortune 50 company. Trusted by the leading AI labs, Handshake supports career exploration, recruiting, and upskilling, from freelance AI projects to internships and full-time roles. Our growth has been rapid: we tripled our annual recurring revenue by 2025.
Why join Handshake now:
Play a pivotal role in shaping the future of careers in the AI economy, with visible, tangible impact on your community.
Collaborate directly with renowned AI labs, Fortune 500 partners, and leading educational institutions.
Join a diverse team featuring expertise from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir.
Contribute to building a rapidly growing enterprise with significant revenue potential.
The Role
As the Strategic Projects Lead, you will own large coding-data programs that run for weeks to months on behalf of top AI and platform teams. You will coordinate hundreds to thousands of Software Engineering Fellows, design and manage technical evaluation and annotation pipelines, and be accountable for delivery, profitability, quality, and client relationships.
You will design and validate coding assessments, build rubric-driven code review systems, track quality metrics, and adapt workflows quickly as requirements change. The role bridges machine learning, product, engineering, and operations, and calls for hands-on coding experience, fluency with metrics, and a strong commitment to data integrity. Key responsibilities include:
Managing multi-million dollar ARR-equivalent program scopes and delivery metrics.
Overseeing the complete lifecycle of coding data programs: scope definition, assessment design, fellow selection, annotation, quality assurance, and client feedback.
Designing and administering technical evaluations (take-home assignments, unit-test-driven tasks, live coding) to strengthen the Software Engineering Talent Bench.
Creating rubrics and audit processes to ensure that code annotations and model labels meet production quality standards.
Developing scripts and infrastructure in Python, TypeScript, and SQL to automate quality assurance, analytics, and reporting.