About AfterQuery
AfterQuery builds training data and evaluation frameworks used by leading AI labs around the world. The team partners with advanced research groups to create high-quality datasets and run detailed evaluations that go beyond standard benchmarks. As a small, post-Series A company based in San Francisco, every team member plays a key role in shaping how future AI models learn and improve.
Role Overview
The Post-Training Research Scientist is responsible for demonstrating the impact of AfterQuery's datasets. This work involves designing and running training experiments that isolate how specific data influences model performance. Projects span Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) post-training, with an emphasis on measuring effects on capability, generalization, and alignment. Working closely with partner labs, the scientist turns data into clear, verifiable results: showing exactly how a dataset leads to measurable improvements under defined conditions. The work is experimental and directly shapes the value of AfterQuery's products.
What You Will Do
- Run controlled SFT and RL experiments to measure how datasets affect model outcomes.
- Quantify gains in areas like reasoning, tool use, long-horizon tasks, and specialized workflows.
- Share findings with partner labs to support sales and demonstrate value.
- Work with internal subject matter experts to improve data quality based on experimental results.
What We Look For
- Strong background in LLM training and evaluation methods.
- Curiosity about how data structure, selection, and quality shape model behavior.
- Skill in designing experiments, executing quickly, and drawing practical insights from complex results.
- Comfort working across domains such as finance, software engineering, and policy.
- Focus on real-world implementation, not just theory.
- Research experience at the undergraduate or master's level is preferred; a PhD is not required.
Compensation
$250,000 - $450,000 total compensation, plus equity

