About the job
Join Perplexity in our mission to redefine the future of AI-powered search and agent experiences! We are looking for exceptional AI Research Scientists and Engineers who are eager to push the boundaries of technology. Our products, including the Sonar models, Deep Research Agent, Comet Agent, and advanced search tools, handle hundreds of millions of queries and are scaling rapidly. If you are passionate about cutting-edge AI and want to help build state-of-the-art products, we would love to hear from you!
Team Structure
Depending on your interests and expertise, you will have the opportunity to collaborate within one of three specialized teams:
1. Core Research Team
This team focuses on training and enhancing the foundation models that underpin all our products, emphasizing core model capabilities and building infrastructure that serves the entire organization.
2. Agent Products Team
This team specializes in fine-tuning and optimizing models specifically for our Deep Research Agent and Labs/Canvas products, ensuring that our agent functionalities provide exceptional user experiences.
3. Comet Agent Team
Dedicated to the development and refinement of the Comet Agent product, this team addresses the unique requirements and optimizations necessary for Comet's specific use cases.
Responsibilities
Research & Development
Post-train state-of-the-art large language models (LLMs) using cutting-edge supervised and reinforcement learning methodologies (supervised fine-tuning, DPO, GRPO).
Utilize our comprehensive query-answer dataset to scale model performance across our Sonar, Deep Research, Comet, and Search products.
Stay abreast of the latest advances in LLM research, focusing on model training, optimization, and personalization techniques.
Implement preference optimization and personalization features to elevate user experience.
Innovate and develop in-house enhancements and optimizations for state-of-the-art models.
Translate research ideas into algorithms and conduct experiments to deploy new models.
Infrastructure & Implementation
Oversee the full-stack data, training, and evaluation pipelines that are essential for model development.
Create robust and efficient training frameworks (based on Megatron/PyTorch) for post-training LLMs.
Establish necessary infrastructure components to support cutting-edge model development.