About the job
About METR
METR is a non-profit research organization dedicated to empirically assessing the risks that advanced AI models may pose to humanity. We work to improve civilization's understanding of the dangers of AI systems so that informed action can be taken. Learn more about our mission through our published talks: overall goals and recent updates.
Our Key Achievements:
Establishing Autonomous Replication Evaluations: Our pioneering work set the standard for testing the autonomous replication capabilities of AI models.
Pre-release Evaluations: Working with leading developers such as OpenAI and Anthropic, we have evaluated models prior to their release, and our findings have been widely cited by policymakers and AI research labs.
Inspiring Lab Evaluation Initiatives: Our research has motivated numerous top AI firms to establish internal evaluation teams.
Commitments from Leading Labs: The safety frameworks of major organizations, including Google DeepMind, OpenAI, and Anthropic, acknowledge our contributions to responsible AI scaling policies.
Our work has been recognized by reputable sources including the UK government and Time Magazine, among many others. Our strong relationships with labs, governments, and academic institutions help our insights inform real decisions.