
Senior Research Engineer at the Center for AI Safety | San Francisco

Center for AI Safety · San Francisco, CA
On-site · Full-time




Experience Level

Senior

Qualifications

Successful candidates will have a strong background in machine learning and sound engineering practices. Experience with large language models and a deep understanding of AI safety are essential. Strong analytical skills, the ability to work independently, and effective collaboration with diverse teams are highly valued.

About the job

The Center for AI Safety (CAIS) is at the forefront of research and advocacy dedicated to addressing the societal-scale challenges posed by artificial intelligence. Our mission is to mitigate the risks associated with AI through innovative technical research, initiatives to foster the field, and strategic policy engagement. Together with our sister organization, the Center for AI Safety Action Fund, we tackle some of the most pressing issues in AI today.
 
In the role of Senior Research Engineer, you will immerse yourself in the dynamic intersection of pioneering machine learning research and dependable engineering practices. You will own research projects from inception to publication, working autonomously with guidance from an advisor. Your responsibilities include designing and conducting experiments on large language models, developing the necessary tools for large-scale model training and evaluation, and transforming findings into research publications. You will collaborate closely with CAIS researchers, as well as external academic and commercial partners, utilizing our compute cluster for extensive training and evaluation. Your work will cover critical areas such as AI honesty, robustness, transparency, and the investigation of trojan/backdoor behaviors, all aimed at reducing the real-world risks posed by advanced AI systems.

About Center for AI Safety

The Center for AI Safety is a pioneering organization dedicated to researching and advocating for responsible AI. We focus on identifying and mitigating the risks AI poses to society, striving for a safer technological future.
