
Security-Focused Software Engineer at Thinking Machines Lab | San Francisco

On-site · Full-time · $350K/yr–$475K/yr


Experience Level

Entry Level

Qualifications

Minimum Qualifications:

  • Bachelor’s degree or equivalent experience in computer science, engineering, or a related field.
  • Proficiency in at least one backend programming language (Python or Rust preferred).
  • Strong foundation in software engineering principles with a security-oriented mindset.

About the job

At Thinking Machines Lab, our mission is to enhance human capabilities through the development of collaborative general intelligence. We are dedicated to creating a future where everyone can utilize AI tailored to their specific needs and aspirations.

Our team consists of accomplished scientists, engineers, and innovators responsible for some of the most popular AI applications, including ChatGPT and Character.ai, along with renowned open-weight models like Mistral and influential open-source projects such as PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We are seeking a Software Engineer with a security focus to ensure our products are secure by design while enabling rapid, ambitious product development. You will work closely with product and research teams to build security into design and development, and create tools and automation that keep our systems safe at scale.

Note: This is an ongoing opportunity, and we encourage you to express your interest. We receive many applications and may not always have an immediate match for your skills, but we review applications regularly and will reach out as new roles become available. You may reapply if you gain additional experience; please limit applications to once every six months. We also post roles for specific projects or teams, and you are welcome to apply for those as well.

What You’ll Do

  • Collaborate with product and research teams to integrate security into the development lifecycle: threat modeling, design reviews, and establishing secure defaults for new features.
  • Design and implement security controls throughout our product stack (authentication, authorization, session management, input validation, etc.).
  • Create and maintain security tooling and automation for engineers: secure frameworks and templates, CI/CD checks, dependency management, and vulnerability detection.
  • Work alongside researchers to identify and address AI-specific product risks, such as model abuse, prompt injection, data leakage, or misuse of capabilities.
  • Enhance observability and detection for security-related events: access anomalies, abuse patterns, and suspicious behavior in production.

About Thinking Machines Lab

Thinking Machines Lab is at the forefront of AI innovation, empowering individuals and organizations by providing access to cutting-edge AI technologies. Our diverse team is dedicated to creating impactful solutions that drive progress and enhance capabilities across various sectors.
