Research Engineer / Scientist, Alignment Science

Anthropic · San Francisco, CA
Full-time · Mid-level

About this role

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer.

As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.

Our blog provides an overview of topics that the Alignment Science team is currently exploring or has previously explored. Our current areas of focus include:

- Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
- AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
- Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
- Automated Alignment Research: Building and aligning a system that can speed up and improve alignment research.
- Alignment Assessments: Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.
- Safeguards Research: Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.
- Model Welfare: Investigating and addressing potential model welfare, moral status, and related questions. See our program announcement and the welfare assessment in the Claude 4 system card for more.

Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.

Representative projects

- Test the robustness of our safety techniques by training language models to subvert them, and measure how effective these attacks are against our interventions.
- Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
- Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
- Write scripts and prompts to efficiently produce evaluation questions that test models’ reasoning abilities in safety-relevant contexts.
- Contribute ideas, figures, and writing to research papers, blog posts, and talks.
- Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.
You may be a good fit if you:

- Have significant software, ML, or research engineering experience
- Have some experience contributing to empirical AI research projects
- Have some familiarity with technical AI safety research
- Prefer fast-moving collaborative projects to extensive solo efforts
- Pick up slack, even if it goes outside your job description
- Care about the impacts of AI

Strong candidates may also:

- Have experience authoring research papers in machine learning, NLP, or AI safety
- Have experience with LLMs
- Have experience with reinforcement learning
- Have experience with Kubernetes clusters and complex shared codebases

Candidates need not have:

- 100% of the skills needed to perform the job
- Formal certifications or education credentials

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary: $350,000 – $500,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.