Anthropic AI Safety Fellow
Full-time
Principal
About this role
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We’re accepting applications on a rolling basis for cohorts starting in July 2026 and beyond. Applications for the May 2026 cohort are now closed.
Anthropic Fellows Program Overview
The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent. We provide funding and mentorship to promising technical talent, regardless of previous experience, to research the frontier of AI safety for four months.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).
We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.
What to Expect
Direct mentorship from Anthropic researchers
Access to a shared workspace (in either Berkeley, California or London, UK)
Connection to the broader AI safety research community
Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD & access to benefits (benefits vary by country)
Funding for compute (~$15k/month) and other research expenses
Mentors, Research Areas, & Past Projects
Fellows will undergo a project selection & mentor matching process. Potential mentors include, among others:
Jan Leike
Sam Bowman
Sara Price
Alex Tamkin
Nina Panickssery
Trenton Bricken
Logan Graham
Jascha Sohl-Dickstein
Nicholas Carlini
Joe Benton
Collin Burns
Fabien Roger
Samuel Marks
Kyle Fish
Ethan Perez
Our mentors will lead projects in select AI safety research areas, such as:
Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.
On our Alignment Science and Frontier Red Team blogs, you can read about past projects, including:
AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data: Alex Cloud, Minh Le, et al., with mentors including Samuel Marks and Owain Evans
Open-source circuits: Michael Hanna and Mateusz Piotrowski, with mentorship from Emmanuel Ameisen and Jack Lindsey
For a full list of representative projects for each area, please see these blog posts: Introducing the Anthropic Fellows Program for AI Safety Research and Recommendations for Technical AI Safety Research Directions.
You may be a good fit if you
Are motivated by reducing catastrophic risks from advanced AI systems
Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic
Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit at Anthropic. In previous cohorts, over 40% of fellows received a full-time offer, and many more have gone on to do strong safety work at other organizations.
Have a strong technical background in computer science, mathematics, physics, cybersecurity, or related fields
Thrive in fast-paced, collaborative environments
Can implement ideas quickly and communicate clearly
Strong candidates may also have:
Experience with empirical ML research projects
Experience working with Large Language Models
Experience in one of the research areas mentioned above
Experience with deep learning frameworks and experiment management
Track record of open-source contributions
Candidates must be:
Fluent in Python programming
Available to work full-time on the Fellows program for 4 months
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work.