Research Engineer, AI Observability

Anthropic · San Francisco, CA
Full-time · Mid-level

About this role

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Team

As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools that enables Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets. Our tools are widely adopted internally, powering ongoing enforcement, threat intelligence investigations, model audits, and more, and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.

About the Role

As a Research Engineer on our team, you'll design and build systems that let AI analyze large, unstructured datasets (think tens or hundreds of thousands of conversations or documents) and produce structured, trustworthy insights. You'll work across the full stack, from core analysis frameworks through user-facing apps and interfaces. This is a high-leverage role: the tools you build will be used by dozens of researchers and investigators, and will directly shape our ability to measure and mitigate both misuse and misalignment.

Responsibilities:
- Design and implement AI-based monitoring systems for AI training and deployment
- Extend and improve core frameworks for processing large volumes of unstructured text
- Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions
- Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings
- Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest

You May Be a Good Fit If You:
- Have 5+ years of software engineering experience, with meaningful exposure to ML systems
- Are excited about the problem of scaling human oversight of AI systems
- Are familiar with LLM application development (context engineering, evaluation, orchestration)
- Enjoy building tools that other people use, and care about UX, reliability, and documentation
- Can context-switch between deep infrastructure work and user-facing product thinking
- Thrive in collaborative, cross-functional environments

Strong Candidates May Also Have:
- Research experience in AI safety, alignment, or responsible deployment
- Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks
- Experience productionizing internal tools or building developer-facing platforms
- Background in building monitoring or observability systems
- Comfort with ambiguity; our team is small and growing, and you'll help define what we become

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary: $320,000 – $405,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from