Research Engineer, Environment Scaling

Anthropic · San Francisco, CA · Remote-Friendly (Travel Required)
Full-time · Mid-level

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

The Environment Scaling team is a team of researchers and engineers whose goal is to improve the intelligence of our public models for novel verticals and use cases. The team builds the training environments that fuel RL at scale. This is a unique role that combines hands-on ML research, data operations, and project management to improve our models. You'll own the end-to-end process of creating RL environments for new capabilities: identifying high-value tasks, designing reward signals, managing vendor relationships, and measuring impact on model performance.

Responsibilities:

- Improve and execute our fine-tuning strategies for adapting Claude to new domains and tasks
- Manage technical relationships with external data vendors, including evaluation of data quality and reward design
- Collaborate with domain experts to design data pipelines and evaluations
- Explore novel ways of creating RL environments for high-value tasks
- Develop and improve QA frameworks to catch reward hacking and ensure environment quality
- Partner with other RL research teams and product teams to translate capability goals into training environments and evals

You may be a good fit if you:

- Have experience fine-tuning large language models for specific domains or real-world use cases, and/or domain expertise in an area where we would like to make our models more useful
- Have experience with reinforcement learning, reward design, or training data curation for LLMs
- Are comfortable managing technical vendor relationships and iterating quickly on feedback
- Find value in reading through datasets to understand them and spot issues
- Have strong project management and interpersonal skills
- Are passionate about making AI more useful and accessible across different industries
- Are excited about a role that combines ML research, data operations, and project management

Strong candidates may also:

- Have experience training production ML systems
- Be familiar with distributed systems and cloud infrastructure
- Have experience working with external vendors or technical partners

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary: $350,000–$850,000 USD

Logistics

- Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
- Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience
- Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
- Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time; some roles may require more time in our offices
- Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly for confirmed position openings.