Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.
Company size: 11–50 employees
Posted: March 7
• Construct and rapidly iterate on machine learning experiments
• Help improve the behavior of powerful AI systems through finetuning
• Make AI helpful, honest, and harmless
• Contribute to research on improving language models through constitutional AI
• Opportunity to do creative, cutting-edge research on frontier models
• Role can be research- or engineering-oriented
• Improve model behaviors
• Have significant Python, machine learning, research engineering, or research experience
• Prefer fast-moving, collaborative projects with concrete goals
• Results-oriented, with a bias toward flexibility and impact
• Care about the impact of AI and of your work
• Prior experience with large language model finetuning techniques such as RLHF is a plus
• Experience with complex shared codebases and RL infrastructure is a plus
• Experience authoring research papers in machine learning, NLP, or AI alignment, or similar industry experience, is a plus
• Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant
• Comprehensive health, dental, and vision insurance for you and all your dependents
• 401(k) plan with 4% matching
• 22 weeks of paid parental leave
• Unlimited PTO – most staff take 4–6 weeks each year, sometimes more
• Stipends for education, home office improvements, commuting, and wellness
• Fertility benefits via Carrot
• Daily lunches and snacks in our office
• Relocation support for those moving to the Bay Area