AI Jobs & Internships at Anthropic 2026
Anthropic was founded in 2021 by former OpenAI researchers including Dario Amodei, Daniela Amodei, and others who left to build an AI safety company from the ground up. Their primary product is Claude — a family of AI assistants known for their helpfulness, harmlessness, and honesty — built on Anthropic's Constitutional AI alignment approach. Anthropic has raised over $7 billion and is one of the most influential AI safety research organizations in the world.
AI Roles at Anthropic
Research Scientist
AI Safety Engineer
LLM Fine-Tuning Engineer
AI Engineer
Alignment Researcher
Policy Researcher
Data Curator
AI Red Team Specialist
Work Culture at Anthropic
Anthropic has a uniquely safety-first culture where technical AI safety is not just a department but the organizational mission. Researchers have significant freedom to work on safety-relevant problems they believe are important, and the company actively encourages academic publication and engagement with the external safety research community. The culture is intellectually collaborative with significant overlap between engineering and research roles — Claude engineers frequently contribute to safety research. Decision-making involves careful consideration of safety implications even for apparently mundane product decisions.
How to Get a Job at Anthropic
- 1.
Engage deeply with Anthropic's published safety research — Constitutional AI, mechanistic interpretability, and evaluations papers — and be able to discuss their technical details in interviews
- 2.
Safety-relevant publications, even in non-traditional venues, are valued alongside top-tier ML conference papers
- 3.
Demonstrate genuine commitment to AI safety as a mission — Anthropic values candidates who have thought carefully about existential and near-term AI risks
- 4.
For technical roles, show strength in ML fundamentals and experience working with LLMs at a low level — fine-tuning, evaluation, and safety testing
- 5.
The application process includes writing samples for many roles — prepare to articulate your thinking on AI safety topics clearly and precisely