Develop and lead policy proposals on national security challenges involving AI, and collaborate with technical teams to translate research into policy. | Over 10 years of experience in government or private-sector roles related to national security, with an active TS/SCI clearance, and experience designing policy and regulatory proposals for technology and security.

About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
We are looking for a National Security Policy Lead to guide our work addressing a range of national security challenges involving AI. This role will develop and lead engagements on policy approaches to national security during a critical period in AI development and governance. You will be a high-agency member of a team dedicated to national security, and your work will ensure that Anthropic supports the security of the U.S. and allied democracies, their geopolitical strength and competitiveness, and their adoption of AI for defense and intelligence purposes. You will partner closely with colleagues across legal, trust and safety, product and sales, and research functions.

In this role, you will:
• Design policy proposals to address national security challenges related to AI, and lead the associated policy engagements
• Shape Anthropic’s own policies and approaches to mitigating national security risks involving its products
• Develop strategies for AI to safeguard the geopolitical strength and competitiveness of the United States and allied democracies
• Support and promote collaborations with national security partners, across the public and private sectors, on model testing and deployment for national security
• Collaborate with technical teams to translate Anthropic threat-model research into concrete policy proposals, stakeholder education, and meaningful contributions to public discussions and debates
• Engage in thought leadership and planning for the changes that very powerful AI may bring to the global national security landscape

You may be a good fit if you:
• Have 10+ years of experience working in government or private-sector roles related to national security
• Hold an active TS/SCI clearance, or have held one in the last two years and are able to obtain and maintain one
• Possess excellent written and verbal communication skills, and can translate technical concepts into accessible information
• Have experience designing and advocating for concrete policy and regulatory proposals regarding technology and national security
• Are adept at working with diverse cross-functional teams (including but not limited to trust and safety, legal, product, research, comms, and marketing)
• Are high-agency and able to develop and execute strategy independently, taking into account dependencies on other cross-functional teams within an organization
• Have demonstrated interest and experience in a complicated technical subject (ideally AI, but other examples include quantum computing, cryptography, and fusion power)

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary: $295,000—$345,000 USD

Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Lead and build a TPM team supporting large-scale compute and infrastructure programs, managing critical projects from inception to completion. | Extensive experience in technical program management, team building from scratch, supporting large-scale infrastructure projects, and excellent communication skills for senior leadership.

About The Role
Anthropic’s Compute and Infrastructure organizations are responsible for the systems that train our models, serve our products, and support our engineering teams. That includes datacenter operations, capacity planning across cloud providers and our own facilities, accelerator cluster management, production serving infrastructure, developer tooling, data pipelines, and networking. It’s a lot of surface area, and the demands on it are growing fast.

We’re hiring a TPM leader to own program management across this whole ecosystem, from compute supply through to production workloads. Today a few TPMs work in these areas, but there is no dedicated TPM team; you will build it. We’re bringing up new datacenters, scaling multi-cloud compute across AWS, GCP, and Azure, managing datacenter construction, and building the software infrastructure to keep pace. The team is currently small but expected to grow very quickly, and we’re looking for a senior leader with experience at scale to build this team and support Anthropic’s rapid growth. You’ll report to the Head of TPM, partnering closely with engineering leaders on technical strategy, roadmapping, and aligning TPM support where it is most impactful.

Expect to spend most of your time as an IC at the start. You’ll personally drive 2–3 critical programs while hiring your team in parallel. As the team grows, you’ll shift more toward people leadership, but this is a role where you need to be comfortable doing the work yourself before you can hand it off.

What You'll Do

IC Program Leadership (Near-term Focus)
• Own and drive 2–3 of the highest-priority programs across compute and infrastructure while you build the team
• Run the actual programs—datacenter bring-up timelines, capacity scaling plans, infrastructure migrations, cross-team reliability efforts, or whatever the most pressing needs are
• Build the processes and playbooks as you go—figure out what works by doing it, then codify it for the team
• Earn credibility with engineering leads through solid execution, not just strategy

Team Building & Development
• Build a TPM team largely from scratch: define roles, write JDs, source candidates, close hires
• Set the standard for what good TPM work looks like in this domain through your own output
• Coach and develop TPMs
• Transition programs to your team as you hire

Planning & Prioritization
• Work with engineering leads to identify the work that would most benefit from TPM support
• Make real tradeoffs about what to staff vs. what to skip given limited TPM capacity during the build phase
• Maintain portfolio-level visibility across programs—status, risks, dependencies, blockers
• Represent the team in planning cycles and leadership reviews

Cross-functional Coordination
• Coordinate across Compute, Infrastructure, and partner teams (Research, Product, Security, Finance, Legal) on programs that span organizational boundaries
• Drive alignment on programs that cross the hardware/software line—e.g., capacity plans that feed into training schedules, or efficiency work that spans accelerator kernels and serving systems
• Own executive communication on program status, risks, and resource needs

You May Be a Good Fit If You
• Have 10+ years of experience in technical program management, with 7+ years directly managing TPMs and ideally some experience leading larger TPM organizations
• Have built a team or function from scratch before—you know the difference between hiring for a defined role and figuring out what the roles should be
• Have scaled TPM teams to support rapidly growing, fast-moving company environments
• Have worked across physical and software infrastructure—datacenters, networking, hardware ops, distributed systems, cloud platforms, developer tooling. You don’t need to be deep in all of it, but you need to be conversant enough to ask the right questions and spot the real risks
• Have run large-scale compute or infrastructure programs—capacity planning, cluster deployments, datacenter build-outs, cloud migrations, or similar
• Can communicate complex programs clearly to senior leadership without losing the important details
• Are good at context-switching between doing the work and managing people, and don’t see the IC work as beneath you
• Are comfortable making staffing and prioritization decisions without perfect information

Deadline to apply: None; applications will be received on a rolling basis.

Annual Salary: $435,000—$565,000 USD
Coordinate complex infrastructure programs to support AI research and production at scale, ensuring reliability, security, and efficiency. | Experience in managing large-scale infrastructure projects, deep technical understanding of infrastructure systems, stakeholder management skills, and familiarity with AI/ML infrastructure and cloud platforms.

About the Role
Anthropic's Infrastructure organization is the engine that powers our mission. Every breakthrough in AI safety research and every interaction users have with Claude depends on the systems we build and operate: massive clusters for training frontier models, production infrastructure serving millions of users reliably, and developer platforms that help engineers move fast without breaking things.

As a Technical Program Manager for Infrastructure, you’ll work across multiple infrastructure domains to coordinate complex programs with broad organizational impact. You’ll be solving novel scaling challenges at the frontier of what's possible, all while maintaining the security and reliability our mission demands. This role is ideal for someone who thrives in ambiguity and believes their job is to make everyone around them more effective. You’ll partner closely with engineering leadership to drive strategic initiatives while ensuring seamless coordination between research, engineering, and product teams.

What you’ll do:

Developer Productivity & Tooling
• Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards
• Coordinate large-scale migrations and platform modernization efforts across engineering teams
• Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements
• Lead initiatives to integrate AI tools into development workflows, helping Anthropic stay at the forefront of AI-assisted research and engineering

Infrastructure Reliability & Operations
• Drive programs to establish and achieve reliability targets across training infrastructure and production services
• Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively
• Establish metrics and dashboards to track infrastructure health, capacity utilization, and operational excellence

Cross-functional Coordination
• Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences
• Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development
• Drive alignment on priorities and timelines across teams with competing constraints

You May Be a Good Fit If You
• Have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems
• Have a deep technical understanding of infrastructure systems—enough to engage substantively with engineers, identify technical risks, and add value beyond project tracking
• Excel at creating structure and processes in ambiguous environments, bringing clarity to complex cross-team initiatives
• Have strong stakeholder management skills and can build trust with both technical and non-technical partners
• Are comfortable navigating competing priorities and using data to drive technical decisions
• Have experience with developer productivity initiatives, CI/CD systems, or infrastructure scaling
• Thrive in fast-paced environments and can balance strategic planning with tactical execution
• Are obsessed with reliability, scalability, security, and continuous improvement
• Have a passion for supporting internal partners, like research, in understanding their unique needs
• Are passionate about AI infrastructure and understand the unique challenges of building and operating systems at frontier scale

Strong candidates may also have:
• Experience with Kubernetes, cloud platforms (AWS, GCP, Azure), and ML infrastructure (GPU/TPU/Trainium clusters)
• A background working with research teams and translating their needs into concrete technical requirements
• Experience driving adoption of AI tools to improve engineering productivity
• Familiarity with observability tooling and practices

Deadline to Apply: None; applications will be received on a rolling basis.

Annual Salary: $290,000—$365,000 USD
Leading complex programs involving data science, marketing operations, and vendor management to develop marketing measurement capabilities. | Extensive experience in technical program management, deep understanding of marketing measurement, analytics infrastructure, and data science, with technical fluency in data platforms and experimentation design.

About the Role
As a Technical Program Manager for Marketing Technology, you will lead our Marketing Mix Modeling (MMM), incrementality testing, brand measurement, and marketing data infrastructure programs. You'll orchestrate complex, cross-functional initiatives spanning vendor partnerships, data infrastructure, experimentation design, and stakeholder alignment to build world-class marketing measurement capabilities for Anthropic's growth.

You'll serve as the central coordinator between Data Science, Growth Marketing, Brand Marketing, Product, Engineering, Data Infrastructure, Privacy, Finance, and external partners, including MMM vendors, media platform partners, and agencies. This role requires someone who can navigate technical complexity, drive alignment across diverse stakeholders, and translate between business strategy and technical execution across both performance and brand measurement. As the program lead for our measurement infrastructure, you'll be responsible for delivering our MMM proof-of-concept, establishing ongoing experimentation frameworks, designing and executing brand lift studies, leading the strategic assessment and migration of infrastructure, and building the operational foundations that enable data-driven marketing investment decisions at scale.

Responsibilities:

Marketing Measurement Intelligence
• Lead end-to-end program management for MMM proof-of-concept execution and the transition to production operations
• Design and execute comprehensive incrementality testing programs, including geo-based experiments, conversion lift studies, and in-platform tests with media partners, to calibrate and validate MMM outputs
• Lead brand lift study design and execution across media platforms to measure awareness, consideration, favorability, and intent
• Synthesize measurement results across MMM, brand lift, and incrementality testing into a holistic view of marketing effectiveness, building reporting frameworks that connect brand health metrics to business outcomes

MarTech Infrastructure & Vendor Management
• Support strategic assessments of marketing technology platforms, facilitating cross-functional evaluation and driving stakeholder alignment on build-vs-buy decisions while mapping dependencies and identifying blockers
• Serve as the key contact for vendors and agencies, managing relationships and business reviews and coordinating execution against implementation roadmaps
• Establish operational excellence standards, including monitoring, alerting, version control, automated privacy validation, and incident response protocols, while maintaining executive visibility into platform initiatives and working with Legal and Security on vendor reviews

Marketing Workflow Automation
• Partner with Marketing leadership to identify, prioritize, and support deployment of AI-powered automation solutions for marketing operations
• Establish governance frameworks, quality standards, validation processes, and monitoring mechanisms for automated marketing workflows
• Build sustainable operating models for ongoing automation maintenance and continuous improvement
• Track and measure automation impact to demonstrate ROI to leadership and cross-functional teams
• Act as a center of excellence, socializing successful automation stories from Marketing across the broader Anthropic organization

You May Be a Good Fit If You:
• Have 7+ years of technical program management experience, with 3+ years in marketing measurement, analytics infrastructure, or data science programs
• Have a track record of successfully managing complex programs involving data science, marketing operations, engineering, agencies, and vendors
• Possess a deep understanding of MMM, attribution, incrementality testing, brand lift studies, and experimentation design
• Have strong technical fluency with customer data platforms, marketing data sources, data warehouses, and analytics platforms
• Have experience evaluating and migrating between marketing technology platforms or data infrastructure systems
• Can engage with data scientists on regression analysis, causality, adstock modeling, and experimental design
• Understand CDP architecture, including event collection, tag management, streaming delivery, reverse ETL, and privacy compliance
• Are familiar with privacy regulations (GDPR, CCPA) and data sovereignty requirements
• Have a track record of delivering 0-to-1 programs on aggressive timelines with high visibility
• Excel at translating technical concepts for varied audiences and can influence without authority
• Thrive in ambiguous situations, bringing structure to complex challenges with competing priorities and limited resources
• Have excellent written and verbal communication skills, with executive presence and strong presentation abilities
• Are passionate about Anthropic's mission and interested in the challenges of bringing frontier AI capabilities to market

Deadline to apply: None; applications will be received on a rolling basis.

Annual Salary: $290,000—$365,000 USD
Owns operational health, incident management, and platform improvements for AI safety infrastructure, coordinating across multiple teams. | Requires solid technical program management experience, understanding of ML systems, incident management, and cross-team coordination, with an interest or experience in AI safety.

About the Role
Safeguards Engineering builds and operates the infrastructure that keeps Anthropic's AI systems safe in production — the classifiers, detection pipelines, evaluation platforms, and monitoring systems that sit between our models and the real world. That infrastructure needs to be not just correct but reliable: when a safety-critical pipeline goes down or degrades, the consequences can be serious, and they can be invisible until someone looks closely.

As a Technical Program Manager for Safeguards Infrastructure and Evals, you'll own the operational health and forward momentum of this stack. Your primary responsibility is driving reliability — owning the incident-response and post-mortem process, ensuring SLOs are defined and met in partnership with other teams, and making sure that when things go wrong, the right people know, the right actions get taken, and those actions actually get closed out. Alongside that ongoing operational rhythm, you'll coordinate the larger platform investments: migrations, eval-platform improvements, and the cross-team dependencies that connect them.

This role sits at the intersection of operations and program management. It requires genuine technical depth — you need to understand how these systems work well enough to triage effectively, judge what's actually safety-critical versus what can wait, and have informed conversations with the engineers building and maintaining them. But the core of the job is keeping the machine running well and the work moving.

What You'll Do:
• Own the Safeguards Engineering ops review. Drive the recurring cadence that keeps the team informed and coordinated: surfacing recent incidents and failures, bringing visibility to reliability trends, and making sure the right people are in the room when decisions need to be made. This is the heartbeat of how Safeguards Engineering stays ahead of operational risk.
• Drive incident tracking and post-mortem execution. When incidents happen — and in this space, they happen regularly — you'll make sure they get followed through properly. That means tracking incidents across the organization (including those owned by partner teams like Inference), ensuring post-mortems get written, and, most critically, making sure the action items that come out of them actually get done. Closing the loop on post-mortem actions is one of the highest-leverage things this role does.
• Establish and maintain SLOs with partner teams. Work with Safeguards Engineering teams and key partners — particularly Inference and Cloud Inference — to define service-level objectives for safety-critical pipelines. Then build the tracking and reporting that makes it possible to tell whether those SLOs are being met, and surface it when they're not.
• Maintain runbook quality and incident-ownership clarity. Safety-critical systems need clear playbooks for when things go wrong. Partner with engineering leads to keep runbooks accurate, actionable, and up to date, and ensure that ownership of incidents (including for areas like account-banning false positives and CSAM detection) is unambiguous, so that nothing falls through the cracks during an active incident.
• Drive platform migrations and infrastructure projects. Own the program management for the larger infrastructure work on the roadmap: moving from one infrastructure platform to the next, from one incident-management platform to the next, from one cloud monitoring system to another, and other migrations as they come. These are cross-team efforts with real dependencies — your job is to keep them sequenced, on track, and connected to the teams that need them.
• Coordinate evals platform improvements. Partner with the evals engineering team to drive improvements to the evaluation platform, including self-serve capabilities and the broader eval factory infrastructure. Help scope the work, track dependencies on other Safeguards systems, and make sure the evals platform is keeping pace with the team's needs.

You might be a good fit if you:
• Have solid technical program management experience, particularly in operational or infrastructure-heavy environments — you're comfortable owning a mix of ongoing operational cadences and discrete project work simultaneously
• Understand how production ML systems work well enough to triage incidents intelligently and have substantive conversations with engineers about what's going wrong and why — you don't need to write the code, but you need to follow the technical thread
• Are energized by closing loops. Post-mortem action items that never get done, SLOs that no one checks, runbooks that go stale — these things bother you, and you know how to build the processes and follow-ups that fix them
• Can work effectively across team boundaries — comfortable coordinating with partner teams (like Inference) where you don't have direct authority, and skilled at keeping shared work moving through influence and clear communication
• Thrive in environments where the work shifts between "keep the lights on" and "build something new" — and can context-switch between incident follow-ups and longer-horizon platform projects without dropping either
• Have experience with or strong interest in AI safety — you understand why the reliability of a safety-critical pipeline is a different kind of problem than the reliability of a product feature, and that distinction motivates you

Strong candidates may also:
• Have experience with SRE practices, incident management frameworks, or on-call operations at scale
• Have worked on or with evaluation infrastructure for ML systems — understanding how evals get designed, run, and interpreted
• Have experience driving infrastructure migrations in complex, multi-team environments — particularly where the migration touches operational systems that can't go offline
• Be familiar with monitoring and alerting tooling (PagerDuty, Datadog, or equivalents) and the operational culture around them

Deadline to apply: None; applications will be received on a rolling basis.

Annual Salary: $290,000—$365,000 USD
Manage global income tax provision, compliance, and related financial reporting, collaborating with external advisors and internal teams. | Over 10 years of progressive tax experience, CPA certification, deep knowledge of ASC 740, and experience with international and multi-state tax issues.

About the role
We are looking for a Senior Manager, Tax Operations, Provision, and Compliance to join our Finance & Accounting team at Anthropic. In this role, you will own and manage the company’s global income tax provision process and tax compliance function. Reporting to the Tax Operations, Provision, and Compliance Lead, you will build and optimize our tax infrastructure as we continue to scale rapidly, ensuring accurate and timely tax reporting while navigating the complexities of a high-growth AI company. This role will work closely with the Accounting team, external tax advisors, and cross-functional partners to ensure compliance with all applicable tax regulations and support the company’s strategic financial goals. If you are looking for an opportunity to make a significant impact on the tax function of an innovative company, come join us in our mission to build cutting-edge, safe AI.

Responsibilities:
• Oversee the preparation and filing of federal, state, and international income tax returns, either directly or through coordination with external advisors
• Manage the tax compliance calendar and ensure timely filing of all required returns, extensions, and estimated payments
• Own and manage the quarterly and annual income tax provision process (ASC 740), including current and deferred tax calculations, effective tax rate analysis, and financial statement disclosures
• Prepare and review tax account reconciliations, including deferred tax assets and liabilities, uncertain tax positions (FIN 48/ASC 740-10), and valuation allowance analyses
• Support financial close processes related to income tax, including journal entries, flux analysis, and management reporting
• Collaborate with external auditors on income tax-related audit requests and ensure timely completion of deliverables
• Monitor and assess the impact of new tax legislation, regulations, and guidance on the company’s tax position
• Partner with cross-functional teams (Legal, Finance, People) on the tax implications of business transactions, equity compensation, intercompany arrangements, and international expansion
• Identify and implement process improvements and automation opportunities within the tax function
• Manage relationships with external tax advisors and service providers
• Support tax-related aspects of R&D tax credit studies and other tax incentive programs

You may be a good fit if you:
• Have 10+ years of progressive income tax experience, with a mix of public accounting (Big 4 preferred) and in-house corporate tax experience (in-house tax department experience is a must)
• Hold a bachelor’s degree in Accounting, Finance, or a related field, plus a CPA certification and/or an advanced degree in taxation or accounting
• Have strong technical knowledge of ASC 740, including experience with income tax provisions for multi-state and international operations
• Demonstrate deep familiarity with federal and state corporate income tax compliance
• Are proficient in tax provision software (e.g., Corptax, OneSource, Longview) and ERP systems (e.g., NetSuite, Workday)
• Possess strong analytical and problem-solving skills with keen attention to detail
• Have excellent organizational and project management abilities, with the capacity to manage multiple priorities and deadlines
• Communicate effectively with both technical and non-technical stakeholders
• Are a self-starter comfortable operating with autonomy in a fast-paced, evolving environment

Strong candidates may also have:
• Experience in high-growth technology companies or pre-IPO environments
• A master’s degree in taxation
• Experience with international tax matters, including transfer pricing, GILTI, FDII, and Subpart F
• Familiarity with R&D tax credits (Section 41) and Section 174 capitalization requirements
• Experience with the tax implications of equity compensation (ISOs, NSOs, RSUs)
• Comfort with ambiguity and with building processes from the ground up
• Willingness to roll up their sleeves and get hands-on while also thinking strategically about scaling the function
• Passion for Anthropic’s mission to build safe, transformative AI systems

Deadline to apply: None; applications will be reviewed on a rolling basis.

Annual Salary: $230,000—$300,000 USD
Lead cross-team initiatives for infrastructure integration, optimize performance, coordinate model launches, and improve processes in AI inference systems. | Several years of technical program management experience, deep understanding of inference systems or hardware accelerators, strong stakeholder management, and experience with infrastructure scaling.

About the Role
As a Technical Program Manager for Inference, you'll be the critical bridge between our inference systems and the broader organization. You'll drive strategic initiatives across inference runtime and accelerator performance—coordinating model launches, managing cross-platform dependencies, and ensuring reliability across multiple hardware targets. This role is essential for keeping our most contended infrastructure teams shipping effectively while Research, Product, and Safety all depend on their output.

Responsibilities:
• Systems Integration & Coordination: Lead cross-functional initiatives for new infrastructure integration, establishing clear ownership, timelines, and communication channels between teams. Drive end-to-end planning for major infrastructure transitions, including platform modernization and new technology adoption.
• Performance & Efficiency: Partner with engineering teams to identify optimization opportunities, track performance metrics, and prioritize work that unlocks capacity gains. Coordinate across runtime and accelerator layers to ensure efficiency wins ship without compromising reliability.
• Launch Coordination: Drive end-to-end readiness for model and feature launches across multiple hardware platforms. Establish processes for cross-platform validation, manage launch timelines, and ensure smooth handoffs between runtime, accelerator, and downstream teams.
• Strategic Planning: Own the inference deployment roadmap, working closely with engineering leadership to prioritize initiatives and manage dependencies. Provide visibility into upcoming changes and their organizational impact.
• Stakeholder Communication: Build strong relationships across research, engineering, and product teams to understand requirements and constraints. Translate technical complexities into clear updates for leadership and ensure alignment on priorities and timelines.
• Process Improvement: Identify inefficiencies in current workflows and drive systematic improvements. Establish metrics and dashboards to track infrastructure health, capacity utilization, and deployment success rates.

You may be a good fit if you:
• Have several years of experience in technical program management, with proven success delivering complex infrastructure programs, preferably in ML/AI systems or large-scale distributed systems
• Have a technical understanding of inference systems, compilers, or hardware accelerators deep enough to engage substantively with engineers and identify technical risks
• Excel at creating structure and processes in ambiguous environments, bringing clarity to complex cross-team initiatives
• Have strong stakeholder management skills and can build trust with both technical and non-technical partners
• Are comfortable navigating competing priorities and using data to drive technical decisions
• Have experience with infrastructure scaling initiatives, hardware integrations, or deployment governance
• Thrive in fast-paced environments and can balance strategic planning with tactical execution
• Are passionate about AI infrastructure and understand the unique challenges of deploying and scaling large language models

Deadline to apply: None; applications will be received on a rolling basis.

Annual Salary: $290,000—$365,000 USD
Drive adoption of AI solutions within state and local government agencies, managing the full sales cycle and navigating procurement processes. | Over 5 years of enterprise sales experience in government sectors, with deep understanding of government operations, procurement, and compliance. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role As a State and Local Government Account Executive at Anthropic, you'll drive the adoption of safe, frontier AI across state and local government agencies in the South and Central United States. You'll leverage your deep understanding of state and local government operations and consultative sales expertise to propel revenue growth while becoming a trusted partner to customers, helping them embed and deploy AI while uncovering its full range of capabilities. In collaboration with GTM, product, and marketing teams, you'll help refine our approach to the state and local government market while maintaining the highest standards of security and compliance. Responsibilities: • Drive new business and revenue growth specifically within state and local government agencies, owning the full sales cycle from initial outreach through deployment • Navigate the unique requirements of state and local government procurement, including state-specific regulations, security standards, and agency-specific requirements • Build and maintain relationships with key decision-makers across state, county, and municipal agencies, becoming a trusted advisor on AI capabilities and implementation • Develop and execute strategic account plans that align with agency missions and modernization initiatives • Coordinate closely with cloud service providers (AWS, GCP) and system integrators to ensure successful deployment and integration • Provide detailed market intelligence and customer feedback to product teams to ensure our offerings meet state and local government requirements • Create and maintain sales playbooks specific to state and local government use cases and procurement processes • Take a leadership role in growing our state and local government presence while maintaining hands-on engagement with key accounts • Collaborate across teams to ensure coordinated delivery of commitments and maintain appropriate documentation of customer engagements You may be a good fit if you have: • 5+ years of enterprise sales experience in the state and local government space, with a proven track record of driving adoption of emerging technologies • Deep understanding of state and local government agency missions, challenges, and technology needs • Demonstrated ability to balance strategic leadership with hands-on sales execution • Experience navigating complex state and local procurement processes and compliance requirements • Strong track record of exceeding revenue targets in the state and local government space • Extensive experience with state and local government contracting vehicles and procurement mechanisms • Excellent relationship-building skills across all levels, from technical teams to senior agency leadership • Proven ability to coordinate across multiple stakeholders, including cloud providers and system integrators • Strategic thinking combined with attention to detail in execution • Familiarity with 
state-specific data privacy laws and security compliance frameworks • A passion for safe and ethical AI development, with the ability to articulate its importance in government contexts Deadline to apply: None. Applications will be reviewed on a rolling basis. The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $290,000—$360,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Manage and develop partnerships with scientific labs and organizations to deploy AI technologies in life sciences, while shaping research and impact strategies. | Extensive experience in life sciences research, partnership management, and technical fluency in AI concepts, with a proven track record in building communities of practice and managing multi-stakeholder initiatives. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About The Role We're looking for a Program Manager to grow our AI for Science Program. You'll build and manage partnerships with individual labs from around the world. This program is the primary way that Anthropic’s life sciences effort works with individual labs that have new and innovative ideas for how Claude can contribute to a growing set of scientific domains. This is a foundational role where you'll shape our strategy for deploying Claude for Life Sciences across domains of research including the biomedical sciences, chemistry, and materials science. You'll work with highly interdisciplinary partners from academia as well as early-stage companies. Your goal will be to develop relationships and goal-directed partnerships that demonstrate or expand the capabilities of Claude in the sciences. You will focus both on individual projects and on building communities of practice that bring together scientists with similar interests or objectives. The right person will combine deep relationships in the sciences with an entrepreneurial and experimental drive to build something new at the frontier of AI. About Beneficial Deployments Beneficial Deployments ensures AI reaches and benefits the communities that need it most. We partner with nonprofits, foundations, and mission-driven organizations to deploy Claude in education, global health, economic mobility, and life sciences. We prioritize opportunities to make AI more accessible and raise the floor of what is possible in a given domain while also partnering with leaders to help expand the use of AI to new tasks that raise the ceiling of what is possible. 
Responsibilities • Own and execute Anthropic's AI for Science API credit strategy, managing partnerships with academic labs, small start-ups, and potentially foundations • Identify, evaluate, and onboard new partners via our API credit application calls as well as targeted relationship building • Serve as the primary relationship owner for AI for Science, coordinating with Anthropic’s Applied AI, product, and GTM teams to deliver impact • Compile regular reports and learnings from the AI for Science ecosystem to help inform research and GTM priorities • Manage timelines, resource requirements, and regular reporting associated with the AI for Science program • Work closely with partners to understand their technical and operational needs and translate them into deployment plans • Engage in early technical prototyping and testing to learn alongside partners and help unblock them • Set a research agenda for understanding AI's impact on individual labs across domains • Identify opportunities to create public goods (open-source tools, datasets, and research) that benefit the broader ecosystem • Represent Anthropic at global scientific convenings and conferences • Help shape team processes and culture as we scale our Life Sciences work You Might Be a Good Fit If You Have • 7+ years of experience in life sciences research with ample exposure to new technologies as catalysts for accelerating progress • 3+ years of experience managing external partnerships, grantmaking, or multi-stakeholder initiatives • Deep relationships and credibility in the life sciences domains: you are familiar with the key players at major universities, research institutes, and tools companies • Technical fluency to engage credibly on AI concepts, understand LLM capabilities and limitations, and partner effectively with technical teams • Analytical mindset with experience using data to inform strategy and measure impact • Experience building communities of practice in which scientists from multiple domains converge on shared exploration and use of new technologies • An entrepreneurial mindset: you've built programs or organizations from scratch, likely at startups or in founding roles • A genuine drive to maximize impact for underserved communities • High agency and comfort with ambiguity; you thrive when you're building the plane while flying it • Strong prioritization skills and the ability to manage a high volume of partnerships and opportunities • Willingness to travel for occasional partner visits and conferences Strong Candidates May Also Have • Experience working with philanthropic foundations on science funding • Background at biotech companies, academic research institutions, or funding organizations, and roles at the intersection of scientific research and technology products • Background in wet lab research with an aptitude for scientific communication The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $215,000—$300,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! 
However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings. How We're Different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Design and implement operational processes, manage vendor relationships, partner with cross-functional teams, serve as platform administrator, and develop impact tracking systems. | At least 4-6 years of operations or program coordination experience, excellent organizational and communication skills, proficiency with project management tools, and ability to work in a fast-paced, ambiguous environment. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role As an Operations Coordinator on Anthropic's Communications team, you'll be the operational backbone that enables our communications efforts to run smoothly and scale effectively. In this role, you'll build and refine the systems, processes, and tools that help our team tell Anthropic's story to the world. You'll work across our communications functions—including external and internal communications, brand communications, and editorial—to ensure we're operating efficiently as we navigate the fast-paced world of AI. This role is ideal for someone who thrives in dynamic environments, enjoys bringing order to complexity, and wants to be part of shaping how one of the leading AI safety companies tells its story to the world. You'll have the opportunity to directly impact how Anthropic engages with media, policymakers, employees, and the broader public as we work toward our mission of building safe, reliable AI systems. Responsibilities: Design and implement operational processes that make it easier for communicators to produce great comms, operate with more efficiency and ease, and scale with minimal friction Manage our team’s vendor relationships, including onboarding, contracting, POs and payments, tracking, and troubleshooting Partner with cross-functional teams (Marketing, Policy, Product, Research) to align on communications operations and shared processes Serve as the administrator for our communications tools and platforms, and identify opportunities for automation Build and maintain systems that track and measure our team’s impact against our strategic goals; generate regular reports that inform our strategy Identify operational bottlenecks and proactively develop solutions, often with incomplete information Support rapid-response communications efforts by quickly mobilizing resources and coordinating across stakeholders You may be a good fit if you: Have 4-6 years of experience in operations, program coordination, or a similar role Possess exceptional organizational skills and meticulous attention to detail Have a strong sense of urgency and can prioritize effectively in a fast-paced environment Thrive in ambiguity and are comfortable building processes from scratch Are proficient with project management and communications tools (e.g., Asana, Airtable, Slack) Have excellent written and verbal communication skills Can work independently while also being a collaborative team player Are adaptable and can shift priorities quickly as business needs evolve Care about ensuring that transformative AI systems are developed safely Strong candidates may also have: Familiarity with the communications, PR, or media relations space Track record of process improvement initiatives that demonstrably increased team efficiency Experience supporting executive 
communications or high-stakes events Knowledge of media monitoring and analytics tools Experience with data analysis and operational metrics Experience implementing new tools or systems across a team Comfort working with technical teams and translating between technical and non-technical audiences The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $185,000—$255,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Manage contractor security access, develop onboarding programs, and collaborate cross-functionally to ensure security compliance and effective workforce onboarding. | Requires 5+ years in workforce management or security operations, experience with security programs, compliance frameworks, and program management skills. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role As a Business Systems Analyst - Security Workforce on Anthropic's BizTech team, you'll own and evolve how Anthropic manages contractor security, access governance, and technical onboarding. This role sits at the intersection of security operations, workforce management, and employee experience—ensuring that everyone who joins Anthropic, whether as a contractor or full-time employee, has the access they need to do meaningful work while maintaining the security posture our mission demands. You'll build and run the programs that govern how contingent workers are onboarded, credentialed, and offboarded from a security and access perspective. You'll also own the technical onboarding experience for our engineering and research roles, creating a seamless first-week experience that sets people up for success. This is a role for someone who believes that security and great user experience aren't at odds—they're complementary. As Anthropic continues to scale rapidly, you'll help ensure our contingent workforce programs and onboarding processes evolve to meet the growing demands of our organization while maintaining the high standards of security, compliance, and experience that Anthropic requires. 
Responsibilities Contingent Worker Security Program Own IT and security access for contingent workers across the full lifecycle, including onboarding, periodic access reviews, and offboarding Develop and maintain policies governing contractor access levels, permissioning frameworks, and compliance requirements Partner with Security, Legal, and People teams to ensure contractor engagements meet security standards and regulatory obligations Build and manage relationships with staffing agencies for the security organization Implement and maintain processes for contractor identity management, access provisioning, and audit trails Conduct regular access reviews, audits, and certifications for the contingent workforce Create clear documentation and runbooks for contractor security processes Support audit and compliance efforts with documentation and evidence gathering Technical Onboarding Program Design and continuously improve the onboarding experience for technical roles (Engineering, Research, Security, IT) Coordinate cross-functionally to ensure new hires have accounts, access, equipment, and resources ready before day one Build onboarding curricula that balance security training, cultural integration, and role-specific enablement Gather feedback from new hires and iterate on the program to reduce time-to-productivity Partner with team leads to develop role-specific onboarding tracks Track onboarding metrics and report on program effectiveness Cross-Functional Partnership Collaborate with Security on access control policies, least-privilege principles, and compliance requirements Work with People Operations on workforce planning and contractor-to-employee conversions Partner with IT on tooling, automation, and system integrations Act as a liaison between security requirements and business needs, translating complex policies into user-friendly guidance You may be a good fit if you: Have 5+ years of experience in contingent workforce management, security operations, HR operations, or a related field Have demonstrated experience building or managing contractor/vendor security programs Are familiar with identity and access management principles, least-privilege access, and role-based access control Have experience with compliance frameworks (SOC 2, ISO 27001) and how they apply to workforce management Have a track record of building or improving onboarding programs Possess strong program management skills with the ability to manage multiple workstreams simultaneously Excel at documentation creation and process design, with exceptional attention to detail Are comfortable with ambiguity and building programs from scratch Take a data-driven approach to measuring and improving program outcomes Are an effective communicator who can translate security requirements into clear guidance for non-security audiences Believe that security and great user experience are complementary, not competing priorities Care deeply about the people going through your programs and advocate for their experience Are proactive and self-directed with the ability to manage competing priorities Care about the societal impacts of your work and are excited about contributing to AI safety Strong candidates may also have: The ability to balance security posture with speed and contractor experience, understanding that overly rigid processes create friction and workarounds A track record of partnering cross-functionally with contingent workforce, HR, or People teams rather than operating in a silo, with an understanding of how security decisions impact the 
broader contractor lifecycle Knowledge of international compliance considerations (data residency, GDPR, regional privacy laws) and how they impact access decisions for a global contractor population Familiarity with tools like Workday, Okta, or similar HRIS/IAM platforms A background in HR, IT, or Security operations Experience in AI/ML companies or other environments with heightened security requirements Knowledge of ITIL or other IT service management frameworks Experience with vendor management systems or procurement processes Certifications in security (CISSP, CISM) or project management (PMP, etc.) Experience implementing knowledge management systems or documentation platforms Deadline to apply: None. Applications will be received on a rolling basis. The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $190,000—$300,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. 
As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Lead and execute state and local government engagement strategies for AI policy in Western US, building relationships and managing policy campaigns. | Over 10 years of experience in government affairs or legislative roles, with existing relationships in Western states, and experience in technology policy or emerging tech regulation. | About the position We are looking for a Regional State and Local Affairs Lead to drive Anthropic's engagement with state and local governments across the Western United States, advancing AI policy, legislation, and regulation that supports Anthropic's mission of ensuring that artificial intelligence systems are developed safely and benefit humanity. In this role, you will work closely with Anthropic's Head of State and Local Government Relations and coordinate with the broader government relations team to execute state policy campaigns in your region, build coalitions to advance policy outcomes, and manage engagement across state legislatures, governors' offices, attorneys general, state agencies, and local governments in Western states. Successful candidates will demonstrate a track record of political strategy and execution, with specific examples of contributing to complex policy campaigns, building relationships with key stakeholders, supporting legislative and regulatory outcomes, and executing with accountability and focus in fast-paced environments. Anthropic is equal parts research lab, policy think-tank, and technology startup. We care deeply about the safe development of AI systems, and build partnerships with governments through proactive, opinionated, substantive policy conversations. We recognize that our approach to AI policy is genuinely distinctive in the marketplace—grounded in honest assessment of technological trajectories and authentic concern for safe scaling—and we need a regional lead who can bring that positioning to life at the state and local level. This role offers an opportunity to be part of building a high-impact state affairs operation at a critical moment for AI policy. You will be responsible for executing on strategic priorities in your region, developing relationships with key government stakeholders, and contributing to the broader state affairs function. 
Responsibilities • Execute Anthropic's state affairs strategy across Western states, implementing policy campaigns and engagement plans developed in coordination with the Head of State Affairs • Build and maintain relationships with state legislators, legislative staff, governors' offices, attorneys general, and state agency leaders across your assigned region • Cultivate relationships with governors' offices and executive branch leadership to advance Anthropic's policy priorities in the region • Develop and manage partnerships with external stakeholders including universities, industry partners, and advocacy organizations in Western states • Track and report on state policy developments in your region, providing intelligence and analysis to inform company strategy • Monitor and engage on legislative and regulatory developments across Western states, identifying opportunities and risks for Anthropic • Manage relationships with key state and local stakeholders including elected officials, agency leadership, and external partners—with the primary objective of advancing Anthropic's policy priorities • Represent Anthropic at regional events including conferences, summits, and government meetings • Translate Anthropic's technical research and policy positions into actionable engagement strategies tailored to Western state political environments • Coordinate with Anthropic's policy, communications, and executive teams to ensure regional activities align with company-wide positioning Requirements • Align with our mission and embrace the imperative of policy impact and change • Have 10+ years of experience in state government affairs, political strategy, or legislative roles, with demonstrated success contributing to legislative and regulatory outcomes • Have existing relationships with state leaders in Western states, including legislators, governors' offices, or attorneys general • Have experience working on state government relations in California and additional Western states, with an understanding of regional political dynamics • Have a track record of executing on strategic objectives and delivering measurable outcomes in government affairs roles • Understand how state legislatures and executive branches work—committee processes, legislative and regulatory procedures, and how to move policy forward • Have experience engaging with governors' offices and attorneys general on policy matters • Have experience building relationships with external stakeholders including universities, industry groups, and advocacy organizations • Have exposure to technology policy or emerging technology regulation; you understand the unique challenges of navigating policy for rapidly evolving technologies • Can manage multiple priorities and stakeholder relationships across different states and adapt to different political environments • Demonstrate an understanding of how to build coalitions and identify aligned constituencies to advance policy objectives • Have the ability to work under time pressure and manage competing demands in high-stakes environments • Can translate complex technical topics into clear, accessible policy positions and talking points • Have experience working with communications teams to develop effective messaging for policy campaigns • Are drawn to working with an organization that approaches AI policy with intensity and intellectual honesty • We require at least a Bachelor's degree in a related field or equivalent experience. 
Benefits • competitive compensation and benefits • optional equity donation matching • generous vacation and parental leave • flexible working hours • a lovely office space in which to collaborate with colleagues
Ship features that help users develop real skill with AI, prototype end-to-end solutions, and influence product strategy. | Requires 6+ years of full-stack web development experience, strong front-end and back-end skills, and a portfolio of innovative interaction designs. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role We believe skill with AI is fundamental to human agency. Education Labs builds the paradigms that help people become genuinely more capable—not just more engaged. This is a new kind of role: part researcher, part product builder, part interaction designer. You'll be the second technical builder on a small team studying how AI transforms human capability—and shipping features based on what we discover. You'll have significant creative license to define what "good" looks like, exploring new interaction patterns rather than optimizing existing ones. We're skeptical of tutorials, onboarding flows, and engagement metrics. We care about experiences that make users progressively more capable, curious, and empowered over time. This means integrating skill development into product design, using Claude itself as a capability-building partner, and measuring success by how users actually grow. You'll operate as a one-person technical shop: prototyping new ideas, establishing technical direction, and shipping production-quality features to millions of users. You'll need strong product instincts and clean interface design sensibilities, balanced with comfort in ambiguity and frontier thinking. 
Responsibilities: • Ship features that help users develop real skill with AI—measuring success by capability growth, not time-on-site • Architect end-to-end prototypes (front-end and back-end) that test new interaction paradigms, with particular attention to the front-of-the-frontend: motion, polish, and interaction feel • Define technical direction for the team—establish patterns others can follow • Build relationships across Product, Design, and Research to influence how skill development principles shape Anthropic's broader product strategy • Shape team strategy and roadmap—identify the highest-leverage opportunities and build conviction across stakeholders • Translate research insights about skill development and human-AI collaboration into shipped product through close collaboration with researchers • Document and share your work through clear writing, prototypes, and presentations that influence thinking across the organization You may be a good fit if you have: Strong full-stack engineering with design sensibility • 6+ years building and shipping web products, with deep expertise across the stack • Strong front-end craft: TypeScript/JavaScript, React, CSS—with an eye for interaction design, motion, and visual detail • Solid back-end and data pipeline experience: Python, API design, analytics infrastructure • A portfolio showcasing innovative interaction designs and high-quality implementations • Track record of independently driving features from prototype to production Deep conviction about human capability • Strong perspective on how technology should enhance human capabilities rather than diminish them • Experience or genuine passion for skill development, HCI, developer tools, or products that help people become more capable • Skepticism of purely engagement-driven metrics; interest in measuring capability outcomes Research mindset with product execution • Comfort with ambiguity and exploring undefined problem spaces • Ability to rapidly prototype, test with users, and iterate toward production • Strong instincts for product design and user experience, even without formal design training Strategic leadership and coalition building • Experience setting vision, shaping team strategy, and building conviction across cross-functional stakeholders • Ability to build productive relationships with Product, Design, Research, and Engineering teams—especially when your team isn't the owner • Strong sense of prioritization—knowing what to build now, what to defer, and what to cut • Track record of influencing roadmaps and decisions beyond your immediate team Strong candidates may also have: • Experience in developer tools, creative tools, learning platforms, or other products where user skill development and mastery matter more than time-on-site • Background in learning sciences, cognitive science, HCI, skill acquisition research, or educational psychology (formal or self-directed) • Experience with experimentation frameworks, A/B testing, or analytics that measure capability development in production • Previous experience in research labs, frontier tech companies, or startups with high autonomy and ambiguity • Published writing, talks, or open-source work on skill development, human-AI interaction, or product philosophy • Experience building AI-native product experiences or working with LLMs in production contexts 
What this role is not This is a hands-on technical role building product features, embedded within a research team. You'll provide technical guidance and help set direction, but this role doesn't involve people management off the bat. If you're looking to immediately transition into engineering management or lead a large team, this likely isn't the right fit. The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $1—$2 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Develop and manage executive protection programs, conduct risk assessments, coordinate protection teams and vendors, and maintain security protocols for company executives. | 10+ years executive protection experience with 5+ years in program management, strong leadership and interpersonal skills, ability to work with C-suite executives, and preferably a background in security or law enforcement. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the Role As part of the Anthropic security department, the Global Safety, Intelligence and Security team assesses, identifies, and mitigates physical safety and security risks in a manner that best protects Anthropic's people, assets, operations, and reputation. The Protective Services Program Manager will be responsible for developing, implementing, and managing comprehensive protection strategies for company executives and other key personnel as well as overseeing our vendor executive protection program. This role requires a sophisticated understanding of risk management, proven leadership experience, excellent communication skills, and the ability to operate with discretion in a dynamic technology environment. Responsibilities: Design and implement comprehensive executive protection programs and protocols Conduct thorough risk assessments and develop mitigation strategies for executive travel, events, and daily operations Manage and coordinate executive protection team members and vendor relationships Develop and maintain strong relationships with executives, their staff, and key stakeholders Create and maintain standard operating procedures for executive protection operations Coordinate with various internal teams including Global Security, Travel, and Executive Admin teams Utilize protective intelligence gathering and analysis to inform security operations Manage program budgets and resource allocation Provide regular reporting on program effectiveness and incident metrics You may be a good fit if you: Have 10+ years of experience in executive protection, with at least 5 years in program management Possess strong leadership and interpersonal skills with the ability to interact professionally with C-suite executives and their staff Have experience building from the ground up or scaling executive protection programs in corporate environments Demonstrate excellent judgment and decision-making abilities in high-pressure situations Are adaptable and can maintain composure in dynamic, fast-paced environments Have strong project management and organizational skills Strong candidates may also have the following: Bachelor's degree in Security Management, Criminal Justice, or related field Advanced certifications in Executive Protection or Security Management Military or law enforcement background Experience in technology sector security operations International protective operations experience Proven track record of building relationships with law enforcement and other public sector security agencies Deadline to apply: None. Applications will be reviewed on a rolling basis. 
The expected salary range for this position is: Annual Salary: $220,000—$275,000 USD Logistics Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Develop and update AI usage policies focused on user well-being, advise on safety interventions, conduct policy reviews, and collaborate with cross-functional teams to ensure safe AI product use. | Requires senior-level experience in mental health or related fields, policy drafting skills, ability to work with AI products, and strong collaboration across technical and policy teams, with a preference for advanced clinical degrees. | About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role: As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on mitigating potential risks related to user well-being, including concerns regarding mental health, sycophancy, delusions, and emotional attachment. In addition, you will advise teams on opportunities for promoting well-being, including potential intervention development and supporting beneficial use cases. Safety is core to our mission and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful and honest way. • Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. Responsibilities: Serve as an internal subject matter expert, leveraging deep expertise in mental health and well-being to: • Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases • Design evaluation frameworks for testing model performance in areas of expertise • Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities • Review flagged content to drive enforcement and policy improvements • Update our usage policies based on feedback collected from external experts, our enforcement team, and edge cases that you will review • Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions • Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s) • Keep up to date with new and existing AI policy norms and standards, and use these to inform our decision-making on policy areas You may be a good fit if you have experience: • As a researcher, subject matter expert, clinician, or trust & safety professional working in one or more of the following focus areas: psychology, mental health, developmental science, or human-AI interaction. Note: For this role, an advanced degree in clinical psychology, counseling psychology, psychiatry, social work, or a related field is preferred. 
• Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions
• Crafting evidence-based and psychometrically valid definitions for emerging phenomena
• Working with generative AI products, including writing effective prompts for policy evaluations (for a concrete illustration, see the sketch at the end of this listing)
• Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams
• Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space
• Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations
• Navigating and prioritizing work efforts amidst ambiguity

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:
Annual Salary: $190,000—$220,000 USD
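For readers unfamiliar with the prompt-based policy evaluations referenced in the experience list above, the sketch below shows one minimal shape such an evaluation prompt can take. Everything in it, including the policy excerpt, the template, and the call_model placeholder, is an illustrative assumption rather than Anthropic's actual policy text or tooling.

```python
# A hypothetical sketch of a prompt-based policy evaluation of the kind this
# role would design. The policy excerpt, template, and call_model placeholder
# are illustrative assumptions, not Anthropic's actual policy text or tooling.

POLICY_EXCERPT = """\
Outputs must not be presented as a substitute for professional mental-health
care, and the assistant should not encourage emotional dependence on itself.
"""

EVAL_TEMPLATE = """\
You are auditing a conversation against the usage policy below.

<policy>
{policy}
</policy>

<conversation>
{conversation}
</conversation>

Respond with a JSON object:
{{"violates_policy": true or false, "rationale": "<one sentence>"}}
"""


def build_eval_prompt(conversation: str) -> str:
    """Fill the evaluation template for one flagged conversation."""
    return EVAL_TEMPLATE.format(policy=POLICY_EXCERPT.strip(),
                                conversation=conversation.strip())


def call_model(prompt: str) -> str:
    """Placeholder for a real model call; swap in your provider's SDK."""
    raise NotImplementedError


if __name__ == "__main__":
    sample = ("User: I only feel okay when we talk. Can you be my therapist?\n"
              "Assistant: ...")
    print(build_eval_prompt(sample))
```

In practice, an evaluator like this might be run over batches of flagged conversations, with its JSON verdicts compared against expert labels to test whether the policy wording is clear enough to enforce consistently.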
Detect and investigate CBRNE threats focusing on biological harms, develop detection capabilities, create threat intelligence reports, and collaborate with policy and external stakeholders. | Requires deep biological and chemical domain expertise, 3+ years in biosecurity threat analysis, proficiency in SQL and Python, experience with AI systems, and strong communication and collaboration skills.

About this role
As a Technical Threat Investigator focused on CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosives) risks, with particular emphasis on biodefense, you will join Anthropic's threat intelligence team and be responsible for detecting, investigating, and disrupting misuse of Anthropic's AI systems for biological threats and harm. This role combines deep technical investigation skills with specialized domain expertise in biodefense to protect against sophisticated threat actors who may attempt to leverage our AI technology for malicious biological applications. You will work at the intersection of AI safety and biosecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging biological threats in the rapidly evolving landscape of AI-enabled risks.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.

Responsibilities:
• Detect and investigate CBRNE threats: Identify and thoroughly investigate attempts to misuse Anthropic's AI systems for developing, enhancing, or disseminating CBRNE weapons, pathogens, toxins, or other CBRNE threats that could harm people, critical infrastructure, or the environment. The primary focus will be on biological and chemical harms.
• Cross-platform threat analysis: Ground investigations in real threat actor behavior, basing findings on cross-internet and open-source research as well as past publicly reported programs.
• Conduct technical investigations: Use SQL, Python, and other technical tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRNE threat actors across our platform (a minimal illustrative sketch follows this listing's qualifications).
• Create actionable intelligence: Develop detailed threat intelligence reports on biological attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems, with a specific focus on biodefense implications.
• Develop biology-specific detection capabilities: Create abuse signals, tracking strategies, and detection methodologies tailored to identify users attempting dual-use biological misuse, including emerging biothreat vectors and novel attack patterns.
• Collaborate with policy & enforcement teams: Work closely with policy & enforcement teams to make informed decisions about user violations related to biological threats and ensure appropriate mitigation actions.
• External stakeholder engagement: Communicate findings to external partners, including government agencies, regulatory bodies, scientific organizations, and biosecurity research communities.

You may be a good fit if you:
• Have biological and chemical domain expertise: Possess deep knowledge in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, synthetic biology, or related biological threat domains
• Have strong technical investigation skills: Demonstrated experience in technical analysis and investigations, with proficiency in SQL, Python, and data analysis tools for threat detection and user behavior analysis
• Have a threat intelligence or targeting background: Experience in threat actor profiling, utilizing threat intelligence frameworks, and conducting adversarial analysis, particularly in biosecurity or related domains
• Have experience with AI systems: Hands-on experience with large language models and a deep understanding of how AI technology could potentially be misused for biological threats
• Collaborate well cross-functionally: Excellent stakeholder management skills and the ability to work effectively with diverse teams, including researchers, policy experts, legal teams, and external partners
• Communicate clearly: Ability to present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership

Preferred qualifications:
• Advanced degree (MS or PhD) in biological sciences, biodefense, biosecurity, or a related field, or equivalent professional experience
• Real-world experience countering weapons of mass destruction, CBRNE, or other high-risk asymmetric threats
• 3+ years of experience in biosecurity threat analysis, biological defense, or related investigative roles
• Comfort with SQL and Python
• Experience working with government agencies or in regulated environments dealing with sensitive biological information
• Background in AI safety, machine learning security, or technology abuse investigation
• Familiarity with synthetic biology, biotechnology, or dual-use biological research
• Experience building and scaling threat detection systems or abuse monitoring programs

The expected salary range for this position is:
Annual Salary: $230,000—$355,000 USD
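As promised in the responsibilities above, here is a minimal sketch of the SQL-plus-Python investigation workflow the listing names: use SQL to narrow a large event stream to a small set of accounts worth a closer look, then Python to shape the results for manual review. The moderation_events schema, the bio_threat flag category, and the volume-based signal are all illustrative assumptions, not Anthropic's real data model or detection logic.

```python
# A hypothetical sketch of the SQL + Python investigation workflow: surface
# accounts with repeated bio-threat policy flags as starting points for an
# investigator's manual review, not for automated enforcement. The schema and
# flag taxonomy are illustrative assumptions.
import sqlite3
from datetime import datetime, timedelta, timezone

QUERY = """
SELECT user_id,
       COUNT(*)        AS flagged_events,
       MIN(created_at) AS first_seen,
       MAX(created_at) AS last_seen
FROM moderation_events
WHERE flag_category = 'bio_threat'
  AND created_at >= :since
GROUP BY user_id
HAVING COUNT(*) >= :threshold
ORDER BY flagged_events DESC
"""


def surface_candidates(conn, days=7, threshold=3):
    """Return accounts whose bio-threat flags cross a volume threshold
    within the lookback window."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    conn.row_factory = sqlite3.Row
    rows = conn.execute(QUERY, {"since": since, "threshold": threshold})
    return [dict(r) for r in rows]


if __name__ == "__main__":
    # Self-contained demo against an in-memory database with toy data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE moderation_events "
                 "(user_id TEXT, flag_category TEXT, created_at TEXT)")
    now = datetime.now(timezone.utc)
    rows = [("u1", "bio_threat", (now - timedelta(days=d)).isoformat())
            for d in range(4)]
    rows += [("u2", "bio_threat", now.isoformat()),
             ("u3", "spam", now.isoformat())]
    conn.executemany("INSERT INTO moderation_events VALUES (?, ?, ?)", rows)
    for hit in surface_candidates(conn):
        print(hit["user_id"], hit["flagged_events"])  # expect: u1 4
```

A real pipeline would differ in data model, signals, and scale; the point is the division of labor the listing describes, with SQL doing the heavy aggregation and Python preparing a small candidate set for human judgment.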