
Anthropic

6 open positions available · 2 locations · 1 employment type (Full-time) · Actively hiring

Latest Positions

Showing 6 most recent jobs

Business Systems Analyst

Anthropic · Anywhere · Full-time
Compensation: $230K–$300K a year

Summary: The Business Systems Analyst will lead functional requirements gathering sessions and configure business systems, with a particular focus on Workday. They will also create documentation and training materials while managing system testing and providing ongoing support.

Requirements: Candidates should have over 5 years of experience as a Business Systems Analyst with expertise in enterprise software configuration, particularly Workday. Strong analytical skills, excellent communication abilities, and experience with system integrations are also required.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As a Business Systems Analyst on Anthropic's Business Technology team, you will play a critical role in driving our organization's operational excellence through strategic systems implementation and optimization. While you'll be actively involved in our Workday implementation, this role extends far beyond a single platform to encompass our broader enterprise systems ecosystem. You'll serve as the vital bridge between business stakeholders and technology solutions, translating complex functional requirements into actionable system configurations and processes. Your expertise will be instrumental in ensuring our business systems not only meet today's needs but also scale with our rapid growth and evolving requirements. From documentation and enablement to hands-on systems configuration, you'll help empower our teams with the tools and knowledge they need to do their best work.
Responsibilities
• Lead functional requirements gathering sessions with business stakeholders across multiple departments to deeply understand process needs and translate them into technical system requirements
• Configure and optimize business systems with a focus on Workday modules (HCM, Financial Management, Planning) while maintaining expertise across other enterprise platforms
• Design and implement system workflows, automations, and integrations that enhance operational efficiency and user experience
• Create comprehensive documentation including system guides, process flows, training materials, and best practice resources
• Develop and deliver enablement programs and training sessions for end users, ensuring successful adoption of system functionality and process changes
• Partner closely with business teams to analyze current state processes, identify improvement opportunities, and design future state solutions
• Manage system testing efforts including test plan development, UAT coordination, and bug resolution tracking
• Provide ongoing support and troubleshooting for business systems, serving as a subject matter expert and escalation point
• Collaborate with IT Engineering teams to define integration requirements and ensure seamless data flow between systems
• Measure and monitor system performance and user feedback to continuously optimize configurations and identify enhancement opportunities

You may be a good fit if you
• Have 5+ years of experience as a Business Systems Analyst with deep expertise in enterprise software configuration and implementation
• Possess hands-on experience with Workday (HCM and/or Financial modules preferred) along with demonstrated proficiency in other business systems such as GSuite, NetSuite, or similar platforms
• Excel at requirements gathering and stakeholder management, with the ability to facilitate productive discussions between technical and business teams
• Have strong analytical and problem-solving skills with experience designing efficient business processes and system workflows
• Are skilled at creating clear, comprehensive documentation and training materials for diverse audiences
• Demonstrate excellent communication skills with the ability to explain complex technical concepts to non-technical stakeholders
• Have experience with system integrations, data mapping, and understanding of API fundamentals
• Show proficiency in project management with the ability to manage multiple initiatives simultaneously
• Are passionate about user experience and driving adoption of business systems and processes
• Bring a collaborative mindset and enjoy working in a fast-paced, dynamic environment

Strong candidates may also have
• Experience with additional business systems such as procurement platforms, expense management tools, or business intelligence platforms
• Knowledge of data analysis tools and techniques including SQL or similar query languages
• Familiarity with automation platforms or workflow tools (e.g., Zapier, Microsoft Power Platform, or similar)
• Experience with change management methodologies and driving organizational transformation
• Background in business process improvement, Lean, or Six Sigma methodologies
• Previous experience in high-growth technology companies or startup environments
• Interest in AI and how emerging technologies can enhance business operations

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected base compensation for this position is below.
Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.

Annual Salary: $230,000—$300,000 USD

Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

Business Systems Analysis
Workday
Requirements Gathering
Stakeholder Management
Analytical Skills
Problem-Solving
Documentation
Training
System Integrations
Data Mapping
API Fundamentals
Project Management
User Experience
Collaboration
Change Management
Business Process Improvement
Direct Apply
Posted 4 days ago

Product Designer, Claude Code

Anthropic · Anywhere · Full-time
Compensation: $260K–$305K a year

Summary: Design and prototype agentic workflows across various surfaces, invent new interaction patterns, and work closely with engineering and research teams.

Requirements: Bachelor's degree or equivalent experience, experience with developer tools or IDEs, understanding of AI/ML, and a passion for tinkering and exploring new tools.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
In the Product Designer, Claude Code role, you'll:
• Design for agentic workflows across surfaces. Own the experience of Claude Code across terminal, IDE extensions, web, and Slack. Design how Claude communicates progress, requests permissions, handles failures, and coordinates multi-step autonomous work.
• Invent new interaction patterns. Claude Code is intentionally low-level and unopinionated, a power tool for sophisticated users. Design conventions that respect developer expertise while making complex agentic behaviors feel intuitive and trustworthy.
• Prototype in code. This is a developer tool. You'll work directly with terminal interfaces, understand CLI constraints, and build functional prototypes. The best ideas will come from hands-on exploration, not just mockups.
• Ship fast and iterate. Claude Code moves quickly; new capabilities ship constantly as models improve. You'll need to design, test, and refine in tight cycles, often on critical-path work for major launches.
• Work fluidly across the team. Partner with engineering, research, and product without needing clear handoffs or defined lanes. Jump into problems wherever you're useful, whether that's pairing with an engineer on implementation details, exploring a new model capability with a researcher, or pressure-testing an idea with users directly.
• Raise the bar on craft. Obsess over the details that make power tools feel right: information density, speed, keyboard-first interaction, clear feedback loops, minimal friction.

Why this role matters
Claude Code started as a research project and became one of Anthropic's most successful products, generating over $1B in annualized revenue within six months of launch. It redefined how developers work, and we're just getting started. This is genuinely new territory. We're designing the future of coding: Claude working autonomously on tasks, making decisions, asking for input when needed, and coordinating with humans throughout. The interaction patterns for this don't exist yet. We need to invent them. The product is expanding fast: from terminal to VS Code and JetBrains IDEs, to web, to Slack. Each surface brings new design challenges. And the underlying models keep getting more capable, which means the design problems keep evolving. We need designers who can move as fast as the technology, people who are excited by the ambiguity, not frustrated by it.

You might be a great fit if:
• You live at the intersection of code and design. You genuinely enjoy developer tools and tinker with new things for fun. This role will shape how developers experience agentic coding, not just at Anthropic, but across the industry.
• You're a tinkerer and explorer. You try new tools before anyone asks you to. You have opinions about terminal emulators and keyboard shortcuts.
• You're energized by where AI is headed, and want to be part of figuring out what comes next.

Bonus points:
• Experience designing for command-line interfaces, IDEs, or other dev tools
• Contributions to open source projects or developer tools
• Understanding of how LLMs work and their current capabilities/limitations
• A GitHub profile, personal site, or side projects that show what you build when no one's asking

The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.

Annual Salary: $260,000—$305,000 USD

Logistics and company information for this role are identical to those in the first listing above.

UI/UX Design
Developer Tools
Command-line Interfaces
Prototyping in Code
Cross-functional Collaboration
Direct Apply
Posted 6 days ago

Security Risk Analyst

Anthropic · Seattle, WA · Full-time
Compensation: $255K–$345K a year

Summary: Triage and evaluate security and compliance risks, develop risk treatment plans with stakeholders across the organization, maintain the Controls Portfolio, and support internal and external audits.

Requirements: Extensive experience in risk, governance, or compliance roles, with knowledge of security frameworks like SOC2 and ISO 27001, and experience in high-growth tech environments.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As part of Anthropic's Compliance Team, you'll help build and scale our risk management function. This unique role requires taking well-established risk frameworks and adapting them to manage security and compliance risks in the rapidly evolving AI landscape. You'll be a key contributor in shaping how the organization evaluates and mitigates risks that evolve from industry-leading research, products, and public policy. As our Risk Analyst reporting to the Head of Compliance, you'll be responsible for bringing clarity to complex risk scenarios, developing innovative assessment methodologies, and ensuring our risk management approach scales with our ambitious mission to ensure transformative AI helps people and society flourish.

Responsibilities
• Triage and evaluate submitted risks through comprehensive assessment of inherent and residual risk scores, aligning with company policies, objectives, and our current control environment
• Drive collaborative engagement with stakeholders across the organization to develop effective risk treatment plans and establish robust mitigating controls
• Contribute to and maintain our Controls Portfolio by documenting mitigating controls and ensuring accurate mapping to relevant compliance frameworks
• Partner with the Risk Management Lead to analyze and report on key risk metrics and trends, providing actionable insights for executive decision-making and strategic planning
• Shape the evolution of our risk management program, helping build and refine processes that scale with our growing organization
• Ensure the effectiveness of risk management controls through rigorous monitoring and documentation support for both internal and external audits

You may be a good fit if you
• Have 5-10 years of experience in governance, risk, and/or compliance roles, with a track record of adapting frameworks to evolving business needs
• Have navigated compliance challenges within high-growth organizations, particularly in heavily regulated environments
• Possess a deep understanding of information security risks, controls, and threat models, with the ability to apply this knowledge to emerging technology challenges
• Bring hands-on experience with security frameworks such as SOC2, ISO 27001, FedRAMP, and HIPAA
• Excel at quantitative risk analysis and can adapt frameworks to novel use cases
• Can effectively translate complex security risks for diverse stakeholders, bridging technical details with business context to foster a risk-aware culture

Strong candidates may also
• Have hands-on experience with GRC platforms, project management tools, and service management systems, with a focus on scaling and automating risk processes
• Bring experience building or significantly improving risk management programs within high-growth technology organizations, particularly those dealing with emerging technologies
• Hold relevant certifications such as CRISC, ISC2 Risk Management, ISO 31000, or other information security risk credentials that demonstrate commitment to the craft

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.

Annual Salary: $255,000—$345,000 USD

Logistics and company information for this role are identical to those in the first listing above.

Governance, Risk, and Compliance
Security Frameworks (SOC2, ISO 27001, FedRAMP, HIPAA)
Quantitative Risk Analysis
Audit and Compliance Support
Verified Source
Posted 11 days ago

Protective Services Program Manager

Anthropic · Anywhere · Full-time
Compensation: $220K–$275K a year

Summary: Develop and manage executive protection programs, conduct risk assessments, coordinate protection teams and vendors, and maintain security protocols for company executives.

Requirements: 10+ years of executive protection experience with 5+ years in program management, strong leadership and interpersonal skills, the ability to work with C-suite executives, and preferably a background in security or law enforcement.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As part of the Anthropic security department, the Global Safety, Intelligence and Security team assesses, identifies, and mitigates physical safety and security risks in a manner that best protects Anthropic's people, assets, operations, and reputation. The Protective Services Program Manager will be responsible for developing, implementing, and managing comprehensive protection strategies for company executives and other key personnel, as well as overseeing our vendor executive protection program. This role requires a sophisticated understanding of risk management, proven leadership experience, excellent communication skills, and the ability to operate with discretion in a dynamic technology environment.

Responsibilities
• Design and implement comprehensive executive protection programs and protocols
• Conduct thorough risk assessments and develop mitigation strategies for executive travel, events, and daily operations
• Manage and coordinate executive protection team members and vendor relationships
• Develop and maintain strong relationships with executives, their staff, and key stakeholders
• Create and maintain standard operating procedures for executive protection operations
• Coordinate with various internal teams including Global Security, Travel, and Executive Admin teams
• Utilize protective intelligence gathering and analysis to inform security operations
• Manage program budgets and resource allocation
• Provide regular reporting on program effectiveness and incident metrics

You may be a good fit if you
• Have 10+ years of experience in executive protection, with at least 5 years in program management
• Possess strong leadership and interpersonal skills with the ability to interact professionally with C-suite executives and their staff
• Have experience building from the ground up or scaling executive protection programs in corporate environments
• Demonstrate excellent judgment and decision-making abilities in high-pressure situations
• Are adaptable and can maintain composure in dynamic, fast-paced environments
• Have strong project management and organizational skills

Strong candidates may also have
• A Bachelor's degree in Security Management, Criminal Justice, or a related field
• Advanced certifications in Executive Protection or Security Management
• A military or law enforcement background
• Experience in technology sector security operations
• International protective operations experience
• A proven track record of building relationships with law enforcement and other public sector security agencies

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:
Annual Salary: $220,000—$275,000 USD

Logistics and company information for this role are identical to those in the first listing above.

Executive Protection
Risk Management
Program Management
Leadership
Stakeholder Communication
Security Operations
Budget Management
Protective Intelligence
Direct Apply
Posted 3 months ago

Policy Design Manager, User Well-being

Anthropic · Anywhere · Full-time
Compensation: $190K–$220K a year

Summary: Develop and update AI usage policies focused on user well-being, advise on safety interventions, conduct policy reviews, and collaborate with cross-functional teams to ensure safe use of AI products.

Requirements: Senior-level experience in mental health or related fields, policy drafting skills, the ability to work with AI products, and strong collaboration across technical and policy teams; an advanced clinical degree is preferred.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on mitigating potential risks related to user well-being, including concerns regarding mental health, sycophancy, delusions, and emotional attachment. In addition, you will advise teams on opportunities for promoting well-being, including potential intervention development and supporting beneficial use cases. Safety is core to our mission, and you'll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

Responsibilities
Serve as an internal subject matter expert, leveraging deep expertise in mental health and well-being to:
• Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases
• Design evaluation frameworks for testing model performance in areas of expertise
• Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities
• Review flagged content to drive enforcement and policy improvements
• Update our usage policies based on feedback collected from external experts, our enforcement team, and edge cases that you will review
• Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions
• Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s)
• Keep up to date with new and existing AI policy norms and standards, and use these to inform our decision-making on policy areas

You may be a good fit if you have experience
• As a researcher, subject matter expert, clinician, or trust & safety professional working in one or more of the following focus areas: psychology, mental health, developmental science, or human-AI interaction. Note: for this role, an advanced degree in clinical psychology, counseling psychology, psychiatry, social work, or a related field is preferred.
• Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions
• Crafting evidence-based and psychometrically valid definitions for emerging phenomena
• Working with generative AI products, including writing effective prompts for policy evaluations
• Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams
• Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space
• Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations
• Navigating and prioritizing work efforts amidst ambiguity

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The expected salary range for this position is:
Annual Salary: $190,000—$220,000 USD

Logistics and company information for this role are identical to those in the first listing above.

Policy Development
Mental Health Expertise
Content Strategy
Stakeholder Engagement
User Experience
Evaluation Frameworks
Content Moderation
Safety and Risk Mitigation
Verified Source
Posted 3 months ago

Technical Threat Investigator, Safeguards (CBRN)

Anthropic · Anywhere · Full-time
Compensation: $230K–$355K a year

Summary: Detect and investigate CBRNE threats with a focus on biological harms, develop detection capabilities, create threat intelligence reports, and collaborate with policy teams and external stakeholders.

Requirements: Deep biological and chemical domain expertise, 3+ years in biosecurity threat analysis, proficiency in SQL and Python, experience with AI systems, and strong communication and collaboration skills.

About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About this role
As a Technical Threat Investigator focused on CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosives) risks, with particular emphasis on biodefense, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic's AI systems for biological threats and harm, as part of Anthropic's threat intelligence team. This role combines deep technical investigation skills with specialized domain expertise in biodefense to protect against sophisticated threat actors who may attempt to leverage our AI technology for malicious biological applications. You will work at the intersection of AI safety and biosecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging biological threats in the rapidly evolving landscape of AI-enabled risks.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.

Responsibilities
• Detect and investigate CBRN threats: Identify and thoroughly investigate attempts to misuse Anthropic's AI systems for developing, enhancing, or disseminating CBRNE weapons, pathogens, toxins, or other CBRNE threats to harm people, critical infrastructure, or the environment. The primary focus will be on biological and chemical harms.
• Cross-platform threat analysis: Ground investigations in real threat actor behavior, basing findings on cross-internet and open-source research, as well as past publicly reported programs.
• Conduct technical investigations: Utilize SQL, Python, and other technical tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRNE threat actors across our platform.
• Create actionable intelligence: Develop detailed threat intelligence reports on biological attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems, with specific focus on biodefense implications.
• Develop biology-specific detection capabilities: Create abuse signals, tracking strategies, and detection methodologies specifically tailored to identify users attempting dual-use biological misuse, including emerging biothreat vectors and novel attack patterns.
• Collaborate with policy & enforcement teams: Work closely with policy & enforcement teams to make informed decisions about user violations related to biological threats and ensure appropriate mitigation actions.
• Engage external stakeholders: Communicate findings with external partners including government agencies, regulatory bodies, scientific organizations, and biosecurity research communities.

You may be a good fit if you
• Have biological and chemical domain expertise: Possess deep knowledge in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, synthetic biology, or related biological threat domains
• Have strong technical investigation skills: Demonstrated experience in technical analysis and investigations, with proficiency in SQL, Python, and data analysis tools for threat detection and user behavior analysis
• Have a threat intelligence or targeting background: Experience in threat actor profiling, utilizing threat intelligence frameworks, and conducting adversarial analysis, particularly in biosecurity or related domains
• Have experience with AI systems: Hands-on experience with large language models and a deep understanding of how AI technology could potentially be misused for biological threats
• Collaborate well across functions: Excellent stakeholder management skills and the ability to work effectively with diverse teams including researchers, policy experts, legal teams, and external partners
• Communicate clearly: The ability to present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership

Preferred qualifications
• Advanced degree (MS or PhD) in biological sciences, biodefense, biosecurity, or a related field, or equivalent professional experience
• Real-world experience countering weapons of mass destruction, CBRNE, or other high-risk, dangerous asymmetric threats
• 3+ years of experience in biosecurity threat analysis, biological defense, or related investigative roles
• Comfort with SQL and Python
• Experience working with government agencies or in regulated environments dealing with sensitive biological information
• Background in AI safety, machine learning security, or technology abuse investigation
• Familiarity with synthetic biology, biotechnology, or dual-use biological research
• Experience building and scaling threat detection systems or abuse monitoring programs

The expected salary range for this position is:
Annual Salary: $230,000—$355,000 USD

Logistics and company information for this role are identical to those in the first listing above.

CBRNE threat investigation
Biosecurity expertise
SQL
Python
Threat intelligence
AI safety
Cross-functional collaboration
Technical investigations
Verified Source
Posted 3 months ago
