via LinkedIn
$120K–$180K a year
Support the design and implementation of AI model governance, ensure regulatory compliance, and promote responsible AI practices across the organization.
Requires 4-6+ years in AI governance, model risk, or compliance, with strong understanding of AI/ML lifecycle, regulatory landscape, and ethical AI assessments.
Overview

We are seeking a mission-driven Lead, Responsible AI & Governance to help ensure that our AI systems are safe, ethical, compliant, and aligned with organizational and regulatory expectations. In this role, you will contribute to the development, implementation, and ongoing improvement of AI governance frameworks while partnering with compliance, legal, security, and data teams to embed responsible AI practices throughout the model lifecycle. This is a hands-on, senior individual contributor role, ideal for someone who can evaluate AI risks, guide stakeholders on governance expectations, and strengthen model oversight processes without direct people management responsibilities.

Key Responsibilities

Model Risk Management & Governance
• Support the design and implementation of end-to-end model governance processes, including model risk assessments, documentation, validation coordination, and lifecycle oversight.
• Maintain and operationalize AI/ML policies, standards, and operating procedures.
• Partner with internal audit and risk teams during model audits to ensure alignment with organizational expectations and industry best practices.
• Contribute to transparency, explainability, and performance-monitoring frameworks for AI/ML models across the enterprise.

Regulatory & Compliance Partnership
• Serve as a key partner to HIPAA, privacy, security, and legal teams to ensure compliance with AI-related regulatory and data protection requirements.
• Help monitor the evolving AI regulatory landscape (e.g., FDA guidance, ONC rules, EU AI Act) and communicate impacts to internal stakeholders.
• Ensure responsible data usage, privacy protections, and security controls are incorporated into AI workflows.

Ethical AI & Responsible Deployment
• Support ethical AI reviews, including fairness assessments, bias detection, model explainability evaluations, and human-impact assessments.
• Assist with AI incident response activities such as documentation, triage coordination, root-cause analysis, and remediation tracking.
• Work closely with product and engineering teams to integrate responsible AI principles throughout model design, development, and deployment.

Cross-Functional Enablement & Education
• Educate technical and non-technical teams on responsible AI practices, governance processes, and regulatory expectations.
• Develop dashboards, trackers, and reporting mechanisms to measure governance program performance.
• Help drive organization-wide adoption of responsible AI through training materials, communication, and stakeholder engagement.

Required Qualifications
• 4–6+ years of experience in AI governance, model risk management, compliance, data governance, or related fields.
• Strong understanding of AI/ML development lifecycles, risk assessment methodologies, and model validation concepts.
• Familiarity with regulatory or compliance domains such as HIPAA, privacy laws, and AI-related requirements.
• Experience with fairness/bias assessment, explainable AI techniques, or ethical technology evaluations.
• Excellent communication skills with the ability to collaborate across cross-functional teams.

Preferred
• Experience in healthcare, financial services, or another regulated industry.
• Background in data science, machine learning, statistical modeling, or analytics.
• Knowledge of the NIST AI Risk Management Framework, ISO/IEC AI standards, or similar AI governance frameworks.
This job posting was last updated on 2/15/2026