via Indeed
$70K - $90K a year
Support AI governance program implementation, use case vetting, stakeholder communication, risk assessment, training, and regulatory compliance.
Experience in AI governance, compliance, or risk management with strong communication and collaboration skills; legal or technical background preferred.
IT Risk and Compliance Analyst (AI Governance & Agentic AI Program)
Location: Remote (eligible states: AZ, FL, GA, IN, SC, NC, TX, TN, KY)
Type: Long Term Contract

CornerStone Technology Talent Services is seeking a skilled AI Governance & Agentic AI Program Specialist to support the ongoing build-out and maturation of an enterprise AI Governance Program. This role focuses on responsible AI development and deployment, regulatory alignment, and risk mitigation across a wide range of AI use cases. You will collaborate closely with cross-functional partners in Legal, Privacy, Risk, IT, and specialized governance councils.

Key Responsibilities

Use Case Vetting & Documentation
• Support the intake, review, and approval of AI use cases.
• Maintain the internal AI Registry and coordinate review cycles with governance councils.

Stakeholder Engagement & Communication
• Work closely with business owners and suppliers to collect requirements, model information, training data context, metrics, and disclosures for the AI Registry and model cards.
• Help establish and maintain feedback loops on AI governance efforts, program metrics, and new standards or policies.

Program Implementation & Build-Out
• Assist in the implementation of AI Governance processes, frameworks, workflows, and council operations.
• Support the development of new governance workflows, including AI incident response.
• Contribute to dashboards and reporting to support program visibility and performance tracking.
• Help define and refine governance requirements and best practices for agentic AI systems.

Security & Risk Assessment
• Collaborate with Application Security and Risk teams to support the application of frameworks such as OWASP for LLMs.
• Participate in red teaming, security reviews, and documentation of risk assessments.

Training & Literacy
• Assist in creating role-based training and literacy materials to help teams understand and adopt responsible AI principles.
Regulatory Monitoring & Compliance
• Support monitoring of emerging AI regulations, standards, and legislation, and contribute to internal compliance mapping activities.

Qualifications
• Experience in AI governance, data privacy, compliance, or risk management.
• Strong documentation, coordination, and/or project management skills.
• Ability to collaborate effectively across technical and non-technical teams.
• Experience working with legal perspectives on technology topics.
• Strong communication skills and the ability to bridge technical and regulatory domains.
• Interest in or familiarity with AI/ML technologies and their ethical, legal, or operational considerations.

Preferred Qualifications
• Advanced degree or experience in law, computer science, or related fields; experience in legal environments is a plus.
• Relevant certifications such as CIPP/US, CISM, CIPM, or AI governance–related credentials.
• Knowledge of regulatory frameworks such as NIST AI RMF, ISO 42001, EU AI Act, or U.S. data privacy/security laws; familiarity with IP considerations is beneficial.
• Experience with red teaming, application security, or security assessments.
• Understanding of model lifecycle management, auditability, and responsible AI best practices.
This job posting was last updated on 12/7/2025