KEY RESPONSIBILITIES
• Design and implement an enterprise AI governance framework covering the full AI lifecycle: development, validation, deployment, monitoring, and decommissioning.
• Define AI risk classification methodology (risk tiers) and establish corresponding oversight and approval processes.
• Develop policies for responsible AI: fairness, explainability, transparency, privacy preservation, and human oversight.
• Align the AI governance framework with emerging global and regional AI regulations and standards (UAE AI regulation, EU AI Act, ISO/IEC 42001).
• Build an AI model inventory / registry tracking all deployed models, risk levels, owners, and review schedules.
• Establish AI incident and bias monitoring processes; define escalation and remediation playbooks.
• Collaborate with data science, legal, compliance, and IT security teams to embed governance controls throughout the MLOps pipeline.
• Lead AI ethics and responsible AI training programmes for technology and business teams.
• Produce AI governance dashboards, risk reports, and board-level briefings on AI programme health.
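The risk-tier and model-inventory responsibilities above can be sketched as a minimal registry record. All names, tiers, and review intervals here are illustrative assumptions for discussion, not requirements from any specific framework:

```python
# Hypothetical sketch of an AI model registry entry with risk tiers.
# Tier names loosely echo EU AI Act categories; intervals are assumed policy.
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

# Assumed review cadence: higher risk -> more frequent review.
REVIEW_INTERVAL_DAYS = {
    RiskTier.MINIMAL: 365,
    RiskTier.LIMITED: 180,
    RiskTier.HIGH: 90,
}

@dataclass
class ModelRecord:
    model_id: str       # unique identifier of the deployed model
    owner: str          # accountable team or individual
    risk_tier: RiskTier
    deployed_on: date
    last_review: date

    def next_review_due(self) -> date:
        """Schedule the next review from the tier's cadence."""
        return self.last_review + timedelta(
            days=REVIEW_INTERVAL_DAYS[self.risk_tier]
        )

# Example: a high-risk model on a quarterly review cycle.
record = ModelRecord("credit-scoring-v2", "risk-analytics",
                     RiskTier.HIGH, date(2024, 1, 15), date(2024, 4, 1))
print(record.next_review_due())  # 90 days after the last review
```

In practice such records would live in a governed inventory system rather than in code, but the fields (owner, tier, review schedule) mirror the tracking responsibilities listed above.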
REQUIRED SKILLS & QUALIFICATIONS
• 5+ years (AM) / 8+ years (Manager–SM) in AI governance, responsible AI, risk management, or data ethics roles.
• Knowledge of global AI governance frameworks: NIST AI RMF, EU AI Act, UAE National AI Strategy, ISO/IEC 42001.
• Understanding of AI/ML model lifecycle: training, validation, deployment, monitoring, and bias detection.
• Experience developing AI risk classification systems and model review processes.
• Strong policy writing and regulatory interpretation skills.
• Familiarity with MLOps platforms (MLflow, Azure ML, Vertex AI, SageMaker) for governance integration.
• Certification in AI ethics, risk management (FRM, PRM), or data governance (CDMP) is highly advantageous.