KEY RESPONSIBILITIES
• Work with business stakeholders to translate use cases into AI/ML problem statements, success metrics, and model requirements.
• Design and build machine learning models and generative AI solutions (LLMs, RAG architectures, classification, forecasting, NLP) aligned to enterprise use cases.
• Conduct rigorous model validation: accuracy benchmarking, bias testing, fairness evaluation, and explainability analysis.
• Build and maintain ML pipelines for data preprocessing, feature engineering, model training, and deployment using MLOps tooling.
• Deploy models to production environments (cloud or on-premises) and monitor for drift, degradation, and anomalies.
• Document models in the enterprise AI registry, including architecture, training data, assumptions, risk classification, and performance baselines.
• Collaborate with data engineers to ensure high-quality, governed data feeds into model training and inference pipelines.
• Apply responsible AI principles throughout the model lifecycle: fairness, transparency, privacy preservation, and human-in-the-loop design.
• Produce model cards, technical documentation, and stakeholder-facing performance reports.
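To illustrate the drift monitoring named above: a common approach is to compare a live feature distribution against a training-time baseline with the Population Stability Index (PSI). The sketch below is a minimal plain-Python version; the 0.2 threshold and bin count are illustrative rule-of-thumb choices, not values taken from this role description.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') feature distribution and a
    live ('actual') one. PSI > 0.2 is a common rule-of-thumb signal
    of significant drift worth investigating."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small epsilon avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice this check would run on a schedule against production inference logs, with alerts routed to the model owner recorded in the AI registry.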
REQUIRED SKILLS & QUALIFICATIONS
• 4+ years (AM) / 7+ years (Manager–SM) in machine learning, data science, or AI engineering roles.
• Strong proficiency in Python and ML frameworks: scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers.
• Experience with generative AI and LLM frameworks: OpenAI API, LangChain, RAG architectures, vector databases (Pinecone, Weaviate, Chroma).
• Hands-on MLOps experience: MLflow, Kubeflow, Azure ML, AWS SageMaker, or Vertex AI.
• Knowledge of model explainability tools (SHAP, LIME) and bias detection frameworks.
• Experience deploying models in cloud environments (Azure, AWS, GCP) with CI/CD pipelines.
• Familiarity with enterprise AI governance principles and responsible AI practices is a strong advantage.
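For context on the RAG architectures listed above: the core retrieval step ranks documents by embedding similarity to the query and prepends the top matches to the LLM prompt. The following is a toy sketch only; the bag-of-words "embedding" stands in for a learned embedding model, and a real system would use a vector database (Pinecone, Weaviate, Chroma) rather than an in-memory list.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a learned
    embedding model in a production RAG pipeline)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the top-k documents most similar to the query; in RAG
    these are injected into the LLM prompt as grounding context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

Swapping `embed` for a real embedding model and the sorted list for an approximate-nearest-neighbor index is what separates this sketch from the production architectures the role calls for.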