
Apparel Group
AI Machine Learning Engineer - MLOps & Model Deployment
- Permanent
- Dubai, United Arab Emirates
- Experience: 2-5 years
Job expiry date: 30/05/2026
Job overview
Date posted
15/04/2026
Location
Dubai, United Arab Emirates
Salary
AED 20,000 - 30,000 per month
Job description
The AI Machine Learning Engineer - MLOps & Model Deployment is responsible for translating business problems into machine learning formulations and designing end-to-end AI solutions, including feature extraction, training pipelines, hyperparameter tuning, and packaging models as reproducible artifacts. The role involves selecting suitable architectures such as gradient boosting and transformers, optimizing inference through quantization, distillation, and mixed precision, and evaluating models with calibration, fairness, and cost-sensitive metrics as well as PR/ROC analysis under class imbalance.

The position includes building real-time and batch inference services, integrating with message buses and vector databases, and implementing model versioning, lineage tracking, experiment tracking, rollbacks, and canary releases. It also requires monitoring schema conformance, data drift, performance regression, and cost, and implementing alerting and autoscaling policies aligned with SLAs. The engineer designs data contracts, builds ETL/ELT pipelines using Spark and Databricks, enforces data quality gates, and manages schema evolution. The role applies privacy-by-design practices, including PII handling, tokenization, and secure secrets management, and supports cost-efficient data architectures through tiering, caching, and Parquet or Delta formats.

The AI ML Engineer also designs experiments, including A/B testing and counterfactual evaluation, integrates models via APIs and SDKs with business rules and fallbacks, produces documentation such as model cards and decision logs, and collaborates with stakeholders to present trade-offs and technical solutions. The position requires working with cloud platforms including Azure, AWS, and GCP, implementing scalable architectures, deploying machine learning models, and supporting production-grade AI systems.
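As a concrete illustration of the canary-release pattern mentioned above, the sketch below splits traffic between a stable and a canary model version. The function name and the 5% default fraction are illustrative assumptions, not part of this role's actual stack:

```python
import hashlib

def route_request(request_id: str, canary_fraction: float = 0.05) -> str:
    """Route a request to the canary or stable model version.

    Hashing the request id (instead of sampling randomly per request)
    makes routing deterministic and sticky: the same caller always
    lands on the same model version for the duration of the rollout.
    """
    # Stable hash so the bucket assignment survives process restarts
    bucket = int(hashlib.md5(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

Gradually raising `canary_fraction` while watching the monitoring signals described in this posting (drift, latency, error rates) is the usual rollout loop; a rollback is simply setting the fraction back to zero.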
Key responsibilities
- Translate business problems into machine learning formulations and select appropriate architectures including gradient boosting and transformers with defined success metrics
- Build end-to-end machine learning pipelines including feature extraction, training, hyperparameter tuning, and packaging models as reproducible artifacts
- Optimize model inference using quantization, distillation, and mixed precision techniques to improve latency and throughput on CPU and GPU
- Conduct advanced model evaluation using calibration, fairness metrics, cost-sensitive metrics, and PR/ROC analysis on imbalanced datasets
- Implement model versioning, lineage tracking, experiment tracking, rollbacks, and canary releases for production machine learning systems
- Build real-time and batch inference services and integrate models with message buses and vector databases
- Monitor schema conformance, data drift, performance regression, and cost observability for deployed models
- Create alerting and autoscaling policies aligned with SLAs and maintain incident runbooks for model services
- Design data contracts and implement ETL/ELT pipelines using Spark and Databricks with testing and backfills
- Apply privacy-by-design principles including PII handling, tokenization, and secure secrets management
- Design and execute experiments including A/B testing and counterfactual evaluation with defined success criteria
- Integrate models via APIs and SDKs with business rules and fallback mechanisms for graceful degradation
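The drift-monitoring responsibility above is often implemented with a simple summary statistic over feature distributions. Below is a minimal sketch of the Population Stability Index in plain Python; the bin count and the 0.1/0.25 thresholds are common conventions assumed here for illustration, not requirements stated in this posting:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a live sample of one feature.

    PSI < 0.1 is commonly read as "no drift", 0.1-0.25 as "moderate",
    and > 0.25 as "significant" -- these thresholds are conventions.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clamp live values outside the reference range
        total = len(values)
        # Smooth empty bins so log() below never sees zero
        return [max(c / total, 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In production this check would typically run per feature on a schedule, feeding the alerting policies described above when the statistic crosses its threshold.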
Experience & skills
- Hold a Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field
- Demonstrate experience designing, training, and deploying machine learning models and AI solutions
- Possess strong programming skills in Python and experience with TensorFlow, PyTorch, and Scikit-learn
- Demonstrate hands-on experience with MLOps tools including Docker, Kubernetes, MLflow, and CI/CD pipelines
- Possess proficiency in data processing and ETL tools including Spark and Databricks
- Demonstrate knowledge of model optimization techniques including quantization and distillation
- Show familiarity with cloud platforms including Azure, AWS, or GCP and scalable architecture design
- Demonstrate understanding of data governance, privacy standards, and compliance requirements
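To make the quantization requirement above concrete, here is a minimal sketch of the arithmetic behind affine int8 post-training quantization, in plain Python. The [-128, 127] target range and the clamping scheme follow common practice; this is an illustration of the technique, not any specific library's API:

```python
def quantize_int8(values):
    """Affine (asymmetric) int8 quantization of a list of floats.

    scale and zero_point map the observed float range onto [-128, 127];
    dequantizing then recovers each value to within one quantization step.
    """
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include 0 so it maps exactly
    scale = (hi - lo) / 255 or 1.0       # guard against an all-zero input
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 codes back to approximate float values."""
    return [(qi - zero_point) * scale for qi in q]
```

Shrinking weights and activations to 8-bit integers this way is what buys the latency and memory improvements referenced in the responsibilities, at the cost of a bounded per-value rounding error.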