We’re an award-winning global communications company operating in nine countries across the Middle East, North Africa, and Southeast Asia. Our strategy is to become the region’s leading digital infrastructure provider. Ooredoo Group’s strategic vision is guided by five key pillars:
Value-Focused Portfolio: Boosting asset returns by focusing on telco operations, towers, data centres, the sea cable business, and fintech.
Strengthen the Core: Optimally using deployed capital and maintaining an appropriate cost structure.
Evolve the Core: Monetising opportunities to generate new revenue streams via programmes focusing on analytics, digitalisation of operations, and partnerships with digital service providers.
People: Building an engaged and empowered workforce through integrated learning programmes, coaching, and mentoring.
Excellence in Customer Experience: Creating superior customer experiences.
From day one, every employee who joins our team becomes an integral part of our success journey. We offer you the chance to enhance your skills, advance your career, and maintain a healthy work-life balance, empowering you to accelerate your personal and professional growth. If you’re looking to realise your full potential, Ooredoo is the employer for you.
The Central MLOps Engineer will be responsible for designing and implementing best practices for Machine Learning Operations (MLOps) across the Group to ensure consistent and efficient deployment, monitoring, and management of AI models. This role will focus on developing and maintaining MLOps frameworks, tools, and processes that can be applied across all OpCos, while providing guidance and support to local MLOps teams to drive consistent, high-quality AI delivery. The Central MLOps Engineer will work closely with Data Engineers, AI Engineers, and OpCo MLOps teams to ensure that AI models are deployed, maintained, and governed in a standardized and scalable manner across the organization.
MLOps Framework Development: Design, develop, and maintain a comprehensive MLOps framework, defining best practices for model deployment, monitoring, and lifecycle management. Ensure these standards are documented and easily accessible for OpCo teams.
Process Standardization: Establish and promote standardized processes, methodologies, and templates for MLOps activities across OpCos, ensuring consistency in AI model deployment and operations.
Tooling and Automation: Identify, implement, and manage MLOps tools (e.g., CI/CD pipelines, model monitoring, versioning tools) that streamline model deployment and management processes. Provide guidance on tool adoption at the OpCo level.
Technical Support and Troubleshooting: Serve as the central point of expertise for complex MLOps issues, providing support and guidance to local OpCo MLOps teams. Troubleshoot and resolve technical challenges related to AI model deployment and operationalization.
Best Practice Dissemination: Conduct training sessions, workshops, and regular knowledge-sharing activities to disseminate best practices and ensure local teams are equipped to follow established MLOps frameworks.
Cross-functional Collaboration: Collaborate closely with Central AI Engineers, Data Engineers, and Solution Architects to align MLOps practices with AI model development and data engineering workflows.
Performance Monitoring and Optimization: Develop and implement centralized monitoring strategies for tracking model performance, ensuring that deployed models across all OpCos meet performance, scalability, and compliance requirements.
Governance and Compliance Oversight: Ensure that all AI models deployed across OpCos comply with the organization’s governance, ethical AI, and data privacy standards. Implement mechanisms for continuous compliance monitoring.
Innovation and Continuous Improvement: Stay updated on emerging MLOps trends and technologies, continuously refining and updating the Group’s MLOps framework to incorporate new tools and practices that enhance operational efficiency and model performance.
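To make the monitoring and governance responsibilities above concrete, centralized model oversight typically reduces to automated checks on each deployed model's live metrics. The following is a minimal, illustrative sketch only; the model name, metric, and threshold are assumptions for the example, not the Group's actual standards or tooling:

```python
# Illustrative sketch: a minimal performance gate of the kind a centralized
# monitoring strategy might apply to every OpCo-deployed model. The metric
# and tolerance below are assumptions, not Group policy.
from dataclasses import dataclass


@dataclass
class ModelHealth:
    model_name: str
    baseline_accuracy: float  # accuracy recorded at deployment time
    live_accuracy: float      # accuracy measured on recent production traffic


def drift_alert(health: ModelHealth, max_drop: float = 0.05) -> bool:
    """Return True when live accuracy has fallen more than `max_drop`
    below the deployment-time baseline, flagging the model for review."""
    return (health.baseline_accuracy - health.live_accuracy) > max_drop


# Example: a hypothetical churn model whose live accuracy fell from 0.91 to 0.82.
churn = ModelHealth("opco-churn-v3", baseline_accuracy=0.91, live_accuracy=0.82)
print(drift_alert(churn))  # 0.09 drop exceeds the 0.05 tolerance -> True
```

In practice a check like this would run on a schedule per OpCo and feed a shared dashboard, so that one threshold policy governs all deployments consistently.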
5+ years of experience in MLOps, DevOps, or a similar role with a focus on machine learning or data science operations.
Proven experience in designing and implementing MLOps frameworks at scale, preferably in multi-location or multi-country environments.
Strong understanding of machine learning model lifecycle management, including deployment, versioning, and monitoring.
Expertise in CI/CD pipelines, containerization, and orchestration tools (e.g., Jenkins, GitLab, Docker, Kubernetes).
Familiarity with cloud-based AI platforms (e.g., AWS SageMaker, Azure ML, Google Cloud AI) and MLOps tools (e.g., MLflow, Kubeflow).
Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s degree in AI, Machine Learning, or Data Science preferred).
Experience working in telecommunications or large-scale technology environments.
Knowledge of data privacy regulations and compliance frameworks, especially in the Middle East and North Africa (MENA) region.
Familiarity with distributed computing and big data technologies (e.g., Apache Spark, Hadoop).
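The model lifecycle management experience called for above (deployment, versioning, monitoring) can be pictured with a toy in-memory registry. This is a simplified sketch of the stage-transition concept that platforms such as MLflow manage at scale; all names and stages here are assumptions for illustration:

```python
# Illustrative sketch: a toy registry modelling the lifecycle stages
# (versioning, staging, production) that real MLOps tooling manages.
# Stage names and the example model name are assumptions.
from typing import Dict, Optional

STAGES = ("none", "staging", "production", "archived")


class ModelRegistry:
    def __init__(self) -> None:
        # model name -> {version number: lifecycle stage}
        self._models: Dict[str, Dict[int, str]] = {}

    def register(self, name: str) -> int:
        """Create the next version of `name`, starting in the 'none' stage."""
        versions = self._models.setdefault(name, {})
        version = max(versions, default=0) + 1
        versions[version] = "none"
        return version

    def transition(self, name: str, version: int, stage: str) -> None:
        """Move a version to a new lifecycle stage, e.g. 'production'."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self._models[name][version] = stage

    def production_version(self, name: str) -> Optional[int]:
        """Return the version currently serving in production, if any."""
        for version, stage in self._models.get(name, {}).items():
            if stage == "production":
                return version
        return None


registry = ModelRegistry()
v1 = registry.register("fraud-scorer")              # v1 created
registry.transition("fraud-scorer", v1, "production")
v2 = registry.register("fraud-scorer")              # v2 enters as 'none'
print(registry.production_version("fraud-scorer"))  # -> 1
```

A shared registry abstraction like this is what lets a central team enforce one promotion process across many OpCos instead of each market inventing its own.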