
Salla
Senior Data Engineer
- Permanent
- Mecca, Saudi Arabia
- Experience: 5 - 10 yrs
Job expiry date: 11/05/2026
Job overview
- Date posted: 28/03/2026
- Location: Mecca, Saudi Arabia
- Salary: SAR 20,000 - 30,000 per month
Job description
The Senior Data Engineer designs, builds, implements, maintains, documents, and optimizes highly scalable data pipelines for both real-time and batch processing. The role calls for deep expertise in ClickHouse, including schema design, ingestion optimization, query performance tuning, and cluster management.

The engineer builds and optimizes data systems that support accurate, low-latency analytics and reporting; develops and maintains solutions for streaming and serverless data processing; and collaborates with cross-functional teams to implement and support end-to-end analytics workflows. The role is accountable for data quality, reliability, performance, and high availability across the data platforms, and for monitoring, troubleshooting, and optimizing the underlying infrastructure. Real-time processing builds on technologies such as Apache Kafka, Flink, and Spark Streaming, backed by a deep understanding of distributed systems architecture with an emphasis on scalability, reliability, and fault tolerance.

The position requires proficiency in several programming languages: Python for data pipelines, automation scripts, and integrations; Go for high-performance data services and tools; TypeScript for data-related front-end or service-side applications; and Bash/Shell for data-operations automation. Rust is a plus for building memory-safe, high-performance systems. Hands-on experience with a cloud platform (AWS, GCP, or Azure) and familiarity with containerization and orchestration tools such as Docker and Kubernetes are also required.

The engineer will help build scalable real-time analytics solutions that let teams turn real-time insights into better customer experience and business growth, while mentoring junior engineers and contributing to the continuous improvement of engineering practices in an inclusive workplace compliant with applicable laws and regulations.
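For candidates weighing the ClickHouse requirement, the following is a minimal sketch of the kind of schema design and batched ingestion the role involves. It assumes the clickhouse-connect Python driver and a local server; the `page_views` table and its columns are hypothetical examples, not Salla's actual schema.

```python
# A minimal sketch, assuming the clickhouse-connect driver and a local server;
# the `page_views` table and its columns are hypothetical examples.
from datetime import datetime

import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# MergeTree with a time-based partition and a sort key chosen for the
# expected query pattern (filter by site, then time range).
client.command("""
    CREATE TABLE IF NOT EXISTS page_views (
        event_time DateTime,
        site_id    UInt32,
        url        String,
        latency_ms UInt16
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_time)
    ORDER BY (site_id, event_time)
""")

# Batched inserts (thousands of rows at a time) keep ingestion efficient;
# many small inserts create excessive parts and hurt merge performance.
rows = [
    [datetime(2026, 3, 28, 12, 0, 0), 1, "/home", 42],
    [datetime(2026, 3, 28, 12, 0, 1), 1, "/checkout", 87],
]
client.insert("page_views", rows,
              column_names=["event_time", "site_id", "url", "latency_ms"])

# A typical low-latency analytics query over the sort key.
result = client.query(
    "SELECT site_id, count(), avg(latency_ms) "
    "FROM page_views GROUP BY site_id"
)
print(result.result_rows)
```

The sort key and partitioning are the main levers the posting's "schema design optimization" refers to: choosing them to match the dominant query pattern is what keeps analytics queries low-latency at high ingest volumes.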
Key responsibilities
- Design, implement, maintain, and document highly scalable data pipelines for real-time and batch processing on ClickHouse, covering schema design, ingestion efficiency, query performance tuning, and cluster management, to support high-throughput ingestion and analytics workloads for business intelligence and operational analytics.
- Build and optimize data systems that deliver accurate, low-latency analytics and reporting, applying distributed systems principles (scalability, reliability, fault tolerance) across both real-time and batch workloads on the analytics platform.
- Develop and maintain streaming and serverless data processing solutions using Apache Kafka, Flink, and Spark Streaming, covering real-time event ingestion, transformation, enrichment, and delivery so that teams can act on near-real-time insights; a streaming sketch follows this list.
- Collaborate with cross-functional teams including analytics, product, engineering, and business stakeholders to implement and support end-to-end analytics workflows, ensuring seamless integration between data sources, processing layers, and reporting environments.
- Ensure data quality, reliability, and performance across the platform by implementing monitoring, alerting, validation, and data governance best practices, maintaining high availability of analytics infrastructure and preventing data inconsistencies across systems.
- Monitor, troubleshoot, and optimize data infrastructure deployed on cloud platforms such as AWS, GCP, or Azure, leveraging containerization and orchestration tools including Docker and Kubernetes to maintain system scalability, availability, and performance.
- Develop automation in Python, Go, TypeScript, and Bash/Shell to streamline data operations, pipeline orchestration, deployment processes, and infrastructure maintenance across distributed environments; a small monitoring example also follows this list.
- Mentor junior engineers, contribute to continuous improvement of engineering practices, and promote best practices in data engineering, distributed systems architecture, and real-time analytics solutions while supporting business growth and enhancing customer experience.
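To illustrate the streaming responsibility above, here is a minimal consumer-loop sketch. It assumes the kafka-python client (the posting names Kafka, not a specific client library); the `events` topic, message schema, and sink are hypothetical.

```python
# A minimal sketch of a streaming ingestion loop, assuming the kafka-python
# client; the `events` topic, message schema, and sink are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

BATCH_SIZE = 500
batch = []

def flush(rows):
    # Placeholder sink: in practice this would be a batched insert into
    # ClickHouse or another analytics store.
    print(f"flushing {len(rows)} rows")

for message in consumer:
    event = message.value
    # Enrichment/transformation step: derive the fields the analytics
    # layer queries on, rather than parsing raw payloads at read time.
    batch.append({
        "event_time": event["ts"],
        "site_id": event["site"],
        "url": event.get("url", ""),
    })
    if len(batch) >= BATCH_SIZE:
        flush(batch)
        batch.clear()
```

Buffering into batches before flushing is the same ingestion-efficiency concern the ClickHouse sketch notes: per-event writes are the usual cause of poor throughput in this kind of pipeline.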
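And a small example of the monitoring and validation automation the role covers: a data-freshness check. It assumes clickhouse-connect and reuses the hypothetical `page_views` table from the first sketch; the alerting step is a placeholder.

```python
# A minimal sketch of an automated freshness check, assuming
# clickhouse-connect and the hypothetical `page_views` table from above;
# the alerting hook is a placeholder.
import sys
from datetime import datetime, timedelta, timezone

import clickhouse_connect

MAX_LAG = timedelta(minutes=5)

client = clickhouse_connect.get_client(host="localhost", port=8123)
latest = client.query("SELECT max(event_time) FROM page_views").result_rows[0][0]

lag = datetime.now(timezone.utc) - latest.replace(tzinfo=timezone.utc)
if lag > MAX_LAG:
    # In production this would page on-call or post to an alerting channel.
    print(f"ALERT: page_views is {lag} behind (threshold {MAX_LAG})")
    sys.exit(1)
print(f"OK: page_views lag {lag}")
```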
Experience & skills
- 5+ years of experience in data engineering or a related field, with hands-on experience building scalable data pipelines for real-time and batch processing in analytics and reporting environments.
- Strong expertise in ClickHouse, including schema design, ingestion optimization, query performance tuning, and cluster management for high-volume analytics workloads and distributed data systems.
- Proven experience with real-time data processing technologies such as Apache Kafka, Flink, or Spark Streaming, including event-driven architectures and real-time analytics pipelines.
- Deep understanding of distributed systems architecture, with emphasis on scalability, reliability, and fault tolerance, including high-throughput data ingestion and distributed query processing.
- Proficiency in Python (data pipelines and automation scripts), Go (high-performance data services), TypeScript (service-side or data-related applications), and Bash/Shell scripting (automation and operational workflows).
- Nice to have: experience with Rust, particularly for building memory-safe, high-performance systems that support data infrastructure and analytics services.
- Hands-on experience with cloud platforms such as AWS, GCP, or Azure, deploying and managing scalable data infrastructure and analytics environments.
- Familiarity with containerization and orchestration tools, including Docker and Kubernetes, for deploying, scaling, and managing distributed data engineering workloads.
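On the last point, a minimal sketch of operating a data workload on Kubernetes from Python, assuming the official `kubernetes` client; the `pipeline-worker` deployment and `data` namespace are hypothetical names, not an actual Salla setup.

```python
# A minimal sketch using the official kubernetes Python client; the
# deployment and namespace names are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig, as kubectl would
apps = client.AppsV1Api()

# Inspect the current state of the consumer deployment.
dep = apps.read_namespaced_deployment("pipeline-worker", "data")
print(f"replicas: {dep.status.ready_replicas}/{dep.spec.replicas}")

# Scale out during a backlog, e.g. when consumer lag crosses a threshold.
apps.patch_namespaced_deployment_scale(
    "pipeline-worker", "data", {"spec": {"replicas": 4}}
)
```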