
Luxoft
Senior Databricks Data Engineer
- Permanent
- Abu Dhabi, United Arab Emirates
- Experience: 2 - 5 years
Job expiry date: 08/05/2026
Job overview
Date posted: 24/03/2026
Location: Abu Dhabi, United Arab Emirates
Salary: AED 20,000 - 30,000 per month
Job description
The Senior Databricks Data Engineer supports the development, migration, and optimization of an enterprise data platform within the investment domain, working with Databricks, PySpark, Python, SQL, and cloud-based data platforms. The role focuses on building and maintaining scalable data pipelines and data models using Delta Lake and lakehouse architecture patterns to support investment analytics, portfolio management, risk analysis, and trading data workflows.

The position requires hands-on expertise in Databricks architecture, including job orchestration, cluster management, and performance tuning, along with advanced SQL and data modelling techniques such as dimensional modelling and query optimization. Responsibilities include migrating datasets, pipelines, and transformation logic from Palantir Foundry to Databricks Delta Lake, covering pipeline conversion, data model redesign, and production cutover. The engineer works extensively with financial datasets, including market data, portfolio holdings, transactions, pricing data, risk metrics, and financial instruments.

The role also involves building ETL and ELT pipelines that make data available for analytics, reporting, and portfolio management systems; implementing data validation, reconciliation, and data quality frameworks to ensure accuracy and consistency; and maintaining data lineage, documentation, and governance aligned with financial industry standards.

Additional responsibilities include optimizing Spark jobs, cluster configurations, and storage formats for performance and cost efficiency, and supporting the monitoring, troubleshooting, and performance tuning of production pipelines using tools such as Dynatrace and Datadog. The position operates in a cloud environment (Azure preferred), with responsibilities spanning storage, security, networking, and enterprise data architecture, and requires collaboration with data engineers, quantitative analysts, and investment stakeholders to deliver reliable, performant data solutions.
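As context for candidates, the following is a minimal PySpark sketch of the kind of Delta Lake pipeline step this role involves. It is illustrative only; the paths, table names, and columns (such as silver.portfolio_holdings) are assumptions, not details taken from this posting.

```python
# Minimal sketch only. All paths, table names, and column names below are
# hypothetical assumptions for illustration, not details from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("holdings-ingest").getOrCreate()

# Read a raw landing-zone extract (hypothetical path and schema).
raw = spark.read.option("header", True).csv("/mnt/landing/holdings/")

# Typical bronze-to-silver transformation: typing, derivation, de-duplication.
holdings = (
    raw
    .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
    .withColumn(
        "market_value",
        F.col("quantity").cast("double") * F.col("price").cast("double"),
    )
    .dropDuplicates(["portfolio_id", "instrument_id", "trade_date"])
)

# Persist as a Delta Lake table partitioned by trade date (hypothetical target).
(
    holdings.write
    .format("delta")
    .mode("append")
    .partitionBy("trade_date")
    .saveAsTable("silver.portfolio_holdings")
)
```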
Required skills
Key responsibilities
- Develop and maintain scalable data pipelines and transformation workflows using Databricks, PySpark, and SQL for enterprise data platforms
- Support migration of datasets, pipelines, and transformation logic from Palantir Foundry to Databricks Delta Lake including pipeline conversion and production cutover
- Work with investment and financial datasets including market data, portfolio holdings, transactions, pricing data, risk metrics, and financial instruments
- Build and maintain ETL and ELT pipelines enabling data availability for investment analytics, reporting, and portfolio management systems
- Implement data validation, reconciliation, and data quality frameworks to ensure financial data accuracy and consistency (a minimal sketch follows this list)
- Optimize Spark jobs, cluster configurations, and storage formats to improve performance and cost efficiency within Databricks environments
- Maintain data lineage, documentation, and governance practices aligned with financial industry standards and regulatory expectations
- Support monitoring, troubleshooting, and performance tuning of production data pipelines using observability tools such as Dynatrace and Datadog
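The validation and reconciliation responsibility above could, for example, look like the following minimal sketch of a post-migration reconciliation check. All table names, paths, and columns are hypothetical assumptions.

```python
# Minimal sketch only. Table names, paths, and columns are hypothetical
# assumptions, not details from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

legacy = spark.read.parquet("/mnt/legacy_export/transactions/")  # e.g. a legacy export
migrated = spark.table("silver.transactions")                    # migrated Delta table

legacy_total = legacy.agg(F.sum("amount")).first()[0] or 0.0
migrated_total = migrated.agg(F.sum("amount")).first()[0] or 0.0

checks = {
    "row_count_match": legacy.count() == migrated.count(),
    # Control-total comparison with a small tolerance for float arithmetic.
    "control_total_match": abs(legacy_total - migrated_total) < 0.01,
    "no_null_keys": migrated.filter(F.col("transaction_id").isNull()).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Reconciliation checks failed: {failed}")
```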
Experience & skills
- Demonstrate advanced Databricks experience including Databricks architecture, Delta Lake, job orchestration, cluster management, and performance tuning
- Possess expert-level PySpark and Python skills with the ability to design, optimize, and refactor distributed data processing workflows
- Exhibit advanced SQL and data modelling expertise, including dimensional modelling, lakehouse architecture patterns, and query optimization (a tuning sketch follows this list)
- Show experience working with investment data such as market data, portfolio holdings, transactions, pricing data, risk metrics, and financial instruments
- Demonstrate hands-on experience with cloud platforms preferably Azure including storage, security, and networking considerations
- Bring experience with Palantir Foundry, including pipelines, ontologies, data lineage, transformations, and platform governance (a strong advantage)
- Demonstrate proven migration experience from Palantir to Databricks including pipeline conversion, data model redesign, and production cutover
- Exhibit familiarity with monitoring and observability tools such as Dynatrace or Datadog
- Hold relevant certifications such as Databricks certification, cloud certifications (Azure or AWS), or enterprise data architecture certifications
- Demonstrate English language proficiency at C1 Advanced level
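The query optimization and performance tuning items above might, for instance, involve routine Delta Lake maintenance such as the following sketch. The table and column names are hypothetical assumptions.

```python
# Minimal sketch only. The table and column names are hypothetical assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on common filter columns so that
# selective queries on instrument and date can skip unrelated files.
spark.sql(
    "OPTIMIZE silver.portfolio_holdings ZORDER BY (instrument_id, trade_date)"
)

# Remove data files no longer referenced by the table
# (subject to the default 7-day retention period).
spark.sql("VACUUM silver.portfolio_holdings")
```

Z-ordering pays off only when queries actually filter on the chosen columns; the right columns depend on real query patterns rather than the ones assumed here.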