
Property Finder
Data Platform Intelligence Manager - B2B
- Permanent
- Dubai, United Arab Emirates
- Experience 5 - 10 yrs
Job expiry date: 05/04/2026
Job overview
Date posted
19/02/2026
Location
Dubai, United Arab Emirates
Salary
AED 30,000 - 40,000 per month
Job description
The Data Platform Intelligence Manager - B2B at Property Finder, a leading property portal in the MENA region and a fast-growing proptech organization, leads a high-performing Data Engineering team as a people-first Engineering Manager, focusing on maximizing Team Output × Team Health rather than hands-on coding. Reporting to the Director of Data Platform Intelligence, the role acts as the core engineering partner to the B2B Product and Commercial organizations, co-owning the roadmap for how data powers enterprise offerings and moving beyond fulfilling data requests to building highly scalable data products that deliver actionable customer intelligence, optimize commercial performance, and provide deep behavioral insights.
The mandate includes championing the architectural and operational evolution of the foundational data platform, guiding the transition from traditional descriptive data pipelines to a proactive, AI-enabled intelligence engine by embedding advanced analytics and machine learning capabilities into core infrastructure to drive product monetization, enhanced customer ROI, and long-term retention.
The tech stack includes Python and SQL; orchestration with Dagster (migrating from legacy Airflow); compute and storage on AWS Redshift, S3, Clickhouse, and Athena; transformation with dbt (Core/Cloud) and Fivetran; infrastructure on AWS (ECS/EKS, Glue, Lambda) provisioned with Terraform and Terragrunt; and AI/GenAI capabilities built on AWS Bedrock, AWS Agentcore, and LangChain.
The role drives business OKRs; translates business requirements into technical roadmaps; articulates technical debt, risks, and architectural trade-offs such as cost vs. latency to Product, Finance, and Executive stakeholders; owns Agile delivery processes including Sprint Planning, Standups, and Retrospectives; manages backlogs balancing features, tech debt, and infrastructure investments; and serves as the primary contact for downstream consumers such as Data Science, Analytics, and Product teams. It enforces coding standards, CI/CD practices, and architectural guidelines; monitors SLAs, data freshness, pipeline reliability, and incident response; leads Root Cause Analysis (RCA); champions Cloud Cost Optimization (FinOps) on AWS; automates workflows including documentation, data validation, and metadata; builds natural-language interfaces for enterprise-wide data access; architects scalable feature pipelines for predictive models and monetization algorithms; deploys LLM-powered observability and automated RCA to reduce MTTR; and leverages Agile/Scrum methodologies and project management tools such as Jira and Linear in high-stakes production environments.
Key responsibilities
- Drive Business OKRs by partnering with B2B Product Managers and Commercial stakeholders to translate business requirements into technical roadmaps, prioritize initiatives directly impacting company objectives, ensure every data pipeline and data product is aligned with measurable commercial outcomes, and articulate technical debt, architectural trade-offs such as cost vs. latency, and platform risks in clear language to Product, Finance, and Executive teams.
- Own Agile/Scrum delivery processes including Sprint Planning, Standups, and Retrospectives; collaborate with stakeholders to prioritize and groom backlogs balancing new features, tech debt remediation, and infrastructure investments; ensure predictable, high-quality delivery of scalable data products; and serve as the primary point of contact for downstream data consumers including Data Science, Analytics, and Product teams.
- Architect and evolve the foundational data platform by guiding the migration from legacy Airflow to Dagster, embedding advanced analytics and machine learning capabilities into core infrastructure, building scalable feature pipelines to power predictive models and monetization algorithms, and transforming traditional descriptive data pipelines into a proactive AI-enabled intelligence engine leveraging AWS Bedrock, AWS Agentcore, and LangChain.
- Enforce engineering excellence by implementing coding standards, CI/CD practices, and scalable architectural guidelines across Python and SQL codebases; eliminate recurring system failures by addressing root causes rather than symptoms; proactively remove inefficient processes, low-value work, and unnecessary meetings; and institutionalize best practices across orchestration, transformation, compute, and infrastructure layers including dbt (Core/Cloud), Fivetran, AWS Redshift, Amazon S3, Clickhouse, Amazon Athena, AWS ECS/EKS, AWS Glue, AWS Lambda, Terraform, and Terragrunt.
- Own platform health and reliability by monitoring and maintaining SLAs, data freshness, pipeline reliability, and incident response processes; lead Root Cause Analysis (RCA) with a focus on systemic fixes and prevention of recurrence; deploy LLM-powered observability and automated RCA to drastically reduce MTTR; and champion Cloud Cost Optimization (FinOps) practices on AWS to balance performance, scalability, and cost efficiency.
- Automate and scale data workflows by implementing AI-assisted development practices, automating documentation, data validation, and metadata management, building natural-language interfaces for frictionless enterprise-wide data access, and reducing data cycle times through AI and automation.
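The automated data validation described above can be illustrated with a minimal expectations-style checker. This is a simplified stand-in for frameworks like Great Expectations, not the team's actual tooling, and the column names are hypothetical.

```python
from typing import Any, Callable

# A check takes a row (dict) and returns True if the row passes.
Check = Callable[[dict], bool]

def expect(column: str, predicate: Callable[[Any], bool]) -> Check:
    """Build a row-level check: the column must exist and satisfy the predicate."""
    return lambda row: column in row and predicate(row[column])

def validate(rows: list, checks: list) -> list:
    """Return indices of rows that fail at least one check."""
    return [i for i, row in enumerate(rows) if not all(c(row) for c in checks)]

# Illustrative checks over invented listing columns.
checks = [
    expect("listing_id", lambda v: isinstance(v, int) and v > 0),
    expect("price_aed", lambda v: isinstance(v, (int, float)) and v >= 0),
]
rows = [
    {"listing_id": 101, "price_aed": 1_200_000},
    {"listing_id": -5, "price_aed": 900_000},  # fails the listing_id check
    {"listing_id": 102},                       # missing price_aed
]
```

In a production pipeline these checks would run as a pre-load gate (e.g. a dbt test or an orchestrator asset check) rather than in application code.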
Experience & skills
- Demonstrate 5+ years of hands-on experience leading Data Engineering teams with a proven track record of delivering complex, end-to-end data products, including direct accountability for hiring, developing, retaining, and performance managing high-performing engineers, and experience managing remote or distributed teams in high-growth environments.
- Possess 8+ years of total experience in Data Engineering or Software Engineering with deep technical expertise in Python and SQL, strong knowledge of Cloud Data Warehouses such as Redshift, Snowflake, and BigQuery, hands-on experience with orchestration tools including Airflow, Dagster, and Prefect, and the technical depth required to review code, challenge engineering estimates, identify architectural risks, and guide decisions across compute, storage, transformation, and infrastructure layers.
- Exhibit strong experience with Agile/Scrum methodologies and project management tools such as Jira and Linear; demonstrate the ability to own SLAs, manage production incidents, and operate in high-stakes environments; and show a track record of establishing operational excellence through monitoring, alerting, incident response, and systematic Root Cause Analysis (RCA).
- Preferably bring background in proptech, real estate, or two-sided marketplace platforms; experience with streaming data systems and event-driven architectures; familiarity with ML infrastructure, feature stores, and GenAI integration in data workflows; knowledge of data observability and quality frameworks such as Monte Carlo and Great Expectations; and exposure to FinOps practices and cloud cost optimization strategies within AWS-based ecosystems.
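The LLM-powered automated RCA referenced in the posting can be approximated, for illustration only, by deterministic rule-based triage; in practice an LLM would classify free-form incident logs, and the patterns and category labels below are invented.

```python
import re

# Toy triage rules mapping log patterns to coarse root-cause categories.
# A deterministic stand-in for LLM classification; all labels are invented.
RULES = [
    (re.compile(r"OutOfMemory|MemoryError", re.I), "resource_exhaustion"),
    (re.compile(r"timed? ?out", re.I), "upstream_latency"),
    (re.compile(r"schema|column .* not found", re.I), "schema_drift"),
]

def triage(log_line: str) -> str:
    """Map an error log line to the first matching root-cause category."""
    for pattern, label in RULES:
        if pattern.search(log_line):
            return label
    return "unclassified"
```

Routing incidents to a category like this is the first step toward the MTTR reduction the role targets: a classified incident can be auto-assigned a runbook instead of waiting for manual diagnosis.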