Property Finder is the leading digital real estate platform in the Middle East and North Africa region. A UAE-born startup, Property Finder has expanded its operations to Qatar, Bahrain, Saudi Arabia and Egypt over the years, and recently acquired a significant stake in Zingat in Turkey. The company is one of the largest technology startups in the region and is on a journey to becoming a unicorn. We aspire to create a lighthouse technology company that will have a lasting impact on the entire tech ecosystem in our region.

The wealth of data generated across our platforms allows our development teams to craft advanced applications that help consumers and businesses make the best possible decisions, with easy-to-use tools offering accurate, deep insights.

You will be part of a growing team of qualified data professionals (six today), within a wider data team, working to deliver high-quality data solutions to all our data science, business intelligence and product development teams. You will enable these teams to work as autonomously as possible across a well-architected, secure, resilient and performant data estate, providing them with tools that increasingly allow them to self-provision and self-configure pipelines, reports and datasets.

The Data team is expected to deploy the latest techniques, tooling, best-in-class third-party products and supporting methodologies to deliver the most advanced B2C/B2B data-driven products any brand presents to customers in MENA.

Our new data stack consists of Airflow/Fivetran, Snowflake, dbt and Looker, with supporting AWS technology and custom software development as necessary.
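
To illustrate how these pieces fit together, here is a minimal sketch of the orchestration pattern such a stack typically uses: an Airflow DAG that runs an extract/load step and then triggers dbt transformations in the warehouse. The DAG id, loader script and dbt project path are hypothetical, not taken from this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: load raw data, then run dbt models.
    with DAG(
        dag_id="listings_daily",            # hypothetical name
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Stand-in for a managed Fivetran sync or a custom loader.
        load_raw = BashOperator(
            task_id="load_raw",
            bash_command="python load_listings.py",  # hypothetical script
        )
        # Run dbt transformations in the warehouse (e.g. Snowflake).
        run_dbt = BashOperator(
            task_id="run_dbt",
            bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical path
        )
        load_raw >> run_dbt  # transform only after the load succeeds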

The role is focused on developing self-service data tooling for use by BI developers and data scientists across the firm, as well as the integration and curation of data for business users. These initiatives are managed with engineering rigour and careful consideration of security, privacy and regulatory requirements.

You will also be responsible for continuously maintaining and improving our data warehouse, data lake, data ingestion (ETL/ELT) procedures and internal and external reporting applications (APIs), ensuring that growth in the volume and variety of data doesn't compromise existing service commitments.

You will work closely with backend engineers to ensure that the Data team delivers reliable and performant services that enable widespread consumption of data, and works with the most appropriate technologies in the market.

RESPONSIBILITIES

  • Develop and maintain data pipeline applications and self-service tooling.
  • Ensure a consistent flow of high-quality data into our environment (batch, CDC, API).
  • Curate data to enable business-user self-service.
  • Work closely with our BI and Data Science teams, providing insights and support to create the best architecture for our data products.
  • Support the AWS infrastructure required to serve data pipelines and APIs.

YOUR PROFILE

Essential

  • Business- and value-focused, with stakeholder management experience.
  • Professional knowledge of Python, PEP 8 and coding quality standards.
  • Experience with ETL/ELT pipeline design and processing in batch and near real time, including DAG-based orchestration tools such as Airflow or Luigi.
  • Robust SQL and data modelling skills, with optimization knowledge for different analytics workloads and scenarios.
  • Experience with pub/sub and streaming systems such as RabbitMQ, AWS Kinesis or Kafka (a minimal consumer sketch follows this list).
  • Familiarity with AWS services and concepts (VPC, IAM, RDS, EC2).
  • Experience using Docker, Kubernetes and Terraform.
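
As referenced above, here is a minimal sketch of stream consumption, using AWS Kinesis via boto3 as one example. The stream name, region and shard id are hypothetical; a production consumer would typically iterate over all shards or use the Kinesis Client Library instead.

    import boto3

    # Hypothetical stream details; replace with real values.
    kinesis = boto3.client("kinesis", region_name="me-south-1")

    shard_iterator = kinesis.get_shard_iterator(
        StreamName="listing-events",        # hypothetical stream name
        ShardId="shardId-000000000000",
        ShardIteratorType="LATEST",         # read only new records
    )["ShardIterator"]

    response = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
    for record in response["Records"]:
        payload = record["Data"]            # raw bytes from the producer
        print(payload.decode("utf-8"))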

Nice to have

  • Experience using Snowplow.
  • Experience with cloud-based data warehouses, e.g. Snowflake or BigQuery.
  • Experience with MPP databases and distributed systems.
  • Experience with Spark and Spark Streaming, or similar solutions such as Apache Flink or Apache Beam.
  • Experience with distributed SQL engines such as Presto or Impala.

Apply for this Job