Property Finder is the leading digital real estate platform in the Middle East and North Africa region. A UAE-born startup, Property Finder has expanded its operations to Qatar, Bahrain, Saudi Arabia, and Egypt over the years, and recently acquired a significant stake in Zingat in Turkey. The company is one of the largest technology startups in the region and is on a journey to becoming a unicorn. We aspire to create a lighthouse technology company that will have a lasting impact on the entire tech ecosystem in our region.

As a company in a rapid growth stage, we see our Tech Hub in Turkey as a strategic investment to scale our product and tech talent. We are actively looking for passionate and talented individuals in product, tech, and design to extend the Tech Hub's capabilities. Diversity and great talent meet at Property Finder, and both are a big part of our culture.

You will be part of a cross-functional team of data professionals working together to build high-quality data solutions for the data science, business analytics, and product development teams. You will enable these teams to work as autonomously as possible across a well-architected, secure, resilient, and performant data estate, providing them with tools that increasingly allow them to self-provision and self-configure pipelines, reports, and datasets. You will be responsible for data integrations and for curating datasets for business users.
You will help consumers and businesses make the best possible decisions with easy-to-use tools that deliver accurate, deep insights.

The Data team is expected to deploy the latest techniques, tooling, best-in-class third-party products, and supporting methodologies to deliver the most advanced data-driven products any brand presents to customers in MENA. The Data team's initiatives are managed with engineering rigor and careful consideration of security, privacy, and regulatory requirements.
Our data stack consists of various AWS cloud services, Snowflake, Kubernetes, Python/Airflow, Fivetran, DBT, Spark/Snowplow, Jupyter Notebooks, and custom software as necessary.

RESPONSIBILITIES

  1. Manage the infrastructure required to service data products.
  2. Work closely with our data science, business analytics, and product teams, providing insights and support to create the best architecture for our data products.
  3. Develop and maintain data pipelines and self-service data tooling.
  4. Ensure a consistent flow of high-quality data into our environment using streaming, batch, and CDC processes.
  5. Maintain and develop data APIs.
  6. Create analytics engineering workflows that move, clean, and transform raw data into consumable information and business logic.
  7. Maintain a large, multi-terabyte data warehouse, including performance tuning and data retention and purge processes.
  8. Tune query performance to optimize data load, materialization, and transformation times.
  9. Curate datasets that enable business users to self-serve.
  10. Champion data quality, integrity, and reliability throughout the department by designing and promoting best practices.
  11. Ensure data accuracy and data quality by creating a “single version of the truth” and maintaining appropriate documentation.
  12. Collaborate and communicate effectively with stakeholders to deliver solutions.
  13. Share knowledge and best practices to improve processes.

YOUR PROFILE

Essential

  1. Robust SQL and data modeling skills, and optimization knowledge for different analytics workloads and scenarios.
  2. Experience with modern cloud data warehouse and data lake solutions such as Snowflake, BigQuery, Redshift, or Databricks.
  3. Experience with ETL/ELT and batch and streaming data processing pipelines.
  4. Experience with containerization technologies like Docker.
  5. Ability to research and troubleshoot data issues, providing fixes and proposing both short- and long-term solutions.
  6. Professional knowledge of Python and PEP 8, with a commitment to high code quality and sound engineering principles and guidelines.
  7. Experience with pub/sub, queuing, and streaming frameworks such as AWS Kinesis, Kafka, or SQS.
  8. Experience with different storage solutions and formats: Parquet, Avro, ORC, Iceberg.
  9. Experience with pipeline orchestration tools like Airflow.
  10. Experience with AWS services and concepts (EC2, RDS, EMR, EKS, VPC, IAM).

Nice to have

  1. Experience with workload orchestration technologies like Kubernetes.
  2. Experience with ELT tools like Fivetran, Matillion, or Airbyte, or other data ingestion and transformation tools like DBT.
  3. Experience with Spark, Spark Streaming, or similar solutions like Apache Flink or Apache Beam.
  4. Familiar with data warehousing and dimensional data modeling guidelines and best practices, and able to define new ones when required.
  5. Familiar with data modeling concepts such as 3NF.
  6. Familiar with Terraform.
  7. Familiar with GCP and Google Analytics.
  8. Experience with real-time analytics solutions like ClickHouse, Rockset, Pinot, or Druid.
  9. Familiar with consumer behavior data collection tools like Tealium, Snowplow, or Segment.io.

Apply for this Job
