Property Finder is the leading digital real estate platform in the Middle East and North Africa (MENA) region. A UAE-born startup, Property Finder has expanded its operations to Qatar, Bahrain, Saudi Arabia, and Egypt over the years, and recently acquired a significant stake in Turkey's Zingat. The company is one of the largest technology start-ups in the region and is on a journey to becoming a unicorn. We aspire to create a lighthouse technology company that will have a lasting impact on the entire tech ecosystem in our geography.
As the market-leading property portal across MENA, we seek to leverage vast amounts of valuable data about properties for purchase or rental, customers (website visitors), real estate agents, brokers, and major property developments.
This wealth of information allows our development teams to craft advanced applications that help consumers and businesses make the best possible decisions, through easy-to-use tools that deliver accurate and deep insights.
You will be part of a growing, cross-functional data team of qualified data professionals (eight today), delivering high-quality data solutions to all our data science, business intelligence, and product development teams. You will enable these teams to work as autonomously as possible across a well-architected, secure, resilient, and performant data estate, and you will provide them with tools that increasingly allow them to self-provision and self-configure pipelines, reports, and datasets.
The data team deploys the latest techniques, tooling, best-in-class third-party products, and supporting methodologies to deliver the most advanced B2C/B2B data-driven products that any brand presents to customers in MENA.
Our new data stack consists of Airflow and Fivetran for orchestration and ingestion, Snowflake for warehousing, dbt for transformation, and Looker for business intelligence, supported by AWS services and custom software development as necessary.
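As an illustration of how these pieces fit together, here is a minimal sketch of an Airflow DAG that runs dbt transformations after data lands in Snowflake; the DAG name and dbt project path are assumptions for the example, not our production code.

```python
# Minimal sketch: orchestrating dbt with Airflow on top of Snowflake.
# Fivetran loads raw data into Snowflake on its own schedule; this DAG
# only triggers the downstream dbt models and tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_run",  # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # assumed path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test
```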
The role focuses on supporting data products and pipelines developed by data engineers, BI engineers, and data scientists. These initiatives will be managed with engineering rigor and careful consideration of security, privacy, and regulatory requirements.
You will also be responsible for maintaining and continuously improving the data infrastructure required to run our data warehouse, data lake, data ingestion (ETL) procedures, and internal and external reporting applications (APIs). You will ensure that growth in the volume and variety of data doesn’t compromise existing service commitments.
You will work closely with data engineers and backend engineers to ensure that the data team delivers reliable, performant data products and pipelines that enable widespread consumption of data, and that it works with the most appropriate technologies in the market.
Responsibilities

- Implement CI/CD processes
- Work closely with the data team by supporting, scaling, and deploying data pipelines, APIs, ML models, and ETL solutions
- Take ownership of the AWS infrastructure required to service data pipelines and APIs
- Implement good DevOps practices
- Capture provisioned infrastructure as code (see the sketch after this list)
- Ensure a consistent flow of high-quality data into our environment (batch, CDC, API)
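To give a flavour of "infrastructure as code" in this context, below is a minimal AWS CDK (Python) sketch that provisions a landing bucket and an events queue; the stack and resource names are hypothetical.

```python
# Illustrative AWS CDK v2 (Python) stack: a raw-data landing bucket and a
# queue decoupling ingestion events from downstream pipelines.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_sqs as sqs


class DataPlatformStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for raw, ingested data (batch and CDC loads).
        s3.Bucket(
            self,
            "RawLandingBucket",  # hypothetical resource name
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
        )

        # Queue that decouples ingestion events from downstream consumers.
        sqs.Queue(
            self,
            "IngestionEvents",
            visibility_timeout=cdk.Duration.seconds(300),
        )


app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```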
Requirements

- Experience with AWS cloud services and concepts (VPC, IAM, RDS, EC2)
- Containerization technologies (Docker)
- Infrastructure provisioning (Terraform, AWS CDK)
- CI/CD (Jenkins, GitLab)
- Monitoring (Datadog, CloudWatch)
- Distributed publish-subscribe messaging systems (AWS Kinesis, Kafka, SQS, RabbitMQ); see the sketch after this list
- Database administration (RDS, MongoDB/DocumentDB, Elasticsearch, Redshift)
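For the messaging item above, here is a small boto3 sketch of publishing an event to a Kinesis stream; the stream name and payload shape are assumptions for illustration.

```python
# Illustrative Kinesis producer using boto3. Partitioning on the listing ID
# keeps each listing's events ordered within a shard.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")  # assumed region


def publish_listing_event(listing_id: str, event_type: str) -> None:
    """Publish a property-listing event for downstream consumers."""
    kinesis.put_record(
        StreamName="listing-events",  # hypothetical stream name
        Data=json.dumps({"listing_id": listing_id, "event": event_type}).encode("utf-8"),
        PartitionKey=listing_id,
    )


publish_listing_event("12345", "price_updated")
```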
Nice to have
- Knowledge of Python and PHP 8, and of code quality and coding standards
- Experience with cloud-based data warehouses, e.g., Snowflake or BigQuery
- Experience with MPP databases and distributed systems
- Experience with Spark, Spark Streaming, or similar solutions such as Apache Flink or Apache Beam (see the sketch after this list)
- Experience with distributed SQL engines such as Presto or Impala
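And for the streaming item, a short PySpark Structured Streaming sketch that reads events from Kafka and writes them to a data lake as Parquet; the topic, broker address, and paths are placeholders, and the job assumes the spark-sql-kafka connector package is available.

```python
# Illustrative Structured Streaming job: Kafka in, Parquet out.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("listing-events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "listing-events")             # placeholder topic
    .load()
)

# Kafka keys/values arrive as binary; cast to strings before parsing further.
parsed = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/lake/listing_events")  # placeholder path
    .option("checkpointLocation", "/data/checkpoints/listing_events")
    .start()
)
query.awaitTermination()
```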