Who are we?

Careem is the leading technology platform for the greater Middle East. A pioneer of the region’s ride-hailing economy, Careem is expanding services across its platform to include payments, delivery and mass transportation. Careem’s mission is to simplify and improve the lives of people and build a lasting institution that inspires. Established in July 2012, Careem operates in more than 130 cities across 16 countries and has created more than one million job opportunities in the region.

About your new team

We are on a mission to build out a much larger dream Data Warehouse team here at Careem. As a team motto, we truly believe in the famous “no brilliant jerks” policy, a summary of which is outlined below:
On a dream team, there are no “brilliant jerks.” The cost to teamwork is just too high. Our view is that brilliant people are also capable of decent human interactions, and we insist upon that. When highly capable people work together in a collaborative context, they inspire each other to be more creative, more productive and ultimately more successful as a team than they could be as a collection of individuals.

The Data Warehouse team at Careem builds and supports solutions to organize, process and visualize large amounts of data. You will be working with technologies such as Hive, Spark, Spark Streaming, Kafka, Python, Redash, Presto, Tableau and many others to help Careem become a data-informed company. Some of the problems the team is working on are: Customer 360, ELT engineering, reporting infrastructure, data reliability, data discovery, and access management.

About your new role

  • Work with engineering and business stakeholders to understand data requirements
  • Build and refine data flow and ETL processes
  • Perform data cleansing and enhance data quality
  • Produce and maintain strong documentation
  • Deliver work with quality, performance, scalability, and maintainability in mind
  • Work in a collaborative, fast-paced Agile team environment

You have:

  • 5+ years of experience with designing, building and maintaining scalable ETL pipelines
  • Good understanding of data warehousing concepts and modeling techniques
  • Hands-on experience automating repetitive tasks in any scripting language, preferably Python
  • Hands-on experience satisfying data needs using Spark SQL
  • Ability to dig into ETL pipeline issues, understand the business logic, and provide permanent fixes
  • Desire to learn about new tools and technologies

Good to have:

  • Experience with CI/CD using Jenkins, Terraform or other related technologies
  • Familiarity with Docker and Kubernetes
  • Experience working with real-time data processing using Kafka, Spark Streaming or similar technologies
  • Experience with workflow orchestration engines such as Airflow or Luigi

Benefits Summary:

  • Competitive remuneration 
  • Premium medical insurance (including spouse and children)
  • Unlimited leave* 
  • Discounted Careem rides
  • Entrepreneurial working environment
  • Flexible working arrangements
  • Mentorship and career growth
