About us:

Careem is the leading technology platform for the greater Middle East. A pioneer of the region’s ride-hailing economy, Careem is expanding services across its platform to include payments, delivery and mass transportation. Careem’s mission is to simplify and improve the lives of people and build a lasting institution that inspires. Established in July 2012, Careem operates in more than 120 cities across 16 countries and has created more than one million job opportunities in the region 🌎.

About the role:

As a Data Ops Engineer at Careem, you’ll be part of a team that builds solutions and tools to enable, organize, and process large amounts of data. You will work with batch and real-time technologies such as Hadoop, Hive, Spark, Spark Streaming, and Kafka, as well as cloud computing and storage, to help Careem become a data-driven company.

Some of the problems the team is working on include building a tool that lets business areas create real-time metrics, automating jobs and pipelines, and delivering fast, reliable software.

Requirements:

Essential:

  • 2+ years of hands-on experience building and managing scalable big data systems
  • 2+ years of experience working with big data technologies such as Spark and/or Kafka
  • Proficiency in at least one of the following scripting languages: Python or Bash
  • Ability to debug critical issues in the big data ecosystem and come up with clear solutions and fixes
  • Understanding of distributed processing tools such as Hadoop, Hive, ZooKeeper, Presto, Zeppelin, Airflow, etc.
  • Ability to dig deep into issues in production-critical systems and provide permanent fixes
  • Exposure to open-source big data services such as Spark, Hive, Presto, Kafka, etc.
  • Experience implementing CI/CD and maintaining a big data ecosystem
  • Experience with one of the following automation tools: Chef, Ansible, or Puppet
  • 1+ years of experience with Packer/Terraform

Desirable:

  • Experience working with data science/analytics teams and building scalable, stable systems
  • Exposure to enterprise-level services such as Cloudera, Databricks, AWS, etc.
  • Knowledge of containerization (Docker) and supporting technologies
  • Exposure to AWS data services and technologies (EC2, S3, EMR, Kinesis, Lambda, Glue, Data Pipeline, DynamoDB)
  • Knowledge of relational and non-relational databases such as MariaDB, MySQL, HBase, or MongoDB
  • Understanding of Elasticsearch

…Oh, and also, we’re looking for someone who can take ownership, who is of service and who shoots to the moon and beyond. Is this you? We’re looking forward to seeing your application! 🚀

What do we offer you?

Working in an international environment with colleagues from 70+ nationalities, a flat hierarchy, flexible working hours, unlimited (paid!) holidays, the latest technologies, and full ownership!

Apply for this Job
