Do you want to change the world? At Cabify, that’s what we’re doing. We aim to make cities better places to live by improving mobility for the people living in them, connecting riders to drivers at the touch of a button. Maybe one day cities will be places where nobody needs a private car. But we’ve still got a long way to go…fancy joining us?

Do you want to be part of the team responsible for enabling a data-driven company, with up to 200k events per second across hundreds of distinct message types? Our mission is to build a platform that works at scale to provide trusted data for the rest of the company.

You will:

In Data Engineering we operate dozens of services (Scala, Go, Python) and pipelines (Apache Beam/Dataflow), as well as our in-house machine learning platform, which includes a feature store. We are a hands-on team: we manage our own infrastructure (GCE and AWS) and Kubernetes clusters (GKE). We are looking for new members, and this is how you will play an important part in helping us achieve our mission:

  • Designing and developing end-to-end data solutions and modern data architectures for Cabify products and teams (streaming ingestion, data lake, data warehouse...)
  • Extracting data from internal and external sources to empower our Data Analytics team.
  • Evolving and maintaining Lykeion, a Machine Learning platform that we have developed along with the Data Science team. 
  • Collaborating with other technical teams to define, execute and release new services and features.
  • Designing and maintaining complex APIs that expose data at scale and help other teams make better decisions.
  • Managing and evolving our infrastructure.
  • Continuously identifying, evaluating and implementing new tools and approaches to maximise development speed and cost efficiency.

What we’re looking for:

  • Great alignment with our principles (we take this very seriously).
  • At least 4 years’ experience coding and delivering complex data projects.
  • Fluent in different programming languages (we work with Python, Scala and Go)
  • Experience with message delivery systems and streaming processing (Kafka, RabbitMQ, Akka streams, Apache Beam…)
  • Deep understanding and application of modern data processing technology stacks and distributed processing (Hadoop, Spark, Apache Beam, Apache Flink...)
  • Deep understanding of different storage technologies (file-based, relational, columnar, document-based, key-value...)
  • Experience with orchestration tools such as Airflow, Luigi, Azkaban.
  • Familiarity with machine learning, especially its lifecycle (features, models, training & evaluation processes, productionizing).
  • Experience with cloud infrastructures (GCP, AWS, Azure)
  • Comfort with automation tools (Terraform, Puppet, Ansible…)
  • Bonus points: 
    • Experience with Google Cloud big data products (Pub/Sub, Dataflow, Bigtable, BigQuery…)
    • Experience with Kubernetes.
    • Experience with Apache Beam and Scio.

The good stuff:

We’re a company full of happy, motivated people and we never want that to change. Here are some more reasons why it rocks to be part of our family.

  • Excellent salary conditions: L4: 45k - 65k
  • We also offer a very competitive stock options plan.
  • Recharge day: the third Friday of every month off!
  • Hybrid model
  • Flexible work environment & hours.
  • Regular team events.
  • Cabify staff free rides.
  • Personal development programs based on our career paths.
  • Annual budget for training.
  • Flexible compensation plan: restaurant tickets, transport tickets, healthcare and childcare.
  • All the equipment you need (you only have to bring your talent).
  • A pet room, so you don’t have to leave your furry friend at home.
  • And last but not least...free coffee and fruit!

       Join us!
