Lana is a new project born out of Cabify's and Maxi Mobility's New Business department. The aim is to create an alternative banking platform optimised to empower "gig economy" workers to get paid more quickly and reliably, and to save and make payments. Lana is a small, dedicated, and highly skilled team focussed on building an awesome product with the potential to grow into a company that changes the lives of millions of people in Latin America who don't currently have access to electronic banking.
The New Business department and Lana work very closely with the Cabify and Easy Taxi teams, with the aim of adding value and exploring business areas that are directly or indirectly related to the core products.
If we’re going to turn our vision into a reality, we’re going to need plenty more bright, ambitious people to join us!
About the position:

We are looking for new members to help us build our Data Engineering capabilities around Financial Services from scratch. Here is how you will play an important part in helping us achieve our mission:

  • Streamline data processing of the original event sources and consolidate them into source-of-truth event logs
  • Build and maintain real-time/batch data pipelines that can consolidate and clean up usage analytics
  • Collaborate with cross-functional teams to define, execute and release new services and features
  • Continuously identify, evaluate and implement new tools and approaches to maximise development efficiency
  • Productionise Machine Learning models
What we’re looking for:
  • At least 3 years' experience coding and delivering complex data projects
  • Advanced knowledge of at least one high-level programming language, and a willingness to learn more
  • Proven track record in Data Engineering
  • A hands-on, continuous-delivery attitude
  • Deep understanding and application of modern data processing technology stacks (Hadoop ecosystem, Google Cloud, AWS)
  • Deep understanding of streaming data architectures and technologies for real-time and low-latency data processing (Apache Spark/Beam)
  • Proficiency with databases and SQL expertise; experience building data pipelines; understanding of NoSQL technologies, including columnar, document, and key-value data stores (other non-relational databases will be considered)
  • Working knowledge of Google Cloud Big Data products (Pub/Sub, Dataproc, Dataflow…)
  • You believe that you can achieve more on a team — that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
  • Bonus points: Understanding of how to develop and architect machine learning models
The good stuff:
We’re a company full of happy, motivated people and we never want that to change. Here are some more reasons why it rocks to be part of our family.
  • Flexible work environment & hours
  • Regular fun team events
  • Cabify staff discount
  • Personal development programmes
  • All the gear you need - just bring yourself.
  • And last but not least: coffee!
