The Data team at Traveloka consists of the Analytics, Data Platform, Data Science, and Machine Learning / AI groups, each comprising a diverse mix of data engineers, data analysts, data scientists, and product managers.

In the Data Warehouse Engineering Team, we're looking for hands-on, motivated individuals who are passionate about using data to provide insights into every area of the business and to influence decision-making. You will work alongside highly experienced data engineers on big data pipelines that process petabytes of data daily.

Responsibilities

Build and maintain the data warehouse:

  • Design, implement, and manage end-to-end data pipelines on top of a GCP environment
  • Develop the ETL framework (ETL development tooling, CI/CD, infrastructure)
  • Design tracking specs and schemas for product features together with the Data Analysts
  • Continuously seek ways to optimize existing data processing to be cost- and time-efficient
  • Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse

Requirements:
  • Minimum 3 years of hands-on experience building data pipelines on top of cloud technology (preferably GCP)
  • Fluent in Python and advanced SQL
  • Familiar with data processing frameworks (e.g., Spark, Dataflow, Dataproc, dbt)
  • Familiar with orchestration tools (e.g., Airflow, Azkaban)
  • Experienced in data modeling and data quality checks
  • Familiar with Docker images and Kubernetes
  • Knowledge of machine learning is an advantage
