Traveloka is a technology company based in Jakarta, Indonesia. Founded in 2012 by ex-Silicon Valley engineers, the company aims to revolutionize human mobility with technology. Today, Traveloka is expanding its reach by operating in eight countries and experimenting with new endeavors that will create a large impact in the markets and industries we touch.

Job Description

The DevOps team, a branch of the Data Platform group, builds, maintains, and streamlines reliable, robust, and secure data infrastructure on top of GCP to support high-intensity data processing and cutting-edge ML model training. We deliver fully automated, multi-stage Kubernetes clusters, and we help our fellow data scientists rapidly deploy data applications to the production environment with automatic roll-out/rollback using CI/CD. We also practice automated GitOps-style deployments for both Kubernetes and Terraform.

Responsibilities

  • Automate: Architect secure and maintainable Kubernetes clusters on GCP and implement reliable CI/CD pipelines to support the operation of our infrastructure and services.
  • Advise: Work closely with development teams to automate their build, test, packaging, and deployment procedures, and help them become more autonomous in delivering products to production.
  • Amplify: Establish DevOps engineering best practices and advocate for DevOps culture both inside and outside the company.
  • Codify: Practice the “everything as code” philosophy and manage infrastructure in a consistent and auditable way.
  • Secure: Apply security best practices to AWS and GCP infrastructure and fix security findings.
  • Operate: Share pager duty for the rare instances when something serious happens.
  • Diagnose: Identify and fix production issues at any layer: infrastructure, cluster, network, or service.
  • Optimize: Observe and improve performance at every layer: infrastructure, cluster, network, and service. Reduce infrastructure costs.

Technical Qualifications

  • Good knowledge of at least one high-level programming language such as Go, Python, or Java, and comfort working with shell scripts.
  • Decent working knowledge of GCP, AWS, or other cloud platforms.
  • Solid hands-on experience with Docker, Kubernetes, ECS, or similar container orchestration solutions.
  • Modest understanding of the advantages of Infrastructure-as-Code, with experience in designing and implementing cloud infrastructure using IaC tools such as Terraform or its equivalents.
  • Adequate understanding of the importance of comprehensive monitoring and alerting with experience in at least one monitoring solution such as Prometheus or Datadog.
  • Moderate experience with one or more CI/CD frameworks.

Cross Discipline Skills

  • Curiosity to explore creative solutions and try new things, while striving for simplicity.
  • Solid verbal and written communication skills in English.
  • Good at holding technical and non-technical conversations, including healthy debate with your peers about the pros and cons of your choices, and able to explain problems coherently and in detail while offering solutions.

Nice-to-Haves

  • Working experience with *nix systems and deep knowledge of all layers of the networking stack.
  • Knowledge of integrating security into GCP and AWS infrastructure.
  • Experience managing at least one latency-critical, real-time data pipeline that ingested and served millions of events.
