TripAdvisor, the world’s largest travel site, operates at scale with over 700 million reviews, opinions, photos, and videos reaching over 490 million unique visitors each month. We are a data-driven company, and we have lots and lots of data! The Data Warehouse team is responsible for building and managing the infrastructure and tools that enable the rest of the company to interact with the petabytes of data in our data lake.

Our mission is to build a world-class analytics infrastructure for TripAdvisor.

Along with building new tools, we are making a big push to automate and reduce our operational responsibilities; this includes everything from enabling self-service ETL releases to building fault-tolerance mechanisms into our data pipelines so they recover without intervention. This is a great opportunity for individuals with a DevOps background who want to apply their current expertise and develop new experience with Big Data systems.

The Data Warehouse team works closely with our Analytics team to manage all stages of the data pipeline. We use technologies like Spark, Hive, Presto, and Snowflake in our ETL pipelines, which expose the data to our analysts and machine learning applications.

We are looking for someone with a strong sense of responsibility: taking pride in your work, leveraging others, and owning the problem.

What you will bring to the team:

  • Bachelor of Science in Computer Science, Engineering, or equivalent
  • 3+ years of large-scale DevOps / software engineering experience
  • General software engineering and programming experience; most of our larger projects are in Java, but we also have lots of scripts in Python and Bash
  • Linux experience
  • Experience with Continuous Integration and Continuous Delivery (CI/CD) tools such as Jenkins and Sonar
  • Experience with automation tools such as Ansible, Puppet, and Chef

Nice Extras:

  • In-depth technical experience with big data technologies such as:
    • Hadoop (HDFS, Hive, MapReduce)
    • Spark
    • Kafka/Samza
    • HBase, Cassandra
  • ETL and SQL expertise
  • Experience with relevant AWS technologies such as EMR and S3

