Use your background in big data management to create tools for our stats and logging infrastructure. Help us migrate from our co-located data centers to Google Cloud and improve the real-time aspects of the system. You’d work alongside our API and application engineering teams to lead the implementation, maintenance, and evolution of new data-rich stats products for Vimeo users. If this sounds exciting, we’d like to hear from you.
What you’ll do:
Within the first 30 days, you’ll understand our stats & logging infrastructure (Hadoop, Kafka, HBase, Scribe, Flume, MapReduce)
Within 90 days, you’ll be improving infrastructure reliability and expanding the scope of the data we collect. You’ll also be working on our migration from bare metal to Google Cloud
By the end of your first year, we’ll have rolled out deeper video analytics for Vimeo creators, and you’ll have been the main driver of that effort
Day-to-day, you’ll help manage our large, rapidly growing logging and user stats infrastructure, code in a variety of languages and systems that run on the JVM, and help refine our engineering process as our team expands
Skills & knowledge you should possess:
Experience with large clustered data stores such as HBase, Cassandra, Riak, or CouchDB
Some experience with serialization and building RPC services (Thrift, MessagePack, Protocol Buffers)
Expertise in Java, Scala, or other languages with strong functional capabilities
Experience with RabbitMQ, Kafka, Hadoop
Vimeo empowers video creators to tell exceptional stories and connect with their audiences and communities. Home to more than 60 million members in over 150 countries, Vimeo is the world’s largest ad-free open video platform and provides powerful tools to host, share and sell videos in the highest quality possible. Founded in 2004 and based in New York City, Vimeo is an operating business of IAC (NASDAQ: IAC). Learn more at www.vimeo.com.