Cars.com is one of Chicago’s original tech companies. Our online platform makes it easier for consumers to shop for, sell and service their cars. With our expert content, mobile app features, millions of new and used vehicle listings, a comprehensive set of research tools and the largest database of consumer reviews in the industry, Cars.com offers innovative products to connect consumers with dealers across the country.

Data is the driver for our future at Cars. We’re searching for collaborative, analytical, and innovative engineers to help utilize the almost 20 years of data we have at our disposal. If you are passionate about using data to solve problems and build game-changing products, we’d love to work with you.

Cars.com is seeking a Lead Big Data Engineer to provide leadership for the overall design, development, and deployment of a Hadoop-based platform. We need a leader who can efficiently execute our efforts to scale our machine learning and advanced analytical products using Spark as part of the Big Data team.

Required Skills:

  • Experience designing and implementing large, scalable distributed systems.
  • Strong understanding of OO and functional programming (preferably in Java/Scala).
  • Understanding of the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Flume, Impala) and the ability to coach other members of the team.
  • Experience effectively modeling and storing data in HDFS and using Sqoop to import and export data.
  • Ability to write crisp and clear technical documents on data architecture and execution pipelines.
  • Experience writing Spark jobs optimized for memory and processing time to process streaming data from Kafka.
  • Good understanding of various file formats and compression techniques in HDFS, with the ability to establish and guide patterns.
  • Experience building and/or using state-of-the-art methods for scaling machine-learned predictive models.
  • Proficiency in scripting on Linux-based operating systems.

Required Experience:

  • Bachelor’s or Master’s degree in a technology-related field (e.g. Engineering, Computer Science) required.
  • 7+ years of experience as a Java developer.
  • 1.5+ years of hands-on experience with Apache Spark.
  • 3+ years of experience implementing Hadoop solutions in the data analytics space.

Preferred:

  • Experience working with MPP databases (e.g. Teradata) and NoSQL databases (e.g. Couchbase).
  • Experience tuning Hive and Impala queries.
  • Strong fundamentals in machine learning and experience with Spark ML/R and related tools.
  • Experience developing big data applications in the cloud (AWS, Azure, Google Cloud).