About the Company:
Cars.com is one of Chicago’s original tech companies. Our online platform makes it easier for consumers to shop for, sell, and service their cars. With our expert content, mobile app features, millions of new and used vehicle listings, a comprehensive set of research tools, and the largest database of consumer reviews in the industry, Cars.com offers innovative products to connect consumers with dealers across the country.
Data is the driver of our future at Cars. We’re searching for highly collaborative, analytical, and innovative engineers to build and scale our Big Data and Machine Learning platform. If you are passionate about using data to solve problems and build game-changing products, we’d love to work with you.
About the Role:
The Big Data & Machine Learning Engineering Team at Cars.com is responsible for building Big Data pipelines and deriving insights from the data using advanced analytics, streaming, and machine learning at scale. Working within a dynamic, forward-thinking team environment, you will design, develop, and maintain mission-critical, highly visible Big Data and Machine Learning applications in direct support of our business objectives. You will deploy ML models into production and integrate them into production applications. You will also work in close partnership with other Engineering teams, including Data Science, and with cross-functional teams such as Product Management and Product Design. Furthermore, you will have the opportunity to mentor others on the team, share your knowledge, and continue growing in your career.
Required Skills & Experience:
- Software Engineering | 5+ years of designing & developing complex applications at enterprise scale, specifically in Java & Scala.
- Big Data Ecosystem | 2+ years of hands-on, professional experience with Apache Spark / Spark Streaming, Hadoop / EMR, and Kafka.
- AWS Cloud | 2+ years of professional experience developing Big Data applications in the cloud, specifically on AWS.
- Ability to develop Spark jobs to cleanse/enrich/process large amounts of data.
- Ability to develop Spark streaming jobs to read data from Kafka.
- Experience tuning Spark jobs for efficient performance, including job execution time, execution memory, etc.
- Sound understanding of various file formats and compression techniques.
- Experience with source code management systems such as Git, and with developing CI/CD pipelines for data applications using tools such as Jenkins.
- Ability to deeply understand the entire architecture for a major part of the business and articulate the scaling and reliability limits of that area; to design, develop, and debug at an enterprise level; and to design and estimate at a cross-project level.
- Ability to mentor developers and lead projects of medium to high complexity.
- Excellent communication and collaboration skills.
Bonus Skills & Experience:
- Experience deploying ML models into production and integrating them into production applications.
- Experience with Spark ML.
- Experience with machine learning / deep learning using tools such as R, Python, Jupyter, Zeppelin, and TensorFlow.
- Experience with developing REST APIs.