**This is a US-based Remote Opportunity.**
Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends is spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.
DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!
Dealer Inspire is a CARS brand. CARS includes the following brands: Cars.com, Dealer Inspire, DealerRater. Want to learn more? Check us out here!
ABOUT THIS ROLE:
Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next-generation distributed computing platform to support data-driven analytics and predictive modeling.
We are looking for a Data Engineer to join the team and play a critical role in designing and implementing the sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications:
- 2-5 years of experience as a data engineer designing and implementing data pipelines
- Experience with big data infrastructure to support data science on Linux-based systems
- Knowledge of the ETL process and patterns of real-time data products
- Working knowledge of SQL
- Working knowledge of Python
- Ability to allocate and utilize AWS resources
- Experience integrating with diverse APIs
- Ability to work closely with data scientists on the data demand side
- Ability to work closely with domain experts and data source owners on the data supply side
- Ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations
- College degree in a technical area (Computer Science, Information Technology, Mathematics, or Statistics)
- Experience with Apache Kafka, Spark, Ignite, and/or Redis
- Working knowledge of MySQL
- Expert-level Python skills, including Jupyter notebooks, Conda, Pandas, and/or Cython
- Experience with Node.js and PHP
- Familiarity with common AWS services such as EC2, S3, EBS, Glacier, and RDS
What we are looking for in a candidate:
- Experience with big data tools, data pipelines, databases, cloud services, and Python
- Experience building modular, scalable, cloud-based system infrastructure
- Willingness to learn new technologies and a whatever-it-takes attitude toward building the best possible data science platform
- Looking to get into data science? This is a great gateway position.
- Enthusiasm and a “get it done” attitude!
BENEFITS & PERKS*:
- 18 days of paid time off, plus select paid holidays
- Paid Volunteer Day & Paid Pet Wellness Day
- Work-from-home Fridays
- Fully stocked kitchen and refrigerator
- Robust Health Insurance Options: BCBS, Delta Dental, EyeMed
- 401k plan with company match
- Subsidized internet access for your home
- Peer-to-Peer Bonus program
- Subsidized gym membership
- Weekly in-office yoga classes
- Parental Leave
- Life & Disability Insurance
- Tuition Reimbursement
*Not a complete, detailed list. Benefits are subject to terms and eligibility requirements.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.