The Analytics team is responsible for making Auctane’s data reliable, trustworthy, and easy to use. We do this by creating a data ecosystem that enables Auctane to use data to improve our product, facilitate decision making, and help drive business value. In addition, the Analytics team partners with stakeholders from across the business to deliver both standardized and ad-hoc reporting, analysis, and insights that drive business decisions.
The Data Engineer will maintain our existing integrations and build new ones as the company adopts new software systems to meet its needs. Data from these integrations must be ingested, transformed, and combined in order to provide valuable insights to stakeholders across the organization. Sales, Marketing, Customer Support, and the Product teams currently use a variety of systems that have limited ability to communicate with each other. By consolidating this data into one data warehouse, the various teams can see how their work affects customers and the company.
- Build integrations for all relevant internal systems
- Ensure a high level of data quality in the data warehouse
- Monitor and support ETL processes (a minimal pipeline is sketched after this list)
- Provide data to various stakeholders across the company through BI tools and operational applications
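As a rough, minimal sketch of the kind of scheduled pipeline this role maintains (the DAG name, schedule, and record shape are illustrative assumptions, not Auctane specifics), an Airflow 2.x TaskFlow DAG might look like:

```python
from datetime import datetime

from airflow.decorators import dag, task


# Hypothetical daily integration; `schedule=` requires Airflow 2.4+
# (older 2.x releases use `schedule_interval=` instead).
@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_crm_sync():
    @task
    def extract() -> list[dict]:
        # A real task would page through a source system's API;
        # placeholder records stand in here.
        return [{"account_id": 1, "mrr_cents": 9900}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Normalize units so downstream models share one convention.
        return [{**r, "mrr_usd": r["mrr_cents"] / 100} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # A real task would COPY the records into the warehouse.
        print(f"loading {len(records)} records")

    load(transform(extract()))


example_crm_sync()
```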
Qualifications: To perform this job successfully, an individual must be able to perform each essential job duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Essential Position Duties (typical monthly, weekly, daily tasks):
- Manage new data integration projects
  - Identify relevant data that needs to be extracted
  - Create the necessary infrastructure to support integrations
  - Transform data to increase its usability for stakeholders
- Support existing system integrations
  - Ensure that integrations are ingesting correct data (a minimal check is sketched after this list)
  - Ensure that integrations run on a regular schedule
  - Fix issues in pipeline processes as they arise
- Communicate changes in the data warehouse and integrations to relevant parties across the company
  - Know which stakeholders need information from which integrations
  - Communicate any changes or outages to relevant parties
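For the "ingesting correct data" duty above, a minimal post-load check might look like the following sketch; the table and key names, and the `conn` helper, are assumptions for illustration, not Auctane specifics:

```python
# Minimal post-load data quality check. `conn` is any DB-API 2.0
# connection (e.g., to Redshift via psycopg2).
def check_no_null_keys(conn, table: str, key: str) -> None:
    """Raise if the latest load produced rows with a missing key."""
    with conn.cursor() as cur:
        # Identifiers can't be bound as query parameters, so they are
        # interpolated here; only use trusted, hard-coded names.
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {key} IS NULL")
        null_count = cur.fetchone()[0]
    if null_count:
        raise ValueError(f"{table}.{key}: {null_count} NULL values found")


# Example: check_no_null_keys(conn, "stg_orders", "order_id")
```

Failing loudly at this step surfaces bad loads before stakeholders see incorrect numbers in BI tools.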
Skills and Knowledge:
- Proficiency in Python
- Hands-on experience implementing ETL (or ELT) best practices at scale
- Hands-on experience with data pipelining tools (Airflow, Dagster, Prefect, dbt, Meltano)
- Deep understanding of SQL, data modeling, and analytical data warehouses such as Redshift or BigQuery (a representative query is sketched after this list)
- Proactive communicator who can translate between technical and non-technical stakeholders
- Team player who gives and takes feedback thoughtfully and loves to help others
- Thrives on autonomy, with experience driving long-term, cross-functional projects to completion
- Proficiency with distributed source control such as Git
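As a rough illustration of the SQL and data-modeling fluency described above, the query below joins a hypothetical fact table to its dimensions in a star schema; none of the table or column names are Auctane specifics:

```python
# A representative analytical-warehouse query, kept as a constant so an
# orchestrator or BI tool can execute it. All identifiers are hypothetical.
DAILY_REVENUE_BY_SEGMENT = """
SELECT d.calendar_date,
       c.segment,
       SUM(f.amount_usd) AS revenue
FROM fact_orders AS f
JOIN dim_date     AS d ON f.date_key = d.date_key
JOIN dim_customer AS c ON f.customer_key = c.customer_key
GROUP BY 1, 2
ORDER BY 1, 2;
"""
```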
Education and/or Experience:
- Bachelor’s degree in Computer Science or Engineering, or equivalent experience
- At least two years’ experience in data engineering or ETL/ELT processes.
- Experience with the specific tools we currently use:
  - Airflow, Kafka, dbt, Amazon Redshift
  - Python, SQL
  - GitHub, Atlassian Jira
- Experience orchestrating machine learning pipelines
Travel:
- 10% or less
Additional Position Duties (the following is a list of what all employees, except those with medical accommodation, may be regularly required to do):
- Sit for prolonged periods of time
- Utilize wrist and hands for a prolonged period of time
- Walk short distances
- Stand for short periods
- Speak and converse with others
- Lift up to 25 lbs. without assistance up to chest height
Equal Opportunity Employer/Veterans/Disabled