At CommerceIQ, we help consumer brands accelerate their retail ecommerce market share growth and profitability through machine learning algorithms. We are building the world’s most complete and sophisticated Retail Ecommerce Management Platform, which connects and intelligently automates the management of retail ecommerce channels like Amazon, Walmart, and Instacart across the entire ecommerce operational chain of retail media management, sales operations, supply chain, and digital shelf analytics.
We are in hyper-growth mode, having recently raised our Series D funding at a unicorn valuation (>$1B) and ended our third year of triple-digit revenue growth. Continued acceleration of our growth is fueled by landing new customers, expanding our platform with new products, supporting new retail ecommerce platforms, and delivering exceptional customer service to unlock high net retention rates.
The Role:
The Data Products team’s mission is to organize the vast amount of data ingested by the e.fundamentals data platform every day and turn it into actionable insights for our customers. This involves building data pipelines that clean and model our core data, applying advanced models to derive new data sets, and building monitoring and alerting systems that ensure data quality.
What You'll Do:
Build large-scale batch data processing systems with Spark and dbt in the cloud
Collaborate with engineers and product managers to understand data needs and build exceptional data products used internally and directly by our customers
Manage data warehouse integrations with our customers’ data science teams
Create concise documentation for consumers of data products
Improve data quality through testing and tooling
Own your changes from development to production and beyond
Participate in engineering and architecture forums to help build a culture of excellence within the engineering organization
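To give a flavor of the data-quality testing mentioned above, here is a minimal sketch in plain Python of the kind of check such tooling might run against a batch of records; the column names and thresholds are purely illustrative, not part of our stack:

```python
# Illustrative data-quality check. Column names ("product_id", "price")
# and the 5% threshold are hypothetical examples.

def null_rate(rows, column):
    """Fraction of rows where `column` is None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def check_quality(rows):
    """Return a list of failed checks for a batch of product records."""
    failures = []
    if null_rate(rows, "product_id") > 0.0:   # primary key must never be null
        failures.append("product_id contains nulls")
    if null_rate(rows, "price") > 0.05:       # tolerate up to 5% missing prices
        failures.append("price null rate above 5%")
    return failures

batch = [
    {"product_id": "A1", "price": 9.99},
    {"product_id": "A2", "price": None},
    {"product_id": None, "price": 4.50},
]
print(check_quality(batch))  # → ['product_id contains nulls', 'price null rate above 5%']
```

In practice, checks like these would be expressed as dbt tests or pipeline assertions rather than standalone scripts.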
What You'll Bring:
Demonstrable experience with modern OLAP databases like BigQuery
Advanced SQL skills, beyond just CRUD
Experience with tools such as Spark or dbt for building pipelines
An understanding of data modeling, data access, data storage, and optimization techniques
Experience working with cloud-based technologies and development processes
Nice to Haves:
Experience using visualization tools such as Looker or Tableau