About us

We're on a mission to simplify the everyday lives of consumers. We believe post-purchase is a critical phase of the customer journey. That's why we created Narvar - a platform focused on driving customer loyalty through seamless post-purchase experiences that allow retailers to retain, engage, and delight customers. If you've ever bought something online, there's a good chance you've used our platform!

From the hottest new direct-to-consumer companies to retail’s most renowned brands, Narvar works with Patagonia, GameStop, Neiman Marcus, Sonos, Nike and 650+ other brands. With offices in San Francisco, London, Paris, and Bangalore, we've served over 125 million consumers worldwide across 7 billion interactions, 38 countries, and 55 languages.

Pioneering the post-purchase movement means navigating into the unknown. Our team thrives on this sense of adventure while nurturing a mindset of innovation. We're a home for big hearts and we leave our egos at the door. We work hard but we always make time to celebrate professional wins, baby showers, birthday parties, and everything in between.

The role

Data is at the core of Narvar's competitive advantage, so the work we do has a significant impact on the company, our business partners, and the lives of our end users. In this role, you will marshal large, diverse volumes of data that power an essential data platform in commerce technology.

Narvar handles transactional data for a considerable share of e-commerce. More than 650 leading retailers worldwide use Narvar's shipment tracking, returns, customer care, bidirectional multi-channel communication, and analytics products to transform their customers' post-purchase experiences. We're integrated with nearly all delivery carriers in North America, and our coverage is ever-growing worldwide. (We're also the largest consumer of UPS's API.)

Day-to-day

  • Develop and automate large-scale, high-performance data processing systems (batch and/or streaming)
  • Build scalable Spark data pipelines leveraging technologies such as Airflow, Elastic Beanstalk, Kinesis, EMR, Hive, Druid, and Cassandra
  • Build scalable and extensible stream processing applications using technologies such as Spark Streaming and Apache Flink
  • Take ownership of and develop critical new features for our platform, supporting functions such as sales, marketing, and finance
  • Build data support for our experimentation efforts, solving problems from statistical test automation to building real-time ML pipelines
  • Contribute to shared Data Engineering tooling & standards to improve engineering productivity across the company

What we’re looking for

  • Proven hands-on experience building complex ETLs in a business environment with large-scale, complex datasets
  • Expert-level understanding of relational databases (columnar and row-based) and NoSQL stores such as DynamoDB, Cassandra, or similar
  • Experience processing data at scale, both streaming and batch, with tools such as PySpark and Lambda functions
  • Expert SQL skills and sound consumer-scale data architecture judgment
  • Scripting experience in Python and Bash required
  • Experience with error handling and data validation
  • Experience working in AWS or other cloud-based environments
  • Bachelor's degree in Computer Science, Engineering, or a similar field

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
