Who We Are

Narvar offers an enterprise SaaS platform that helps leading brands and retailers build lifelong relationships with their customers beyond the “buy” button. More than 400 leading retailers worldwide, including Anthropologie, Bonobos, Nordstrom, and Sephora, use Narvar’s shipment tracking, returns, and analytics products to transform their customers’ post-purchase experiences. More than 60% of Americans online have interacted with Narvar’s platform, most without even realizing it!

The data team uses big data technologies, data science, and a massive data set (tens of millions of transactions per month) to build new products and improve all aspects of the platform. Data is at the core of Narvar’s competitive advantage, so the work we do has a large impact across the company, for our business partners, and in the lives of our end users.

What We’re Looking For

We are looking for a self-motivated, entrepreneurial engineer with experience and interest in data, particularly big data and streaming. You will build and expand the Kafka-based infrastructure that makes data available across the company. You’ll expand a metadata layer to handle arbitrary input schemas and multiple output formats (row, columnar, ORC) for ingestion into data stores and query systems such as S3, Hive, Redshift, and Presto. You’ll build systems for both streaming and batch processing. Your systems will make it easy for backend and ETL engineers to move data around and for data scientists to assemble datasets.

Working alongside our data scientists, you will also help design, build, and deploy data products, including recommender systems, natural language processing, an A/B testing platform, and an ad hoc analysis platform. You will be joining a small but stellar team and will be able to make an immediate impact.

This is an excellent opportunity for a solid engineer with experience in one or more areas of the big data ecosystem to contribute that expertise to the team while filling out their knowledge of the rest of the ecosystem, including machine learning.

Qualifications

  • BS and 6+ years of work experience, or MS and 4+ years of work experience
  • History of learning new technologies and problem domains quickly
  • Experience programming with Python and Java
  • Experience in the big data ecosystem, including Kafka (or Kinesis)
  • Experience with large data set storage, including sharded Postgres, Redshift, and S3
  • Experience with RDBMS and NoSQL schema and query design and optimization

Bonus Points

  • Experience with big data query systems such as Hive or Presto
  • Experience with the Hadoop ecosystem: HDFS, Spark, Elasticsearch, etc.
  • AWS console experience with RDS, EC2, CloudWatch, DynamoDB, etc.
  • Sample code projects to showcase your work
  • Experience with query optimization and data visualization
  • Experience developing and deploying containerized code using Docker

What We Offer

  • Competitive salary 
  • Free, daily catered lunches
  • Commuter benefits 
  • Company outings
  • Casual dress code
  • Open vacation policy
  • Get in on the ground floor of a huge opportunity