Narrativ is building the commerce and payments infrastructure for the Creator Economy. We make it simple for creators to earn a full-time income selling products from brands they love.
50 million people work in the Creator Economy today, but 95% don’t make enough to pay their monthly rent. Creators feel more pressure than ever to keep adding new revenue streams, from fan subscriptions to tips, and churn out a constant stream of new content just to get by. Narrativ gives creators the opportunity to earn meaningful, reliable income in commerce without needing to have millions of fans.
Narrativ is changing this status quo - our platform gives creators commerce tools, not affiliate tools, to manage their businesses. Creators who were making less than $25K a year through affiliate programs are now earning $100K+ through commerce on Narrativ. These creators don’t need millions of followers - the majority of creators earning a full-time income on Narrativ have between 50K and 300K subscribers.
Narrativ has been recognized as a World Economic Forum Technology Pioneer and one of Fast Company’s World’s Most Innovative Companies alongside giants like Google, Microsoft, and Slack. Narrativ is committed to building a diverse team in a technology ecosystem that is anything but - the team is 42% women and 60% people of color today.
Do you share our vision for making the creator economy dream real for millions of people? Come join our team!
You will be a software engineer on the Data Engineering team. You will work with stakeholders - engineers from other teams, product managers, and solution engineers - to design, build, and improve the data pipelines responsible for ingesting, transforming, and exporting data between internal and external systems. In addition to internal customers like sales, product, machine learning, and other application teams, we also send data to external customers. You will build and maintain the APIs for data access and validation used by other teams, and you will work to improve data quality, build trust in our data, and help our organization become more data-driven.
- 5+ years of software development experience
- 3+ years of experience designing, building, and maintaining enterprise data pipelines and/or warehouses
- Demonstrable knowledge of big data databases (e.g., Snowflake or Bigtable) as well as SQL
- Experience with message processing (e.g., Kafka, RabbitMQ)
- Experience working closely with the product team to help prioritize the best solutions to the largest problems
- Reliable organization and communication skills, with follow-through on verbal and written commitments
- A persistent approach to problem-solving and the ability to see solutions through to completion even in the face of complexity or unknowns; a proactive mindset that drives you to pursue solutions rather than waiting for answers to come to you
- Attention to detail and the ability to identify ambiguities in specifications
- Exceptional written and verbal communication skills, especially when explaining trade-offs between technical decisions to non-technical colleagues
- Flexibility to work and maintain focus in an evolving environment
- A collaborative personality and a commitment to helping others
Narrativ Technology and Data Stack
Our data is centralized in a Snowflake database. We replicate data, mainly using Fivetran, from our transactional databases, Google Analytics, Salesforce, and Neptune (a graph database). Our events are also streamed into Snowflake after processing through a Storm cluster that uses DynamoDB for persistence. We use dbt to build our ELT DAGs and publish dashboards in Looker. We use libraries like Marshmallow and Great Expectations for data validation.
Narrativ’s systems are implemented as a modern microservices architecture running on Linux servers hosted in AWS. We use Kubernetes to manage our containers and Flask to construct our web interfaces, building interactivity with React. We use Linux, and in particular the Debian, Ubuntu, and Alpine distributions.
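As a small sketch of what a Flask microservice in this architecture looks like, here is a minimal service with a single endpoint, exercised through Flask's built-in test client. The /health route and its payload are illustrative, not part of Narrativ's actual API.

```python
# Hypothetical example of a tiny Flask microservice endpoint.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # A liveness endpoint like this is what Kubernetes probes would hit.
    return jsonify(status="ok")


# Exercise the route in-process with Flask's test client.
client = app.test_client()
resp = client.get("/health")
```

In production such a service would run in a container managed by Kubernetes, with the test client used only in the unit-test suite.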
We test each language with an appropriate unit testing tool - JUnit, PyTest, ScalaTest, ExUnit, and Jasmine. We use Jenkins to run our builds and tests.
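For the Python services, tests follow the plain PyTest style of bare assert statements in test_* functions, which Jenkins runs on every build. The revenue_share helper below is a hypothetical function invented for this example.

```python
# Hypothetical example of a PyTest-style unit test.
def revenue_share(gross: float, rate: float) -> float:
    """Return a share of gross revenue, rounded to cents."""
    return round(gross * rate, 2)


def test_revenue_share():
    # PyTest discovers test_* functions and evaluates their bare asserts.
    assert revenue_share(100.0, 0.15) == 15.0
    assert revenue_share(0.0, 0.5) == 0.0


test_revenue_share()  # under PyTest this call is unnecessary; discovery runs it
```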
We also use Airflow, DataDog, Fivetran, Jira, LaunchDarkly, and StoryBook.