Howl is a creator platform for the customer-centric and product-obsessed. Creators of every size use Howl’s commerce and payments technology to get better rates and easy payments for their big ideas. Creators on Howl earn more when they open doors for others, and have the tools to turn their craft into a business.
We are a remote-first company with a diverse team that reflects our vision. Howl builds so millions more creators can bring their talent, magic, and service to shoppers around the world.
What You’ll Do
You will be a software engineer in the Data Engineering team. You will work with stakeholders like engineers from other teams, product managers, and solution engineers to design, create, and improve data pipelines that are responsible for ingesting, transforming, and exporting data to/from both internal and external systems. In addition to internal customers like sales, product, machine learning, and other application teams, we also send data to external customers. You will build and maintain APIs for data access and validation used by other teams. You will work to improve data quality, improve trust in our data, and help our organization become more data driven.
What You’ll Bring
- 5+ years of software development experience
- 3+ years of experience designing, building, and maintaining enterprise data pipelines and/or warehouses
- Demonstrable knowledge of big data databases (e.g., Snowflake or Bigtable) as well as SQL
- Experience with message processing (e.g., Kafka, RabbitMQ)
- Experience working closely with the product team to help prioritize the best solutions to the largest problems.
- Strong organizational and communication skills, with follow-through on verbal and written commitments.
- A persistent approach to problem-solving and the ability to see solutions through to completion, even in the face of complexity or unknowns; a proactive mindset that drives you to pursue solutions rather than waiting for answers to come to you.
- Attention to detail in work and ability to identify ambiguities in specifications.
- Exceptional written and verbal communication skills, especially when communicating trade-offs between technical decisions to non-technical colleagues.
- Flexibility to work and maintain focus in an evolving environment.
- A collaborative personality and a commitment to helping others.
Howl's Technology and Data Stack
Our data is centralized in a Snowflake data warehouse. We replicate data, mainly using Fivetran, from our transactional databases, Google Analytics, Salesforce, and Neptune (a graph database). Our events are also streamed into Snowflake after processing through a Storm cluster that uses DynamoDB for persistence. We use dbt to build our ELT DAGs and publish dashboards in Looker. We use libraries like Marshmallow and Great Expectations for data validation.
Howl's systems are implemented as a modern microservices architecture running on Linux servers hosted in AWS. We use Kubernetes to manage our containers and Flask to construct our web services, with React providing interactivity in our web interfaces. We run Linux, in particular the Debian, Ubuntu, and Alpine distributions.
We test each language with an appropriate unit testing tool: JUnit, PyTest, ScalaTest, ExUnit, and Jasmine. We use Jenkins to run our builds and tests.
We also use Airflow, DataDog, Fivetran, Jira, LaunchDarkly, and StoryBook.
Important Notice: Howl is a fully remote organization!
Howl is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or veteran status.