About Good Money
Good Money is the world’s first digital banking platform that makes every customer an owner and allocates 50% of our profits to social and environmental impact. We're building a conscious banking platform that provides best-in-class mobile banking and financial services while, for the first time in history, democratizing ownership among our customers.
About the Data Team
We’re a small analytics team, which means that you have a ton of opportunity to learn and grow along with the team! There are two of us right now but we have plans to scale the team and analytics at Good Money significantly over the next 18 months - so join us on the learning curve.
We utilize modern tools and embrace the vision of the full-stack data scientist. We’ve built a stack with Snowflake, Segment and dbt to empower data analysts to go all the way from raw data to modeling to analysis without waiting on external dependencies. Own the entire problem! We’ll also provide you with the tools and training you need to continue to grow in your analytics career.
The analytics engineer at Good Money plays a critical role in maintaining and improving our data pipelines, ML models, databases and data visualization tools. Success in the role means being flexible and fast-moving in adapting to change and staying up to date with the ever-shifting world of data engineering. A constant focus on gaps and issues in our data stack, our product and our overall analytics efficiency as an organization is a must.
In this role, you will:
- Build and maintain the analytics layer of our team’s data environment to make data standardized and easily accessible
- Maintain/build derived marketing/sales schemas on our Snowflake cluster and investigate and refactor any expensive queries
- Integrate third party data sources as we add channels, data partners and other vendors
- Work closely with Product and Engineering to ensure upstream product model changes integrate well with our data model, and build the necessary capabilities to adjust when they don’t
- Define user roles and permission levels for Snowflake and BI tools
- Perform stakeholder-facing work, such as dashboards or analysis, when needed
- Integrate and productionize analyst and data science models as needed
- Build data expertise and best practices, and own data quality for all analytical data needs
- Define and manage SLAs for all data sets in allocated areas of ownership
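To make the last responsibility concrete, here is a minimal sketch of what owning a freshness SLA might look like in code. The dataset names, thresholds, and function are illustrative assumptions for this posting, not Good Money's actual configuration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLAs per dataset, in hours (illustrative only).
SLAS = {
    "marketing.campaign_daily": 24,
    "sales.pipeline_snapshot": 6,
}

def breached_slas(last_loaded: dict, now: datetime) -> list:
    """Return dataset names whose last load is older than their SLA window
    (or that have never loaded at all)."""
    breaches = []
    for dataset, max_age_hours in SLAS.items():
        loaded_at = last_loaded.get(dataset)
        if loaded_at is None or now - loaded_at > timedelta(hours=max_age_hours):
            breaches.append(dataset)
    return sorted(breaches)
```

In practice a check like this would run on a schedule (e.g. in Airflow) and page the owning analyst when a dataset breaches its window.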
Things that will help you succeed in the role:
- 2+ years working with a data warehouse or BI tool (Snowflake experience is a plus)
- 3+ years of experience with self-service BI tools (e.g., Tableau, Looker or Domo)
- Advanced knowledge of:
  - SQL data modeling (dbt experience is a plus)
  - REST APIs
  - Git and CI/CD best practices
- Exposure to Airflow and/or DAGs in general
- AWS or GCP development and operations experience (EMR, Spanner/Redshift/BigQuery, S3/GCS, data pipelines, etc.)
- Experience working with an analytics team
- Experience with AutoML platforms (H2O, DataRobot, Google AutoML, etc.)
- A solid math and statistics background
Hiring philosophy & process
Look, we know job searching is hard, time-consuming, and stressful. We aim to let you know everything we can, when we can. We want to make sure you have all the time you need to learn more about us. We also want to have the time we need to learn more about you. Once we’re both confident it’s a fit, we want to move forward quickly.
- Distributed team nationwide (pre- and post-COVID)
- Competitive comp and equity packages
- Open and flexible PTO
- Medical, dental, vision, life insurance, and 401(k)
Anticipated process for top candidates:
- Drop your application with us here
- Zoom interviews with People Ops, Data, Engineering
- Regular check-ins to see if the role is still right for you
- Reference checks, negotiation, and offer
Feel right for you?
We’d love to hear from you no matter your background. Drop your resume in one click and we’ll be happy to answer your questions from there.