Who are we?
Hugo is working to make financial stability achievable for every American. We're starting with car insurance. Each year, 30 million Americans drive uninsured, risking major fines, car impoundment, license suspension, and, in some places, even jail. Our flagship product is Hugo, the world's first pay-as-you-drive liability insurance. Hugo eliminates large upfront premiums and fees and instead offers insurance in affordable daily bundles, so drivers can purchase what they need, when they need it. Drivers can even pause coverage to save money.
About the role
You’ve designed and implemented efficient data pipelines, worked with domain experts to create elegant data models, and spent hours cleaning and manipulating data. You enjoy building visualizations and translating your work into data-driven business recommendations. Insurance lives and breathes data, and the capture, processing, transformation, and representation of that data are critical to our ability to provide affordable, accessible, on-demand insurance. You’ll work with a diverse set of data, from customer records to internal infrastructure, from risk to performance, processed both in real time and overnight. This is your chance to build those systems from scratch.
Who are you?
- You’re passionate about changing everyday lives by fixing a broken industry
- You want to move fast and make a big impact
- You value over-communication, rapid collaboration, and candid feedback
- You want to drive product decisions with data and technology opportunities
- You like a healthy mix of organization and shipping code quickly for the sake of learning
- You enjoy designing thoughtful solutions in a collaborative way
- You ask about the edge cases that live between the lines of a feature spec
- You're excited to write the first lines of code for entirely new services and solutions, using best practices and first-principles thinking
What will you do?
- Work alongside a leadership team that brings talent from Wall Street and high-growth consumer startups
- Architect and implement an appropriate data pipeline from the ground up
- Leverage risk data to identify underwriting and conversion opportunities
- Build an in-depth understanding of the domain so that your data can drive meaningful insights
- Deliver data-driven insights, communicated clearly in writing and visuals
- Strike a balance between speed and sustainability, building for today and for the future
- Evolve our architecture by adopting emerging best practices
What experience do you have?
(these are not fixed requirements!)
- An exceptional ability to visualize and communicate complex concepts simply
- Several years of experience in data engineering and analysis
- Proficiency with EMR and big data technologies (e.g., Spark, Hadoop, HBase, Presto, Hive)
- Proficiency in a scientific computing language (e.g., Python)
- Experience delivering fault-tolerant, highly available systems
- Experience with frameworks such as scikit-learn, pandas, and NumPy
- Bonus: experience designing and implementing Data Governance programs (PII etc.)
- Bonus: BS or MS degree in Computer Science or a related technical field