Blink Health is a well-funded healthcare technology company on a mission to make prescription drugs more accessible and affordable for everyone. We're scaling up in a highly complex vertical to change the way Americans access the prescription drugs they need.
Our proprietary platform and supply chain allows us to offer everyone — whether they have insurance or not — amazingly inexpensive prices on over 15,000 medications. With the addition of telemedicine and home delivery for prescriptions, Blink is providing a life-changing experience for people all over the country and fixing how opaque, unfair and overpriced healthcare has become. We are a highly collaborative team of builders and operators who invent new ways of working in an industry that historically has resisted innovation. Join us!
The Blink Health Data Engineering and Analytics team is a small team responsible for building the infrastructure, frameworks and tooling that enable data-driven decisions, and for building and maintaining our data warehouse for security and scale. This role is central to building and executing a robust, forward-looking data strategy for the company, and the successful candidate blends top-tier software engineering expertise with the ability to anticipate what we will need to build in the future.
About the Role
As a Principal Data Engineer, you will be a thought leader on the data engineering team as it designs and builds our next generation of data tools and frameworks, in addition to developing and maintaining data products and infrastructure. You will proactively assess production data warehouse support trends to determine and implement short- and long-term solutions, and design for data integrity, reliability, and performance. You will set a high bar for clean and correct code, establish coding standards, and perform peer code and architecture reviews.
- You have 8+ years of hands-on experience and demonstrated strength with:
  - Python software development
  - Building and maintaining robust, scalable data integration (ETL) pipelines using SQL, EMR, Python and Spark
  - Writing complex, highly optimized SQL queries across large data sets
  - Designing and maintaining columnar databases (e.g., Redshift, Snowflake)
  - Distributed data processing (Hadoop, Spark, Hive)
  - ETL with batch (AWS Data Pipeline, Airflow) and streaming (Kinesis) frameworks
  - Integration and design for business intelligence tools (e.g., Looker, QuickSight)
  - Creating scalable data models for analytics
- You have experience designing and refactoring large enterprise data warehouses and their associated ETL processes, with concrete examples of continuously automating and simplifying the data warehouse environment, spanning both engineering and business reporting.
- Experience owning features from design through delivery, along with ongoing support.
- Proven success communicating effectively across diverse disciplines (including product engineering, infrastructure, analytics, data science, finance, marketing, and customer support) to gather requirements and explain data engineering strategy and decisions.
- Experience providing clear data engineering technical leadership, mentoring, and best practices for data management and quality within and across teams.