Blink Health is a well-funded healthcare technology company on a mission to make prescription drugs more accessible and affordable for everyone. We're scaling up in a highly complex vertical to change the way Americans access the prescription drugs they need.

Our proprietary platform and supply chain allow us to offer everyone — whether they have insurance or not — amazingly inexpensive prices on over 15,000 medications. With the addition of telemedicine and home delivery for prescriptions, Blink is providing a life-changing experience for people all over the country and fixing how opaque, unfair and overpriced healthcare has become. We are a highly collaborative team of builders and operators who invent new ways of working in an industry that has historically resisted innovation. Join us!

About The Team

Blink Engineering strives to build trusted, highly observable, data-driven products to bring affordable, accessible healthcare to all Americans. We understand healthcare is the most complex system most of us will ever fix. We believe in solving this complexity through the use of simple, well-known technologies. We are a highly collaborative team that believes in owning outcomes over owning code and putting patients at the center of everything we do. 

The Blink Health Data Engineering and Analytics team is a small team responsible for building the infrastructure, frameworks and tooling that enable data-driven decisions, and for building and maintaining our data warehouse for security and scale. This role is central to building and executing a robust, forward-looking data strategy for the company, and the successful candidate blends top-tier software engineering expertise with the ability to anticipate what we will need to build in the future.

About the Role

As a Data Engineer, you will help build our next generation of data tools and frameworks, in addition to developing and maintaining data products and infrastructure. You will proactively assess production data warehouse support trends to determine and implement short- and long-term solutions, and design for data integrity, reliability, and performance.

Required Experience

  • You have 4+ years of hands-on experience and demonstrated strength with:
    • Python software development. You will be coding.
    • Building and maintaining robust and scalable data integration (ETL) pipelines using SQL, EMR, Python and Spark.
    • Writing complex, highly-optimized SQL queries across large data sets.
    • Designing and maintaining columnar databases (e.g., Redshift, Snowflake).
    • Distributed data processing (Hadoop, Spark, Hive).
    • ETL with batch (AWS Data Pipeline, Airflow) and streaming (Kinesis) tooling.
    • Integration and design for business intelligence tools (e.g., Looker, QuickSight).
    • Creating scalable data models for analytics.
  • You have experience designing and refactoring large enterprise data warehouses and their associated ETL pipelines, and can point to examples of continuously improving automation and simplification across the data warehouse environment, covering both engineering and business reporting.
  • Proven success communicating effectively across diverse disciplines (including product engineering, infrastructure, analytics, data science, finance, marketing, and customer support) to gather requirements and explain data engineering strategy and decisions.
  • Undergraduate or graduate degree in Computer Science.
