Adaptive Management is a SaaS company building a unified ecosystem for leveraging data. Our cloud-based platform enables companies to interface with thousands of data providers to quickly find answers from data. We give clients the power to Discover, Combine, Visualize, Forecast and Analyze data in an efficient and user-friendly platform.

THE ROLE

Adaptive targets financial services firms and corporates. Adaptive is developing a state-of-the-art technology stack to ingest, normalize, aggregate, and distribute data to clients, and to provide analytics back to data vendors.

As a Senior Data Engineer on the Ingest Team, your job will be to build out our data onboarding capabilities and ETL pipelines, and to support our analysis efforts. You thrive in a rapidly evolving environment and enjoy solving user problems and discovering requirements. You are adept at designing and implementing ETL processes according to industry standards and best practices.

RESPONSIBILITIES:

  • Learn key vendor data sets and build robust ETLs that address our clients’ needs.
  • Work with and manipulate data, develop aggregations, understand biases in the data, and apply visualization techniques to better tell the stories found in the data.
  • Collaborate with other data engineers on your team and with other engineering teams in the company.
  • Serve internal stakeholders, such as the Director of Data Partnerships, the VP of Engineering, data scientists, and the sales team.
  • Take on increasingly complex projects and maintain a “client first” attitude in all of your work.
  • Produce high-quality results; you are responsible for the quality assurance of your own work.

BASIC QUALIFICATIONS:

  • Bachelor’s degree in Computer Science or a related field
  • 6+ years of experience as a Data Engineer working with complex datasets
  • Intermediate to advanced experience with Python and SQL (we use Spark SQL)
  • Equally comfortable prototyping in notebook environments (e.g. Zeppelin/Jupyter) and writing production-quality code.
  • Experience working with RESTful APIs and a variety of data formats
  • Experience with pandas and common Python data visualization tools
  • Excellent verbal and written communication skills
  • Hands-on development mentality with a willingness to solve complex problems

NICE TO HAVE:

  • Proficiency with Apache NiFi, PostgreSQL, Spark, Hive, Elasticsearch, and Neo4j
  • Experience working with cloud architectures (e.g. AWS, GCP, Azure)
  • Experience working in a startup environment
  • Experience working with financial datasets
