ABOUT THE ROLE

As part of our global Data team, based at our EMEA headquarters in Amsterdam, you will work with the data team to help build the infrastructure, tooling, and data pipelines that power our data-driven organization and support our rapidly evolving and growing data needs. The ideal candidate is extremely curious and will use their data skills and business mindset to make a difference every day. We are looking for people who can operate at a company that grows as fast as ours: able to juggle multiple moving pieces while maintaining quality, thinking long term, and delivering value to our customers.

RESPONSIBILITIES

  • Design, build, and maintain data integration pipelines for ingress to and egress from the data team's infrastructure
  • Help design and implement data structures in data pipelines and the data warehouse to enable accurate, efficient reporting, analysis, and machine learning
  • Develop and implement tooling in the form of Python libraries and deployed systems to allow analysts and scientists to work efficiently and consistently
  • Promote technical best practices within the team through training and documentation to ensure all disciplines work consistently and correctly
  • Manage and develop our data persistence environments (data lakes, storage, etc.) to ensure that data is readily available to users and kept secure
  • Implement and maintain monitoring, alerting, logging, and data quality controls to ensure the accuracy and reliability of our data ecosystem

QUALIFICATIONS

  • Ph.D. or Master's degree in computer science, or equivalent
  • 3+ years of work experience in data engineering
  • Highly experienced and proficient with data infrastructure tooling using Python
  • Highly experienced and proficient with data modeling methodologies and implementation
  • Demonstrable experience with SQL
  • Proficient across the entire stack of technologies used for data management including:
    • Data pipelines (Kafka, Kinesis)
    • Structured big data stores (Redshift, Snowflake, Vertica)
    • Semi-structured big data stores (Hadoop, Presto)
    • ETL systems
  • Experienced with large scale, distributed systems and pipelines for data management

 
