• Build, test, and refine data pipelines and ETL processes
  • Data modelling, process design, and overall data pipeline architecture
  • Ensure data quality and consistency with monitoring and support
  • Work closely with the product, analytics and data teams to design, build, and test end-to-end solutions

Required Skills

  • Bachelor's degree in Information Systems, Information Technology, Computer Science or Engineering
  • 5+ years of engineering experience integrating and analyzing data
  • 1+ years of working with Cloud technologies in data movement, transformation and retrieval activities
  • Advanced level of proficiency in designing, tuning, and developing SQL including SQL statements, Stored Procedures, Views, ETL packages and pipelines
  • Strong understanding of data and analytics solution architecture, including experience with Big Data, Relational databases, NoSQL, OLAP, streaming and batch data processing
  • Experience with large-scale databases or big data systems such as Snowflake, Redshift, Databricks, data lakes, etc.
  • Knowledge of and expertise in a programming language (Python, Java, SQL)
  • Experience analyzing data to identify deliverables, gaps, and inconsistencies
  • Experience gathering complex business requirements and identifying data needs
  • Excellent oral and written communication skills
  • Proven ability to look at problems unconventionally, explore opportunities, and devise innovative solutions
  • Self-starter with initiative and ability to work independently or on a small team
  • Familiarity with the System Development Life Cycle and Agile methodology


Desired Skills

  • Experience with data lakes, Snowflake DW, MongoDB, Segment, or Azure Data Factory
  • Familiarity with Azure, S3/Blob storage, and streaming services such as Event Hubs or Kinesis
  • Understanding of Data architecture, Data Governance and Data Security
  • Experience and knowledge analyzing and documenting data taxonomies
  • Experience supporting Data Science initiatives
