We are seeking an experienced data engineer to lead our data engineering chapter for data solutions. The data solutions teams build data pipelines to ingest and surface both batch and streaming data on GCP to support our team of data analysts, data scientists and various business stakeholders such as product engineering, growth, customer engagement, fraud, risk and compliance.
The data engineering lead will be responsible for ensuring alignment between data engineers across the various teams, specifically that:
- best practices are defined and adopted
- tooling and processes are standardised where required
- data solutions implement security best practices, in collaboration with the data security team
- continuous innovation happens
- professional development is baked into the team's way of working
- data quality, data privacy, data discoverability, data lineage, consistency of data definitions, and other data governance concerns are well understood and prioritised when developing solutions
- the team grows in both quality and quantity
The data chapter lead will coordinate with the engineering team leads of the various data solutions teams, as well as with the data engineers within those teams.
The candidate should have at least 7 years' experience (preferably 10 or more), including at least 3 years leading a team.
Candidates should have experience across the following systems, languages, and practices:
- GCP ideally, though strong experience with another platform such as AWS or Azure will suffice
- Cloud data warehouses such as BigQuery, Redshift, or Snowflake
- Pub/sub systems such as Kafka
- Data-parallel processing frameworks such as Spark or Flink, for both batch and streaming data
- A workflow scheduler such as Apache Airflow
- Programming in Scala and Python
- Proficiency in SQL
- Writing detailed design documents
- Working proficiency with Kubernetes
A solid understanding of the retail banking domain is desirable, but not required.