About DKatalis

DKatalis is a financial technology company with multiple offices in the APAC region. In our quest to build a better financial world, one of our key goals is to create an ecosystem-linked financial services business.

DKatalis is built and backed by experienced and successful entrepreneurs, bankers, and investors in Singapore and Indonesia with more than 30 years of experience building financial services and banking businesses at Bank BTPN, Danamon, Citibank, McKinsey & Co, Northstar, Farallon Capital, and HSBC, and with backgrounds from top-tier schools such as Stanford, Cambridge, London Business School, and JNU.


About the role

We are seeking a hands-on data engineer to help us build out and manage our data infrastructure, which will need to operate reliably at scale with a high degree of automation in setup and maintenance. The role involves setting up and managing data pipelines and building new systems where required. Responsibilities extend to building and optimizing key ETL pipelines over both batch and streaming data. The ability to work with product, engineering, BI/analytics, and data science teams is essential. Care must be taken to ensure the integrity of the data model and to maintain a high level of data quality.


The individual will also need to work with technical leadership to make well-informed architectural choices when required. A high degree of empathy is required for the needs of the downstream consumers of the data artefacts produced by the data engineering team, i.e. business users, software engineers, data scientists, data analysts, etc., and the individual needs to be able to produce transparent and easily navigable data pipelines. Value should be placed on consistently producing high-quality metadata to support discoverability and consistency of calculation and interpretation.


Experience in the banking or fintech industries, especially exposure to finance, regulatory reporting, or risk-related areas, will be viewed favourably.


Candidates should ideally have experience with the following (although strong candidates without this experience will still be considered):

  • SQL and data warehouses (ideally cloud-based) such as BigQuery, Redshift, or Snowflake; traditional OLAP-style database experience will also be considered
  • At least one modern programming language such as Python, Java, or Scala
  • Ideally GCP, but experience with another cloud platform such as AWS or Azure will suffice
  • A workflow scheduler such as Apache Airflow, Dagster, or similar tools (a brief sketch of this kind of pipeline follows this list)
  • Docker and containerisation
  • Writing detailed design documents
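
As a concrete, hedged illustration of the pipeline work described above, the sketch below shows a minimal daily batch ETL job written for Apache Airflow (assuming Airflow 2.x); the DAG and task names are hypothetical and the extract/load steps are stubbed rather than real implementations.

    # A minimal sketch only: hypothetical DAG and task names, stubbed logic.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Pull the previous day's records from the source system (stubbed here).
        print("extracting records for", context["ds"])


    def load(**context):
        # Load transformed rows into the warehouse (stubbed here).
        print("loading warehouse partition for", context["ds"])


    with DAG(
        dag_id="daily_transactions_etl",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",       # run once per day
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task         # extract first, then load

In production the stubs would be replaced with real extract and load logic (for example, loading into a warehouse such as BigQuery), but the dependency structure shown here is the core of the orchestration work the role involves.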


Experience with the following tooling will be appreciated but is not required:

  • Fluency with Kubernetes
  • Event streaming platforms such as Kafka
  • Stream analytics frameworks such as Spark, Flink, GCP Dataflow, etc.
