Help us build scalable, robust data systems.
Annual payment volume at GoCardless exceeds $1 billion, and we’re processing tens of thousands of transactions every day. As the company grows, there is an increasing demand for the data behind these transactions, and our ability to make sound and timely business decisions is hugely dependent on the availability and reliability of this data.
We’re looking for a data engineer who can help us build data systems that scale with this demand.
As our first dedicated Data Engineer, you will help drive our Data Engineering development strategy from day one. You'll be joining a team of two Data Scientists, who sit within the wider Engineering team. You’ll play a vital role in productionising new data systems, as well as scaling and improving existing ones, such as our in-house fraud detection system. You will also work with people across technical and commercial teams to understand their data needs, and implement the best possible infrastructure to meet them.
In terms of our stack, we’re heavy users of Postgres: both our primary application database and our analytics database run Postgres 9.4. Our backend is built in Ruby, we use Python for our data projects, and our infrastructure team writes a fair amount of Go.
We’re also currently implementing Tableau to give everyone at GoCardless simple, self-service access to our data. The backend for this is a data warehouse in Google BigQuery, which receives data via a pipeline built using the Luigi Python library. You’ll be responsible for improving, scaling and maintaining these systems, as well as leading the adoption of best practices in testing, monitoring and security.
What we’re looking for
We want to work with people who are passionate about building and maintaining reliable, performant systems, and have practical experience of doing so. You should have a solid professional background in software engineering, and a deep understanding of relational databases. Given the versatile nature of the role, we’re looking for someone who can learn fast, enjoys working with others, and is a pragmatic decision-maker.
Bonus points for:
- Strong Python skills.
- Experience designing data warehouses and assembling data pipelines.
- Familiarity with modern data warehousing tools such as Amazon Redshift and Google BigQuery.
- A Computer Science degree, or equivalent experience.
- Experience with Python's core data science libraries, e.g. pandas, scikit-learn etc.
Our team come from a variety of backgrounds and we welcome diversity – if you’re unsure, please apply. We offer a competitive salary and options package, commensurate with your experience.
In your application, please include your CV, a link to your GitHub, and a brief description of any interesting projects you’ve worked on that are relevant to the role.