We're looking for a talented Data Warehouse Architect to join our growing data team and help unify our data landscape into a single source of truth powering our MI, deep-dive analysis and machine-learning predictive models.

This is a fantastic opportunity for a Data Architect with a penchant for data modelling to be an integral part of a thriving, profitable business, and help build a world-class data infrastructure that will be transformational for our people and our customers. The core set of Truth tables you will help build will:

  • Power deep-dive analyses to answer questions that unlock the next growth lever for the business
  • Enable analysts to become data champions in their respective teams
  • Enable fast, easy, self-serve BI in every team, helping everyone to spend time on what really matters
  • Flow straight into our data-science powered personalisation technology

How the magic happens

We collect data from dozens of sources, including transactional data, availability data, payments data, customer event-level data, voice-of-customer data, third-party data and much more. Our historical data runs into tens of billions of records and grows by tens of millions of records every day.

Our data is extremely varied: some of it is very finely grained, event-level data, while some arrives already aggregated to various degrees. It also arrives on different schedules!

Things you will be working on

  • Design, implement, optimise, evolve, support and deploy new and existing data models to serve many and varied use-cases across MI, analysis and data science.
  • Build and maintain complex business logic written in SQL that keeps up with the business.
  • Develop our approach to incremental transformations, one that is performant in a big-data world yet flexible when business-logic revisions must be implemented.
  • Work cross-functionally with fellow data engineers, data scientists, analysts, and other business users, understanding the structures and relationships in the raw data, as well as deriving the appropriate data models to support our ambitious enablement roadmap in 2019.
  • Work cross-functionally to integrate a modern BI tool with our data warehouse, developing the star schema, aggregations and any data models sitting between the warehouse and business users, taking ownership of implementing and maintaining standards and development best practices.
  • Adapt test-driven development (TDD)-like practices to SQL modelling, devising and writing assertions and validations that verify complex business logic (see the sketch after this list).
  • Take ownership and responsibility for the quality of the solutions, testing them thoroughly and ensuring best practices are used throughout - particularly with regard to self-explanatory code, maintainability and scalability.
  • Educate analysts and data scientists on data modelling design principles and best practices.
  • Working with fellow data engineers, help productionise analyses that are contributed by analysts and data scientists.
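
To give a flavour of the incremental-transformation and TDD-style points above, here is a minimal sketch of how such a load might look in Python wrapping SQL. It is illustrative only: the table names, the watermark column and the run_query helper are hypothetical, not our actual schema or tooling.

    # Incrementally upsert new rows into a Truth table, then assert
    # invariants before the batch is accepted. All names are illustrative.
    INCREMENTAL_MERGE = """
        MERGE INTO truth.orders AS t
        USING (
            SELECT order_id, status, loaded_at
            FROM raw.orders
            WHERE loaded_at > %(watermark)s  -- only rows since the last run
        ) AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.loaded_at
        WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
            VALUES (s.order_id, s.status, s.loaded_at)
    """

    # TDD-style assertions: each query must return zero offending rows.
    ASSERTIONS = {
        "no_duplicate_order_ids": """
            SELECT COUNT(*) FROM (
                SELECT order_id FROM truth.orders
                GROUP BY order_id HAVING COUNT(*) > 1
            )
        """,
        "no_null_keys": "SELECT COUNT(*) FROM truth.orders WHERE order_id IS NULL",
    }

    def run_incremental_load(run_query, watermark):
        """run_query is a hypothetical helper that executes SQL and returns rows."""
        run_query(INCREMENTAL_MERGE, {"watermark": watermark})
        for name, sql in ASSERTIONS.items():
            (offending,) = run_query(sql)[0]
            assert offending == 0, f"Validation '{name}' failed: {offending} bad rows"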

The Deal Breakers

  • Proficient in writing performant, high quality, maintainable SQL code on large and very large datasets.
  • Able to explore large datasets of raw data to uncover their structure and understand the relationships.
  • Understand the practical complexities of implementing deeply custom business-logic rules: what can go wrong, and the preventative measures to take.
  • Meticulous, careful validation of every join and intermediate table you create.
  • Full comprehension of data modelling techniques (one or more of Star Schema, Kimball, Data Vault, etc.).
  • Some experience with Python, as this is the engineering wrapper for all our SQL (you will be supported by fellow data engineers).

And you are...

  • Pro-active with a self-starter mindset; able to identify elegant solutions to difficult problems and able to suggest new and creative approaches.
  • Analytical and an effective problem-solver and communicator; able to leverage technology and subject-matter expertise in the business to accelerate our roadmap.
  • Able to document business logic clearly, in plain English, in the code itself, making knowledge easily transferable to the wider team.
  • Highly collaborative in your work style.
  • Curious and inquisitive.
  • Willing to step outside your comfort zone to broaden your skills and learn new technologies.

Nice to have

  • Experience with MPP cloud data warehouses (Amazon Redshift, Snowflake) is a big plus.
  • Experience with Looker, Tableau and other modern BI tools.
  • Experience with, or at least familiarity with, modelling use-cases such as sessionisation and identity stitching is a plus.
  • Good understanding of Agile.

Our stack

Our stack is Python for the data pipeline, Airflow for orchestration, and Snowflake as our data warehousing technology of choice. This is a chance to work with best-of-breed ETL and data warehousing tools and technologies. Our wider ecosystem of tools and partners includes Snowplow, Spark and Docker. Everything runs in AWS.
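
As a flavour of how the orchestration fits together, here is a minimal sketch of an Airflow DAG scheduling a daily load into the warehouse. The DAG id, task and callable are illustrative rather than our actual pipeline, and it assumes Airflow 2-style imports:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_truth_tables(**context):
        # Stub: in a real pipeline this would invoke the SQL wrapper
        # that runs the incremental transformations against Snowflake.
        pass

    with DAG(
        dag_id="daily_truth_tables",  # illustrative name
        start_date=datetime(2019, 1, 1),
        schedule_interval="@daily",
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="load_truth_tables",
            python_callable=load_truth_tables,
        )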

Things you should know about us

We’re continually improving as a team, and over 20% of our time is allocated to personal objectives and hack days, giving us space to develop individually, which is an important part of how we work. This is an excellent chance to experiment with technology and deliver that feature no one else thought of.

We are an equal opportunity employer:

We value and actively seek out a richly diverse range of talent, and have policies in place to ensure that every applicant and employee has the best chance to thrive here. We are an equal opportunity employer and all applicants will receive consideration for employment without regard to any characteristic protected by law.
