Being a Data Engineer at Hanson involves working on our proprietary streaming services for the engineering and research groups. Our Research teams create strategies that rely on order book data, which must be cleaned and made accessible to them in a timely manner.
You’ll be responsible for architecting the data pipelines, building the tooling, and spreading domain knowledge to users to aid strategy creation and ultimately increase PnL. Aside from these longer-term projects, the data engineers work on continually improving the collection, processing, storage, and dissemination of data across the business.
Our current tech stack consists of Python, Postgres, Kafka, Presto, and BigQuery.
- You’ll be familiar with writing code in either Python or Scala.
- You’ll enjoy collaborating with Quant Strategists to better define the systems you’ll be creating.
- You’ll have dealt with big data volumes (at least a few terabytes) and have a great track record of building automated, scalable, and robust data processing systems.
- You’ll have a good understanding of database technologies and know the practical and theoretical difficulties of building distributed systems.
- You’ll have worked with data warehouse systems like BigQuery, and with both batch and streaming building blocks like MapReduce, Spark, Kinesis, Dataflow, etc.
Hanson Applied Sciences is a proprietary research firm that focuses on providing liquidity for sporting events around the world. Our core philosophy of tech-driven trading has allowed us to become one of the largest sports market makers by volume in the world.
Our team is our greatest asset; we give them complete autonomy and the support they need to make an impact. Our structure encourages a culture of edge, mastery, and collaboration, all pulling together to solve some of the most complex unsolved computer science challenges.