The Dispatching team is responsible for improving time efficiency for our customers, couriers, and partner stores. The team leverages machine learning and operations research techniques to build services that make smart dispatching decisions and minimise courier idle time.
In this role you will lead efforts to build data products and internal solutions for some of the hardest problems in routing and optimisation. You will see projects through from start to finish: performing research, prototyping solutions, deploying to production at scale, and running A/B tests to validate your solutions.

You will

  • Develop, test, and productionise algorithms that improve the time efficiency of our service, exploiting diverse data sources and applying advanced statistical and machine learning techniques such as estimation, deep learning, reinforcement learning, and graphical models.
  • Evaluate algorithms through controlled offline experiments (e.g. cross-validation with held-out data) and online experiments (e.g. A/B testing).
  • Design experiments and interpret the results to draw detailed and actionable conclusions.
  • Generate and execute ideas for exploratory analysis to shape future projects and provide actionable recommendations.
  • Create dashboards and reports to regularly communicate results and monitor key metrics.
  • Collaborate with cross-functional teams across disciplines such as product, engineering, operations, and marketing, and identify use cases for applying data science.
  • Mentor and empower other data scientists.


You have
  • A Ph.D. or M.Sc. in a relevant field such as computer science or machine learning (preferred but not required)
  • A minimum of 5 years of experience in a full-time industry position (not academic)
  • A deep understanding of the lifecycle of a data science project, with experience in research, solution development, and tuning and inspecting models
  • Experience with Python, Scala or similar languages that are used to ship production models and systems
  • Experience wrangling very large datasets by writing and maintaining data processing pipelines with Hadoop, Spark, BigQuery, Redshift, or similar
  • An ability to tell a story about data, to explore and reveal patterns, and to communicate your discoveries and hypotheses to stakeholders
  • Experience working in cross-functional teams and the ability to work across multiple parts of a tech stack
  • Experience working with geospatial data is a plus


We offer
  • A ticket to the moon on the fastest rocket - an adventure filled with challenges and professional growth
  • Social benefits (such as fresh fruit every day, free lunches from our yummy partners once a week, beers on Fridays, Culture Days every 6 weeks, the best coffee machine in the world,...)
  • Private Health Insurance
  • Unlimited Glovos (zero delivery fee on your Glovo orders)
  • Attractive compensation and equity plan
  • Gym membership discounts
  • Back to School Fridays (it’s all about learning and sharing knowledge)
  • Team building activities
  • Relocation package
  • International and talented team, used to working in a fast paced and vibrant way!
