Neuron is the leading shared micromobility operator in APAC and has also expanded into the United Kingdom, South Korea and Canada. We partner with cities to connect people and places in a safe, convenient and fun way. We are driven to help the world build a more prosperous and sustainable future through new ways of moving and connecting.
Neuron launched the world’s first docked e-scooter sharing service in Singapore in 2016 and the first shared e-scooter programme in 2017. We develop our own industry-leading e-scooters, which are purpose-built for sharing and safety, and we set the industry standard for the technology that manages them. Neuron has launched an impressive range of world firsts, particularly when it comes to safety. We currently operate in several cities across ANZ, three in the United Kingdom and two in South Korea, and most recently launched in Canada.
What You’ll Do:
- Build the data tools and pipelines that enable company-wide tracking of key performance metrics for the C-suite, and dedicated metrics for the product, operations and finance teams
- Design and enhance data infrastructure using frameworks such as Hadoop or Spark, or data warehouse products such as BigQuery or Redshift
- Architect data models and design reliable data pipelines that will efficiently move data to our data lake
- Design and develop new systems and tools that will enable teams to utilize, understand and process data at faster speeds
- Manage the infrastructure and services on AWS to provide the Data Team with high availability and efficient extraction of data
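To give a concrete feel for the pipeline work described above, here is a minimal extract-transform-load sketch in Python. It is purely illustrative: the function names, field names and in-memory "lake" are hypothetical stand-ins, not Neuron's actual stack or schema.

```python
import json
from datetime import datetime, timezone

def extract(raw_rows):
    """Parse raw JSON event rows; skip malformed records rather than fail the batch."""
    records = []
    for row in raw_rows:
        try:
            records.append(json.loads(row))
        except json.JSONDecodeError:
            continue  # in a real pipeline this would go to a dead-letter queue
    return records

def transform(records):
    """Normalise field names and stamp each row with a load time (a typical staging step)."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    return [
        {"trip_id": r["id"], "city": r["city"].lower(), "loaded_at": loaded_at}
        for r in records
        if "id" in r and "city" in r
    ]

def load(rows, lake):
    """Append rows to a partition keyed by city (a stand-in for S3 or BigQuery partitions)."""
    for row in rows:
        lake.setdefault(row["city"], []).append(row)
    return lake
```

A run over a small batch would look like `load(transform(extract(raw_rows)), lake)`; in production the same stages would typically be orchestrated as scheduled tasks, with each stage made idempotent so a failed batch can be safely retried.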
What You'll Need:
- A degree in Computer Science, Statistics, Analytics or another relevant field
- Understanding of computer science fundamentals, data structures and algorithms
- Proficiency in Python (preferred), Java or Scala
- 3+ years of experience in a technical data role
- Demonstrable ability to build stable and reliable data pipelines
- Experience with cloud deployments using AWS
- Ability to communicate in Chinese is preferred, to interact effectively with the back-end engineering team in China
- Ability to work in a fast-paced environment with ambiguity is a plus
- Experience in large scale data processing using Hadoop or Spark is a plus
- Experience with streaming analytics and machine learning model productionization is a plus
- A self-motivated, independent learner with a passion for data and technology
- A keen interest in shared mobility operations and a growth mindset are essential to this role
Make It Happen
Drive to deliver results with the highest impact and be committed to following through. Continuously optimise through collaboration to achieve the best collective outcomes.
Take active steps and be the driver to improve things, for yourself, your team and the cities we serve. Relish the chance when this leads you outside your normal scope.
Trust Facts Over Opinions
Decisions should be evidence-based. Assess situations fairly, using reason and logic rather than unverified opinions wherever possible.
Ask questions, challenge assumptions, learn from mistakes and be ready to leave behind what you thought you knew before.
Do More With Less
Both time and resources are finite. Balance your priorities and think strategically about how you can maximise your impact and the return on investment of your resources.
If you are passionate about making a real impact and want the opportunity to play an instrumental part in our growth story, we want you riding for our team!