Do you have a good grasp of Python or SQL, or experience building and running apps in cloud environments? Come join our fast-growing team! With opportunities in both Singapore and Hong Kong, we’re hiring Data Engineers who serve as an essential conduit to other engineering teams across our consumer-facing company, as well as to our Data Insights team.
About the Data Engineering Team
Our team acts as a central nexus to connect various data producers with consumers across the company. Our customers are:
- Other engineering teams across the company that produce or consume data that needs to be combined with other data sources.
- Analysts on the Data Insights team.
We are accountable for delivering:
- A centralized data warehouse that enables engineers and analysts across the company to ingest, anonymize, persist, analyze, purge, and otherwise process their data, and to enrich it with data sources from anywhere else in the company.
- Tools, training, and coordination.
- Data applications that don’t fall into any one business unit, or where the business units don’t have sufficient capabilities themselves. For example, we team up with the Data Insights team to build and operate churn-prediction models used by both humans and other systems at scale.
- A data catalog documenting data sources and what is available for use by other teams.
Our responsibilities include:
- Building and operating the data platform service, including defining and tracking its SLA.
- Guiding various engineering teams to design models and schemas of the data to be fed into the platform, making sure they can be processed in a scalable way and used by analysts efficiently.
- Guiding data analysts on the use of the data platform.
- Building libraries/modules and reference implementations of data ingesters on several common tech stacks.
- Partnering with other teams on projects to build data engineering solutions such as for churn-prediction, payment fraud management, and other company-wide challenges.
Other notes about our team:
- Our tech stack currently centers on AWS Redshift, Google BigQuery, Apache Airflow, and Tableau, but we expect it to evolve significantly over time.
- We have an ever-expanding range of engineering roles on the team, covering people with backgrounds in software development, infrastructure operations, and data science.
Your responsibilities will include:
- Understanding the needs of your internal customers and converting them into optimized, maintainable technical designs.
- Designing and building ingestion, processing, storage, and consumption systems that enable other business units to make business and operational decisions using data.
- Maintaining and operating the data platform that many business units rely on to meet their service-level targets.
What we’re looking for:
- Proficiency in Python and SQL, with a good understanding of runtime complexity
- Experience in building and running applications in cloud environments (AWS, Azure, or GCP)
- Knowledge of large-scale batch/stream processing frameworks such as Apache Spark, Flink, or Storm is a big plus
- Experience with data analytics and data visualization tools is a big plus
- Good command of written and spoken English
What we can offer you
- Full-time employment with flexible working hours
- Challenging work in a fun and collaborative environment
- Attractive compensation and time-off benefits
- Spacious open-concept and centrally located offices
- Financially successful and profitable company
- Fully stocked pantry with healthy foods and fresh fruit
- Team lunches and company events every quarter
- Multicultural teams represented by 30+ nationalities
Note: please do not include any salary information, and submit your resume in PDF format.