Job Description:

  • Design, build, and maintain the ingestion system that supports various types of data (e.g. user behavior, RDS, NoSQL databases, and others) being ingested into the data warehouse in a timely and accurate manner
  • Translate data requirements into scalable technical data services with low latency and high concurrency
  • Design, build, and maintain batch and real-time data pipelines in production using Hadoop big data technologies
  • Analyze and improve efficiency, scalability, and stability of the system
  • Define and manage SLAs and data quality for all data sets in allocated areas of ownership

Requirements:

  • Minimum B.S. degree in Computer Science or a related technical field
  • 2+ years of working experience with programming languages such as Java, Scala, or Python
  • Familiar with Hadoop, Spark, and Flink data processing; experience processing TB-scale data is a plus
  • Familiarity with designing and operating robust distributed systems is a plus
  • Understanding of data mining or machine learning
  • Excited to work closely with data
  • Passionate, self-motivated, and willing to take ownership