Job Description:

  • Develop and enhance data infrastructure using frameworks such as Hadoop, Spark and Flume
  • Design and build new data models and architectures that will provide intuitive analytics
  • Design and build reliable data pipelines that will efficiently move data to our Data Warehouse
  • Design and develop new systems and tools that will enable teams to utilize, understand and process data at faster speeds


Minimum Requirements:

  • Minimum B.S. degree in Computer Science or a related technical field
  • 2+ years of Python development and Unix/Linux system experience
  • 2+ years of SQL (MySQL, PostgreSQL, Hive, etc.) experience
  • Familiarity with Hadoop and Spark; big data experience is a plus
  • Excellent communication skills with the ability to identify and communicate data-driven insights

You must also meet at least 2 of the additional requirements below:

  • 2+ years of working experience in software development/programming in Java or C/C++ under Linux/Unix
  • 2+ years of working experience with distributed databases or distributed systems
  • 2+ years of working experience with dimensional data modelling & schema design in Data Warehouses
  • 2+ years of working experience with big data analytics pipelines (Hadoop, Hive, ETL, RDBMS, and Hadoop data management tools such as Sqoop)
