Title: Solutions Architect, Data Engineering Team

Location: Bangalore, India

 

About phData

 

We build and support next-generation strategic platforms that help customers save money and unlock real business value. If you are inspired by innovation, hard work, and a passion for data, we want to hear from you.

  •     Our Commitment – you will be working in a fast-moving environment with the brightest and most experienced minds in technology. We’re committed to constant learning and innovation.
  •     Our Work – you’ll be helping companies answer questions and create products and solutions that, until now, were too big, too expensive, and too complex to accomplish.
  •     Our Technology – we focus on building and deploying disruptive big data technologies. If you have experience in or a passion for Hadoop, Spark, and their supporting technologies, we want to hear from you!

 

In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks, including base salary, long-term incentive plans, extensive training, and paid Cloudera certifications, along with generous PTO and other benefits such as comprehensive insurance, a fitness allowance, and the flexibility to move between Platform Support, Data Engineering, and Machine Learning Operations teams.

 

Passionate about Big Data and Infrastructure? At phData, you will be part of a team of industry-pioneering experts who operate some of the largest analytics and data science infrastructure systems. We are currently seeking qualified Data Engineers to join our growing team. phData Data Engineers work on the most desirable projects at Fortune 500 customers.

 

12-16 years of industry experience

 

  •   As a phData Solutions Architect on the Data Engineering team, you will act as a key leader of a virtual team, delivering big data projects based on Apache Hadoop and Apache Spark to completion.
  •   Strong Core Java, Python, and J2EE experience, with a willingness to learn the latest big data technology stack.
  •   Experience with core Hadoop (HDFS, Hive, YARN, Sqoop) and knowledge of one or more ecosystem products/languages such as HBase, Spark, Impala, Search, Kudu, etc. is mandatory.
  •   Knowledge of PySpark is preferred.
  •   Understanding of how to work with customers; able to translate business requirements and high-level architecture designs into a Hadoop solution, including ingestion of data sources, ETL processing, data access and consumption, and custom analytics.
  •   Ability to program and write test cases in Java/Scala/Python/PySpark.
  •   Experience with Linux/UNIX, including shell scripting.
  •   Knowledge of distributed systems and databases, including experience with SQL.
  •   Knowledge of either ETL systems or large-scale online systems.
  •   Work with customers and other Solutions Architects to help establish and grow our partner relationships.
  •   Presence on Slack: ask questions and contribute to answers.

 
