As a member of Yieldmo’s data engineering team, you will build innovative data pipelines for processing and analyzing our Big Data (250+ billion events per month). A unique challenge of the role is comfortably shifting gears across varied technologies: coding in Scala and Java, transforming and analyzing data in SQL, building pipelines in Spark and Kinesis, and engineering custom transformation and integration applications in Java and Scala.


Responsibilities:

  • Develop data pipelines in Spark to efficiently transfer massive amounts of data (over 20 TB/month) between systems
  • Engineer complex, efficient, and automated data transformation solutions using programming languages such as Java, Scala, SQL, and Python
  • Build distributed infrastructure solutions for optimized processing of large data sets
  • Research, plan, design, develop, document, test, implement, and support Yieldmo’s proprietary software applications
  • Serve as the subject matter expert in engineering, maintaining, and evangelizing metric definitions throughout the organization
  • Perform analytical data validation to ensure the accuracy and completeness of reported business metrics
  • Participate in a four-person rotation providing 24x7 on-call support to meet the SLA on enterprise data systems
  • Understand the business problem and engineer, architect, and build an efficient, cost-effective, and scalable technology infrastructure solution
  • Write technical documentation that records, clearly and in detail, the solution design, coding solutions, and step-by-step support procedures
  • Monitor system performance after implementation and iteratively devise solutions to improve performance and user experience
  • Research and innovate new data product ideas to grow Yieldmo’s revenue opportunities and contribute to the company’s intellectual property (10% of time)
  • Conduct user training, perform periodic system updates, interact with users on future enhancements, and resolve software application problems


Requirements:

  • BS, MS, or higher degree in computer science, engineering, or a related field
  • 5+ years of experience in engineering data pipelines for Big Data Systems
  • 5+ years of experience developing in Java/Scala
  • Proficient in SQL, with experience performing data transformations and data analysis
  • Demonstrated keen attention to detail
  • An eye for detecting data defects and anomalies
  • Comfortable juggling multiple technologies and high-priority tasks
  • Nice to have: experience with distributed columnar databases such as Vertica, Greenplum, Redshift, or Snowflake
  • Nice to have: experience with Spark and Spark ML
  • You are a self-starter who enjoys learning new technologies

It is our policy to provide equal employment opportunities to all individuals based on job-related qualifications and ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment or bias based upon these grounds.
