About Our Senior Big Data Developer

Dynamic Yield is on the lookout for an outstanding Senior Big Data Developer with strong OOP capabilities, a deep understanding of distributed systems, and the ability to deliver in a technically diverse, fast-paced environment.

As part of our team, you’ll be responsible for all engineering aspects of our Big Data pipeline. You’ll apply advanced technical skills and critical thinking across a range of technologies: Spark, Flink, Kafka, Redis, and Elasticsearch. We code mainly in Java and Scala.

The Task-at-Hand:

  • Design, code, and maintain Big Data solutions - both batch and stream processing
  • Be fully responsible for the product’s lifecycle - from design and development to deployment
  • Bring a strong opinion to the table and be proactively involved with product planning
  • Work in teams and collaborate with others
  • Improve application performance
  • Troubleshoot and resolve data issues

Optimal Skills for Success:

  • At least 3 years of software development experience
  • Experience with Big Data/NoSQL/stream processing technologies (e.g., Spark, Kafka, Flink, Redis)
  • Proven experience in leading and delivering complex software projects
  • Excellent knowledge of an object-oriented (OO) language
  • Working experience with Docker and Kubernetes - an advantage
  • Experience with AWS cloud technologies and administration - an advantage
  • A team player who is fun to work with!
  • A passion for clean, robust code and performance tuning
