About Our Big Data Developer
Dynamic Yield is on the lookout for an outstanding Big Data Developer with strong OOP capabilities, a deep understanding of distributed systems, and the ability to deliver in a technically diverse, fast-paced environment.
As part of our team, you’ll be responsible for all engineering aspects of our Big Data pipeline. You’ll be expected to utilize advanced technical skills and critical thinking abilities while using a range of technological stacks: Spark, Flink, HBase, Kafka, Redis, Elasticsearch, Akka (we code mainly in Java and Scala).
Responsibilities:
- Design, code, and maintain Big Data solutions - both batch and stream processing
- Be fully responsible for the product’s lifecycle - from design and development to deployment
- Bring a strong opinion to the table and be proactively involved in product planning
- Work in teams and collaborate with others
- Improve application performance
- Troubleshoot and resolve data issues
Optimal Skills for Success:
- At least 3 years of software development experience
- Experience with Big Data, NoSQL, or stream-processing technologies (e.g. Spark, Flink, Redis, Kafka)
- Proven experience in leading and delivering complex software projects
- Excellent knowledge of an OO language
- A team player who is fun to work with!
- A passion for clean, robust code and performance tuning
- Degree in Computer Science or a related discipline from a top university, or relevant experience