Anaplan is a leader and innovator in connected planning – a unique technology that connects all data and all plans for the largest companies in the world. At Anaplan, our mission is to make all planning for all people a reality. In today’s data-driven environment, especially with the difficulties presented by COVID, it has become harder to create reliable plans and forecasts, which makes Anaplan even more essential. Our Israeli office is the AI center of Anaplan and is responsible for all the AI and Big Data solutions that solve these problems.
Senior Data Engineer - Big Data
The Big Data team owns and controls all data activities. We own the core data collection system, which drives hundreds of millions of data points, and we develop and maintain data pipelines, including AI models, to bring quality data and signals to our customers.
As a member of the team, you’ll build services, libraries, and tools to create these data pipelines. You’ll work closely with all engineering, data science, and analyst teams.
What Are We Looking For
- Passion for data and its massive effect on customers and the business
- You love to know what is happening in your applications at any given time
- You are eager to solve technological and business challenges
- A well-rounded, hands-on data engineer with a good grasp of the AI and infrastructure worlds
- A people person: you’ll showcase, support, and mentor developers both within the team and across other teams
- Positive energy and enthusiasm
Your Day To Day
- Own the data processes that you and others build: develop and maintain them
- Data Pipeline
- Design and implement core components that will be used by data-related teams
- Perform ongoing optimization improvements
- Implement best practices and follow high coding standards
- Use Python, Spark, and SQL
You Should Have
- B.Sc. or M.Sc. graduate in a quantitative field
- Engineering, Computer Science, Information Systems, etc.
- At least 5 years of experience with Python, Java, or an equivalent OOP language
- Deep understanding of data, metrics, data modelling, and business needs
- Good familiarity with cloud providers (AWS/GCP/Azure)
- You are able to write simple, clean and testable code
- Familiarity with big data tools
- Spark, AWS Redshift, MongoDB, Redis, AWS Athena, etc.
- Past experience with container-orchestration systems:
- Kubernetes (K8s)
- Docker
- DC/OS
- Solid understanding of task-orchestration systems like
- Airflow, Luigi, etc.