Pecan is an automated, AI-based predictive analytics platform. It simplifies and accelerates the process of building and deploying predictive models for a variety of business use cases, such as lifetime value, churn, demand forecasting, and more. Pecan connects to raw data and fully automates the data preparation, feature engineering, and preprocessing phases, as well as the model training and evaluation lifecycle. It was recognized as one of Israel's 50 most promising startups two years in a row.

Company Highlights: 

  • Series C company with over $117M raised to date. Tier-1 investors: Google Ventures (GV), Insight Partners, GGV, Dell Ventures, Mindset, and S Capital.
  • 90+ employees and growing very quickly
  • HQ in Tel Aviv with growing sales and marketing organization in the US 
  • Customers across CPG, retail, healthcare, mobile apps, fintech, insurance, and consumer services. Marquee customers include Johnson & Johnson, Nestle, and SciPlay.

What will you do:

Work on Pecan's data and machine learning infrastructure, which is at the heart of Pecan's offering. You will create innovative solutions for data ingestion and normalization from multiple data sources, as well as feature engineering and feature selection for our ML models. You will also design and implement technology solutions that protect Pecan's customers' data and privacy, using rigorous data isolation and security technologies such as de-identification pipelines, data obfuscation, and more.

Who You Are:

A problem solver at heart, you have a passion for excellence; you love to learn but know when it's time to deliver. You aren't threatened by a complex, dynamic, and demanding environment. "There is no I in team" is a motto you believe in deeply, and you are always looking out for your peers. You know how to take ownership and drive projects to completion.

What We're Looking For:

  • 3+ years of experience as a big data engineer.
  • 5+ years of experience as a software engineer.
  • Strong understanding of distributed systems.
  • Proven experience with data pipeline/backend services.
  • Experience with microservices architecture, cloud technologies, and Docker/Kubernetes (K8s).
  • Hands-on experience with Spark, Spark SQL, Spark Streaming, and other Spark-related projects.
  • Experience working with multiple DB technologies and vendors (big data solutions like BigQuery, RDBMS, NoSQL, columnar databases, etc.).

Bonus:

  • Building data pipelines using Apache Airflow.
  • Understanding of ML fundamentals.
  • B.Sc. or higher in Computer Science or similar.
