We are searching for a Senior or Staff Data Engineer to help develop our next-generation ML platform, supporting all of our internal machine learning operations at large scale.

We’re a social impact business (a public benefit company), and the largest tech platform focused on civic action in the world, with 80m monthly users, 50,000 campaigns launched on the site every month, 150 staff, and a new revenue model that has grown by 500% in 2 years. We’re growing quickly, and our users win a campaign for change once every hour: from strengthening hate crime legislation in South Africa, to fighting corruption in Indonesia, Italy, and Brazil, to combating violence against women in India.

We are looking for a Senior or Staff Data Engineer who excels at ML workflow orchestration and distributed data processing at scale, and who has a passion for working on an ML platform that mobilizes hundreds of millions of people to take deeper civic action.

You will be based out of our San Francisco headquarters and report to our Engineering Manager, Balaji Chandramohan. As a key member of our ML&AI Platform Squad - ML&AI Pack, you’ll build core functionality of our machine learning platform, empowering us to rapidly build and deploy Content Understanding, Recommendation Services, and democratized access to ML for all staff.

You’re a perfect fit for this role if your superpowers are orchestrating complex machine learning workflows on Kubernetes, developing high-quality, reliable distributed computation code, and engineering craftsmanship with excellent coding skills in Scala, Java, or Python.

What you’ll do:
  • Support and improve our recommendation pipelines and platform using Spark with Scala or Python, along with tooling and integrations with cloud services
  • Work alongside a team of data scientists, research engineers, and machine learning engineers to deploy and serve models that are easy to maintain, scale, monitor, and debug
  • Enable research scientists to accelerate experimentation using platforms and tools
  • Create pipelines using container-native workflow orchestration
  • Respond to on-call pages and alerts on our live pipelines
  • Participate in discussions and team rituals, and amplify the work of the team

The most important capabilities for the role:

  • Kubernetes workflow orchestration: experience managing tools that orchestrate complicated machine learning pipelines on Kubernetes at scale
  • Big data compute: a deep understanding of foundational big data and Hadoop concepts, plus at least 3 years of experience with one or more of the leading cloud platforms (AWS / Azure / GCP)
  • Coding skills & performance: writing high-quality code with at least two of Spark, MapReduce, and Hadoop, in Scala, Python, or Java; a demonstrated capacity for building data pipelines (typically at least 3 years of hands-on experience)
  • Communication and collaboration: a low-ego, growth-oriented mindset, with a bias toward thoughtful action, curiosity, connection, and relationship building

Interested? Great! Here's what you should know:

This is a full-time role based in San Francisco (remote for now). Our team is high impact and low ego, with an amazing culture to be part of.

We are accepting applications until June 30th.

We especially encourage applicants of different backgrounds, cultures, genders, experiences, abilities, and perspectives to apply. We’re actively working to increase the diversity of experience and perspectives on our team and are looking for someone who can help continue to lead that process. We are committed to being a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, national origin, disability, or veteran status.

Apply for this Job