Data Scientist

In a digital world, human connections matter. Sailthru is a SaaS platform that enables marketing teams to build and maintain customer relationships through personalized communication in email, on-site, and in mobile applications. The Sailthru personalization engine works with over a billion user profiles and tracks billions of interactions every month. Data Science is at the center of what we do. It's how we build products and features that enable our clients to reach their customers with the best content, at the best time, and in the best channel (email, web, mobile).

Sailthru is looking for a skilled and creative Data Scientist to help us craft data-driven products that enable companies to build personal relationships with their customers. This is a high-impact position where you can apply your experience with machine learning and statistical modeling at large scale to predict user behavior and recommend content for the hundreds of companies we work with.

The Data Science team collaborates closely with other engineering teams to use data to enable practical, real-world use cases. We are a tight-knit team that constantly seeks to learn from each other. We focus on innovation and are always trying to improve how we work and the results we achieve. If you love challenging others and being challenged, you will fit in well. You will have the opportunity to make major contributions to the team and the company by improving our existing recommendation systems and prediction models. You will also have the chance to innovate by developing new products that use data in novel ways to engage customers.

Keys to Success

  • You have unending curiosity
  • You excel with autonomy
  • You believe personalization is the best way to build sustainable businesses on the internet
  • You are passionate about learning the latest advances in data science, such as deep learning and image recognition
  • You can translate mathematics and technology into practical solutions that drive customer lifetime value
  • You get charged up working with large-scale data and distributed processing environments (Kafka, Spark)

Technical Requirements

  • 4+ years of experience writing and deploying production-quality code
  • Experience with common machine learning libraries in R, Python (scikit-learn), and Spark (MLlib)
  • Solid understanding of microservice architecture
  • Hands-on experience with large-scale data processing systems (Kafka, Hadoop/MapReduce, Spark) and NoSQL data stores (Cassandra/HBase, Redis, Elasticsearch) a plus
  • Experience working in an agile environment a plus
