Overview

At Segment, we believe companies should be able to send their data wherever they want, whenever they want, with no fuss. Unfortunately, most product managers, analysts, and marketers spend too much time searching for the data they need, while engineers are stuck integrating the tools they want to use. Segment standardizes and streamlines data infrastructure with a single platform that collects, unifies, and sends data to hundreds of business tools with the flip of a switch. That way, our customers can focus on building amazing products and personalized messages for their customers, letting us take care of the complexities of processing their customer data reliably at scale. We’re in the running to power the entire customer data ecosystem, and we need the best people to take the market.
 
Data Engineering enables Segment to derive insights about our customers and product usage effectively and efficiently, and it is the backbone of all data-driven decisions we make to move the business forward. 
 
As a member of the Data Engineering team, you will be responsible for building out APIs to interact with our data lake, creating automations that run critical parts of our internal operations, and developing efficient ETL pipelines that deliver data and insights to partners across the company. Additionally, you will have the opportunity to provide feedback to the Product and Engineering teams that helps shape the future of our products.
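
To make the first of those responsibilities concrete, here is a minimal sketch of the kind of internal data API this role builds, using Flask (one of the frameworks listed below). The endpoint, table, columns, and connection string are illustrative placeholders, not Segment's actual schema or services:

    # Hypothetical Flask service exposing per-day event counts; the table,
    # columns, and warehouse URL below are placeholders.
    from flask import Flask, jsonify
    import sqlalchemy

    app = Flask(__name__)

    # Placeholder connection string; in practice this would point at the
    # warehouse or a query engine over the data lake.
    engine = sqlalchemy.create_engine("postgresql://user:pass@warehouse:5432/analytics")

    @app.route("/events/daily_counts/<source_id>")
    def daily_counts(source_id):
        # Aggregate event counts per day for a single source.
        query = sqlalchemy.text(
            "SELECT event_date, COUNT(*) AS n FROM events "
            "WHERE source_id = :source_id "
            "GROUP BY event_date ORDER BY event_date"
        )
        with engine.connect() as conn:
            rows = conn.execute(query, {"source_id": source_id}).fetchall()
        return jsonify([{"date": str(day), "count": n} for day, n in rows])

    if __name__ == "__main__":
        app.run(port=8080)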

What you’ll do:

    • Build, enhance, and maintain our ETL infrastructure and data assets, primarily hosted on AWS
    • Design, build, and launch efficient and reliable data processing pipelines (a minimal sketch follows this list)
    • Build scalable data APIs and services for internal application teams to consume
    • Build tools and self-service frameworks, such as ETL and data quality tooling, to facilitate efficient and reliable data processing
    • Work with partners, including the Analytics, Product, and Operations teams, to assist with their data needs
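
As referenced above, here is a minimal sketch of a daily ETL pipeline using Apache Airflow (one of the bonus-points technologies), assuming Airflow 2.x. The DAG, tasks, and schedule are illustrative placeholders rather than Segment's actual pipelines:

    # Hypothetical daily ETL DAG (Airflow 2.x); the task bodies are stubs.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_events(**context):
        # Placeholder: pull one day of raw events from the source system.
        print(f"extracting events for {context['ds']}")

    def load_to_warehouse(**context):
        # Placeholder: write transformed rows to the warehouse partition.
        print(f"loading partition {context['ds']}")

    with DAG(
        dag_id="daily_events_etl",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_events)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
        extract >> load  # run the load only after extraction succeeds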

You’re a great fit if you have…

    • 3+ years of hands-on experience with Python, Java, or Scala for data processing
    • 3+ years building scalable APIs with web frameworks like Flask or Django
    • 3+ years of experience with relational (SQL) and NoSQL databases
    • 3+ years working with cloud infrastructure platforms such as AWS and GCP
    • Experience with Terraform or other infrastructure-as-code tools
    • Experience with version control, continuous integration, and build-and-deployment tools such as GitHub and CircleCI
    • Eagerness and willingness to learn and work in a dynamic, fast-paced environment
    • Strong interpersonal skills
    • A BS or MS degree in Computer Science or a related technical field

Bonus points:

    • Experience building scalable data pipelines with Apache Airflow
    • Experience with data warehousing technologies, such as Redshift or Snowflake
    • Knowledge of distributed computing frameworks such as Apache Spark and Hadoop
    • Knowledge of the data streaming ecosystem, including Kafka and Spark Streaming


Segment is an equal opportunity employer. We believe that everyone should receive equal consideration and treatment. Recruitment, hiring, placements, transfers, and promotions will be based on qualifications for the position being filled, regardless of sex, gender identity, race, religious creed, color, national origin, ancestry, age, physical disability, pregnancy, mental disability, or medical condition.
