Postmates runs one of the largest real-time delivery fleets in the country. Building a software platform that is reliable, scalable, and agile under demanding product needs is a serious technical challenge. Postmates isn’t just another ad platform or mobile app delivering static user-generated content: we have real customers paying real money for a real service, all delivered in under an hour.
As a Data Engineer, you’ll be part of the team responsible for the integrity and accessibility of all of Postmates’ business-critical data. You’ll contribute to our data pipelines, our analytics tools, and our data science and machine learning infrastructure, and help design and scale our architecture to meet future needs. You’ll work with teams across the organization, making sure that engineers have the tools to generate and store data, and that business and data science consumers have the information they need at their fingertips.
We’re looking for engineers with a proven track record of shipping high-impact data systems. We care much more that you understand how to build simple, clear, and reliable tools than that you have experience with any given toolset or pattern. We love learning, and we expect that you will learn new things and teach us new things as we build out the Postmates data infrastructure.
- Design and build reliable, easy-to-use data pipelines and data systems
- Roll out new tools and features on existing big data storage, processing, and machine learning systems
- Triage, identify, and fix scaling challenges
- Perform cost-benefit analyses of short-term needs vs. long-term data scaling and company growth
- Educate product managers, analysts, and other engineers about how best to use our systems to answer hard business questions and make better decisions using data
- 3+ years of professional experience building and deploying large-scale, data-intensive applications in production.
- Bachelor's degree (or equivalent experience) required.
- You are curious about how things work.
- You possess strong computer science fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval.
- You’ve built large-scale data pipeline and ETL tooling before, and have strong opinions about writing beautiful, maintainable, understandable code.
- You’ve worked professionally with both streaming and batch data processing tools, and understand the tradeoffs.
- You understand the challenges of working with schema-based and unstructured data, and enjoy the challenge of collecting data flexibly and accurately.
- You have extensive experience with at least one RDBMS platform (Postgres, SQL Server, MySQL, etc.).
- You are a strong communicator. Explaining complex technical concepts to product managers, support, and other engineers is no problem for you.
- You love it when things work, you understand that things break, and when they do fail you dive in to understand the root causes and fix whatever needs fixing.
- A Master's degree (or higher) in a technical field (C.S., Math, Physics, Engineering…)
- AWS development and operations experience (EMR, S3, Data Pipeline, etc.)
- Experience with the Apache ecosystem (Kafka, Spark, Storm, ZooKeeper, etc.)
- Experience with the Amazon Redshift data warehouse
- A solid math and statistics background
- Competitive salary and generous stock option plan
- Medical, dental and vision insurance
- We'll provide the equipment you need to work efficiently and creatively
- Paid parental leave, vacation time and sick time
- Catered lunches and open snack bar
- Impact-first work environment (no politics, no pandering)
- Huge company vision (we need you to build the future, not just maintain the status quo)
- Full support to contribute to open source projects
- Awesome office located in the Financial District, just minutes from BART, Muni, AC Transit, and SamTrans