Envoy makes workplaces work better. With a focus on the details, we craft beautiful, modern software that elevates the workplace experience. Companies like Google, Tesla, GitHub, Slack, Stripe, and Pinterest have worked with Envoy to welcome over 30 million visitors to more than 10,000 locations around the world. We are proudly backed by Andreessen Horowitz, Menlo Ventures, Initialized Capital, and many others.
Our mission is to challenge the status quo of workplace technology. This idea started at the front desk, where we set a new standard for visitor sign-in. Now, we’re looking around the office—to the mailroom, meeting rooms, and beyond—and asking how we can make this better, too. We envision a world where technology is woven through our workplaces, all of it working together to make our time there delightful.
If this world sounds exciting, we’d love for you to help us build the Office OS.
Build out our data transformation pipeline (we use Airflow, dbt, Spark, Redshift) that powers all our analysis, research, and reporting, and ensure it is reliable, fast, and scalable
Build full-stack data products that power key tools and apps (Flask, React, Vega)
Drive key decisions on standards, frameworks, and tools across the data team
Serve as the domain expert on the technologies that power our data stack, and ensure that best practices are always being followed in both code and process
Learn and implement innovative technologies to continuously improve the value that our data platform can deliver
Explore data from all sides of the business (Sales, Success, Marketing, Finance) and drive insights through modeling (marketing attribution, churn prediction, account health, lead scoring, etc.)
5+ years of software engineering or coding experience
BS / MS in Computer Science or a related technical field
Mastery of SQL
Experience with Airflow (or another orchestration framework)
Experience with dbt, ideally on a large dbt project
Experience with Python
Have built ETL pipelines from scratch and scaled them up to process high volumes of data
Have built and maintained APIs
Have built real-time data pipelines using streaming technologies (Kafka, Kinesis)
Have deployed ML models in a production environment
Experience with AWS data services (Redshift, EMR, S3, etc.)
If this kind of work sounds interesting, we'd love to hear from you! We're open to all backgrounds and levels of experience, and we believe that great people can always find a place. People do their best work when they can be themselves, so we value uniqueness. We never discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.