Venmo was founded on the principles of breaking down the intimidating barriers around financial transactions to make them intuitive, friendly, and even fun. And it worked: people love sending money with Venmo, and we’re growing by leaps and bounds!
But we’re only just getting started. We want to take that magic of sending money with Venmo and cascade it into every place where people use money. That means connecting people to their money in the most intuitive and fun way possible, then connecting people with each other. Users already love Venmo, but we know there are lots of things we haven’t thought of to make the experience of using Venmo even more delightful and valuable. All that’s going to take a lot of figuring out. Let’s figure it out together!
Engineering at Venmo
At Venmo, we are creating a product that people love. We strive to create a delightful user experience while connecting the world and empowering people through payments. We are looking for intellectually curious people who want to be inspired and inspire others to change the world.
Engineering is a craft, and at Venmo we want the internals of our software to be as elegant as the end user experience we are designing. We spend our days scaling our infrastructure and building new features to meet and exceed our users' needs and wants. We teach and learn from one another, and push each other to be at our creative and analytical best.
As a Software Engineer in Data Platform, you will empower teams to leverage their data with confidence. You are excited by the prospect of designing Venmo's data infrastructure strategy and building the infrastructure for other teams and Venmo products to rely on.
Specific responsibilities include:
- Improve and maintain Venmo's data pipeline architecture.
- Design, implement, and maintain Venmo's data infrastructure services, such as Kafka.
- Build data tools that utilize the data pipeline to provide actionable insights into our customers, operational efficiency, and other key business performance metrics.
- Continuously improve data operations: automate manual processes, optimize data delivery, re-design infrastructure for greater scalability, etc.
What we are looking for:
- Bachelor's and/or Master's in computer science or a related field of study
- 2+ years of experience in software development or a related field
- Advanced SQL knowledge, including query authoring and experience with relational databases, as well as working familiarity with a variety of databases.
- Experience improving and optimizing data pipelines, architectures, and data sets.
- Proficiency at manipulating, processing and extracting value from large disconnected datasets.
- Experience with big data infrastructure: Hadoop, Spark, Kafka, etc.
- Expertise with relational SQL and NoSQL databases.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Proficiency with an object-oriented or functional scripting language: Python, Java, C++, Scala, etc.
- Strong communication skills with the ability to understand and explain technical issues to a non-technical audience