VoiceOps uses AI to improve call center rep performance with world-class coaching.
Our average customer makes tens of thousands of calls per week. In a world without VoiceOps, they have literally no idea what their sales reps are doing on the phone. It's a total (and scary) black box.
By applying ML and a great UI to this problem, we give call center leadership all the data they need about customer conversations at their fingertips, so they can coach their reps more effectively and efficiently.
The technical problem is interesting, and it gets more interesting as we grow. Our core challenge is taking billions of audio recordings (and messy, unstructured human conversations) and making sense of that data in a way that is (a) accurate, (b) cost-efficient, and (c) highly scalable. The corresponding product problem is how to take well-structured data and make it actionable for the end user.
Call center recordings are one of the richest and largest untapped datasets in the world: billions of calls stored in AWS buckets that no one is touching right now. We're going to be the best in the world at structuring that data and putting it to use to make businesses work better.
About the Role
VoiceOps is looking for an accomplished, enthusiastic, and driven engineer with experience building data processing and storage systems. Our ideal candidates have architected and deployed systems to support multiple (small) engineering teams with specific needs, and they enjoy a large degree of autonomy and ownership of a company's data infrastructure.
Responsibilities
- Design and develop data pipelines, ETL, storage solutions, and workflows that are optimized for speed, fault-tolerance, and scalability
- Work with Application, Machine Learning, and Site Reliability/DevOps engineers to create systems that support their varied data needs while allowing for independent manipulation and iteration of data
- Define robust data schemas for the rapid intake and processing of customer data with diverse structures
- Support product-focused engineering teams with data infrastructure, APIs, and scalable deployments
- Architect and author internal libraries for use by fellow engineers
- Help create data analytics tools for software telemetry and business intelligence purposes
- Cultivate a better understanding of data handling best practices across engineering teams
- Collaborate on security efforts for customer data
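To give a flavor of the pipeline work described above — a minimal sketch only, with hypothetical record fields rather than VoiceOps's actual data model — a fault-tolerant ETL stage should be idempotent, so that replaying the same input after a failure never duplicates output:

```python
import hashlib
import json

def stable_id(record: dict) -> str:
    """Derive a deterministic ID from the record's contents,
    so re-running the stage is idempotent."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

def transform(record: dict) -> dict:
    """Normalize one raw call record into the pipeline's internal shape.
    (Field names here are illustrative assumptions.)"""
    return {
        "id": stable_id(record),
        "agent": record["agent"].strip().lower(),
        "duration_s": int(record["duration_ms"]) // 1000,
    }

def run_stage(raw_records, sink: dict) -> dict:
    """Write each transformed record keyed by its stable ID.
    Reprocessing the same input overwrites rather than duplicates."""
    for rec in raw_records:
        out = transform(rec)
        sink[out["id"]] = out
    return sink
```

Keying the sink by a content-derived ID is one common way to make a stage safely retryable; in production the `sink` would be an upsert into a database or object store rather than an in-memory dict.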
Requirements
- 4+ years of experience writing code in Python, Ruby, Go, Java, or a similar language at a SaaS company
- Strong understanding of relational and non-relational databases such as PostgreSQL, Elasticsearch, and Redis
- Ability to organize and model data to support varied use cases
- Experience creating and deploying container-based software
- Familiarity with asynchronous data processing patterns, with a focus on monitoring and logging
- Prior experience working with AWS or a similar cloud provider
- A BS/MS in computer science or related field of study, or equivalent experience
- Ability to communicate ideas to technical and non-technical colleagues
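As a hedged illustration of the asynchronous-processing-with-logging pattern mentioned above (not a prescribed implementation — the worker body is a stand-in for real I/O such as fetching a recording):

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("worker")

async def worker(name: str, queue: asyncio.Queue, results: list):
    """Pull items off the queue, log progress, and record results."""
    while True:
        item = await queue.get()
        try:
            await asyncio.sleep(0)  # stand-in for real async I/O
            results.append(item * 2)
            log.info("worker %s processed item %s", name, item)
        finally:
            queue.task_done()  # mark done even if processing raised

async def process_all(items):
    queue = asyncio.Queue()
    results = []
    for item in items:
        queue.put_nowait(item)
    workers = [
        asyncio.create_task(worker(f"w{i}", queue, results))
        for i in range(3)
    ]
    await queue.join()  # block until every queued item is processed
    for w in workers:
        w.cancel()      # shut idle workers down cleanly
    return results
```

The `queue.join()` / `task_done()` pairing is what makes completion observable, and the per-item log line is the hook where production metrics (counters, latencies) would attach.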
Nice to Have
- Experience designing, building, and maintaining highly distributed or event-driven systems
- Experience supporting Machine Learning engineers with data preparation, validation, annotation, and model evaluation
- Previous work with workflow management and/or task scheduling systems
- Prior use of Terraform/Ansible/Infrastructure as Code tools
About You
- You have strong opinions about technology and the facts to back them up
- You welcome healthy but respectful debate
- You know the differences between: data warehouses and data lakes, schema-on-read and schema-on-write, relational and non-relational databases, batch and stream processing
- The thought of code sitting undeployed for more than a week sends shivers up your spine
- You want to be the go-to subject matter expert for data-related questions
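To make one of those distinctions concrete — an illustrative sketch under assumed field names, not a description of our stack — schema-on-write validates records before they land, while schema-on-read stores raw payloads and imposes structure only at query time:

```python
import json

SCHEMA_FIELDS = {"call_id", "agent", "duration_s"}

def ingest_validated(store: list, record: dict):
    """Schema-on-write: reject malformed records at ingest time."""
    missing = SCHEMA_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    store.append(record)

def ingest_raw(store: list, payload: str):
    """Schema-on-read: accept any payload with no validation."""
    store.append(payload)

def query_raw(store: list, field: str):
    """Structure is imposed only here, when the data is read."""
    for payload in store:
        doc = json.loads(payload)
        yield doc.get(field)  # absent fields surface as None
```

The trade-off in miniature: schema-on-write catches bad data early but makes ingest brittle to new formats; schema-on-read ingests anything but defers (and repeats) the cost of interpretation at every query.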