About the Team
We are a small team that emphasizes moving fast and delivering impactful products through technical excellence. Our tech stack spans a highly scalable, highly available architecture processing more than 10K API requests per second; big data processing and analytics handling 1.5B events per day; and a web platform that makes our data available to customers and provides the insights that help their businesses thrive. The team is focused on building out the next generation of eCommerce products for affiliate marketing and content monetization to serve publishers on the Web. We use cutting-edge technologies and algorithms, including advanced implementations of Cassandra, Kafka, Spark and Natural Language Processing (NLP), among many others.
About the Job
Our Commerce team works on our core suite of products and scales our infrastructure to meet the increasing demands of new traffic. We are building out our web platform to allow our customers to view their data and make decisions about their configuration. You will work on this platform, which is central to our customers' experience. There will be many opportunities to grow in technical leadership, gain experience and exposure across our advanced tech stack, and drive initiatives to improve the business and the platform.
Our current tech stack
As a team we manage our full tech stack, from front-end React, through back-end low-latency Java APIs, to databases and all the infrastructure on AWS. Data is a big focus, as it drives our business. We use tools such as Spark, Kafka and Data Pipelines, along with a large variety of databases including MySQL, Redshift, Elasticsearch and a few others, to ensure we get all the insights to our customers.
What we’re building
We are completely rethinking how we make our data available to our customers and internal applications. Moving billions of events, joining multiple streams of data on the fly, aggregating, and making that data useful in record time is no small feat, but that's what makes it interesting. We are building a streaming data architecture using Amazon MSK (managed Kafka) and Apache Flink, and setting up new databases and data structures that will let us visualize that data however we want. This will not only get data to our customers faster and enable world-class machine learning in our applications, but also allow us to get to market faster building the applications our customers need.
What You’ll Be Doing:
- Data Engineering/Systems development using Spark, Kafka and AWS data pipelines
- Design and build scalable data pipelines that make use of cutting edge technologies to process billions of events per day
- Architect and build next-generation infrastructure to support robust and efficient pipelines
- Lead projects across large volumes of code, with multiple branches, formal integration procedures and test cycles
- Champion software engineering best practices and technical excellence through code reviews and mentorship of junior engineers
- Demonstrate end-to-end ownership of projects and initiatives
You are a self-starter, able to get the job done without micromanagement. You enjoy being part of a collaborative team but are also independent in getting things done. You love technical challenges and solving problems, all while striving to improve the software, the product, and the team. You believe the engineering team owns the quality of the product, and you design with this in mind.
The successful candidate will have:
- 5+ years of experience as a data engineer in a formal product development environment
- AWS experience in data pipelines, analytics, or ETL/ELT processing
- Experience in designing and performing data manipulation on both SQL and NoSQL databases
- Experience building and optimizing ‘big data’ real-time and batch data pipelines, architectures and data sets
- Experience in architecting, driving technology adoption and best practices of big data platforms
- Expertise in one or more of the following areas: AWS, Spark, Kafka, Data Warehouse technical architectures (Redshift, Snowflake), EMR
- Streaming data analytics using Kinesis or Kafka
Position Reports to: Director, Data Engineering
Publishers create the content the world depends on for education, entertainment, and commerce. Sovrn provides services to tens of thousands of online publishers to help them grow, operate their business, understand their readership, and manage consumer data. Sovrn is headquartered in Boulder, Colorado with offices in San Francisco, New York, and London.
Compensation and Benefits
In accordance with the Colorado Equal Pay for Equal Work Act, the approximate compensation range for this role in Boulder, Colorado is $131,000 to $160,000, including base salary and any related bonuses or commissions. Final compensation for this role will be determined by factors such as a candidate's relevant work experience, certifications, and geographic location.
Sovrn offers a full slate of benefits, including competitive compensation packages, stock options, medical, dental and vision coverage, short- and long-term disability, life insurance, 11 paid holidays, flexible vacation, commuter benefits, a 401(k) plan with match, and a paid parental leave program.
Equal Opportunity Employer
Sovrn is proud to be an Equal Opportunity Employer and provides equal employment opportunities to all employees and applicants regardless of race, color, religion, gender, gender identity, age, national origin, disability, parental or pregnancy status, marriage and civil partnership, sexual orientation, veteran status, or any other characteristic protected by law. Reasonable accommodations will be made to meet the requirements of the Americans with Disabilities Act and will be provided as requested by candidates taking part in all aspects of the selection process.
Sovrn does not accept agency resumes. Please do not forward resumes to our jobs alias or Sovrn employees. Sovrn is not responsible for any fees related to unsolicited resumes.