Who we are:
DoubleVerify is a big data analytics company that went public in April 2021 (NYSE: DV). We track and analyze tens of billions of ads every day for the biggest brands in the world, like Nike, AT&T, Disney, Vodafone, and most of the Fortune 500. If you’ve seen an online ad on the Web, on Mobile, or on a CTV device, there is a good chance it was measured by us.
We operate at massive scale, handling over 100B events per day and over 1M RPS at peak. We process events in real time at low latencies (milliseconds) to help our clients make decisions before, during, and after an ad is served. We verify that ads are fraud-free, appear next to appropriate content, and reach people in the right geography, and we measure viewability and user engagement throughout the ad’s lifecycle.
We are a global company, with R&D centers in Tel Aviv, New York, Helsinki, Berlin, Ghent, and San Diego. We work in a fast-paced environment and have a lot of challenges to solve. If you like working at huge scale and want to help us build products that have a major impact on the industry and the web, then your place is with us.
What will you do:
As the Manager of the Data Platform team, you will take a central role orchestrating, overseeing, and leading the many aspects of our challenging journey toward a new, modernized big data Lakehouse platform (PBs of data), built using Databricks on Google Cloud (GCP). Among the challenges: streaming ingestion at massive scale, providing a platform for processing structured and unstructured data, security and compliance at enterprise scale, data governance, and optimizing performance, storage, and cost.
You will lead, mentor, guide, recruit, and manage a team of experienced data engineers, and you will be responsible for the enablement of our big data platform serving developers, data engineers, data analysts, product managers, data scientists, and ML engineers.
You will work closely with our Data PM on leading our data strategy. You will learn how data serves our goals; find ways to improve our TBs of daily data processing while maintaining high data quality; guide other R&D teams and share best practices; conduct POCs with the latest data tools; and, in doing so, help our clients make smarter decisions that continuously improve their ad-impression quality.
Find your way to influence and impact a team that uses a wide array of languages and technologies, among them GCP, Databricks, Spark, Python, Scala, SQL, BigQuery, Vertica, Kafka, Docker, Kubernetes, Terraform, Prometheus, GitLab, and more.
Who you are:
- 4+ years of both people and technical management experience, leading a platform/infra backend/data engineering team in high-scale companies
- A versatile “go-to” tech geek, passionate about learning and sharing the latest and greatest big data technologies, and about using them to deliver state-of-the-art, cost-effective solutions
- A team player with great interpersonal and communication skills
- A leader by example
- Actively seek ways to improve development velocity and processes, remove bottlenecks, and help those around you grow
- 4+ years of experience with one of the following languages: Python, Scala or Java
- Able to make hard decisions with a can-do attitude
- Hands-on, in-depth experience with at least one streaming/batching technology, such as Kafka/Kinesis/Pulsar, and with stream-processing technologies, such as Kafka Streams/Spark/Flink
- Familiarity with SQL/NoSQL databases and with the main data architectures
- Experience working with a public cloud provider such as GCP/AWS/Azure