Who we are
DoubleVerify is a big data and analytics company. We track and analyze tens of billions of ads every day for the world's biggest brands, including Apple, Nike, AT&T, Disney, Vodafone, and most of the Fortune 500. If you have ever seen an ad online on the web, on mobile, or on a CTV device, chances are it was analyzed and tracked by us.
We operate at massive scale: our backend handles over 100B events per day. We analyze and process those events in real time, making decisions about the environment where the ad is running and about every user interaction during the ad display lifecycle. We verify that all ads are fraud-free, brand-safe, served in the right geo, and highly likely to be viewed and engaged with, all in a fraction of a second.
We are global, with R&D centers in Tel Aviv, New York, Finland, Belgium, and San Diego. We work in a fast-paced environment and have plenty of challenges to solve. If you like solving big data challenges and want to help us build a better industry, your place is with us.
What you will do
You will work as part of an engineering group made up of multiple teams working on different challenges, maximizing the group's impact both in product delivery and in the clean, scalable technology that supports it, while boosting the productivity and development experience of the group's engineers.
- Assist with leading, planning, and solving cross-team challenges
- Mentor the group's engineers in the behaviors and practices expected of a senior engineer
- Help the group tackle its hardest problems by working on designs, plans, and implementation, contributing to various efforts to maximize impact
- Suggest new tools, architectures, and methodologies, and incorporate them into the development flow where they fit
You will work at huge scale in every fashion - real time, stream processing, and batch jobs - leveraging a variety of programming languages (Scala, C#, Python, JavaScript, SQL) and technologies (Kafka, Vertica, Aerospike, Kafka Streams, Spark, Hadoop, Docker, Kubernetes, etc.) to get your products running smoothly and efficiently in production.
Who you are
- BSc in Computer Science or equivalent
- 5+ years of experience with at least one of the following languages: Scala/Java/NodeJS/Python
- Deep understanding of Computer Science fundamentals: object-oriented design, functional programming, data structures, multi-threading and distributed systems
- Experience working with SQL databases (PostgreSQL, MySQL) and columnar/NoSQL databases such as Databricks, BigQuery, Vertica, Snowflake, Couchbase, Cassandra, etc.
- Experience working with Docker/Kubernetes (GKE, Operators & CRDs) and public cloud providers such as GCP or AWS
- Experience working with infrastructure management tools such as Terraform/Helm/Skaffold and monitoring tools such as Prometheus/Thanos, Grafana, and Loki
- Experience with Agile development, CI/CD pipelines (Git/GitOps, GitLab CI/CD, or ArgoCD)
- Great interpersonal and communication skills
- A versatile engineer with a “getting-things-done” attitude
Having any of the following is an additional advantage:
- Experience with high-performance KV-stores such as Aerospike/Redis and messaging systems such as Apache Kafka/Apache Pulsar/Redpanda, etc.
- Experience developing scalable microservices exposing/communicating via gRPC/Protobuf, REST, and GraphQL interfaces
- Experience working in a big data environment and building scalable distributed systems with stream-processing technologies such as KStreams/Akka Streams/Spark/Flink
- Previous experience in AdTech is a plus
- Deep understanding of web technologies, standards, protocols, etc.