Who we are 

DoubleVerify is a big data analytics company that tracks and analyzes tens of billions of ads daily for major brands like Nike, AT&T, and Disney. We operate at a massive scale, handling over 100B events per day and over 1M RPS at peak, and we process events in real time to ensure ads are fraud-free, appropriately placed, and effectively measured. Our global presence includes R&D centers in Tel Aviv, New York, Helsinki, Berlin, Ghent, and San Diego, offering you the chance to collaborate with a diverse and talented team across locations.

If you're seeking an opportunity to work with diverse professionals, tackle complex challenges, and make a meaningful impact in a fast-paced, large-scale environment, we encourage you to apply. 

We're looking for a Machine Learning Engineer to join our team in Finland. Our office in central Helsinki provides a comfortable workspace for collaboration. In this role, you'll work closely with data scientists and other DoubleVerify teams in Tel Aviv and New York, contributing to our most challenging goals.


What will you do

  • Join a team of experienced engineers to build backend infrastructure (data processing jobs, microservices) and create automated workflows to process large datasets for machine learning purposes.
  • Design and develop MLOps infrastructure to support our ML/AI models at scale, including CI/CD, automation, evaluation, and monitoring.
  • Lead projects by architecting, designing, and implementing solutions that impact the core components of our system.
  • Develop and maintain scalable distributed systems in a big data environment using stream processing technologies such as Akka Streams, Kafka Streams, or Spark.


Who you are

  • 5+ years of experience coding in an industry-standard language such as Kotlin, Java, Scala, or Python.
  • Demonstrated interest in machine learning and advancements in the field, including familiarity with MLOps tools and frameworks.
  • Deep understanding of Computer Science fundamentals: object-oriented design, functional programming, data structures, multi-threading, and distributed systems.
  • Experience working with big data technologies and tools (Databricks, Snowflake, BigQuery, Kafka, Spark, Airflow, Argo) at scale.
  • Experience working with Docker/Kubernetes and cloud providers (GCP/AWS/Azure).
  • Results-oriented contributor with a "can do" attitude who takes a task from concept to full implementation.
  • A team player with excellent collaboration and communication skills.


Nice to have

  • Hands-on experience with MLOps tools (MLflow, Ray, Seldon, Kubeflow, Vertex AI, SageMaker, etc.) and with ML algorithms and frameworks (PyTorch, TensorFlow, Hugging Face, scikit-learn, etc.).
  • Experience in the AdTech world.
