Data Reliability Engineer II
Location - New York, NY
Hybrid Model - 3x per week
Who we are
DoubleVerify is the leading independent provider of marketing measurement software, data and analytics that authenticates the quality and effectiveness of digital media for the world's largest brands and media platforms. DV provides media transparency and accountability to deliver the highest level of impression quality for maximum advertising performance. Since 2008, DV has helped hundreds of Fortune 500 companies gain the most from their media spend by delivering best-in-class solutions across the digital ecosystem, helping to build a better industry. Learn more at www.doubleverify.com.
Position Overview:
The Data Reliability Engineer II is an integral part of the Data Reliability Engineering (DRE) team, responsible for analyzing DoubleVerify’s data and making it available internally, as well as monitoring, troubleshooting, and improving the company’s various data pipelines and technologies.
Responsibilities:
- You will gain in-depth knowledge of how data is collected, processed, and externalized to clients within DoubleVerify’s architecture
- You will script in Python and SQL extensively
- You will work with data analysis tools such as Splunk and Grafana to create reports and data visualizations
- You will work with Databricks, BigQuery, Snowflake, MongoDB, OLTP databases, etc.
- You will work with Kubernetes, Docker, Terraform, Helm charts, etc
- You will be thrilled at the prospect of building strong relationships with different teams in the company, solving operational issues, and implementing quality improvements
- You will be part of the on-call rotation
Requirements:
- Bachelor's degree in CS or equivalent experience. Degree in a technical field preferred
- 3+ years of experience writing advanced SQL and scripting in languages such as Python, Bash, etc.
- 3+ years of Linux experience
- 3+ years of experience working with SQL/NoSQL databases and data warehouses such as Databricks, BigQuery, Snowflake, MongoDB, etc.
- Knowledge of cloud computing fundamentals and experience working with public cloud providers such as GCP, AWS, or Azure
- Experience working with GitHub, GitLab, CI/CD pipelines, or other automation/delivery tools
- Good understanding of BI and Data Warehousing concepts (ETL, OLAP vs. OLTP, Slowly Changing Dimensions)
- Demonstrated ability to adapt quickly, learn new skill sets, and understand operational challenges
- Strong analytical, problem-solving, negotiation, and organizational skills with a clear focus under pressure
- Must be proactive, with a proven ability to execute multiple tasks simultaneously
- Excellent interpersonal skills, including building relationships with diverse, global, cross-functional teams
- Good understanding of process automation
Nice to have:
- Previous experience in AdTech
- Experience working with Kubernetes, Docker, Terraform, Helm charts, and fundamentals of DevOps