Job Summary:
The DataOps Engineer/Data Platform Engineer is responsible for building, deploying, and maintaining data lake infrastructure to ensure efficient data ingestion, storage, processing, and retrieval. The engineer will collaborate with data engineers, software developers, and operations teams to create scalable data platforms and enable advanced analytics.

Key Responsibilities:

Data Pipeline Infrastructure Setup:

  • Design and manage scalable, high-performance data lakes in the cloud or on-premises using AWS S3, Azure Data Lake, or Hadoop-based systems.
  • Configure distributed storage and processing tools such as Spark, Kafka, Druid, Hive, and Presto (a PySpark sketch follows this list).
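
To make the bullets above concrete, here is a minimal PySpark sketch of lake-zone processing: read raw JSON events from a landing path and write curated, partitioned Parquet for engines such as Hive or Presto. The bucket, the paths, and the event_date column are illustrative assumptions, not details from this posting.

from pyspark.sql import SparkSession

# Spark session for a batch job over the lake (cluster config omitted).
spark = SparkSession.builder.appName("data-lake-curate").getOrCreate()

# Hypothetical raw zone: JSON events landed by an upstream ingestion job.
events = spark.read.json("s3a://example-data-lake/raw/events/")

# Hypothetical curated zone: partitioned Parquet, assuming the data
# carries an event_date column to partition on.
(
    events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-data-lake/curated/events/")
)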

Data Pipeline Automation:

  • Automate data ingestion, ETL, and processing workflows using Dagster, Airflow, and CI/CD pipelines (a minimal Airflow sketch follows this list).
  • Implement Infrastructure as Code (IaC) with Terraform or CloudFormation.
  • Ensure automated backup, archiving, and disaster recovery.
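
As one illustration of workflow automation, the sketch below defines a daily Apache Airflow DAG with a single Python task. The DAG id, schedule, and task body are assumptions made for the example, and the schedule argument assumes Airflow 2.4 or later.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Stand-in for real ingestion/ETL logic, e.g. pulling from an API
    # and landing files in the lake's raw zone.
    print("extracted and loaded")

with DAG(
    dag_id="daily_ingest",           # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)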

Monitoring & Optimization:

  • Monitor and optimize infrastructure performance and cost efficiency.
  • Implement logging, monitoring, and alerting using CloudWatch, Prometheus, Grafana, and the ELK Stack (a Prometheus sketch follows this list).
  • Use APM tools such as AppDynamics, Datadog, or New Relic.
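
For concreteness, here is a minimal sketch of exposing custom pipeline metrics to Prometheus with the prometheus_client Python library; the metric names, port, and update loop are illustrative assumptions rather than a prescribed setup.

import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical pipeline metrics; the names are illustrative.
ROWS_INGESTED = Counter("pipeline_rows_ingested_total",
                        "Total rows ingested by the pipeline")
LAG_SECONDS = Gauge("pipeline_consumer_lag_seconds",
                    "Estimated consumer lag in seconds")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        ROWS_INGESTED.inc(100)  # stand-in for real ingestion counting
        LAG_SECONDS.set(1.5)    # stand-in for a real lag measurement
        time.sleep(15)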

Security & Compliance:

  • Enforce security best practices (encryption, IAM, network security, RBAC); a boto3 sketch follows this list.
  • Ensure compliance with GDPR, HIPAA, and CCPA.
  • Conduct security audits and vulnerability assessments.
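
As one example of enforcing encryption at the storage layer, the boto3 sketch below enables default KMS-based server-side encryption and blocks public access on an S3 bucket. The bucket name is a hypothetical placeholder, and a real deployment would typically manage these settings through IaC rather than ad-hoc scripts.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake"  # hypothetical bucket name

# Require KMS-based server-side encryption for all new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)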

Technical Skills:

  • Strong experience with cloud platforms such as AWS, Azure, or GCP, particularly their data storage and compute services (AWS S3, EMR, Azure Data Lake, BigQuery, etc.).
  • Proficiency in infrastructure automation and configuration management tools (Terraform, Pulumi, Jenkins, Kubernetes, Docker); a Pulumi sketch follows this list.
  • Hands-on experience with data processing and storage frameworks such as Hadoop, Spark, Kafka, Druid, Hive, and Presto.
  • Strong programming skills in Python, Java, C#/.NET, or similar languages.
  • Familiarity with version control tools (Git) and CI/CD tools (Jenkins, GitHub Actions, TeamCity, Argo CD).
  • Hands-on experience with data workflow orchestration tools like Dagster and Apache Airflow.
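
As a small illustration of the IaC skills above, here is a minimal Pulumi program in Python that declares a versioned data-lake bucket as code; the resource name and settings are assumptions for the example, and it would be deployed with `pulumi up`.

import pulumi
import pulumi_aws as aws

# Hypothetical raw-zone bucket for the lake, managed as code.
raw_zone = aws.s3.Bucket(
    "raw-zone",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Expose the bucket id as a stack output for downstream tooling.
pulumi.export("raw_zone_bucket", raw_zone.id)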

Soft Skills & Qualifications:

  • Strong troubleshooting, performance optimization, and problem-solving skills.
  • Ability to manage multiple priorities in a fast-paced environment.
  • Preferred: Experience with streaming data (Kafka, Kinesis) and cloud certifications.
  • Experience: 5+ years in DevOps or cloud engineering, including 2-3 years in Data Engineering/Ops.
