Rapidly growing life sciences technology company

Ultima Genomics is a rapidly growing company that is developing ground-breaking genomics technologies. Our mission is to continuously drive the scale of genomic information to enable unprecedented advances in biology and improvements in human health. We have developed a foundational new approach to sequencing at scale that overcomes limitations due to the high costs of current technologies. We are well-funded and have raised approximately $600 million from global top-tier investors. Our team brings together unique and diverse expertise across multiple disciplines, from healthcare and life sciences, to engineering, to technology and software and beyond. We are a collaborative group of more than 350 employees, including successful entrepreneurs, chemists, hardware and software engineers, genomics and biotechnology experts, molecular and computational biologists, software and algorithm experts, and operations and commercial leaders. Join us to develop and commercialize technologies that unleash the power of genomics at scale and empower the future of human health. 

We are looking for a highly motivated Senior DevOps Engineer to join our team!

An experienced DevOps engineer who will take a significant role in building our high-throughput data pipelines, machine learning training environments, genomic production orchestration, CI/CD solutions, and monitoring and alerting mechanisms.
Serve as a key person in designing and building the next breakthrough in biotechnology and genomics.

How You’ll Contribute

  • Lead the DevOps development of our hybrid (cloud and on-prem) production solution, utilizing Kubernetes, Docker, and monitoring and alerting mechanisms.
  • Lead the DevOps of a cloud-based containerized training environment serving machine learning and deep learning workloads.
  • Lead the DevOps of data pipelines serving analysis and BI.
  • Build CI/CD mechanisms.
  • Monitor cloud usage and control its costs.
  • Work closely with multiple stakeholders to define the scope, requirements, timelines, and project plans of complex, multi-disciplinary machine learning projects.
  • Support development and research teams and guide them in DevOps best practices.
  • Deliver secure solutions to protect our customers' privacy.


Qualifications, Skills, Knowledge & Abilities

  • 3+ years of hands-on experience with AWS or GCP solutions.  
  • 2+ years of hands-on experience with Docker and Kubernetes: spot nodes, auto-scaling, writing deployments (Helm, Kustomize), working with persistent volumes, etc.
  • Experience with EC2, Auto Scaling groups, S3, Lambda, and at least one kind of managed database (plus the usual networking, VPC, IAM, etc.)
  • Fluency in infrastructure as code: Terraform, CloudFormation, etc.
  • Experience managing the deployment and operations of a distributed system: multiple containerized services and databases.
  • Experience with CI/CD processes and versioning of software artifacts (packages, Docker images).
  • Experience with centralized logging and monitoring (e.g., Logz.io, Coralogix, Datadog, New Relic, Prometheus).
  • Any experience with data pipelines or ML pipelines is a big advantage.
  • Scripting in Python is a big advantage.
  • Experience in the healthcare sector and securing infrastructure to standards or compliance frameworks such as HIPAA, GDPR, and/or ISO.



We provide equal employment opportunity for all applicants and employees. We do not discriminate on the basis of race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, ancestry, citizenship, age, physical or mental disability, military or veteran status, marital status, domestic partner status, sexual orientation, genetic information, or any other basis protected by applicable laws.

Apply for this Job

* Required