The role:

  • Design, build, and iterate on research infrastructure in close collaboration with research engineers.
  • Build tools to automate and maintain computing clusters and data parsing pipelines. 
  • Design and build software and APIs that enable internal and external access to our AI systems.
  • 4-month minimum commitment (3 months for exceptional candidates).
  • Located in the Bay Area (remote work is an option for exceptional candidates).

Ideal background:

  • Adaptability and openness to working on multiple problem areas (e.g. data processing pipelines, front-end software development).
  • Fast learner with a broad understanding of infrastructure, front-end, databases, CI, etc. 
  • Currently pursuing a Bachelor's or graduate degree in Computer Science or a related field (not a hard requirement for exceptional candidates).
  • Strong track record in programming (industry experience, internships, competitions, open-source projects).
  • Proficient in Python and C++.
  • Experience with cloud computing platforms (e.g., GCP).
  • Data science or engineering experience (e.g. ML engineering, data infrastructure engineering internships).

Pluses:

  • Experience with Kubernetes or other cluster management platforms.
  • Front-end experience and proficiency in HTML/CSS/JavaScript, React, etc.

What we offer:

  • The opportunity to work on cutting-edge AI alongside leading researchers, with a focus on application and productization.

Apply for this Job
