At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

At DeepMind, we've built a unique culture and work environment where ambitious, long-term research can flourish. The Alignment team is responsible for identifying potential failures of intent alignment and conducting technical research to prevent such scenarios from occurring. As a Research Engineer, you will design, implement, and empirically validate algorithms that aim to mitigate potential long-term risks.

About us

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Alignment research at DeepMind investigates how we can avoid failures of intent alignment, which we operationalize as a situation in which an AI system knowingly acts against the wishes of its designers. We achieve this goal by first understanding and forecasting risks, and then producing and empirically validating algorithm designs and deployment norms that reduce such risks. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and the foundations of agent models. Research Engineers work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.

The role

Alignment Research Engineers at DeepMind work directly on a wide range of conceptual, theoretical and empirical research projects, typically in collaboration with Research Scientists. You will apply your engineering and research skills to accelerate research progress through developing prototypes, designing and scaling up algorithms, overcoming technical obstacles, and designing, running, and analysing experiments.

Key responsibilities

  • Understand and investigate possible failure modes for current and future AI systems
  • Collaborate on projects within the team’s broader technical agenda to research technical alignment mechanisms that address potential failure modes
  • Collaborate with research teams internally and externally to ensure that AI research is informed by, and adheres to, the most advanced alignment research and protocols

About you

Essential:

  • Bachelor's degree in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Ability to write code in at least one programming language, preferably Python or C++.
  • Knowledge of mathematics, statistics and machine learning concepts needed to understand research papers in the field.
  • Ability to communicate technical ideas effectively, e.g. through discussions, whiteboard sessions, written documentation.

Nice to have:

  • Knowledge of ML/scientific libraries such as TensorFlow, JAX, PyTorch, NumPy and Pandas.
  • Machine learning and research experience from industry, academia, or personal projects.
  • Familiarity with distributed scientific computation, whether on CPU, GPU, TPU, or heterogeneous hardware.
  • Experience with large scale system design.
  • A real passion for AGI safety.
