At DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create outstanding impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, maternity or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
At DeepMind, we've built a unique culture and work environment where ambitious, long-term research can flourish. The Alignment team is responsible for identifying potential failures of intent alignment and conducting technical research to prevent such scenarios from occurring. Our approach encourages collaboration with both the external AI alignment community and all groups within the Research team at DeepMind.
Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Alignment research at DeepMind investigates how we can avoid failures of intent alignment, which we operationalize as a situation in which an AI system knowingly acts against the wishes of its designers. We achieve this goal by first understanding and forecasting risks, and then producing and empirically validating algorithm designs and deployment norms that reduce such risks. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.
Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and foundations of agent models. Research Scientists work on the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.
Alignment Research Scientists at DeepMind lead our efforts in developing novel algorithms and principles towards the end goal of solving AI alignment.
Having pioneered research in the world's leading academic and industrial labs during their PhDs, postdocs, or professorships, Research Scientists join DeepMind to work collaboratively within and across research fields. They develop solutions to fundamental questions in machine learning, AI, and philosophy, drawing on expertise from a variety of disciplines.
Key responsibilities:
- Identify and investigate possible failure modes for current and future AI systems.
- Conduct empirical, conceptual, or theoretical research into technical alignment mechanisms that address these failure modes, in coordination with the team’s broader technical agenda.
- Collaborate with internal and external research teams to ensure that AI development is informed by and adheres to the most advanced alignment research and protocols.
- Report and present research findings and developments to internal and external collaborators through clear written and verbal communication.
We look for the following skills and experience:
- PhD in a technical field or equivalent practical experience.
In addition, the following would be an advantage:
- PhD in machine learning, computer science, statistics, computational neuroscience, mathematics, or physics.
- Research experience in and/or technical knowledge of AI alignment.
- A real passion for AGI safety.
Competitive salary applies.