At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
This role is for an engineering manager working on responsibility and safety assurance evaluations at Google DeepMind. These are the evaluations which allow decision-makers to ensure that our model releases are safe and responsible. The role involves leading an engineering team in developing and maintaining these evaluations and the infrastructure that supports them.
About us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The role
In this role, you will be responsible for leading and building a high-performing team of engineers with diverse expertise and skill sets, to deliver responsibility and safety assurance evaluations while maintaining healthy and positive team dynamics. You will provide technical and strategic guidance to the team, encouraging innovation and ensuring successful project delivery.
Partnering and collaborating with related engineering, product and program management teams will form a critical part of the role, both to deliver solutions in a timely manner and to develop a longer-term roadmap.
Key responsibilities
- Lead and grow a research engineering team designing, building and executing safety evaluations of AI models, across risk areas including child safety, hate speech, harassment, representational harms, misinformation, and chemical, biological, radiological and nuclear risks.
- Work in close partnership with the Responsible Development & Innovation (ReDI) team on prioritisation, roadmap and strategy to ensure evaluations effectively meet the needs of decision-makers.
- Execute and deliver on the roadmap, including overseeing the design and development of evaluations to test the safety of cutting-edge AI models.
- Develop and maintain infrastructure for these evaluations.
- Manage the running of evaluations prior to releases of new AI models and, where appropriate, the automation of these runs.
- Clearly communicate progress and outcomes of evaluations work.
- Collaborate with engineering groups across Google DeepMind and experts in various fields of AI ethics, policy and safety, and develop new cross-team collaborations for project delivery.
About you
In order to set you up for success as a Research Engineering Manager at Google DeepMind, we look for the following skills and experience:
- Bachelor's degree or greater in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
- Experience leading and developing engineering teams delivering work with tight deadlines and high levels of change and uncertainty.
- Experience working with machine learning or high-performance computing at scale.
- Strong knowledge of and experience with Python.
- Experience with deployment in production environments.
- Experience working with researchers and engineers in research domains.
- Knowledge of mathematics, statistics and machine learning concepts useful for understanding research papers in the field.
- Ability to present technical concepts and statistical results clearly to a range of audiences.
- A deep interest in the responsibility and safety of AI systems, and in AI policy.
In addition, the following would be an advantage:
- Experience designing and building evaluations for AI models.
- Expertise in the ethics and safety of AI systems.
- Experience with crowd computing (e.g. designing experiments, working with human raters).
- Experience with data analysis tools & libraries.
- Experience with web application development and user experience design.
Application deadline: 5pm GMT, Friday 3rd January 2025
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.