Research Scientist / Research Engineer - Autonomous Systems (AI Safety Institute)


£135,000

Department for Science, Innovation & Technology, City of Westminster

  • Full time
  • Temporary
  • Remote working

Posted 18 May

Closing date: Not specified

Job ref: b37534660535452ab853e1b27b5ee4eb

Full Job Description

The mission of the Autonomous Systems team is to prevent catastrophic risks from autonomous AI.

The way we do this is by studying the space of potential risks from autonomous systems. We then build tools that measure and forecast this risk by interacting with frontier models. For example, we might investigate the various ways an autonomous AI could prevent shutdown by exfiltrating its own weights and replicating itself on other hardware. We can then build tools to measure this risk as frontier models keep improving, and conduct research into when exactly we believe the risk will present a material danger. Finally, we work with other teams within the Institute to make sure our research has real-world impact on AI safety, through engagement with the key labs and through policy recommendations.

The Autonomous Systems Team is looking for exceptionally motivated and talented Research Scientists (RS) and Research Engineers (RE) to help scale up our team focussed on catastrophic risks from autonomous AI. Senior RS and RE positions are available for candidates with the required seniority and experience.

You will work in one of the following research sub-teams:

  • Agents team. Given the ability to do things like chain-of-thought reasoning, run Python code or browse the internet, frontier models can already accomplish a surprisingly wide variety of tasks. To help our other research teams investigate risks from agentic systems, it is vital that we have in-house agentic systems that exceed the state of the art from academia and other open-source frameworks. This is where the agents team steps in: researching and engineering agent systems that outperform publicly available state-of-the-art systems.

  • Self-Improvement. Within the self-improvement team, we study and evaluate risks from uncontrolled self-improvement: the risk that models become increasingly able to improve their own capabilities continuously and rapidly.

  • Auto-replication. The autonomous replication and adaptation team researches loss-of-control risks from autonomous AI replicating itself on other hardware or devices. Within this team you'll be studying this threat model, collaboratively designing appropriate evaluations to measure it, and implementing them.

  • Manipulation & Deception. Can we effectively detect when autonomous systems are deceiving or manipulating their human overseers? Within the Manipulation & Deception team you'll be driving forward state-of-the-art research on these threat models.


  • As a Research Scientist/Engineer, you will work in a small team within one of the above fields. Your team is given huge amounts of autonomy to chase research directions and build evaluations that relate to your team's overarching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing and building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything.

    Within your team you will contribute to steering the team's research direction and finding solutions to complex technical problems. Research Scientists will be expected to help improve the scientific rigour and quality of our research, so that it can be confidently used in influencing the actions of labs and our international partners. Research Engineers will spend most of their time doing collaborative research and writing high-quality research code.

    You'll receive mentorship and coaching from your manager and will regularly interact with world-famous researchers and other incredible staff (including alumni from DeepMind and OpenAI, and ML professors from Oxford and Cambridge).

    Sift and interview process

    The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your workstream lead. The process will culminate in a conversation with members of the senior team here at AISI.

    This is not a definitive list; candidates should expect to go through some or all of the following stages once an application has been submitted:
  • Coding test

  • Initial interview

  • Technical take home test

  • Second interview and review of take home test

  • Third stage interview

  • Final stage interview

    If successful and transferring from another Government Department, a criminal record check may be carried out.


  • Please note terms and conditions are attached. Please take time to read the document to determine how these may affect you.

    Any move to the Department for Science, Innovation and Technology from another employer will mean you can no longer access childcare vouchers. This includes moves between government departments. You may, however, be eligible for other government schemes, including Tax-Free Childcare. Determine your eligibility at https://www.childcarechoices.gov.uk

    DSIT does not normally offer full home working (i.e. working at home); but we do offer a variety of flexible working options (including occasionally working from home).

    In order to process applications without delay, we will be sending a Criminal Record Check to Disclosure and Barring Service on your behalf.

    However, we recognise in exceptional circumstances some candidates will want to send their completed forms direct. If you will be doing this, please advise Government Recruitment Service of your intention by emailing Pre-EmploymentChecks.grs@cabinetoffice.gov.uk stating the job reference number in the subject heading.

    Applicants who are successful at interview will be, as part of pre-employment screening, subject to a check on the Internal Fraud Database (IFD). This check will provide information about employees who have been dismissed for fraud or dishonesty offences. It also applies to employees who would have been dismissed for fraud or dishonesty had they not resigned or otherwise left beforehand. Any applicant whose details are held on the IFD will be refused employment.

    A candidate is not eligible to apply for a role within the Civil Service if the application is made within a 5 year period following a dismissal for carrying out internal fraud against government.

    Feedback
    Feedback will only be provided if you attend an interview or assessment.

    Nationality requirements

    This job is broadly open to the following groups:
  • UK nationals

  • nationals of the Republic of Ireland

  • nationals of Commonwealth countries who have the right to work in the UK

  • nationals of the EU, Switzerland, Norway, Iceland or Liechtenstein and family members of those nationalities with settled or pre-settled status under the European Union Settlement Scheme (EUSS)

  • nationals of the EU, Switzerland, Norway, Iceland or Liechtenstein and family members of those nationalities who have made a valid application for settled or pre-settled status under the European Union Settlement Scheme (EUSS)

  • individuals with limited leave to remain or indefinite leave to remain who were eligible to apply for EUSS on or before 31 December 2020

  • Turkish nationals, and certain family members of Turkish nationals, who have accrued the right to work in the Civil Service

    We are looking for some of the following skills, experience and attitudes. Some skills may lean more towards Research Scientist or Research Engineer profiles.


  • For more engineer-leaning candidates:
  • Writing production quality code (at least 4 years' experience for a truly exceptional candidate, typically at least 10).

  • Strong track record of designing, shipping, and maintaining complex tech products and/or scientific and academic excellence (e.g. papers at top-tier conferences)

  • Evidence of an exceptional ability to drive progress and build and maintain momentum

  • Improving standards across a team, through mentoring and feedback

  • Working across multiple research or engineering teams and helping to improve technical excellence and engineering culture and/or experience working within a research team that has delivered multiple exceptional scientific breakthroughs, in deep-learning or a related field.

  • Strong written and verbal communication skills


  • We are hiring individuals at a range of seniority and experience levels within this team, including Senior Research Engineer / Research Scientist positions. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.
  • Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things and much of the wider ecosystem/tooling.

  • Direct research experience (e.g. PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).

  • Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).

  • Acting as a bar raiser for interviews

    Successful candidates must undergo a criminal record check.

  • People working with government assets must complete baseline personnel security standard checks.

    The AI Safety Institute is the first state-backed organisation focused on advancing AI safety for the public interest. We launched at the Bletchley Park AI Safety Summit in 2023 because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.

    We have ambitious goals and need to move fast.
  • Develop and conduct evaluations on advanced AI systems. We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.

  • Develop novel tools for AI governance. We will create practical frameworks and novel methods to evaluate the safety and societal impacts of advanced AI systems, and anticipate how future technical safety research will feed into AI governance.

  • Facilitate information exchange. We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.


  • Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from leading universities. We are now calling on the world's top technical talent to join us. This is a truly unique opportunity to help shape AI safety at an international level.

    As more powerful models are expected to hit the market over the course of 2024, AISI's mission to push for safe and responsible development and deployment of AI is more important than ever.

    What we value:
  • Diverse Perspectives: We believe that a range of experiences and backgrounds is essential to our success. We welcome individuals from underrepresented groups to join us in this crucial mission.

  • Collaborative Spirit: We thrive on teamwork and open collaboration, valuing every contribution, big or small.

  • Innovation and Impact: We are dedicated to making a real-world difference in the field of frontier AI safety and capability, and we encourage innovative thinking and bold ideas.

  • Our Inclusive Environment: We are building an inclusive culture to make the Department a brilliant place to work where our people feel valued, have a voice and can be their authentic selves. We value difference and diversity, not only because we believe it is the right thing to do, but because it will help us be more innovative and make better decisions.

    The Department for Science, Innovation and Technology offers a competitive mix of benefits including:

  • A culture of flexible working, such as job sharing, homeworking and compressed hours.

  • Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.

  • A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.

  • An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.

  • Access to a range of retail, travel and lifestyle employee discounts.

  • The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.

