About the department

The Department of Industry, Science and Resources and our broader portfolio are integral to the Australian Government's economic agenda. Our purpose is to help the government build a better future for all Australians through enabling a productive, resilient and sustainable economy, enriched by science and technology. We do this by:

- Growing innovative & competitive businesses, industries and regions
- Investing in science and technology
- Strengthening the resources sector.

The APS and the department offer a clear direction and meaningful work. You will be able to create positive impact in people's lives whilst contributing to improved outcomes for Australia and our people.

If you would like to feel a strong connection to your work and you are accountable, committed and open to change, join us in shaping Australia's future.

Please see the APSC's APS Employee Value Proposition for more information on the benefits and value of employment within the APS.

About the team

About the AI Safety Institute

The Australian Government is establishing an Australian AI Safety Institute (AISI) to support the Government's ongoing response to emerging risks and harms associated with AI technologies. The AISI will be the government's hub of AI safety expertise, operating with transparency, responsiveness and technical rigour. The AISI will conduct technical assessments, support coordinated government action, foster international engagement on AI safety, and publish research to inform industry, academia and the Australian people.

About the Division

The AISI is part of the department's Technology and Digital Policy Division. The division is responsible for providing policy advice to government, delivering programs and engaging domestically and internationally on enabling and critical technologies, as well as the digitisation of the economy. The division's priorities include implementing the National AI Plan, providing advice on the safe and responsible use of AI, robotics and automation, the role of critical technologies in supporting economic security, data policy, and emerging digital economy issues.

The opportunity

We're building a motivated and capable team that will define the AISI's future. As a founding member of the team, you will help shape how Australia monitors, tests and governs AI. You will assess risks from frontier models, including CBRN misuse, enhanced cyber capabilities, loss-of-control scenarios, information integrity and influence risks, and broader systemic risks arising from the deployment of increasingly capable general-purpose AI systems.

This is a unique opportunity to work at the frontier of AI, collaborate with domestic and international experts to shape emerging global AI safety standards, and help keep Australians safe from AI-related risks and harms. You'll have the opportunity to drive positive change, contribute to impactful projects, and develop your expertise in a rapidly evolving field.

Our ideal candidate

We're seeking candidates with deep technical expertise and hands-on experience in frontier AI safety research.

Senior AI Safety Research Scientist - Science & Technical stream, pay scales 8 and 9

Our ideal candidate for this role would have:

- Demonstrated experience leading empirical AI research on frontier AI systems and safety-relevant behaviours. This could include work on model evaluation, adversarial testing, safety tuning, interpretability, robustness, agentic behaviour or human influence.
- Experience designing safety-relevant evaluations for frontier AI systems, including assessments of model behaviour, reliability, robustness and other risk-relevant capabilities.
- A track record of rigorous research contributions. This could include peer-reviewed publications, high-quality preprints, or equivalent research outputs.
- Strong ability to translate technical research findings into clear, accessible insights for policymakers and other non-technical audiences.
- Experience contributing to international research collaborations or standards development.
- The ability to manage competing priorities, deliver complex projects, and thrive in a fast-paced, constantly changing environment.
- A collaborative mindset, with experience working in multidisciplinary teams.
- A deep understanding of frontier AI risks and mitigation strategies.

We expect these skills will be held by people with 5 years of rigorous empirical research experience, typically in machine learning, data science, computer science or related quantitative fields, or through relevant empirical work in adjacent disciplines including applied statistics, psychometrics, behavioural science, cognitive science, human-computer interaction, cybersecurity research or systems engineering.

AI Safety Research Scientist - Science & Technical stream, pay scales 7 and 8

Our ideal candidate for this role would have:

- Demonstrated experience contributing to empirical AI research on frontier AI systems and safety-relevant behaviours. This could include work on model evaluation, adversarial testing, safety tuning, interpretability, robustness, agentic behaviour or human influence.
- Experience contributing to the design of safety-relevant evaluations for frontier AI systems, including assessments of model behaviour, reliability, robustness and other risk-relevant capabilities.
- Strong analytical and problem-solving skills, with experience in research and experimental design.
- Experience drafting research publications and technical reports.
- The ability to effectively communicate research findings to diverse audiences.
- Experience managing competing priorities and supporting the delivery of time-sensitive and complex projects.
- A collaborative mindset, with experience working in multidisciplinary teams.
- A strong interest in international research collaboration.

We expect these skills will be held by people with 3 years of rigorous empirical research experience, typically in machine learning, data science, computer science or related quantitative fields, or through relevant empirical work in adjacent disciplines including applied statistics, psychometrics, behavioural science, cognitive science, human-computer interaction, cybersecurity research or systems engineering.

Our department has a commitment to inclusion and diversity, with an ambition of being the best possible place to work. This reflects the importance we place on our people and on creating a workplace culture where every one of us is valued and respected for our contributions. Our ideal candidate would add to this culture and our workplace in their own way. The department also offers flexible work arrangements.

The key duties of the position include

As a Senior AI Safety Research Scientist, you will:

- Lead the design of empirical methods to assess the safety of frontier AI models and systems.
- Lead the analysis and interpretation of results from evaluations to identify safety-relevant behaviours and generate clear findings and insights.
- Represent Australia in international AI safety engagements, including joint testing exercises.
- Provide strategic advice to policymakers and regulators on emerging AI capabilities, risks and harms.
- Collaborate with domestic and international partners across government, industry, civil society and academia to strengthen the science and practice of AI safety.
- Work with the Head of AI Safety Research and Testing to develop and deliver a program of research and testing.
- Lead the development of research publications and technical reports.
- Take ownership of building the culture and reputation of the AISI.

As an AI Safety Research Scientist, you will:

- Contribute to the design of empirical methods to assess the safety of frontier AI models and systems.
- Analyse and interpret results from evaluations to identify safety-relevant behaviours and generate clear findings and insights.
- Represent Australia in international AI safety engagements, including joint testing exercises.
- Translate technical findings into actionable insights for policymakers and regulators.
- Collaborate with domestic and international partners across industry, civil society and academia to strengthen the science and practice of AI safety.
- Contribute to the development of research publications and technical reports.
- Take ownership of building the culture and reputation of the AISI.