About the department The Department of Industry, Science and Resources and our broader portfolio are integral to the Australian Government's economic agenda. Our purpose is to help the government build a better future for all Australians through enabling a productive, resilient and sustainable economy, enriched by science and technology. We do this by: Growing innovative & competitive businesses, industries and regions Investing in science and technology Strengthening the resources sector. The APS and the department offer a clear direction and meaningful work. You will be able to create positive impact in people's lives whilst contributing to improved outcomes for Australia and our people. If you would like to feel a strong connection to your work and you are accountable, committed and open to change, join us in shaping Australia's future. Please see the APSC's APS Employee Value Proposition for more information on the benefits and value of employment within the APS. About the team About the AI Safety Institute The Australian Government is establishing an Australian AI Safety Institute (AISI) to support the Government's ongoing response to emerging risks and harms associated with AI technologies. The AISI will be the government's hub of AI safety expertise, operating with transparency, responsiveness and technical rigour. The AISI will conduct technical assessments, support coordinated government action, foster international engagement on AI safety, and publish research to inform industry, academia and the Australian people. About the Division The AISI is part of the department's Technology and Digital Policy Division. The division is responsible for providing policy advice to government, delivering programs and engaging domestically and internationally on enabling and critical technologies as well as the digitisation of the economy. The division's priorities include implementing the National AI Plan, providing advice on the safe and responsible use of AI, robotics and automation, the role of critical technologies to support economic security, data policy and emerging digital economy issues. The opportunity We're building a motivated and capable team who will be defining the AISI's future. As a founding member of the team, you will help shape how Australia monitors, tests and governs AI. You will assess risks from frontier models, including CBRN misuse, enhanced cyber capabilities, loss-of control scenarios, information integrity and influence risks, and broader systemic risks arising from the deployment of increasingly capable general-purpose AI systems. This is a unique opportunity to work at the frontier of AI, collaborate with domestic and international experts to shape emerging global AI safety standards and help keep Australians safe from AI-related risks and harms. You'll have the opportunity to drive positive change, contribute to impactful projects, and develop your expertise in a rapidly evolving field. Our ideal candidate We're looking for candidates with deep technical expertise and hands-on experience working with frontier AI models. Senior AI Safety Engineer - Science & Technical stream pay scale 8 and 9 Our ideal candidate for this role would have: Extensive hands-on experience working with frontier or near-frontier AI models and systems, including LLMs, multimodal systems or agentic frameworks. Demonstrated experience building and running evaluations of frontier AI systems or safety-relevant model behaviours. Experience developing or using safety-related tooling to support evaluations, such as red-teaming frameworks, test harnesses, automated evaluation pipelines, or continuous monitoring systems. Experience implementing and stress-testing technical safeguards or mitigations, including guardrails, filtering systems, access controls, safety-tuning methods and inference-time controls. Demonstrated experience running large-scale behavioural evaluations, including managing logs and datasets, diagnosing evaluation or deployment issues and debugging. A working knowledge of safety-relevant AI failure modes including robustness issues, jailbreak vulnerabilities, unintended behaviours and reliability failures. Strong collaborative skills, including the ability to work closely with research scientists and engineers to operationalise evaluation designs and refine testing procedures. Experience working in multidisciplinary teams and contributing to shared research and engineering workflows. We expect these skills will be held by people with 5 years of industry, academic or equivalent experience working directly on training, tuning, evaluating or operating advanced AI models and systems. AI Safety Engineer - Science & Technical stream pay scale 7 and 8 Our ideal candidate for this role would have: Hands-on experience working with frontier or near-frontier AI models and systems, including LLMs, multimodal systems or agentic frameworks. Experience supporting or contributing to evaluations of frontier AI systems or safety-relevant model behaviours. Experience using safety-related tooling to support evaluations, such as red-teaming frameworks, test harnesses, automated evaluation pipelines, or continuous monitoring systems. Experience implementing or testing safety mitigations, such as guardrails, filtering systems, access controls, safety-tuning methods and inference-time controls. Experience contributing to behavioural evaluations at scale, including working with logs and datasets, supporting issue diagnosis and debugging. An understanding of common safety-relevant AI failure modes, including robustness issues, jailbreak vulnerabilities, unintended behaviours and reliability failures. The ability to work effectively in multidisciplinary teams and contribute to the operational delivery of evaluation work. A willingness to learn, iterate and contribute to shared processes in a fast-paced environment. We expect these skills might be held by people with 3 years of industry, academic or equivalent experience working directly on training, tuning, evaluating or operating advanced AI models and systems. Our department has a commitment to inclusion and diversity, with an ambition of being the best possible place to work. This reflects the importance we place on our people and on creating a workplace culture where every one of us is valued and respected for our contributions. Our ideal candidate adds to this culture and our workplace in their own way. The key duties of the position include As a Senior AI Safety Engineer , you will: Operationalise evaluation designs developed in collaboration with AI safety research scientists, translating conceptual testing methodologies into practical, scalable and reproducible experiments. Build, maintain and operate evaluation and safety-testing tooling for frontier AI systems. Run large-scale behavioural tests and model evaluations, generating high-quality empirical evidence for safety analysis. Diagnose emerging failure modes, identify novel vulnerabilities or anomalous behaviours, and work with AI safety research scientists to interpret patterns and assess safety-relevant risks. Develop and maintain clear and accurate technical documentation, including evaluation logs, testing reports and safeguard assessments. Support the continuous improvement of the AISI's engineering practices, tooling and testing infrastructure in a fast-paced and evolving environment. Collaborate across government, industry, academia and civil society, including participation in international AI safety initiatives and joint evaluation activities. Contribute to technical reports and research outputs. Take ownership in building the culture and reputation of the AISI. As an AI Safety Engineer , you will: Support the implementation of evaluation designs developed in collaboration with AI safety research scientists, helping translate testing methodologies into repeatable and scalable experiments. Support the operation and maintenance of evaluation and safety-testing tooling for frontier AI systems. Assist in running behavioural tests and model evaluations, contributing to the generation of reliable empirical evidence for safety analysis. Help identify emerging failure modes or anomalous behaviours, and work with AI safety research scientists to interpret results and assess potential risks. Maintain clear and accurate technical documentation, including evaluation logs, testing reports and safeguard assessments. Contribute to improving engineering practices, tooling and testing infrastructure as the AISI's work evolves. Collaborate across government, industry, academia and civil society, including participation in international AI safety initiatives and joint evaluation activities. Contribute to technical reports and research outputs. Take ownership in building the culture and reputation of the AISI.