Machine Learning Engineer

Base pay range (provided by NineTech): A$90.00/hr - A$90.00/hr

We’re urgently seeking a Senior DevOps ML Engineer with a rare combination of DevOps leadership, software engineering depth, and MLOps expertise. The client requires someone with DevOps leadership, coding skills akin to a seasoned backend engineer, and the ability to optimize AI systems with precision.

You will be responsible for the end-to-end design, deployment, and operation of AI platforms. Expect to split your time approximately as follows: 60% DevOps/AI infrastructure, 30% backend coding, and 10% MLOps/model lifecycle tasks. One week, you may be helping data scientists productionize LLM pipelines. The next, you're deep in Terraform, Helm charts, and CUDA kernels, tuning real-time latency for a Tokkio-powered digital avatar.

Key Responsibilities

AI Platform & DevOps (60%)
- Architect, deploy, and maintain GPU-accelerated Kubernetes clusters using Helm, NGC containers, and custom K8s operators.
- Build and maintain CI/CD pipelines (GitHub Actions, Jenkins, Argo CD) to enable continuous delivery of both software and models.
- Automate infrastructure across AWS, Azure, and on-prem environments using Terraform or Pulumi.
- Optimize GPU workloads and ensure reliability across hybrid or multi-cloud AI platforms.

Model Lifecycle & MLOps
- Collaborate with data scientists to containerize, benchmark, and tune LLMs, diffusion models, and multimodal pipelines.
- Implement data governance and tracking for AI data pipelines (e.g., LakeFS, Feast).
- Maintain feature and vector stores, ensuring reproducibility and performance of AI applications.

Hands-on Engineering
- Develop backend services and APIs in Python and C++ (CUDA, Triton, TensorRT-LLM) and optionally in TypeScript.
- Integrate components from the client’s digital human ecosystem: Riva (speech), Tokkio, Maxine (Audio2Face, eye contact), Omniverse.
- Build reusable SDKs, CLI tools, and internal libraries to accelerate AI/ML workflows across teams.

Required Qualifications
- 10 years of experience building and operating production-grade software systems.
- 2 years focused specifically on AI/ML platforms or infrastructure.
- Proven expertise in CI/CD, GitOps, Terraform, and Helm.
- Strong Kubernetes and Docker experience, including GPU workload scheduling.
- Advanced Linux administration skills and experience profiling GPU workloads.
- Expert-level Python plus one systems language (C++, Go, Rust, or Java).

Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: IT Services and IT Consulting