Our client is looking for an experienced Data Engineer for an initial 6-month contract, with likely extensions depending on fit for the role and the project timeline. Reporting to the Head of Data and Analytics, the Data Engineer is responsible for developing data models, data pipelines, and test automation solutions, and for mapping business requirements to systems/technical requirements so that they align with the solution architecture.

This role is based in Melbourne's western suburbs and requires the successful candidate to be on site 3 days per week. The anchor in-office days are generally Tuesday and Thursday, with the third day flexible.

Please note, to apply, applicants must have:
- The requisite skills and experience defined below
- Australian working rights (at minimum a working visa or Australian permanent residency)
- At least 3 years of local experience in the same or a similar role

Key responsibilities:
- Design, develop, and maintain the Data Lakehouse, leveraging Databricks, Data Factory, and other Azure cloud data engineering tools.
- Collaborate with stakeholders to define data engineering requirements and align solutions with business objectives.
- Develop and optimise data pipelines, ensuring efficiency, accuracy, and scalability.
- Implement best practices for data governance, security, and compliance within the Azure ecosystem.
- Monitor system performance, troubleshoot issues, and implement enhancements to optimise the Lakehouse environment.
- Stay up to date with emerging technologies and methodologies in data engineering, incorporating relevant advancements into the organisation's infrastructure.

The successful candidate will have:
- Extensive data engineering experience, with expertise in Azure Databricks, Microsoft Fabric, Python, and the Azure cloud data ecosystem (e.g., Data Factory, ADLS, Event Hub).
- Experience with CI/CD pipelines and DevOps practices in a cloud environment.
- Familiarity with machine learning pipelines and their integration with data platforms.
- Proven ability to design and manage large-scale Data Lakehouses and distributed systems.
- A strong background in ETL/ELT processes, data modelling, and performance optimisation.
- Familiarity with data governance practices within cloud environments.

Notes:
- Only shortlisted candidates will be contacted.
- Your daily rate will depend on skills and experience.
- The role is full-time; onsite attendance is as described above.
- Start date is ASAP.

If you feel this role is for you, please press "Apply" now.