Senior Data Engineer required to support the development of an AWS-hosted Databricks platform for a leading financial services business.

Responsibilities:
- Design and development of pipelines using Databricks and the Lakehouse architecture
- Optimisation and maintenance of data workflows, ensuring data quality and integrity
- Performance tuning and monitoring
- Set up and manage CI/CD pipelines
- Notebook development using Python and/or PySpark
- Adhere to best practices for data engineering, including governance and security

Requirements:
- Extensive experience building data pipelines using Databricks in an AWS environment
- Well-versed in PySpark and SQL
- Solid experience with ML models and algorithms

Click on the 'Apply' button or send your CV to sahuja@fourquarters.com.au.