Job Description

Data Engineering & Pipeline Development
- Design, build, and maintain scalable data pipelines using modern data stack tools (dbt, Airflow, Python).
- Take ownership of troubleshooting and resolving issues in data workflows, ensuring reliable and timely data delivery.
- Work with Databricks and Snowflake environments to support ingestion, transformation, and modelling of data.
- Contribute to the development and optimisation of Unity Catalog, Delta Live Tables, and Spark within Databricks.

Data Modelling & Solution Design
- Collaborate with senior engineers and analysts to design and implement data models that support analytics and product needs.
- Translate business requirements into technical solutions, including prototyping and proofs of concept.
- Apply best practices in data modelling (Kimball, Data Vault, etc.) to ensure scalable and performant solutions.

Collaboration & Communication
- Partner with data analysts, engineers, and business stakeholders to clarify requirements and deliver solutions.
- Clearly communicate progress, challenges, and technical considerations to both technical and non-technical stakeholders.

Documentation & Governance
- Maintain up-to-date documentation for pipelines, workflows, and solutions, ensuring knowledge is shared across the team.
- Support data governance initiatives by monitoring data quality, applying standards, and flagging anomalies.

Performance & Optimisation
- Proactively identify opportunities to optimise pipelines and improve performance in Databricks and Snowflake.
- Support ongoing platform improvements and contribute to scaling best practices across environments.

Continuous Learning & Contribution
- Stay across emerging trends and best practices in cloud data engineering.
- Mentor junior engineers where appropriate and contribute to team knowledge-sharing activities.
- Actively participate in improving team culture, ways of working, and engineering standards.