About
We’re seeking a Data Modeler who thrives at the intersection of data design and engineering. This role is ideal for someone who can not only build robust data models but also roll up their sleeves to develop the pipelines that bring those models to life. You’ll be joining a collaborative team working across onshore and offshore locations, with a strong foundation in AWS and a strategic shift underway. This is a hands-on opportunity to influence how data is structured, accessed, and scaled across the business.

What You’ll Be Doing
- Designing and documenting conceptual, logical, and physical data models
- Building and validating data pipelines to support model implementation
- Collaborating with engineers, analysts, and business stakeholders to ensure models meet real-world needs
- Supporting the transition to a modern Lakehouse architecture
- Leading knowledge-sharing sessions to uplift data modeling capability across the team

What You’ll Bring
- Strong experience with data modeling methodologies (Kimball, Data Vault 2.0)
- Ability to build and test pipelines in cloud-native environments (AWS preferred)
- Proficiency with SQL and data modeling tools (e.g. Erwin)
- Experience working with structured and semi-structured data (JSON, APIs, flat files)
- Familiarity with Agile, CI/CD, and modern DataOps practices
- Bonus: experience with Databricks or similar Lakehouse platforms

How to Apply
If you are interested in the role, please apply via the link or reach out to Aimee Thompson at launchrecruitment.com.au.

Additional information
- A blend of data modeling and pipeline engineering
- Influence standards for scalable, AI-ready data
- AWS-native environment