Job Description

The Data Engineer will work within a cross‑functional squad to design, build, and maintain high‑quality data assets that support reporting, analytics, and operational workflows. The role owns the end‑to‑end lifecycle of data pipelines: designing and maintaining Databricks (Spark/Delta) pipelines for batch and near real‑time ingestion, and implementing transformations, conformance logic, and curated datasets for reporting and analytics. It involves establishing robust data quality processes, such as validation checks and reconciliations, addressing data integrity and performance issues, and supporting release validation, production readiness, monitoring, and incident response. The position also maintains accurate lineage, source‑to‑target mappings, access patterns aligned with secure role-based access controls, and documentation, while driving automation of manual loads, snapshots, scheduling, and CI/CD workflows.