Data Engineer
Job description
We are seeking an experienced Data Engineer to help build and enhance our client's modern data platform. If you’re passionate about Databricks, dbt, and shaping high‑quality data models that power analytics and operational applications, we’d love to hear from you.
About the Role
You will play a key role in designing and delivering scalable, production‑grade data pipelines. Working across Databricks, dbt, SQL, and Power BI, you’ll help create analytics‑ready datasets that drive business insights and enable operational workflows, including integration with Power Apps.
This role is available part‑time or full‑time, offering flexibility to suit your working style.
What You’ll Do
- Design, build, and maintain data pipelines and transformation layers using Databricks, dbt, and SQL.
- Ingest, transform, and model data from multiple source systems to support analytics and reporting needs.
- Develop and maintain dbt models across the staging, intermediate, and presentation layers (a minimal model sketch follows this list).
- Conduct data gap analyses to identify required tables, attributes, and transformations.
- Build Power BI‑optimised data models, ensuring high performance and well‑structured semantic layers.
- Support integration of Power Apps with Power BI datasets and analytics workflows.
- Collaborate closely with Power BI developers to ensure consistent, reliable, and accessible data.
- Implement data quality testing via dbt tests and monitoring frameworks (see the test sketch after this list).
- Optimise Databricks workloads and SQL transformations for efficiency and performance.
- Manage code with Git‑based version control and participate in CI/CD processes.
- Produce clear technical documentation covering sources, transformations, and business logic.
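For a flavour of the layered dbt work described above, here is a minimal staging‑model sketch. The source, table, and column names (raw_sales, orders, order_id, and so on) are illustrative assumptions, not the client's actual schema.

```sql
-- models/staging/stg_orders.sql
-- Illustrative dbt staging model; all source and column names are hypothetical.
with source as (

    select * from {{ source('raw_sales', 'orders') }}

),

renamed as (

    select
        order_id,
        customer_id,
        cast(order_ts as date)      as order_date,
        lower(trim(order_status))   as order_status,
        amount_gbp                  as order_amount
    from source

)

select * from renamed
```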
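Data quality checks of the kind mentioned in the list can be written as dbt schema tests in YAML or as singular SQL tests such as this sketch; again, the model and column names are assumed.

```sql
-- tests/assert_no_negative_order_amounts.sql
-- A dbt "singular" test: it passes when this query returns zero rows.
-- Model and column names are illustrative.
select
    order_id,
    order_amount
from {{ ref('stg_orders') }}
where order_amount < 0
```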
What You’ll Bring
- Hands‑on experience with Databricks (Spark SQL or PySpark) delivering production‑grade pipelines.
- Proven experience using dbt, including modelling across all layers and implementing tests.
- Demonstrated ability to design datasets optimised for Power BI semantic models (performance, relationships, star schemas; a star‑schema sketch follows this list).
- Experience delivering analytics‑ready data layers in a lakehouse environment (Delta Lake, medallion architecture; see the lakehouse sketch below).
- Advanced SQL skills, preferably in Azure cloud environments.
- Experience structuring datasets for operational apps (Power Apps + Power BI a bonus).
- Practical experience with Git, version control, and CI/CD for data pipelines.
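As an illustration of the Power BI‑oriented modelling mentioned above, a presentation‑layer fact table shaped for a star schema might look like this sketch; the dimension models (dim_customers, dim_date), keys, and columns are hypothetical.

```sql
-- models/marts/fct_orders.sql
-- Illustrative presentation-layer fact table for a Power BI star schema:
-- a narrow fact joined to conformed dimensions via surrogate keys.
-- All referenced models and columns are hypothetical.
select
    o.order_id,
    d.date_key,
    c.customer_key,
    o.order_amount
from {{ ref('stg_orders') }}    as o
join {{ ref('dim_customers') }} as c
    on o.customer_id = c.customer_id
join {{ ref('dim_date') }}      as d
    on o.order_date = d.date_value
```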
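And on the lakehouse side, a minimal Databricks Spark SQL sketch promoting bronze data to a silver Delta table, followed by a routine optimisation pass; the schema and table names are assumptions.

```sql
-- Databricks Spark SQL: promote validated bronze data to a silver Delta table.
-- Schema and table names are illustrative only.
create or replace table silver.orders as
select
    order_id,
    customer_id,
    cast(order_ts as timestamp) as order_ts,
    amount_gbp
from bronze.raw_orders
where order_id is not null;

-- Compact small files and co-locate rows on a common filter column.
optimize silver.orders zorder by (customer_id);
```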
Key Deliverables
- Scalable data pipelines & transformation frameworks
- Multi‑source data ingestion & modelling
- dbt model development across all layers
- Data gap analysis & remediation
- Power BI‑optimised datasets
- Power Apps + Power BI integration enablement
- Data quality, testing & monitoring frameworks
- Databricks & SQL performance optimisation
- CI/CD and Git‑based version control processes
- Clear and comprehensive technical documentation
Apply now. We’d love to meet you.