Senior Data Engineer

Location: Sydney
Job Type: Permanent
Discipline: Government Technology, Data & Transformation
Reference: 1601131
Posted: 7 days ago
Key responsibilities include:
  • Design, build, and maintain scalable data pipelines and ETL processes using Azure Data Factory, ingesting data from diverse sources including on-premises databases (Authority), API-based systems (CAMMS), and data files in CSV and JSON formats.
  • Utilize Azure Data Lake Storage (Gen2) and Delta Lake to manage data across the Bronze, Silver, and Gold stages.
  • Implement data processing workflows in Azure Databricks to transform raw data into high-quality, analysis-ready data assets.
  • Develop and optimize data models in Databricks and Power BI, ensuring they meet performance and scalability requirements.
  • Enable advanced analytics capabilities by providing clean, structured, and reliable data for analysis and reporting using Azure Databricks and Power BI.
  • Implement and manage DevOps practices for continuous integration and continuous deployment (CI/CD), using Git integration to enhance data engineering workflows.
  • Establish and enforce data standards, policies, and procedures including a data quality framework to ensure the accuracy, completeness, and consistency of data.
  • Support the Data and Insights teams by providing efficient data access and processing tools on Azure.
  • Monitor, support, and improve the technology landscape, including ongoing maintenance and cost optimization.
  • Assist the Team Leader with remediation activities identified from internal and external audits.
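To give candidates a concrete sense of the ingestion and data-quality work described above, the following is a minimal illustrative sketch of a Bronze-to-Silver cleansing step. It uses plain Python standard-library modules; all field names, sample records, and quality rules are hypothetical and not taken from the role description.

```python
import csv
import io
import json

# Hypothetical Bronze-layer landing data: one CSV extract and one JSON
# extract, of the kind Azure Data Factory might ingest from an
# on-premises database and an API. Values are illustrative only.
RAW_CSV = """id,name,amount
1,alpha,10.5
2,beta,
1,alpha,10.5
"""
RAW_JSON = '[{"id": 3, "name": "gamma", "amount": 7.25}]'

def bronze_records():
    """Read raw, unvalidated records from both source formats."""
    rows = list(csv.DictReader(io.StringIO(RAW_CSV)))
    rows.extend(json.loads(RAW_JSON))
    return rows

def to_silver(records):
    """Apply a minimal data-quality framework on the way to Silver:
    completeness check, type coercion, and de-duplication on the key."""
    seen, silver = set(), []
    for rec in records:
        if not rec.get("amount"):       # completeness: drop missing amounts
            continue
        key = str(rec["id"])
        if key in seen:                 # consistency: one row per id
            continue
        seen.add(key)
        silver.append({
            "id": int(rec["id"]),       # accuracy: enforce expected types
            "name": str(rec["name"]),
            "amount": float(rec["amount"]),
        })
    return silver

silver = to_silver(bronze_records())
```

In practice this logic would run in Azure Databricks over Delta Lake tables rather than in-memory lists, but the shape of the work — ingest heterogeneous raw data, enforce quality rules, promote clean records — is the same.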
Key requirements:
  • Bachelor’s degree or above in Computer Science, Data Engineering, Information Systems, or a related field.
  • 5+ years of experience in data engineering, data modelling, data management, and data governance using Azure cloud services.
  • Strong expertise in data architecture, ETL processes, and data modelling on Azure.
  • Proficiency in programming languages such as Python and SQL, with demonstrable experience in Azure Data Factory, Azure Databricks, and other Azure data services.
  • Experience with DevOps practices, including CI/CD pipelines.
  • Experience ingesting data from on-premises RDBMS databases, APIs, and file formats such as CSV, XLS, and JSON.
  • Experience with data governance using Azure Purview and Unity Catalog.
  • Strong analytical and problem-solving skills with the ability to translate business requirements into technical solutions.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.