Principal Big Data Engineer

Location: Sydney
Job Type: Permanent
Discipline: Technology, Data & Transformation Sydney
Reference: 1272529
Posted: 20 days ago
About our client:   
 
Our client believes in a world where technology exists to help businesses and people communicate without effort. Its mission is to make communication easier, faster, and more reliable for customers while delivering value to its stakeholders. All this while operating with integrity, taking ownership and responsibility, and putting customers first.
   
The Role:

As a Principal Big Data Engineer, you will be one of the most critical members of the Networks DaaS Engineering and capability development team. You will play a key role in coaching, developing, inspiring, and working with your peers, and in leading new starters in the Data chapter to take on the challenge of ingesting, transforming, and presenting data from different sources in real time.

Responsibilities:
  
  • Play a key role in architecting, designing, and implementing data pipelines on Big Data platforms (on-premises and in the cloud)
  • Understand business requirements and develop solution designs to address them
  • Architect, design, and build robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users
  • Enhance, optimise, and maintain existing data ingestion, transformation, and extraction pipelines and assets built for reporting and analytics on Big Data/lakehouse platforms
  • Drive optimisation, testing, and tooling to improve data quality and data governance
  • Assemble large, complex data sets that meet functional and non-functional business requirements

The Successful Candidate:
  
  • Must have 10-15 years' experience working in Data Engineering and Data Warehousing
  • Minimum 4 years of hands-on experience with Big Data tools, including but not limited to:
    • Spark
    • Python
    • Informatica BDM
    • Scala
    • StreamSets / Apache NiFi
    • Cloudera and/or Hortonworks toolsets
    • Kafka
    • Hive/Impala/HBase
  • Good knowledge of SQL
  • Experience in requirements engineering, solution architecture, design, and development/deployment.
  • Must have strong verbal and written communication skills for working with both business and technical teams.
  • Experience with streaming frameworks (Kafka/Spark Streaming) is preferred
  
What’s on offer:   
  
An enticing opportunity with a leading group, offering a salary of up to $175K + Super + 14% bonus. The role is Sydney-based with hybrid work flexibility, and candidates from anywhere in Australia with full working rights will be considered.