Data Engineer
Job description
About our client:
Our client is a global leader in technology solutions with an unrivalled market reputation. In recent years, our client has led the way in producing novel and intricate solutions focused on AI and analytics. With IP garnered from its global partners, this business is well positioned for a stratospheric period of growth during 2021 and beyond. With a reputation predicated on quality of service to clients and unparalleled career opportunities for its employees, this company has become a destination of choice for the very best talent in Australia.
The role:
As the Data Engineer in our client's Data Science team, you will be responsible for designing and deploying scalable, enterprise-grade machine learning models. You will work with a variety of technologies, depending on the requirements of the business, and be based in Sydney and/or Melbourne.
Projects will include, but are not limited to, analytics, data enablement and machine learning modelling. Additional responsibilities will include, but are not limited to:
- Use engineering principles to design, develop and implement new processes and applications for data collection, storage, analysis and visualisation
- Develop, deploy and operate large-scale data storage and processing solutions using distributed and cloud-based platforms
- Design, build and operate relational and non-relational databases (SQL and NoSQL), integrate them with data warehouse solutions, and ensure effective ELT/ETL, OLTP and OLAP processes for large datasets
- Maintain the technology and capability to preserve historical information on data handling, including references to published data and the corresponding data sources (data provenance)
The successful candidate:
The Data Engineer will have demonstrable experience of working in a complex Data Science environment, applying machine learning techniques, advanced analytics and statistical modelling. The must-haves for this role include:
- A deep understanding of modern machine learning techniques
- Experience designing, building and launching highly efficient and reliable data pipelines that move data across platforms, including data lakes, online caches and real-time systems
- Experience with Scala, Python, SQL and Spark SQL, and with ETL design, implementation and maintenance
- Experience with workflow management engines (e.g. Airflow, Luigi, Prefect, Dagster, digdag.io, Google Cloud Composer, AWS Step Functions, Azure Data Factory)
- Azure experience is vital for success in this position.
What's on offer?
Due to current COVID working conditions, this role is open to remote working for candidates who meet the above criteria, and as such can be performed anywhere within Australia. The position is permanent and pays an annualised base salary of $130,000 - $150,000 plus superannuation and bonus.