Our client in Sydney is looking for a Senior Data Engineer with experience using Scala and Apache Spark on AWS to process large transaction volumes. You will also design and implement Kafka streaming services in Scala to enrich and transform data for business use cases.
- Data Engineering role using Scala and Apache Spark on AWS EMR to process large transaction volumes.
- Leveraging Apache Spark to run business rules over past transactions.
- Designing and implementing Kafka streaming services in Scala to enrich and transform data for business use cases.
- 5+ years' experience in data engineering practices, including ETL (2 to 3 years with cloud-based technologies).
- Experience with conceptual, logical and physical data modelling.
- Experience with cloud technologies (AWS ecosystem: S3, ECS, EC2, Lambda, Step Functions).
- Experience with cloud-based databases, both SQL and NoSQL (DynamoDB, Redshift, Neo4j, RDS).
- Experience with data streaming technologies, particularly Kafka.
- Demonstrated experience in the successful delivery of complex development work.
- Experience with Agile/Scrum methodologies.
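To give a feel for the Spark and business-rules work described above, here is a minimal plain-Scala sketch. All names (`Transaction`, `highValue`, `RuleRunner`) and the rule itself are hypothetical, not taken from this posting, and a real implementation would apply the same filter over a Spark `Dataset[Transaction]` on EMR rather than a standard collection:

```scala
// Illustrative sketch only: the case class, the rule, and the runner
// are assumptions for this example, not the client's actual code.
case class Transaction(id: String, amountCents: Long, merchant: String)

object BusinessRules {
  // Example business rule: flag transactions above a configurable threshold.
  def highValue(thresholdCents: Long)(t: Transaction): Boolean =
    t.amountCents > thresholdCents
}

object RuleRunner {
  // Apply a rule to a batch of past transactions. With Spark this would
  // be a .filter over a typed Dataset[Transaction] instead of a Seq.
  def flag(rule: Transaction => Boolean)(batch: Seq[Transaction]): Seq[Transaction] =
    batch.filter(rule)
}
```

For example, `RuleRunner.flag(BusinessRules.highValue(1000000L))(batch)` keeps only the transactions in `batch` above $10,000 (in cents); the rule is passed as a plain function, which is the same shape a Spark filter predicate takes.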
If you have relevant experience and are interested in discussing the role further, please apply and we'll give you a call within 24 hours!