This role builds and maintains the data flows and data acquisition processes that deliver well-structured, efficient and performant data marts to drive business analysis and processes. The role reports to the Data Engineering Lead(s) for direction on priorities and for technical support, and supports them by delivering projects and work within deadlines, promptly escalating risks or issues, and ensuring that day-to-day activities are delivered within SLAs.
The majority of our data flow scripts are currently in SAS (primarily EG scripts, with some DI Studio jobs), and these will need to be maintained for the foreseeable future.
There are current and future projects to migrate those existing data flows onto our cloud data platform in AWS, alongside building flows for any new data coming into the Bank. In the cloud we'll be using cloud-agnostic tools: Zeppelin notebooks with Python and Spark SQL for processing, and Airflow for orchestration and scheduling, to build out a new, trusted Enterprise Data Model that joins up and standardises the presentation of all data across the Bank.
The work in AWS will form the minority of the tasks available initially, so you must be willing to continue developing, maintaining and supporting the current SAS estate for a considerable period, while demonstrating either prior experience of, or the ability and willingness to learn, Python and the optimisation of the associated cloud technologies.
If this role sounds like a good fit for your skills, please apply!