Data Engineer

London

We are Data Services. Our mission is to unlock the value of data by delivering high-quality, reliable, and secure data services that are accessible, understandable, and actionable. We continuously evolve our offerings, leveraging modern cloud-based technologies and fostering strong partnerships, to help our colleagues in the Bank navigate the complexities of a data-driven world and achieve their strategic objectives.

Active SC Clearance required

Job Description:

The world of data in Central Banking is evolving rapidly. With the rise of detailed data collection in financial regulation and the swift advancements in cloud-native data technologies, the demand for visionary data engineers is growing. We’re seeking a senior Data Engineer to join our Data Engineering team and play a pivotal role in shaping the Bank’s strategic cloud-first data platform.

As a senior member of the team, you will play a key role in designing and delivering robust, scalable data solutions that support the Bank’s core responsibilities around monetary policy, financial stability, and regulatory supervision. You’ll contribute to technical design decisions, mentor engineers, and collaborate across teams to ensure our data infrastructure continues to evolve and meet future demands.

Role Responsibilities

  • Lead the design, development, and deployment of scalable, secure, and cost-effective distributed data solutions using Azure services (e.g., Azure Databricks, Azure Data Lake Storage, Azure Data Factory).

  • Architect and implement advanced data pipelines using Databricks, Delta Lake, Python and Spark, ensuring performance, reliability, and maintainability across cloud and on-prem environments.

  • Champion data quality, governance, and observability, ensuring data is accurate, timely, and fit for purpose across analytics, BI, and operational use cases.

  • Drive the modernisation of legacy systems, leading the migration of data infrastructure to Azure with minimal disruption while ensuring long-term scalability.

  • Act as a technical authority on Azure-native data engineering, guiding best practices and setting standards across the team.

  • Mentor and coach junior and mid-level engineers, fostering a culture of continuous learning, innovation, and technical excellence.

  • Collaborate with architects, analysts, and stakeholders to align data engineering efforts with strategic business goals and the enterprise data strategy.

  • Evaluate and introduce emerging technologies, tools, and methodologies to enhance the Bank’s data capabilities.

  • Own the end-to-end delivery of complex data solutions, from requirements gathering to production deployment and support.

  • Contribute to the development of reusable frameworks, templates, and patterns to accelerate delivery and ensure consistency across projects.
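To give a flavour of the data-quality work the responsibilities above describe, here is a minimal sketch in Python. The field names, thresholds, and checks are invented for this illustration and do not come from any actual Bank pipeline; a real implementation would sit inside a Databricks/Spark job rather than plain Python.

```python
from datetime import date, timedelta

# Illustrative only: field names and thresholds are hypothetical,
# not taken from any real Bank system.
REQUIRED_FIELDS = {"trade_id", "notional", "reported_on"}

def validate_record(record: dict, as_of: date, max_age_days: int = 5) -> list[str]:
    """Return a list of data-quality issues for one record (empty = clean)."""
    # Completeness: every required field must be present.
    issues = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    # Timeliness: the record must have been reported recently enough.
    reported_on = record.get("reported_on")
    if isinstance(reported_on, date) and (as_of - reported_on) > timedelta(days=max_age_days):
        issues.append(f"stale record: reported more than {max_age_days} days ago")
    # Validity: the notional must be numeric if present.
    if "notional" in record and not isinstance(record["notional"], (int, float)):
        issues.append("notional is not numeric")
    return issues

clean = {"trade_id": "T1", "notional": 1_000_000, "reported_on": date(2024, 6, 10)}
dirty = {"trade_id": "T2", "reported_on": date(2024, 5, 1)}

print(validate_record(clean, as_of=date(2024, 6, 12)))  # []
print(validate_record(dirty, as_of=date(2024, 6, 12)))
```

Checks like these are typically run per batch, with the resulting issue counts published as observability metrics so downstream consumers can judge whether a dataset is fit for purpose.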

    Minimum Criteria

  • Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.

  • Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimisation and distributed processing.

  • Proven experience in CI/CD practices using industry-standard tools (e.g., GitHub Actions, Azure DevOps).

  • Strong understanding of data architecture principles and cloud-native design patterns.
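As an illustration of the SQL proficiency these criteria refer to, here is a small, self-contained sketch using Python's built-in sqlite3 module. The schema and data are invented for this example; it shows a window-function de-duplication pattern (latest submission per firm) of the kind a regulatory data pipeline commonly applies before publishing a dataset.

```python
import sqlite3

# Hypothetical schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE submissions (firm TEXT, reporting_date TEXT, amount REAL);
    INSERT INTO submissions VALUES
        ('Firm A', '2024-01-31', 120.0),
        ('Firm A', '2024-02-29', 150.0),
        ('Firm B', '2024-01-31',  80.0);
""")

# Keep only the latest submission per firm using ROW_NUMBER().
rows = conn.execute("""
    SELECT firm, reporting_date, amount
    FROM (
        SELECT firm, reporting_date, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY firm ORDER BY reporting_date DESC
               ) AS rn
        FROM submissions
    )
    WHERE rn = 1
    ORDER BY firm
""").fetchall()

print(rows)  # [('Firm A', '2024-02-29', 150.0), ('Firm B', '2024-01-31', 80.0)]
```

The same pattern translates directly to Spark SQL or PySpark's `Window` API on larger, distributed datasets.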

    Essential Criteria

  • Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.

  • Proficiency in Linux/Unix environments and shell scripting.

  • Deep understanding of source control, testing strategies, and agile development practices.

  • Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.

    Desirable Criteria

  • Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.

  • Familiarity with Apache Airflow

  • Experience with data modelling and metadata management

  • Experience influencing enterprise data strategy and contributing to architectural governance.



