Data Engineer

National Audit Office
Newcastle upon Tyne
1 month ago
Applications closed

Job Details

Job Description

Role: Data Engineer


Contract: Permanent


Location: London or Newcastle


Salary: c. £65,000 plus Civil Service Employer Pension Contribution of 28.9%


Nationality Requirements

  • UK nationals
  • Nationals of Commonwealth countries who have the right to work in the UK
  • Nationals from the EU, EEA or Switzerland with (or eligible for) status under the European Union Settlement Scheme (EUSS)

Please note that we are not able to sponsor work visas. Please contact us if you have any questions about your nationality eligibility.


The closing date for applications is 11.59pm on 18 January 2026. First-stage interviews will take place over MS Teams in the week commencing 25 January 2026. Second-stage interviews will take place at our offices in Victoria on 2 and 3 February.


About the National Audit Office

The National Audit Office (NAO) is the UK’s main public sector audit body. Independent of government, we have responsibility for auditing the accounts of various public sector bodies, examining the propriety of government spending, assessing risks to financial control and accountability, and reviewing the economy, efficiency and effectiveness of programmes, projects, and activities. We report directly to Parliament through the Committee of Public Accounts of the House of Commons, which uses our reports as the basis of its own investigations. We employ approximately 1,000 people, most of whom are qualified accountants, trainees, or technicians. The organisation comprises two service lines, financial audit and value for money (VFM) audit, supported by a strong core of highly talented corporate teams.


The NAO welcomes applications from everyone. We value diversity in all its forms and the difference it makes to our organisation. By removing barriers and creating an inclusive culture, all our people can develop and maximise their full potential. As members of the Business Disability Forum and the Disability Confident Scheme, we guarantee to interview all disabled applicants who meet the minimum criteria.


The NAO supports flexible working and is happy to discuss this with you at application stage.


Context and main purpose of the job

This is a new vacancy within the NAO’s Digital Services (DS), created to expand the data service team, with responsibility for designing, building, and maintaining the infrastructure that enables robust data collection, storage, and access across the organisation. The role supports the development and continual improvement of the NAO’s data and technology services, enabling scalable and reliable data solutions.


In this capacity, you will build and optimise data pipelines, integrate diverse data sources, and ensure the efficient movement of data across systems. You will work closely with analytics engineers, data scientists, and other stakeholders to ensure data is accessible, high-quality, and fit for purpose. Your work will underpin the NAO’s ability to derive insights and automate processes using corporate and client data.


Key Responsibilities

In this role, you will:



  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Integrate structured and unstructured data from internal and external sources.
  • Ensure data quality, consistency, and security across systems.
  • Collaborate with analytics engineers and subject matter experts to support data modelling and transformation.
  • Monitor and optimise the performance of data infrastructure.
  • Document data architecture and engineering processes to ensure transparency and maintainability.

This role reports to the Head of Data Services and requires regular attendance at the NAO’s offices, either in Victoria, London, or in Newcastle.


Detailed Responsibilities

As a data engineer at the NAO, you will play a critical role in building and maintaining the technical foundation that enables data-driven operations and insights. You will be responsible for architecting and managing data infrastructure, ensuring that data flows securely and efficiently across systems, and enabling downstream users to access reliable, well-structured data.


Your key responsibilities will include:



  • Building scalable data infrastructure: Design and implement systems that support the ingestion, storage, and processing of large volumes of structured and unstructured data from internal and external sources.
  • Developing robust data pipelines: Create automated workflows that extract, transform, and load data into centralised platforms, ensuring consistency, reliability, and performance across all stages.
  • Designing and optimising ETL processes: Build and maintain efficient ETL (Extract, Transform, Load) workflows to move data from source systems into usable formats. Ensure these processes are scalable, well documented, and aligned with data quality standards.
  • Integrating diverse data sources: Connect and harmonise data from various systems (e.g., operational databases, APIs, cloud services) to create unified datasets for analysis and reporting.
  • Collaborating across teams: Work closely with analytics engineers, data scientists, and business stakeholders to understand data needs and deliver infrastructure that supports analytical and operational use cases.
  • Ensuring data reliability and performance: Monitor data systems for latency, failures, and bottlenecks. Implement performance tuning and system optimisations to maintain high availability and responsiveness.
  • Implementing data governance and security protocols: Apply best practices for data privacy, access control, and compliance. Ensure that sensitive data is protected and handled in accordance with regulatory requirements.
  • Maintaining technical documentation: Produce and update documentation for data architecture, pipeline configurations, and operational procedures to support transparency and continuity.
  • Troubleshooting and incident response: Investigate and resolve data-related issues, from pipeline failures to data integrity concerns. Establish proactive monitoring and alerting systems.
  • Supporting data accessibility: Enable self-service access to clean, well-organised data for analysts and other users through tools, APIs, or data platforms.
  • Keeping pace with technology: Stay informed about emerging tools, frameworks, and methodologies in data engineering. Continuously evaluate and adopt innovations that improve efficiency and scalability.
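
To illustrate the extract–transform–load pattern the responsibilities above describe, here is a minimal sketch in Python. It is illustrative only: the CSV source, the `spend` table, and the column names are hypothetical examples, not any NAO system.

```python
import csv
import io
import sqlite3


def extract(csv_text: str) -> list[dict]:
    """Extract: read raw records from a source (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(csv_text)))


def transform(rows: list[dict]) -> list[tuple]:
    """Transform: apply a basic data-quality gate and normalise types."""
    out = []
    for row in rows:
        if not row.get("body") or not row.get("spend"):
            continue  # drop incomplete records
        out.append((row["body"].strip(), float(row["spend"])))
    return out


def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write transformed records into a central store."""
    conn.execute("CREATE TABLE IF NOT EXISTS spend (body TEXT, amount REAL)")
    conn.executemany("INSERT INTO spend VALUES (?, ?)", records)
    conn.commit()


# Hypothetical source data; the row with a missing amount is filtered out.
raw = "body,spend\nDept A,100.5\nDept B,\nDept C,42.0\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT SUM(amount) FROM spend").fetchone()[0]
print(total)  # 142.5
```

In production the same three stages would typically be orchestrated by a scheduler and pointed at real sources and a warehouse, but the separation of concerns is the same.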

Key Skills / Competencies

Each skill listed below includes the corresponding skill level (awareness, working, practitioner, or expert):



  • Communicating between the technical and non-technical (Skill level: Awareness) – You can explain why it's important to communicate technical concepts in non-technical language. You understand the types of communication used with internal and external stakeholders and their impact.
  • Data Analysis and Synthesis (Skill level: Working) – You can undertake data profiling and source system analysis. You present clear insights to colleagues to support the end use of the data.
  • Data Development Process (Skill level: Working) – You can design, build, and test data products based on feeds from multiple systems, using a range of storage technologies and access methods. You create repeatable and reusable products.
  • Data Innovation (Skill level: Awareness) – You show awareness of opportunities for innovation with new tools and uses of data.
  • Data Integration Design (Skill level: Working) – You deliver data solutions in accordance with agreed organisational standards that ensure services are resilient, scalable, and future-proof.
  • Data Modelling (Skill level: Working) – You understand the concepts and principles of data modelling. You can produce, maintain, and update relevant data models and reverse-engineer models from live systems.
  • Metadata Management (Skill level: Working) – You use metadata repositories to complete complex tasks such as data and systems integration impact analysis. You maintain metadata repositories to ensure accuracy and currency.
  • Problem Management (Skill level: Awareness) – You investigate problems in systems, processes, and services, and contribute to the implementation of remedies and preventative measures.
  • Programming and Build (Data Engineering) (Skill level: Working) – You can design, code, test, correct, and document simple programs or scripts under direction. You follow agreed standards and tools.
  • Technical Understanding (Skill level: Working) – You understand core technical concepts related to the role and apply them with guidance.
  • Testing (Skill level: Working) – You review requirements and specifications, define test conditions, identify issues and risks, and report test activities and results.

Experience Requirements

  • ETL and Data Pipeline Development – Demonstrated experience in designing, building, and maintaining ETL workflows and data pipelines. Skilled in extracting, transforming, and loading data from various sources into centralised platforms.
  • Data Infrastructure and Integration – Proven ability to implement data flows between operational systems and analytics platforms. Experience with cloud-based data services (e.g., AWS, Azure, GCP) and streaming systems is desirable.
  • Database Management and Optimisation – Experience managing relational and non-relational databases, including performance tuning, indexing, and query optimisation. Familiarity with database design principles and data warehousing solutions.
  • Collaboration and Communication – Ability to work effectively with technical and non-technical stakeholders. Skilled in translating business requirements into technical solutions and supporting cross-functional teams.
  • Problem Solving and Troubleshooting – Capable of identifying and resolving data-related issues, implementing preventative measures, and contributing to system reliability.
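
The database tuning experience mentioned above (indexing and query optimisation) can be illustrated with a small, self-contained sketch. The table and index names are hypothetical, and SQLite is used purely for portability; the same principle applies to any relational engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, dept TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO audit (dept, amount) VALUES (?, ?)",
    [(f"dept_{i % 50}", float(i)) for i in range(10_000)],
)

# Without an index, filtering on dept forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM audit WHERE dept = 'dept_7'"
).fetchone()[3]

# With an index, the engine can seek directly to matching rows.
conn.execute("CREATE INDEX idx_audit_dept ON audit (dept)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM audit WHERE dept = 'dept_7'"
).fetchone()[3]

print(plan_before)  # e.g. "SCAN audit"
print(plan_after)   # e.g. "SEARCH audit USING INDEX idx_audit_dept (dept=?)"
```

Reading the query plan before and after an index change, as here, is the usual first step in diagnosing a slow query.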

How to Apply

Please upload a CV and a covering letter outlining your suitability and interest in the role before the deadline.


