Data Engineer

TXP
Birmingham
1 week ago

We are TXP. We help businesses and organisations move forward, at pace and at scale. We believe in the transformative power of combining technology and people. By providing consulting expertise, development services and resourcing, we work closely with organisations to solve their most complex business problems.

Our work transforms organisations – and we take that responsibility seriously. We focus on success, pursue excellence and take ownership of everything we do.

We are seeking a Microsoft Fabric Data Engineer to join our technology function. You'll be joining a collaborative team, co-creating tools, supporting each other, providing governance, and building a community. The successful candidate will partner with Public and Private sector clients to deliver end‑to‑end data engineering solutions on Microsoft Fabric, from ingesting and transforming raw data to shaping curated Lakehouse layers that power analytics and reporting. You will transform complex data estates into clean, governed and high‑performing platforms that give clients confidence in the insights they rely on.

We are proud of our culture and values, which guide everything we do:

  • Client Focus – We put our clients at the heart of every decision.
  • Adaptability – We embrace change and thrive in dynamic environments.
  • Responsibility – We take ownership and deliver with integrity.
  • Excellence in Delivery – We are committed to delivering outstanding results.
  • Success & Celebration – We celebrate achievements and learn from every experience.

As a Data Engineer you will embody these values in your leadership, decision‑making, and client interactions.

Key Responsibilities:

ETL and Data Pipeline Development

  • Design and build scalable ETL and ELT pipelines in Microsoft Fabric using PySpark in Fabric Notebooks and Dataflows Gen2 to ingest and transform data from ERP and business applications.
  • Implement robust ingestion patterns and orchestration using Fabric Data Factory capabilities, handling varied formats and refresh frequencies while writing curated outputs to Lakehouse tables.
  • Develop transformation logic that standardises, cleanses and harmonises data across business units, publishing to managed Delta tables for downstream analytics.
  • Apply incremental load and near real-time replication strategies, including Mirroring where appropriate, to optimise runtime and latency for operational reliability.
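
The incremental-load pattern described above can be sketched in plain Python. This is an illustration of the watermark logic only, with hypothetical field names; in a Fabric Notebook the same predicate would typically be a PySpark DataFrame filter, with the watermark persisted in a Lakehouse control table.

```python
from datetime import datetime

def incremental_filter(rows, watermark):
    """Keep only rows modified after the last successful load.

    `rows` is a list of dicts with a 'modified_at' timestamp
    (illustrative schema, not a Fabric API).
    """
    return [r for r in rows if r["modified_at"] > watermark]

def next_watermark(rows, previous):
    """Advance the watermark to the newest timestamp seen this run."""
    return max((r["modified_at"] for r in rows), default=previous)

# Example run: only the row changed after the watermark is ingested.
watermark = datetime(2024, 1, 1)
rows = [
    {"id": 1, "modified_at": datetime(2023, 12, 31)},
    {"id": 2, "modified_at": datetime(2024, 1, 2)},
]
delta = incremental_filter(rows, watermark)
```

Each run then writes `delta` to the target table and stores the result of `next_watermark` for the following run, so unchanged source rows are never reprocessed.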

Integration and Quality

  • Integrate data from diverse sources into OneLake and Lakehouse, enforcing data quality checks, validation rules and reconciliation steps in pipelines and notebooks to maintain accuracy and integrity.
  • Build error handling, logging and monitoring into pipelines, and document lineage using Fabric’s item lineage and the OneLake catalogue to support operational reliability and auditability.
  • Collaborate with stakeholders to translate business data requirements into transformation rules and curated models that meet reporting and analytics needs across Fabric workloads.
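
The quality-gate and reconciliation steps above might look like the following minimal sketch, in plain Python with illustrative field names; in practice these checks would run inside a pipeline or notebook, with rejected rows logged for investigation.

```python
def validate_rows(rows, required_fields):
    """Split records into valid and rejected sets.

    A simple completeness rule for illustration: every required
    field must be present and non-null.
    """
    valid, rejected = [], []
    for row in rows:
        if all(row.get(field) is not None for field in required_fields):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

def reconcile(source_count, loaded_count):
    """Reconciliation check: every source row must be accounted for,
    either loaded or explicitly rejected and logged."""
    return source_count == loaded_count

# Example: one order is missing its key and is rejected, not silently dropped.
orders = [
    {"order_id": "A1", "amount": 100},
    {"order_id": None, "amount": 50},
]
valid, rejected = validate_rows(orders, ["order_id", "amount"])
```

The reconciliation step then compares the source extract count against `len(valid) + len(rejected)`, so a pipeline failure surfaces as a count mismatch rather than silent data loss.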

Lakehouse Development and Performance

  • Design and maintain medallion architecture in the Microsoft Fabric Lakehouse on OneLake with bronze, silver and gold layers, promoting clear contracts and progressive data quality.
  • Optimise storage formats, partitioning and table maintenance for Delta performance, applying practices such as V‑Order and efficient file sizing to improve query speed and cost effectiveness.
  • Prepare gold layer datasets for downstream analytics with Direct Lake connected semantic models to deliver high-performance BI without heavy refresh cycles.
  • Create reusable frameworks and templates for common ingestion and transformation patterns to accelerate team delivery in Fabric.
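
A reusable bronze-to-silver step of the kind described above can be sketched without Spark. This is an illustration of the shape only, with hypothetical column names; on Fabric the same transform would typically be a PySpark function writing a managed Delta table.

```python
def promote_to_silver(bronze_rows, key):
    """Standardise bronze records and keep the latest version per key.

    Standardisation here: normalise column names to lowercase and
    trim string values, so records from different source systems
    harmonise into one silver schema.
    """
    cleaned = []
    for row in bronze_rows:
        cleaned.append({
            name.strip().lower(): (value.strip() if isinstance(value, str) else value)
            for name, value in row.items()
        })
    latest = {}
    for row in cleaned:
        latest[row[key]] = row  # later records win: last-write-wins dedup
    return list(latest.values())

# Example: two source extracts of the same customer collapse to one
# clean silver record with consistent column names.
bronze = [
    {"Customer_ID ": "C1", "City": " Leeds "},
    {"customer_id": "C1", "city": "Birmingham"},
]
silver = promote_to_silver(bronze, "customer_id")
```

Packaging steps like this as shared functions is what makes the "reusable frameworks and templates" bullet concrete: each new source only supplies its extract, while cleansing and deduplication stay in one tested place.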

Collaboration, Governance and Continuous Improvement

  • Work with data leaders to implement standards for workspace design, security and lifecycle management across Fabric capacities, using OneLake and Fabric governance features.
  • Contribute to code reviews and CI/CD practices for Fabric items and notebooks, aligning with Microsoft’s analytics engineering guidance and development lifecycle.
  • Support testing activities including unit, integration and user acceptance testing for pipelines, notebooks and data models, ensuring reliable promotion through environments.
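
Unit testing of transformation logic, as mentioned above, can be as simple as the following sketch. The harmonisation rule and names are hypothetical; the point is that pure transformation functions are testable outside Fabric, so assertions like these can run in CI before promotion through environments.

```python
def harmonise_country(code):
    """Transformation rule under test: map mixed source country
    codes to one standard (illustrative mapping only)."""
    mapping = {"UK": "GB", "GBR": "GB", "GB": "GB"}
    return mapping.get(code.strip().upper(), "UNKNOWN")

def test_harmonise_country():
    # Assertions of this shape would run under a test runner such
    # as pytest as part of the CI/CD pipeline.
    assert harmonise_country("uk") == "GB"
    assert harmonise_country(" GBR ") == "GB"
    assert harmonise_country("FR") == "UNKNOWN"
```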

Skills & Experience:

  • Strong hands‑on experience building ETL and ELT pipelines with PySpark in Fabric Notebooks, including Spark SQL and Python for large‑scale data processing.
  • Practical experience with Microsoft Fabric Lakehouse on OneLake, including medallion architecture implementation and curation of bronze, silver and gold layers.
  • Proficiency with Dataflows Gen2 for low‑code ingestion and transformation, and familiarity with Fabric Data Factory capabilities for orchestration and scheduling.
  • Deep understanding of Delta tables, table optimisation and storage layout for query performance, including V‑Order and efficient Parquet file strategies.
  • Experience enabling Direct Lake semantic models for high‑performance analytics over Lakehouse and Warehouse data.
  • Strong SQL skills for transformation and validation across Lakehouse and Warehouse experiences within Fabric.
  • Familiarity with Mirroring to replicate operational data into OneLake for near real-time analytics when required.
  • Experience integrating data from enterprise systems and APIs into Fabric, and shaping curated datasets for analytics with clear definitions and data contracts.
  • Understanding of dimensional modelling and semantic model best practices to support BI and analytics in Fabric.
  • Experience working in Agile or fast‑paced project environments.
  • Experience working as part of a technology or professional services consultancy.

Benefits:

  • 25 days annual leave (plus bank holidays)
  • An additional day of paid leave for your birthday (or Christmas Eve)
  • 4% matched employer pension contribution (salary sacrifice)
  • Life assurance (3x)
  • Access to an Employee Assistance Programme
  • Private medical insurance through our partner Aviva
  • Cycle to work scheme
  • Access to an independent financial advisor
  • 2 x social value days per year to give back to local communities

Grow with us:

Work on exciting new projects

If you want to avoid getting stuck with the mundane, you’re in the right place. We work in many sectors with fantastic clients, so you’ll always be working on something exciting and challenging.

We recognise that you might have a career path planned out and you might need some support to help you move forward. We’re here to support you and make the most out of your time with us, through challenging work, opportunities to grow and learning and development opportunities.

Be part of the TXP growth journey

We are a high-growth, fast-paced business with 200+ employees, working with clients across the UK. Joining TXP means you'll be part of that journey.

