Data Engineer

Brunelcare
Bristol
1 month ago
Applications closed


As an adopter of Microsoft Fabric, we are looking for an ambitious data engineer to grow with our use of the platform. This is an opportunity to build something that truly matters. Your role will transform how we deliver our care, housing, and community services through the creation of a modern data platform from the ground up.


This isn’t just engineering - it’s a chance to make a real difference, sharpen your cloud skills, and leave a lasting mark on an organisation that improves lives every day.


If you are ready to be part of something more, then apply today!


About the role

This exciting role sits within our newly created BI team. As our sole data engineer, you will be hands-on, creating scalable, automated data pipelines with tools like Microsoft Fabric, Azure Data Factory, SQL, Python, and Spark. You will also:


Design and build repeatable ingestion (APIs, databases, flat files) with incremental and historical loads.
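
For illustration, a minimal sketch of the incremental-load pattern this point describes, using a high-water-mark timestamp. `fetch_rows` and the `modified_at` field are hypothetical stand-ins for a real API, database, or flat-file source:

```python
from datetime import datetime, timezone

def incremental_load(fetch_rows, state):
    """Pull only rows changed since the last successful run.

    fetch_rows(since) and the 'modified_at' field are illustrative
    assumptions; a real source would be an API, database, or flat file.
    """
    since = state.get("watermark", datetime.min.replace(tzinfo=timezone.utc))
    rows = [r for r in fetch_rows(since) if r["modified_at"] > since]
    if rows:
        # Advance the watermark only after the batch has landed successfully.
        state["watermark"] = max(r["modified_at"] for r in rows)
    return rows

# Usage with an in-memory stand-in for a source system:
data = [
    {"id": 1, "modified_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2024, 2, 1, tzinfo=timezone.utc)},
]
state = {"watermark": datetime(2024, 1, 15, tzinfo=timezone.utc)}
batch = incremental_load(lambda since: data, state)
```

A full historical load is the same call with the watermark reset to its minimum value.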


Implement resilient ELT / ETL pipelines with parameterisation, orchestration, retry / alerting, and logging.
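
The retry-with-alerting idea can be sketched in a few lines. This is a hand-rolled illustration only; in Fabric or Azure Data Factory this role is normally played by the orchestrator's built-in retry policy:

```python
import logging
import time

log = logging.getLogger("pipeline")

def run_with_retry(task, retries=3, base_delay=1.0):
    """Run a pipeline step, retrying transient failures with
    exponential backoff and logging each failed attempt."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise  # surface the error for alerting after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: a task that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = run_with_retry(flaky, base_delay=0.01)
```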


Create snapshot and slowly changing dimension (SCD) patterns for month-end and trend analysis.
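
As a sketch of the SCD idea, here is a Type 2 update over plain dicts: changed rows are closed out with a `valid_to` date and a new current version is appended. Column names and the `T1` example are hypothetical; in practice this would be a warehouse MERGE:

```python
def apply_scd2(dimension, incoming, run_date):
    """Close out changed rows and append new versions (SCD Type 2).

    'valid_from'/'valid_to'/'is_current' are the usual SCD2
    bookkeeping columns, shown here on in-memory rows.
    """
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for new in incoming:
        old = current.get(new["key"])
        if old is None or old["value"] != new["value"]:
            if old is not None:
                old["is_current"] = False
                old["valid_to"] = run_date
            dimension.append({
                "key": new["key"], "value": new["value"],
                "valid_from": run_date, "valid_to": None,
                "is_current": True,
            })
    return dimension

# Usage: a team is renamed, so history is preserved as a closed row.
dim = [{"key": "T1", "value": "Team A", "valid_from": "2024-01-01",
        "valid_to": None, "is_current": True}]
apply_scd2(dim, [{"key": "T1", "value": "Team B"}], "2024-06-30")
```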


Optimise performance (partitioning, indexing, caching) and manage cost-efficient refresh cadences (daily / weekly / real-time where appropriate).


Develop cleaned / curated layers (e.g., Bronze / Silver / Gold or trusted data marts) and star schema models aligned to business definitions.
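
To make the layering concrete, a toy bronze-to-silver promotion: raw records are type-standardised, trimmed, and filtered before reaching curated models. The field names (`service_id`, `visits`) are invented for illustration:

```python
def to_silver(bronze_rows):
    """Promote raw (bronze) records to a cleaned silver layer:
    standardise types, trim strings, drop records missing the key."""
    silver = []
    for r in bronze_rows:
        if not r.get("service_id"):
            continue  # unusable without a key; logged and reconciled in practice
        silver.append({
            "service_id": str(r["service_id"]).strip(),
            "service_name": (r.get("service_name") or "").strip().title(),
            "visits": int(r.get("visits") or 0),
        })
    return silver

# Usage with messy raw input:
bronze = [
    {"service_id": " 101 ", "service_name": "home care ", "visits": "12"},
    {"service_id": None, "service_name": "orphaned row"},
]
silver = to_silver(bronze)
```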


Partner with BI developers to ensure visuals are fed by reusable, governed datasets.


Embed data quality rules (validity, timeliness, completeness), reconciliation against source systems, and issue backlogs.
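
A minimal shape for such rules, assuming named predicates whose failures feed an issue backlog rather than silently dropping rows; the rule names and fields are illustrative:

```python
def check_quality(rows, rules):
    """Apply named row-level data quality rules and collect every
    failure so it can be triaged in an issue backlog."""
    issues = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                issues.append({"row": i, "rule": name})
    return issues

# Usage: two hypothetical rules over two rows, one of which fails both.
rules = {
    "completeness: tenant_id present": lambda r: r.get("tenant_id") is not None,
    "validity: rent is non-negative": lambda r: r.get("rent", 0) >= 0,
}
rows = [
    {"tenant_id": "T1", "rent": 450.0},
    {"tenant_id": None, "rent": -10.0},
]
issues = check_quality(rows, rules)
```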


Monitor pipeline health and cost; manage incident response and root cause analysis.


Translate business requirements into technical designs and estimates; run technical workshops and design walkthroughs.


Produce clear documentation (runbooks, diagrams, standards).


About you

A positive self-starter, you will have advanced SQL skills and experience of using Azure, or similar, to build data pipelines. You will also need experience configuring APIs, building dimensional data models (star / snowflake), and creating semantic models for BI.


You will have a solid understanding of data security, PII, GDPR and data related compliance and governance. Alongside this, you will have a track record of delivering within time constraints with a continuous improvement mindset.


It’s not essential, but it would be great if you also have experience with:



  • Social housing and care data / systems
  • Hands-on experience of Fabric
  • Spark, Python or other commonly used languages for wrangling data

You will need to be ambitious and self-starting, able to adapt to and adopt new technology as it comes along.


Job Benefits

  • Flexible Working - The opportunity to work from home up to 3 days a week
  • Equivalent to 25 days of paid annual leave (in addition to bank holidays), increasing to the equivalent of 28 after 5 years’ service (pro-rata)
  • Pension Scheme
  • Annual achievement review with the opportunity for pay progression
  • Blue Light Card discount service, offering online and high street discounts
  • Care First Employee Assistance Programme (provides a range of free, confidential services) and in-house Mental Health First Aiders available
  • Colleague Voice Representatives, enabling you to have your say
  • Cycle to Work Scheme
  • Company Sick Pay – Linked to length of service
  • £200 refer a friend bonus


