

Senior Data Engineer / Scientist

Glasgow
3 days ago

Senior Data Engineer - Azure & Databricks Lakehouse

Glasgow (3-4 days onsite) | Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving from legacy systems, manual Excel reporting and fragmented data sources to a fully automated Azure Enterprise Landing Zone + Databricks Lakehouse.
They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation.
This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.
If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing

Lakehouse Engineering (Azure + Databricks)

Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).

Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.

Apply Lakeflow expectations for data quality, schema validation and operational reliability.
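
For illustration only, a minimal sketch of what a Bronze → Silver declarative pipeline with expectations could look like, assuming the Databricks dlt Python API and Auto Loader; the storage path, table names and quality rules are hypothetical, not taken from this role:

```python
# Illustrative sketch only: Bronze -> Silver in a Lakeflow Declarative Pipeline.
# Assumes the Databricks `dlt` Python API; `spark` is provided by the pipeline runtime.
# Paths, table names and expectation rules are hypothetical.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Bronze: raw subscription events landed as-is from the landing zone")
def bronze_subscription_events():
    # Auto Loader incrementally discovers new files in ADLS Gen2
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("abfss://landing@<storage-account>.dfs.core.windows.net/subscriptions/")
    )


@dlt.table(comment="Silver: typed, de-duplicated subscription events")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")
@dlt.expect("recent_event", "event_ts >= '2020-01-01'")  # warn-only expectation
def silver_subscription_events():
    return (
        dlt.read_stream("bronze_subscription_events")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .dropDuplicates(["event_id"])
    )
```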

Curated Data Layers & Modelling

Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).

Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
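
As a rough illustration of the merge strategies this involves, a simplified SCD Type 2 upsert using the Delta Lake Python API; table, column and key names are hypothetical, and a production version would also handle surrogate keys, late-arriving data and deletes:

```python
# Illustrative sketch only: simplified SCD Type 2 upsert into a Gold dimension.
# Assumes Delta Lake's Python API on a Databricks cluster; names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates = spark.table("silver.customers_latest")   # one row per natural key
dim = DeltaTable.forName(spark, "gold.dim_customer")

# Step 1: expire current rows whose tracked attributes have changed
(
    dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.attr_hash <> s.attr_hash",
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .execute()
)

# Step 2: append new versions (brand-new keys plus the rows expired above)
current = spark.table("gold.dim_customer").where("is_current = true")
new_versions = (
    updates.join(current, "customer_id", "left_anti")  # keys with no current row
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
)
new_versions.write.format("delta").mode("append").saveAsTable("gold.dim_customer")
```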

Apply governance, lineage and fine-grained permissions via Unity Catalog.
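
A small sketch of the kind of Unity Catalog controls this points to, expressed as Spark SQL issued from Python; the catalog objects, group names and row-filter rule are hypothetical:

```python
# Illustrative sketch only: Unity Catalog grants and a row filter.
# Schema, table, group names and the filter rule are hypothetical.

# Fine-grained read access on the curated Gold layer
spark.sql("GRANT USE SCHEMA ON SCHEMA gold TO `finance-analysts`")
spark.sql("GRANT SELECT ON TABLE gold.fct_subscriptions TO `finance-analysts`")

# Row-level filter: platform admins see everything, analysts only their own business unit
spark.sql("""
    CREATE OR REPLACE FUNCTION gold.bu_filter(business_unit STRING)
    RETURN IF(is_account_group_member('data-platform-admins'), TRUE, business_unit = 'FINANCE')
""")
spark.sql("ALTER TABLE gold.fct_subscriptions SET ROW FILTER gold.bu_filter ON (business_unit)")
```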

Orchestration & Observability

Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.

Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
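
By way of illustration, one simple way a scheduled task could express a freshness SLI and surface a breach to the workflow's alerting; the table, timestamp column and threshold are hypothetical:

```python
# Illustrative sketch only: a freshness SLI check run as a scheduled task.
# Table, column and SLA threshold are hypothetical; ingested_at is assumed to be stored in UTC.
from datetime import datetime, timedelta

from pyspark.sql import functions as F

SLA_HOURS = 4  # curated tables should lag source by no more than 4 hours

latest = (
    spark.table("gold.fct_subscriptions")
    .agg(F.max("ingested_at").alias("latest"))
    .collect()[0]["latest"]
)

lag = datetime.utcnow() - latest
if lag > timedelta(hours=SLA_HOURS):
    # Failing the task lets the workflow's notification settings alert the on-call engineer
    raise RuntimeError(f"Freshness SLA breached: gold.fct_subscriptions is {lag} behind")
```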

DevOps & Platform Engineering

Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
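
The pipeline definitions themselves live in Azure DevOps YAML, but as a sketch of what such a CI stage typically exercises, here is a pure transformation function factored out of a notebook together with a pytest test; the function and column names are hypothetical:

```python
# Illustrative sketch only: the kind of unit test a CI stage might run against
# transformation logic factored out of notebooks. Names are hypothetical.
from pyspark.sql import DataFrame, SparkSession, functions as F


def standardise_customers(df: DataFrame) -> DataFrame:
    """Pure transformation: trims names and lower-cases email addresses."""
    return df.withColumn("name", F.trim("name")).withColumn("email", F.lower("email"))


def test_standardise_customers():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([("  Ada ", "ADA@EXAMPLE.COM")], ["name", "email"])
    row = standardise_customers(df).collect()[0]
    assert row["name"] == "Ada"
    assert row["email"] == "ada@example.com"
```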

Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.

Contribute to platform standards, design patterns, code reviews and future roadmap.

Collaboration & Delivery

Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.

Influence architecture decisions and uplift engineering maturity within a growing data function.

Tech Stack You'll Work With

Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses

Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints

Languages: PySpark, Spark SQL, Python, Git

DevOps: Azure DevOps Repos, Pipelines, CI/CD

Analytics: Power BI, Fabric

What We're Looking For

Experience

5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.

Strong PySpark/Spark SQL and distributed data processing expertise.

Proven Medallion/Lakehouse delivery experience using Delta Lake.

Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.

Operational experience: SLAs, observability, idempotent pipelines, reprocessing and backfills.

Mindset

Strong grounding in secure Azure Landing Zone patterns.

Comfort with Git, CI/CD, automated deployments and modern engineering standards.

Clear communicator who can translate technical decisions into business outcomes.

Nice to Have

Databricks Certified Data Engineer Associate

Streaming ingestion experience (Auto Loader, structured streaming, watermarking)

Subscription/entitlement modelling experience

Advanced Unity Catalog security (RLS, ABAC, PII governance)

Terraform/Bicep for IaC

Fabric Semantic Model / Direct Lake optimisation

