Senior Data Engineer / Scientist

Glasgow
3 months ago
Applications closed


Senior Data Engineer - Azure & Databricks Lakehouse

Glasgow (3/4 days onsite) | Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse.
They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation.
This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care.
If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing

Lakehouse Engineering (Azure + Databricks)

Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).

Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.
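To illustrate the metadata-driven idea, here is a minimal plain-Python sketch: source definitions live as metadata, and a generic dispatcher picks the right loader, so onboarding a new feed is a configuration change rather than new code. All names, config fields and loader functions here are hypothetical; in practice the metadata would drive ADF pipelines (e.g. a ForEach activity over a control table), not Python functions.

```python
# Hypothetical metadata describing each source feed.
SOURCES = [
    {"name": "billing_invoices", "kind": "api",  "target": "bronze.billing_invoices"},
    {"name": "warehouse_stock",  "kind": "sftp", "target": "bronze.warehouse_stock"},
    {"name": "crm_customers",    "kind": "sql",  "target": "bronze.crm_customers"},
]

# One loader per source kind; adding a kind means adding one function,
# not touching the orchestration loop.
def load_api(cfg):
    return f"pulled {cfg['name']} via REST into {cfg['target']}"

def load_sftp(cfg):
    return f"copied {cfg['name']} files into {cfg['target']}"

def load_sql(cfg):
    return f"extracted {cfg['name']} tables into {cfg['target']}"

LOADERS = {"api": load_api, "sftp": load_sftp, "sql": load_sql}

def run_ingestion(sources):
    # The generic driver: dispatch each source to its loader by metadata.
    return [LOADERS[cfg["kind"]](cfg) for cfg in sources]

for line in run_ingestion(SOURCES):
    print(line)
```

The point is the shape, not the loaders: ingestion behaviour is data, and the engine stays generic.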

Apply Lakeflow expectations for data quality, schema validation and operational reliability.
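The expectations concept can be sketched in plain Python: each rule is a named predicate, failing rows are quarantined, and per-rule failure counts feed monitoring. This is only the idea, not the Lakeflow API (there, rules are declared on pipeline tables rather than applied in a loop), and the rule names and fields below are illustrative.

```python
# Hypothetical quality rules: name -> predicate over a record.
EXPECTATIONS = {
    "valid_customer_id": lambda r: r.get("customer_id") is not None,
    "non_negative_amount": lambda r: r.get("amount", 0) >= 0,
}

def apply_expectations(rows, expectations):
    """Split rows into passing and failing, keeping a per-rule failure count."""
    passed, failed = [], []
    metrics = {name: 0 for name in expectations}
    for row in rows:
        violations = [n for n, check in expectations.items() if not check(row)]
        for n in violations:
            metrics[n] += 1
        (failed if violations else passed).append(row)
    return passed, failed, metrics

rows = [
    {"customer_id": 1, "amount": 20.0},
    {"customer_id": None, "amount": 5.0},
    {"customer_id": 2, "amount": -3.0},
]
good, bad, stats = apply_expectations(rows, EXPECTATIONS)
print(len(good), len(bad), stats)
# prints: 1 2 {'valid_customer_id': 1, 'non_negative_amount': 1}
```

Keeping the metrics alongside the split is what turns quality rules into operational signals rather than silent filters.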

Curated Data Layers & Modelling

Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).

Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.

Apply governance, lineage and fine-grained permissions via Unity Catalog.
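The SCD Type 2 pattern mentioned above can be sketched in plain Python: changed rows are closed out with an end date, and a new current version is appended. In production this would be a Delta Lake MERGE over the dimension table; the field names (`valid_from`, `valid_to`, `is_current`) are one common convention, not a fixed standard.

```python
from datetime import date

def scd2_merge(dim_rows, incoming, business_key, today):
    """Close out changed current rows and append new current versions."""
    current = {r[business_key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for new in incoming:
        old = current.get(new[business_key])
        tracked = {k: v for k, v in new.items() if k != business_key}
        if old and all(old.get(k) == v for k, v in tracked.items()):
            continue  # unchanged: keep the existing current row
        if old:
            old["is_current"] = False  # close out the superseded version
            old["valid_to"] = today
        out.append({**new, "valid_from": today, "valid_to": None,
                    "is_current": True})
    return out

dim = [{"customer_id": 1, "segment": "retail",
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
updated = scd2_merge(dim, [{"customer_id": 1, "segment": "wholesale"}],
                     "customer_id", date(2025, 1, 1))
# The original row is closed out; a new current "wholesale" row is appended.
```

A surrogate key would normally be assigned to each appended version so fact tables can join to the version that was current at transaction time.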

Orchestration & Observability

Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.

Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.

DevOps & Platform Engineering

Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.

Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.

Contribute to platform standards, design patterns, code reviews and future roadmap.

Collaboration & Delivery

Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.

Influence architecture decisions and uplift engineering maturity within a growing data function.

Tech Stack You'll Work With

Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses

Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints

Languages: PySpark, Spark SQL, Python, Git

DevOps: Azure DevOps Repos, Pipelines, CI/CD

Analytics: Power BI, Fabric

What We're Looking For

Experience

5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.

Strong PySpark/Spark SQL and distributed data processing expertise.

Proven Medallion/Lakehouse delivery experience using Delta Lake.

Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.

Operational experience: SLAs, observability, idempotent pipelines, reprocessing and backfills.

Mindset

Strong grounding in secure Azure Landing Zone patterns.

Comfort with Git, CI/CD, automated deployments and modern engineering standards.

Clear communicator who can translate technical decisions into business outcomes.

Nice to Have

Databricks Certified Data Engineer Associate

Streaming ingestion experience (Auto Loader, structured streaming, watermarking)

Subscription/entitlement modelling experience

Advanced Unity Catalog security (RLS, ABAC, PII governance)

Terraform/Bicep for IaC

Fabric Semantic Model / Direct Lake optimisation


Data engineering has quietly become one of the most critical roles in the modern technology stack. While data science and AI often receive the spotlight, data engineers are the professionals who design, build and maintain the systems that make data usable at scale. Across the UK, demand for data engineers continues to rise. Organisations in finance, retail, healthcare, government, media and technology all report difficulty hiring candidates with the right skills. Salaries remain strong, and experienced professionals are in short supply. Yet despite this demand, many graduates with degrees in computer science, data science or related disciplines struggle to secure data engineering roles. The reason is not academic ability. It is a persistent skills gap between university education and real-world data engineering work. This article explores that gap in depth: what universities teach well, what they consistently miss, why the gap exists, what employers actually want, and how jobseekers can bridge the divide to build successful careers in data engineering.