Lead Data Engineer

Glasgow
1 week ago

Lead Data Engineer (Azure / Databricks)

NO VISA REQUIREMENTS

MUST BE BASED NEAR GLASGOW TO WORK 3 DAYS ONSITE

My FMCG client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting to a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory.
They are looking for a Lead Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

What You'll Do

Lakehouse Engineering (Azure + Databricks)

Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL.

Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.

Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.
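To illustrate the kind of metadata-driven ingestion pattern described above, here is a minimal, framework-free Python sketch. The source names, config fields and loader functions are invented for illustration; in practice the dispatch would drive ADF copy activities or Databricks jobs rather than return strings.

```python
# Hypothetical metadata-driven ingestion: each source is described by a
# config record, and a dispatcher routes it to the matching loader.
# All names below are illustrative, not taken from the client's platform.

SOURCES = [
    {"name": "chargebee", "kind": "rest", "endpoint": "/customers"},
    {"name": "legacy_orders", "kind": "sftp", "path": "/outbound/orders.csv"},
    {"name": "finance_db", "kind": "sql", "table": "dbo.invoices"},
]

def load_rest(cfg):
    return f"GET {cfg['endpoint']}"

def load_sftp(cfg):
    return f"FETCH {cfg['path']}"

def load_sql(cfg):
    return f"SELECT * FROM {cfg['table']}"

LOADERS = {"rest": load_rest, "sftp": load_sftp, "sql": load_sql}

def ingest_all(sources):
    """Run every configured source through the loader registered for its kind."""
    return {cfg["name"]: LOADERS[cfg["kind"]](cfg) for cfg in sources}
```

The point of the pattern is that adding a new source means adding a config record, not new pipeline code.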

Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.
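As a rough sketch of what "expectations" mean here: Lakeflow/DLT lets you declare row-level quality rules on a dataset (via decorators such as `@dlt.expect_or_drop`) and act on failures. The framework-free Python below mimics that idea with invented rules and rows, splitting input into passing and quarantined records.

```python
# Minimal sketch of expectation-style validation, loosely modelled on
# Lakeflow Declarative Pipelines expectations. Rules and rows are invented.

EXPECTATIONS = {
    "valid_id": lambda row: row.get("id") is not None,
    "positive_amount": lambda row: row.get("amount", 0) > 0,
}

def apply_expectations(rows, expectations):
    """Return (rows passing every rule, quarantined (row, failed_rules) pairs)."""
    passed, quarantined = [], []
    for row in rows:
        failures = [name for name, rule in expectations.items() if not rule(row)]
        if failures:
            quarantined.append((row, failures))
        else:
            passed.append(row)
    return passed, quarantined
```

In the real platform the equivalent rules would be declared on the pipeline so lineage and quality metrics surface in Unity Catalog, rather than hand-rolled like this.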

Curated Layers & Data Modelling

Develop clean, conformed Silver and Gold layers aligned to enterprise subject areas.

Contribute to dimensional modelling (star schemas), harmonisation logic, SCDs and business marts powering Power BI datasets.
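To make the SCD reference concrete, here is a minimal Python sketch of SCD Type 2 logic: when a tracked attribute changes, the current dimension row is closed out and a new current version is appended. Column names (`valid_from`, `valid_to`, `is_current`) and the sample data are assumptions; in Databricks this would typically be a Delta `MERGE`.

```python
from datetime import date

def scd2_upsert(dimension, incoming, key, today=None):
    """Apply SCD Type 2: close changed current rows, append new current versions.

    `dimension` rows carry valid_from / valid_to / is_current bookkeeping columns;
    `incoming` rows carry only the business key and tracked attributes.
    """
    today = today or date.today().isoformat()
    by_key = {r[key]: r for r in dimension if r["is_current"]}
    out = list(dimension)
    for row in incoming:
        current = by_key.get(row[key])
        changed = current is None or any(
            current[c] != row[c] for c in row if c != key
        )
        if not changed:
            continue  # no attribute drift: keep the existing current row
        if current is not None:
            current["valid_to"], current["is_current"] = today, False
        out.append({**row, "valid_from": today, "valid_to": None, "is_current": True})
    return out
```

SCD Type 1 is the degenerate case of the same flow: overwrite in place instead of closing and appending.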

Apply governance, lineage and permissioning through Unity Catalog.

Orchestration & Observability

Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.

Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.

Assist in performance tuning and cost optimisation.

DevOps & Platform Engineering

Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.

Support secure deployment patterns using private endpoints, managed identities and Key Vault.

Participate in code reviews and help improve engineering practices.

Collaboration & Delivery

Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.

Contribute to architectural discussions and the ongoing data platform roadmap.

Tech You'll Use

Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake

Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints

Languages: PySpark, Spark SQL, Python, Git

DevOps: Azure DevOps Repos & Pipelines, CI/CD

Analytics: Power BI, Fabric

What We're Looking For

Experience

Proven commercial experience as a Lead Data Engineer.

Hands-on experience delivering solutions on Azure + Databricks.

Strong PySpark and Spark SQL skills within distributed compute environments.

Experience working in a Lakehouse/Medallion architecture with Delta Lake.

Understanding of dimensional modelling (Kimball), including SCD Type 1/2.

Exposure to operational concepts such as monitoring, retries, idempotency and backfills.
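Two of the operational concepts above can be sketched in a few lines of Python. Both functions are illustrative assumptions, not the client's tooling: retries with exponential backoff handle transient failures, and an idempotency key makes re-running a job (or a backfill) safe.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call a flaky function, backing off exponentially; re-raise when exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

def idempotent_write(store, key, value):
    """Skip the write if this key was already processed, so reruns are safe."""
    if key in store:
        return False
    store[key] = value
    return True
```

In production the "store" would be a checkpoint or watermark table rather than a dict, but the contract is the same: rerunning a pipeline must not duplicate its effects.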

Mindset

Good energy and enthusiasm
Keen to grow within a modern Azure Data Platform environment.
Comfortable with Git, CI/CD and modern engineering workflows.
Able to communicate technical concepts clearly to non-technical stakeholders.
Quality-driven, collaborative and proactive.

Why Join?

Opportunity to shape and build a modern enterprise Lakehouse platform.

Hands-on work with Azure, Databricks and leading-edge engineering practices.

Real progression opportunities within a growing data function.

Direct impact across multiple business domains.
