Mid-Level Data Engineer (Azure / Databricks)

No visa sponsorship available: applicants must already hold the right to work in the UK.

Location: Glasgow (3+ days per week on-site)
Reports to: Head of IT
My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting to a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory.
They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

What You'll Do

Lakehouse Engineering (Azure + Databricks)

Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL (a brief sketch follows this list).

Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.

Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.

Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.
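
To give a flavour of this work, here is a minimal, illustrative sketch of a Bronze-to-Silver flow with expectations. It assumes the classic `dlt` Python API that Lakeflow Declarative Pipelines carries forward from Delta Live Tables; the table names, landing path and validation rules are made up for illustration, not taken from the client's platform.

```python
import dlt
from pyspark.sql import functions as F

# `spark` is provided by the pipeline runtime.

@dlt.table(comment="Raw invoices landed as-is (Bronze).")
def bronze_invoices():
    # Auto Loader incrementally picks up new files from an
    # illustrative landing path.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/landing/chargebee/invoices")
    )

@dlt.table(comment="Validated, typed invoices (Silver).")
@dlt.expect_or_drop("valid_invoice_id", "invoice_id IS NOT NULL")
@dlt.expect_or_drop("non_negative_amount", "amount >= 0")
def silver_invoices():
    # Rows failing the expectations above are dropped, and the
    # violation counts surface in the pipeline's event log.
    return (
        dlt.read_stream("bronze_invoices")
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("_ingested_at", F.current_timestamp())
    )
```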

Curated Layers & Data Modelling

Develop clean and conforming Silver & Gold layers aligned to enterprise subject areas.

Contribute to dimensional modelling (star schemas), harmonisation logic, slowly changing dimensions (SCDs) and business marts powering Power BI datasets (see the sketch after this list).

Apply governance, lineage and permissioning through Unity Catalog.
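
As a concrete, purely illustrative example of the SCD Type 2 pattern mentioned above, the sketch below closes off the current version of changed dimension rows, inserts fresh current rows, and then grants the curated table to a hypothetical BI group via Unity Catalog. All table, column and group names are assumptions for illustration.

```python
# Simplified SCD Type 2 upsert in Spark SQL, run from PySpark.
# Step 1: close off the current row for keys whose attributes changed.
spark.sql("""
    MERGE INTO gold.dim_customer AS tgt
    USING silver.customer_changes AS src
      ON  tgt.customer_id = src.customer_id
      AND tgt.is_current = TRUE
    WHEN MATCHED AND tgt.attr_hash <> src.attr_hash THEN
      UPDATE SET is_current = FALSE, valid_to = src.effective_from
""")

# Step 2: insert a current row for new keys and for keys just closed
# off above (neither now has an is_current = TRUE row).
spark.sql("""
    INSERT INTO gold.dim_customer
    SELECT s.customer_id, s.name, s.segment, s.attr_hash,
           s.effective_from AS valid_from,
           CAST(NULL AS DATE) AS valid_to,
           TRUE AS is_current
    FROM silver.customer_changes AS s
    LEFT JOIN gold.dim_customer AS d
      ON  d.customer_id = s.customer_id
      AND d.is_current = TRUE
    WHERE d.customer_id IS NULL
""")

# Unity Catalog permissioning is plain SQL (group name is illustrative):
spark.sql("GRANT SELECT ON TABLE gold.dim_customer TO `bi_analysts`")
```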

Orchestration & Observability

Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.

Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability (a small example follows this list).

Assist in performance tuning and cost optimisation.
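
Operational work of this kind often involves scripting against the platform APIs. Here is a minimal sketch using the Databricks Python SDK (the `databricks-sdk` package) with an illustrative job ID, triggering a job and surfacing its outcome for alerting:

```python
from databricks.sdk import WorkspaceClient

# Credentials are resolved from environment variables or a config profile.
w = WorkspaceClient()

# Trigger the (illustrative) job and block until it reaches a terminal state.
run = w.jobs.run_now(job_id=123).result()

# Feed the result into whatever alerting the team uses.
print(f"Run finished with state: {run.state.result_state}")
```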

DevOps & Platform Engineering

Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.

Support secure deployment patterns using private endpoints, managed identities and Key Vault.

Participate in code reviews and help improve engineering practices.

Collaboration & Delivery

Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.

Contribute to architectural discussions and the ongoing data platform roadmap.

Tech You'll Use

Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake

Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints

Languages: PySpark, Spark SQL, Python, Git

DevOps: Azure DevOps Repos & Pipelines, CI/CD

Analytics: Power BI, Fabric

What We're Looking For

Experience

Proven commercial data engineering experience.

Hands-on experience delivering solutions on Azure + Databricks.

Strong PySpark and Spark SQL skills within distributed compute environments.

Experience working in a Lakehouse/Medallion architecture with Delta Lake.

Understanding of dimensional modelling (Kimball), including SCD Type 1/2.

Exposure to operational concepts such as monitoring, retries, idempotency and backfills.

Mindset

Keen to grow within a modern Azure Data Platform environment.

Comfortable with Git, CI/CD and modern engineering workflows.

Able to communicate technical concepts clearly to non-technical stakeholders.

Quality-driven, collaborative and proactive.

Nice to Have

Databricks Certified Data Engineer Associate.

Experience with streaming ingestion (Auto Loader, event streams, watermarking).

Subscription/entitlement modelling (e.g., ChargeBee).

Unity Catalog advanced security (row-level security, PII governance).

Terraform or Bicep for IaC.

Fabric Semantic Models or Direct Lake optimisation experience.

Why Join?

Opportunity to shape and build a modern enterprise Lakehouse platform.

Hands-on work with Azure, Databricks and leading-edge engineering practices.

Real progression opportunities within a growing data function.

Direct impact across multiple business domains.
