Databricks Engineer

London
2 weeks ago

Responsibilities

Data Pipeline Development:

Design and implement end-to-end data pipelines in Azure Databricks, handling ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services.
Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline.
Automate pipeline orchestration using Databricks Workflows or integration with external tools (e.g., Apache Airflow, Azure Data Factory).
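For illustration, a minimal PySpark sketch of the kind of end-to-end pipeline described above; the storage account, container names, and column names are placeholders rather than details taken from this role:

```python
# Minimal sketch: ingest raw files, standardise them, publish curated Delta data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Ingest raw orders from a landing zone (hypothetical ADLS container and path).
raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@mystorageaccount.dfs.core.windows.net/orders/"))

# Transform: standardise types, drop incomplete rows, derive a date column.
clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .filter(F.col("order_id").isNotNull())
         .withColumn("order_date", F.to_date("order_ts")))

# Publish to the curated zone as Delta for downstream analytics.
(clean.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@mystorageaccount.dfs.core.windows.net/orders/"))
```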
Data Ingestion & Transformation:

Build scalable data ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems).
Implement data transformation logic using Spark, ensuring data is cleaned, transformed, and enriched according to business requirements.
Leverage Databricks features such as Delta Lake to manage and track changes to data, enabling better versioning and performance for incremental data loads.
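As a sketch of how Delta Lake supports incremental loads, the following upsert merges a batch of changes into a curated table; the paths and the `order_id` join key are assumptions for illustration:

```python
# Illustrative incremental upsert into a Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A batch of new or changed records from the landing zone (hypothetical path).
updates = spark.read.format("delta").load(
    "abfss://landing@mystorageaccount.dfs.core.windows.net/orders_increment/")

target = DeltaTable.forPath(
    spark, "abfss://curated@mystorageaccount.dfs.core.windows.net/orders/")

# Upsert: update existing order_ids, insert new ones. Delta records each commit
# as a new table version, so the load can be audited or rolled back via time travel.
(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```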
Data Publishing & Integration:

Publish clean, transformed data to Azure Data Lake or other cloud storage solutions for consumption by analytics and reporting tools.
Define and document best practices for managing and maintaining robust, scalable data pipelines.
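A possible publishing step, sketched below, registers the curated data as a partitioned Delta table so analytics and reporting tools can query it by name; the three-level `main.curated.orders` name is a placeholder:

```python
# Sketch: expose curated data as a governed, partitioned table for consumers.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

curated = spark.read.format("delta").load(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/orders/")

(curated.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")          # partition by date for efficient incremental reads
 .saveAsTable("main.curated.orders"))
```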
Data Governance & Security:

Implement and maintain data governance policies using Unity Catalog, ensuring proper organization, access control, and metadata management across data assets.
Ensure data security best practices, such as encryption at rest and in transit, and role-based access control (RBAC) within Azure Databricks and Azure services.
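One way to express Unity Catalog governance is through SQL statements issued from a notebook or pipeline; the catalog, schema, table, and group names below are illustrative only:

```python
# Sketch: organise assets into a catalog/schema and grant least-privilege access.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.curated")

# Grant a reporting group read-only access to the curated layer.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.curated TO `data-analysts`")
spark.sql("GRANT SELECT ON TABLE analytics.curated.orders TO `data-analysts`")

# Metadata management: document the asset alongside its permissions.
spark.sql("COMMENT ON TABLE analytics.curated.orders IS 'Curated orders, refreshed daily'")
```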
Performance Tuning & Optimization:

Optimize Spark jobs for performance by tuning configurations, partitioning data, and caching intermediate results to minimize processing time and resource consumption.
Continuously monitor and improve pipeline performance, addressing bottlenecks and optimizing for cost efficiency in Azure.
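A rough sketch of the kinds of tuning mentioned above, covering shuffle-partition sizing, caching a reused intermediate result, and broadcasting a small dimension table; the settings, paths, and column names are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Right-size shuffles for the data volume instead of relying on the default 200 partitions.
spark.conf.set("spark.sql.shuffle.partitions", "64")

orders = spark.read.format("delta").load(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/orders/")
customers = spark.read.format("delta").load(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/customers/")

# Cache an intermediate result that several downstream aggregations reuse.
recent = orders.filter(F.col("order_date") >= "2025-01-01").cache()

# Broadcast the small dimension table to avoid a full shuffle join.
enriched = recent.join(F.broadcast(customers), "customer_id")

daily = enriched.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
daily.write.format("delta").mode("overwrite").save(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/daily_revenue/")
```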
Automation & Monitoring:

Automate data pipeline deployment and management using tools like Terraform, ensuring consistency across environments.
Set up monitoring and alerting mechanisms for pipelines using Databricks built-in features and Azure Monitor to detect and resolve issues proactively.
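Deployment of jobs like these is often driven by Terraform; as a lighter-weight illustration of the same idea, the sketch below creates a scheduled job with failure alerting through the Databricks Jobs REST API. The workspace URL, token, cluster ID, notebook path, and alert address are placeholders:

```python
import os
import requests

# Workspace URL and token would normally be injected by CI/CD, not hard-coded.
host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-1234567890123456.7.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]

job_spec = {
    "name": "orders-daily-load",
    "tasks": [{
        "task_key": "transform_orders",
        "notebook_task": {"notebook_path": "/Pipelines/orders_transform"},
        "existing_cluster_id": "0101-123456-abcdefgh",   # placeholder cluster ID
    }],
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "Europe/London"},
    # Failure alerts so issues surface before downstream consumers notice them.
    "email_notifications": {"on_failure": ["data-platform@example.com"]},
}

resp = requests.post(f"{host}/api/2.1/jobs/create",
                     headers={"Authorization": f"Bearer {token}"},
                     json=job_spec)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])
```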
Requirements

Data Pipeline Expertise: Extensive experience in designing and implementing scalable ETL/ELT data pipelines in Azure Databricks, transforming raw data into usable datasets for analysis.
Azure Databricks Proficiency: Strong knowledge of Spark (SQL, PySpark) for data transformation and processing within Databricks, along with experience building workflows and automation using Databricks Workflows.
Azure Data Services: Hands-on experience with Azure services like Azure Data Lake, Azure Blob Storage, and Azure Synapse for data storage, processing, and publication.

Data Governance & Security: Familiarity with managing data governance and security using Databricks Unity Catalog, ensuring data is appropriately organized, secured, and accessible to authorized users.
Optimization & Performance Tuning: Proven experience in optimizing data pipelines for performance, cost-efficiency, and scalability, including partitioning, caching, and tuning Spark jobs.
Cloud Architecture & Automation: Strong understanding of Azure cloud architecture, including best practices for infrastructure-as-code, automation, and monitoring in data environments.

