Data Engineer - Python & Azure

Birmingham
8 months ago
Applications closed


Contract type: Permanent

Hours: 37.5

Salary: Circa £61,000, depending on experience

Location: Birmingham or Leeds

WFH policy: Employees are required to attend the office 2 days/week

Travel: You will be required to travel to the London office once a month (this is expensed by the business)

Flexible working: Variety of flexible work patterns subject to line manager discretion e.g. Compressed 9-day fortnight.

Reports to: Data Engineer Manager

Deadline Note: We reserve the right to close the advert before the advertised deadline if there is a high volume of applications.

Role Summary:

As the Data Engineer, you will own LCCC’s data ecosystems and will work with developers, solution architects, technical BAs, analysts, data scientists, and other SMEs to define the optimum data platform for the business. You will build the ETL/ELT pipelines required for ingesting, processing and storing data, and ensure data integrity, accuracy, and reliability throughout the data lifecycle.

You will have a specific focus on developing the data inputs that are utilised by the organisation’s internal forecasting model of the GB power markets and operations; designing a framework that scales effectively along with the source data.

Key Responsibilities:

  • Build and maintain ETL/ELT pipelines to make data accurate and easy to use

  • Work to ingest and transform data sets from a variety of data sources including APIs, web scraping, backup databases and third-party services

  • Analyse data to identify patterns, anomalies, and structure in preparation for ETL processes

  • Ensure efficient data synchronization and flow between various platforms and systems

  • Explore ways to enhance data quality and reliability

  • Assist with the establishment of a data culture across the organisation

  • Drive better data governance through the creation and embedding of principles and processes e.g. Flow Diagram, Data Dictionary, Data Semantic Layers

  • Set service level indicators, monitor the execution of data workflows, and configure alerts

  • Identify data quality issues through data profiling, analysis and stakeholder engagement
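
To give a flavour of the day-to-day work described above, here is a purely illustrative extract–transform–load sketch in Python. The field names, source data and schema are invented for the example, and the production stack would use Azure services (Data Factory, Databricks) rather than sqlite3; this only shows the shape of an ingest-validate-load step.

```python
# Illustrative ETL sketch only: hypothetical GB power-price records flow
# through extract -> transform -> load. Invented schema; not LCCC's code.
import sqlite3

def extract():
    # Stand-in for an API, web-scraped page, or third-party feed.
    return [
        {"settlement_date": "2024-01-01", "price_gbp_mwh": "72.4"},
        {"settlement_date": "2024-01-02", "price_gbp_mwh": "68.1"},
    ]

def transform(rows):
    # Cast types and drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append((row["settlement_date"], float(row["price_gbp_mwh"])))
        except (KeyError, ValueError):
            continue  # a real pipeline would log and quarantine bad rows
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS prices (settlement_date TEXT, price REAL)"
    )
    conn.executemany("INSERT INTO prices VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
print(count)  # 2
```

Separating the three stages into functions, as here, is what makes a pipeline testable and easy to monitor: each stage can be profiled and alerted on independently.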

    Skills, Knowledge and Expertise

    Essential:

  • At least 1 year of data engineering experience - building data platforms and supporting modelling and data analytics

  • Hands-on experience within the Azure Data ecosystem, including Azure Databricks, Data Factory, Data Lake, and Synapse. Certifications are a plus.

  • Strong competence in Python, ideally with PySpark experience

  • Strong competence in SQL

  • Experience building and maintaining data pipelines

  • Experience managing DevOps Pipelines

  • Strong experience in process optimisation, performance tuning, data modelling and SQL/database design skills.

    Desirable:

  • Experience in data architecture
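
The responsibilities above also mention identifying data quality issues through profiling. A minimal sketch of that idea, with an invented field name and thresholds chosen purely for illustration:

```python
# Hedged illustration of data profiling: count nulls and out-of-range
# values for one field. Field "mw" and the 0-2000 bounds are invented.
def profile(rows, field, lo, hi):
    nulls = sum(1 for r in rows if r.get(field) is None)
    out_of_range = sum(
        1 for r in rows
        if r.get(field) is not None and not (lo <= r[field] <= hi)
    )
    return {"rows": len(rows), "nulls": nulls, "out_of_range": out_of_range}

sample = [{"mw": 410}, {"mw": None}, {"mw": -5}, {"mw": 388}]
report = profile(sample, "mw", 0, 2000)
print(report)  # {'rows': 4, 'nulls': 1, 'out_of_range': 1}
```

Counts like these are exactly the kind of metric a service level indicator and alerting rule would be built on.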

    Employee Benefits:

    As if contributing to and supporting work that makes life better for millions wasn’t rewarding enough, we offer a full range of benefits too. Key benefits that may be available depending on the role include:

  • Annual performance-based bonus of up to 10%

  • 25 days annual leave, plus eight bank holidays

  • Up to 8% pension contribution

  • Financial support and time off for study relevant to your role, plus a professional membership subscription

  • Employee referral scheme (up to £1500), and colleague recognition scheme

  • Family friendly policies, including enhanced maternity leave and shared parental leave

  • Free, confidential employee assistance, including financial management, family care, mental health, and on-call GP service

  • Three paid volunteering days a year

  • Season ticket loan and cycle to work schemes

  • Family savings on days out and English Heritage, gym discounts, cash back and discounts at selected retailers

  • Employee resource groups

    About Low Carbon Contracts Company

    The Low Carbon Contracts Company (LCCC) exists to help decarbonise the generation of electricity and make it more affordable for the future. Our work is central to delivering the Government’s objective of reaching its Net Zero target by 2050.

    Please take the time to answer the optional diversity questions

    At LCCC, we are dedicated to fostering a diverse and inclusive workplace where everyone can be their authentic selves and contribute to our mission of advancing a flexible energy future. We aim to reflect the communities where we operate and to benefit from a rich tapestry of backgrounds and experiences, which makes us stronger together. Your diversity data is valuable to us: it helps us understand whether we are effectively connecting with underrepresented groups and realising our diversity aims. Please note that your diversity data remains anonymised and feeds only into high-level reports that are not connected to individual candidates.
