Data Engineer (Azure Data Platform)

Risley, Warrington
2 days ago

About Synextra
Synextra is a Microsoft-specialist Managed Service Provider headquartered in Warrington, operating as a premium partner to regulated mid-market organisations including law firms, financial services firms, and mortgage lenders. We're deliberately small - around 35 people - because we believe the best outcomes come from technical depth, not headcount. Our AI Services Division is growing fast, and we're building out a serious data and engineering capability to match. This is a chance to get in early and shape how that function operates.

The Role
We're looking for a technically driven Azure Data Engineer to join our data platform team. You'll design, build, and maintain production-grade data pipelines on Microsoft Azure - transforming complex, diverse datasets into analytics-ready formats that power business intelligence and AI initiatives for our clients and internally.

The ideal candidate treats pipelines and infrastructure as code, with a genuine passion for software engineering in a data context. You'll work across the modern Azure data stack - ADF, ADLS Gen2, PySpark, Delta Lake - with increasing exposure to Microsoft Fabric as the platform matures. You'll collaborate closely with customers and internal teams to ensure data is structured and governed for reliable downstream consumption.

This is a hands-on engineering role with room to grow into leadership: you'll champion DevOps best practices, contribute to architectural decisions, and help mentor junior engineers as the team scales.

Responsibilities

  • Architect and write production-grade ELT/ETL data pipelines using PySpark and Python within the Azure ecosystem

  • Build custom, reusable data processing frameworks and libraries in Python/Scala to streamline ingestion and transformation tasks across the engineering team

  • Programmatically ingest large volumes of structured and unstructured data from REST APIs, streaming platforms (e.g. Event Hubs, Kafka), and legacy databases into ADLS Gen2 and OneLake

  • Develop structured data models aligned to Lakehouse, Medallion Architecture, and Delta Lake patterns

  • Continuously profile, debug, and optimise Spark jobs, SQL queries, and Python scripts for maximum performance and cost-efficiency at scale

  • Champion DevOps best practices: implement infrastructure-as-code (Terraform), automated testing, and CI/CD deployment pipelines via Git and Azure DevOps

  • Identify patterns in recurring issues and engineer permanent solutions

  • Write comprehensive unit and integration tests for all data pipelines to ensure data integrity; enforce data governance protocols, RBAC, and encryption standards across all environments

Requirements

Essential Technical Skills

  • Advanced proficiency in Python and PySpark, writing clean, modular, object-oriented code for data transformations

  • Strong command of SQL (T-SQL, Spark SQL) for data exploration, validation, and final-stage modelling

  • Deep hands-on experience with the Azure data stack, including Microsoft Fabric, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS Gen2)

  • Practical experience with Git, branching strategies, automated testing (e.g. pytest), and CI/CD orchestration via Azure DevOps

  • Proven commercial track record of deploying complex data solutions on the Microsoft Azure platform

  • Experience collaborating with a range of stakeholders to structure data for downstream consumption (e.g. MLflow, Power BI semantic models)

  • Infrastructure-as-code experience with Terraform for Azure resource provisioning

Desirable Technical Skills

  • Familiarity with streaming data architectures (Spark Structured Streaming)

  • Knowledge of complementary modern data stack tools such as dbt for SQL-based transformations

  • Experience integrating Large Language Models (LLMs) or operationalising AI/ML models

Personal Qualities

  • Exceptional problem-solving abilities and a persistent, detail-oriented approach to debugging complex code

  • Strong communication skills to effectively translate business requirements into technical architectures

  • A proactive mindset focused on continuous learning and staying ahead of the rapidly evolving data landscape

  • Willingness to review code submissions, enforce coding standards, and mentor junior engineers on the team

Preferred Background

  • 3–5+ years in software engineering, data engineering, or Big Data environments with a code-first approach

  • Proven commercial experience deploying and maintaining complex data solutions on Microsoft Azure

  • Experience working in cross-functional teams
