Data Engineer

Hispanic Alliance for Career Enhancement
Belfast
1 day ago
Description

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation and navigate constant change. Through a combination of strategy, expertise and creativity, we help clients accelerate operational, digital and cultural transformation, enabling the change they need to own their future.


Join our team as the expert you are now and create your future.


Data Engineer


We're seeking a Data Engineer to join the Data Science & Machine Learning team in our Commercial Digital practice, where you'll design, build, and optimize the data infrastructure that powers intelligent systems across Financial Services, Manufacturing, Energy & Utilities, and other commercial industries.


This isn't a maintenance role or a ticket queue—you'll own the full data lifecycle from source integration through analytics-ready delivery. You'll build pipelines that matter: real-time data architectures that feed mission-critical ML models, transformation layers that turn messy enterprise data into trusted datasets, and orchestration systems that ensure reliability at scale. Our clients are Fortune 500 companies looking for partners who can engineer solutions, not just write SQL.


The variety is real. In your first year, you might architect a lakehouse solution for a global manufacturer's IoT data, build a real-time streaming pipeline for a financial services firm's trading analytics, and design a data mesh implementation for a utility company's distribution systems. If you thrive on solving complex data challenges and shipping production systems that ML teams and analysts depend on, this role is for you.


What You’ll Do

  • Design and build end-to-end data pipelines (batch and streaming) from source extraction and ingestion through transformation, quality validation, and delivery. You own the data infrastructure, not just a piece of it.
  • Develop modern data transformation layers using dbt, implementing modular SQL models, testing frameworks, documentation, and CI/CD practices that ensure data quality and maintainability.
  • Build and orchestrate workflows using Microsoft Fabric, Apache Airflow, Dagster, Databricks Workflows, or similar tools to automate complex data processing at scale.
  • Architect lakehouse solutions using open table formats (Delta Lake, Apache Iceberg) on Microsoft Fabric, Snowflake, and Databricks—designing schemas, optimizing performance, and implementing governance frameworks.
  • Ensure data quality and observability—implementing testing frameworks (dbt tests, Great Expectations), monitoring, alerting, and lineage tracking that maintain trust in data assets.
  • Collaborate directly with clients to understand business requirements, translate data needs into technical solutions, and communicate architecture decisions to both technical and executive audiences.
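To make the data-quality bullet concrete, here is a minimal, dependency-free sketch of the kind of declarative checks that frameworks like dbt tests or Great Expectations formalize. The function and column names are hypothetical and for illustration only; the real tools add configuration, reporting, and integration with your warehouse.

```python
# Illustrative sketch of declarative data-quality checks (not a real
# framework API). Tools like dbt tests or Great Expectations provide
# the production version of this idea.

def check_not_null(rows, column):
    """Fail if any row has a None value in `column`."""
    failures = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {"check": f"not_null({column})", "passed": not failures, "failing_rows": failures}

def check_unique(rows, column):
    """Fail if `column` contains duplicate values."""
    seen, dupes = set(), []
    for i, row in enumerate(rows):
        value = row.get(column)
        if value in seen:
            dupes.append(i)
        seen.add(value)
    return {"check": f"unique({column})", "passed": not dupes, "failing_rows": dupes}

def validate(rows, checks):
    """Run every check against the dataset, like a test suite over a model."""
    return [check(rows, column) for check, column in checks]

# Hypothetical orders data with one bad row (duplicate id, null customer).
orders = [
    {"order_id": 1, "customer_id": "a"},
    {"order_id": 2, "customer_id": "b"},
    {"order_id": 2, "customer_id": None},
]
results = validate(orders, [(check_not_null, "customer_id"), (check_unique, "order_id")])
failed = [r["check"] for r in results if not r["passed"]]
print(failed)  # → ['not_null(customer_id)', 'unique(order_id)']
```

In dbt, the same two checks would be a `not_null` and a `unique` generic test declared in a model's YAML, which is why the role emphasizes testing frameworks alongside transformation code.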

Required Qualifications

  • 2+ years (3+ years for Senior Associate) of hands‑on experience building and deploying data pipelines in production—not just ad-hoc queries and exports. You've built ETL/ELT systems that run reliably and scale.
  • Strong SQL and Python programming skills with experience in PySpark for distributed data processing. SQL for analytics and data modeling; Python/PySpark for pipeline development and large-scale transformations.
  • Experience building data pipelines that serve AI/ML systems, including feature engineering workflows, vector embeddings for retrieval‑augmented generation (RAG), and data quality frameworks that ensure model reproducibility.
  • Experience with modern data transformation tools, particularly dbt (data build tool). You understand modular SQL development, testing, and documentation practices.
  • Experience with cloud data platforms and lakehouse architectures—Snowflake, Databricks, and familiarity with open table formats (Delta Lake, Apache Iceberg). Platform‑flexible but Microsoft‑preferred.
  • Familiarity with workflow orchestration tools such as Apache Airflow, Dagster, Prefect, or Microsoft Data Factory. You understand DAGs, scheduling, and dependency management.
  • Solid understanding of data modeling concepts: dimensional modeling, data vault, normalization/denormalization, and when each approach is appropriate.
  • Ability to communicate technical concepts to non‑technical stakeholders and work effectively with cross‑functional teams including data scientists, analysts, and business users.
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field (or equivalent practical experience).
  • Flexibility to work in a hybrid model with periodic travel to client sites as needed.
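The DAG and dependency-management concepts called out above can be sketched with nothing but the Python standard library. This is not how Airflow or Dagster are implemented — they add scheduling, retries, and state tracking — but the topological ordering of tasks is the core idea, and the pipeline/task names below are hypothetical.

```python
# Minimal sketch of DAG-based dependency management, the concept behind
# orchestrators like Airflow and Dagster. Standard library only.

from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
pipeline = {
    "extract_orders": set(),
    "extract_customers": set(),
    "stage_orders": {"extract_orders"},
    "stage_customers": {"extract_customers"},
    "build_fact_orders": {"stage_orders", "stage_customers"},
    "run_quality_checks": {"build_fact_orders"},
}

# static_order() yields tasks in an order where every dependency
# runs before its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)

# Upstream tasks always precede their downstream consumers.
assert order.index("extract_orders") < order.index("stage_orders")
assert order.index("build_fact_orders") < order.index("run_quality_checks")
```

A cycle in the dependency dict would raise `graphlib.CycleError` — the same invariant orchestrators enforce when you wire up a DAG.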

Preferred Qualifications

  • Experience in Financial Services, Manufacturing, or Energy & Utilities industries.
  • Background in building data infrastructure for ML/AI systems: feature stores (Feast, Databricks Feature Store), training data pipelines, vector databases for RAG/LLM workloads, or model serving architectures.
  • Experience with real‑time and streaming data architectures using Kafka, Spark Streaming, Flink, or Azure Event Hubs, including CDC patterns for data synchronization.
  • Familiarity with MCP (Model Context Protocol) or similar standards for AI system data integration.
  • Experience with data quality and observability frameworks such as Great Expectations, Soda, Monte Carlo, or dbt tests.
  • Experience with high‑performance Python data tools such as Polars or DuckDB for efficient data processing.
  • Knowledge of data governance, cataloging, and lineage tools (Unity Catalog, Purview, Alation, or similar).
  • Familiarity with DataOps and CI/CD practices for data pipelines—version control, automated testing, and deployment automation.
  • Cloud certifications (Snowflake SnowPro, Databricks Data Engineer, Azure Data Engineer, or AWS Data Analytics).
  • Consulting experience or demonstrated ability to work across multiple domains and adapt quickly to new problem spaces.
  • Contributions to open‑source data engineering projects or active participation in the dbt/data community.
  • Master's degree in a technical field.
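The CDC (change data capture) pattern mentioned above can be illustrated at its simplest: detecting inserts, updates, and deletes between two snapshots by comparing row hashes. Production CDC usually reads database transaction logs (e.g. via Debezium or native connectors) rather than diffing snapshots; the keys and rows here are hypothetical.

```python
# Illustrative sketch of snapshot-diff change detection, the simplest
# form of the CDC idea. Real CDC systems stream changes from
# transaction logs instead of comparing full snapshots.

import hashlib
import json

def row_hash(row):
    """Stable content hash of a row (key excluded)."""
    return hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()

def diff_snapshots(old, new):
    """Return (inserts, updates, deletes) needed to sync `old` to `new`."""
    old_hashes = {k: row_hash(v) for k, v in old.items()}
    new_hashes = {k: row_hash(v) for k, v in new.items()}
    inserts = sorted(new.keys() - old.keys())
    deletes = sorted(old.keys() - new.keys())
    updates = sorted(k for k in old.keys() & new.keys() if old_hashes[k] != new_hashes[k])
    return inserts, updates, deletes

old = {1: {"status": "open"}, 2: {"status": "open"}, 3: {"status": "closed"}}
new = {2: {"status": "shipped"}, 3: {"status": "closed"}, 4: {"status": "open"}}
print(diff_snapshots(old, new))  # → ([4], [2], [1])
```

The same insert/update/delete classification is what a `MERGE` statement applies against a Delta Lake or Iceberg table when landing CDC feeds.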

Why Huron

Variety that accelerates your growth. In consulting, you'll work across industries and data architectures that would take a decade to encounter at a single company. Our Commercial segment spans Financial Services, Manufacturing, Energy & Utilities, and more—each engagement is a new data ecosystem to master and a new platform to ship.


Impact you can measure. Our clients are Fortune 500 companies making significant investments in data infrastructure. The pipelines you build will power real decisions—the ML models that drive production schedules, the dashboards that inform pricing strategies, the data products that enable self‑service analytics. You'll see your work become the foundation others build on.


A team that builds. Huron's Data Science & Machine Learning team is a close‑knit group of practitioners, not just advisors. We write code, build pipelines, and deploy platforms. You'll work alongside engineers and data scientists who understand the craft and push each other to improve.


Investment in your development. We provide resources for continuous learning, conference attendance, and certification. As our DSML practice grows, there's significant opportunity to take on technical leadership and shape our data engineering capabilities.


Position Level

Associate


Country

United Kingdom




