Data Engineer

Intellect Group
London
2 weeks ago

🚀 Are you a Data Engineer (5–7+ years) who enjoys owning production-grade pipelines end-to-end, optimising performance, and working with modern Python tooling on time-series datasets?


I’m supporting a London-based fintech in their search for a hands-on Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial markets participants.


You’ll sit in a small engineering/analytics team and take ownership of pipelines end-to-end — from onboarding new datasets through to reliability, monitoring and data quality in production. Finance experience is a bonus, but not essential.


Note: they use cloud infrastructure but deploy services on their own servers, so a strong production/ops mindset is important.


In this role, you’ll:

  • Build, streamline and improve ETL/data pipelines (prototype → production)
  • Ingest and normalise high-velocity, time-series datasets from multiple external sources
  • Work heavily in Python with a modern columnar stack (Polars + Parquet/Arrow/PyArrow; DuckDB is a nice-to-have)
  • Orchestrate workflows and improve reliability (they use Temporal — similar orchestration experience is fine)
  • Own production readiness: validations, automated checks, backfills/reruns, monitoring/alerting, incident/RCA mindset
  • Work independently and help drive delivery forward — including providing practical technical guidance to shape solutions


What’s in it for you?

  • Modern Python stack – Polars + Parquet/Arrow (DuckDB a plus)
  • Ownership & impact – high visibility; you’ll influence performance and reliability directly
  • Market/time-series exposure – complex financial datasets; learn the domain as you go
  • Hybrid working – 2–3 days in the London office (London-based candidates preferred)
  • Start ASAP – interviewing now


What my client is looking for:

  • 5–7+ years hands-on data engineering experience
  • Strong Python + SQL fundamentals (ETL, pipelines, data modelling, performance)
  • Hands-on experience with Polars and Parquet/Arrow/PyArrow
  • Proven ability to operate pipelines in production (monitoring, backfills, data quality, incidents)
  • Able to work independently and drive things forward without heavy oversight
  • Interest in financial data (experience helpful but not required)


Nice to have:

  • DuckDB experience
  • Time-series experience (market data, telemetry, pricing, events)
  • Streaming exposure (Kafka/Event Hubs/Kinesis)
  • Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
  • Any exposure to AI agents / automation tooling


👉 Apply now!
