Data Engineer

Intellect Group
City of London
1 week ago

🚀 Are you a Data Engineer who enjoys building production-grade pipelines, optimising performance, and working with modern Python tooling (DuckDB/Polars) on time-series datasets?


I'm supporting a UK-based fintech in their search for a hands-on Python Data Engineer to help build and improve the data infrastructure powering a unified data + analytics API for financial markets participants.


You'll sit in an engineering/analytics team and take ownership of pipelines end-to-end, from onboarding new datasets through to reliability, monitoring and data quality in production.

In this role, you'll:


  • 🔧 Build, streamline and improve ETL/data pipelines (prototype → production)
  • 📈 Ingest and normalise high-velocity time-series datasets from multiple external sources
  • ⚙️ Work heavily in Python with a modern stack including DuckDB and Polars (plus Parquet/PyArrow)
  • 🧩 Orchestrate workflows and improve reliability (they use Temporal; experience with a similar orchestrator is fine)
  • ✅ Improve data integrity and visibility: validations, automated checks, backfills, monitoring/alerting
  • 📊 Support downstream analytics and client-facing outputs (dashboards/PDF/Plotly; the least critical part of the role)
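To give a flavour of the ingestion, normalisation and automated-checks work described above, here is a minimal pure-Python sketch. The vendor field names and the `Tick` schema are invented for illustration only and are not taken from the client's codebase:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Tick:
    symbol: str
    ts: datetime
    price: float

def normalise(raw: dict, source: str) -> Tick:
    # Each external vendor names its fields differently; map them onto one schema.
    if source == "vendor_a":
        ts = datetime.fromtimestamp(raw["epoch_ms"] / 1000, tz=timezone.utc)
        return Tick(raw["sym"], ts, float(raw["px"]))
    if source == "vendor_b":
        return Tick(raw["ticker"], datetime.fromisoformat(raw["time"]), float(raw["last"]))
    raise ValueError(f"unknown source: {source}")

def validate(ticks: list[Tick]) -> list[str]:
    # Automated checks of the kind the role mentions: ordered timestamps, sane prices.
    issues = []
    for prev, cur in zip(ticks, ticks[1:]):
        if cur.ts < prev.ts:
            issues.append(f"out-of-order tick for {cur.symbol} at {cur.ts}")
    issues += [f"non-positive price for {t.symbol}" for t in ticks if t.price <= 0]
    return issues
```

In practice the same ideas would be expressed over Polars frames and DuckDB tables rather than Python objects, but the shape of the work is the same: map each source onto one schema, then run cheap, automated checks before anything reaches production.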


What's in it for you?

  • 📌 Modern data stack – DuckDB/Polars + Parquet/Arrow in a genuinely hands-on environment
  • 📈 Ownership & impact – You'll be close to the data flows and have real influence on performance and reliability
  • 🏦 Market data exposure – Work with complex financial datasets (experience is helpful; genuine interest is enough)
  • 🏢 Hybrid working – London preferred, with 2–3 days in the office
  • ⚡ Start ASAP – Interviewing now


What my client is looking for:

  • Strong Python + SQL fundamentals (data engineering / ETL / pipeline ownership)
  • Hands-on experience with DuckDB and/or Polars (DuckDB especially valuable)
  • Experience operating pipelines in production (monitoring, backfills, incident/RCA mindset, data quality)
  • Cloud experience with demonstrable production use (Azure preferred)
  • Clear communicator, comfortable working across engineering/analytics stakeholders
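As a small illustration of the "backfills" part of the production-operations bullet above (the `missing_partitions` helper and the daily-partition layout are assumptions made for this sketch, not details from the client):

```python
from datetime import date, timedelta

def missing_partitions(loaded: set[date], start: date, end: date) -> list[date]:
    """Return the dates in [start, end] with no loaded partition,
    i.e. the candidates for a backfill run."""
    gaps, d = [], start
    while d <= end:
        if d not in loaded:
            gaps.append(d)
        d += timedelta(days=1)
    return gaps
```

A scheduled job could feed the result back into the normal ingestion path, so backfills reuse the same idempotent load logic instead of a one-off script.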


Nice to have:

  • Time-series data experience (market data, telemetry, pricing, events)
  • Streaming exposure (Kafka/Event Hubs/Kinesis)
  • Experience with Temporal (or similar orchestrators like Airflow/Dagster/Prefect)
  • Any exposure to AI agents / automation tooling


👉 Apply now!
