Data Engineer

Winton
City of London
5 days ago

Winton is a research-based investment management company with a specialist focus on statistical and mathematical inference in financial markets. The firm researches and trades quantitative investment strategies, which are implemented systematically via thousands of securities, spanning the world’s major liquid asset classes. Founded in 1997 by David Harding, Winton today manages assets for some of the world’s largest institutional investors.

We employ ambitious professionals who want to work collaboratively at the leading edge of investment management.

Winton leverages quantitative analysis and cutting-edge technology to identify and capitalize on opportunities across global financial markets. We foster a collaborative and intellectually stimulating environment, bringing together individuals with Mathematics, Physics and Computer Science backgrounds who are passionate about applying rigorous scientific methods to financial challenges. As a fundamentally data-driven business, our success is heavily linked to the acquisition, processing, and analysis of vast datasets. High-quality, well-managed data forms the critical foundation for our quantitative research, strategy development, and automated trading systems.

As a Data Engineer within our Quantitative Platform team, you will play a pivotal role in building and maintaining the data infrastructure that fuels our research and trading strategies. You will be responsible for the end-to-end lifecycle of diverse datasets – including market, fundamental, and alternative sources – ensuring their timely acquisition, rigorous cleaning and validation, efficient storage, and reliable delivery through robust data pipelines. Working closely with quantitative researchers and technologists, you will tackle complex challenges in data quality, normalization, and accessibility, ultimately providing the high-fidelity, readily available data essential for developing and executing sophisticated investment models in a fast-paced environment.

Your responsibilities will include:

  • Evaluating, onboarding, and integrating complex data products from diverse vendors, serving as a key technical liaison to ensure data feeds meet our stringent requirements for research and live trading.
  • Designing, implementing, and optimizing robust, production-grade data pipelines to transform raw vendor data into analysis-ready datasets, adhering to software engineering best practices and ensuring seamless consumption by our automated trading systems.
  • Engineering and maintaining sophisticated automated validation frameworks to guarantee the accuracy, timeliness, and integrity of all datasets, directly upholding the quality standards essential for the efficacy of our quantitative strategies.
  • Providing expert operational support for our data pipelines, rapidly diagnosing and resolving critical issues to ensure the uninterrupted flow of high-availability data powering our daily trading activities.
  • Participating actively in team rotations, including on-call schedules, to provide essential coverage and maintain the resilience of our data systems outside standard business hours.
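By way of illustration only, the transform-and-validate work described above might be sketched as follows. Every field name, rule, and threshold here is an invented assumption for the sketch, not a description of Winton's actual systems or vendors:

```python
from datetime import datetime, timezone

def transform_row(raw: dict) -> dict:
    """Normalise one hypothetical raw vendor row into an analysis-ready record."""
    return {
        "symbol": raw["symbol"].strip().upper(),
        "price": float(raw["price"]),
        "ts": datetime.fromisoformat(raw["timestamp"]).astimezone(timezone.utc),
    }

def validate_row(row: dict) -> list[str]:
    """Return a list of validation failures; an empty list means the row is clean."""
    errors = []
    if not row["symbol"]:
        errors.append("missing symbol")
    if row["price"] <= 0:
        errors.append(f"non-positive price: {row['price']}")
    if row["ts"] > datetime.now(timezone.utc):
        errors.append("timestamp in the future")
    return errors

def run_pipeline(raw_rows: list[dict]):
    """Split vendor rows into clean records and quarantined (row, errors) pairs."""
    clean, quarantined = [], []
    for raw in raw_rows:
        row = transform_row(raw)
        problems = validate_row(row)
        if problems:
            quarantined.append((row, problems))
        else:
            clean.append(row)
    return clean, quarantined
```

In production, a scheduler such as Airflow would typically orchestrate these steps, with quarantined rows routed to monitoring and alerting rather than silently dropped.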

What we are looking for:

  • 1+ years’ experience building ETL/ELT pipelines using Python.
  • Familiarity with technologies such as S3, Kafka, Airflow, and Iceberg.
  • A commitment to engineering excellence and pragmatic technology solutions.
  • A desire to work in an operational role at the heart of a dynamic data-centric enterprise.
  • Excellent communication skills and the ability to collaborate effectively within a team.

What would be advantageous:

  • Strong understanding of financial markets.
  • Proficiency working with large financial datasets from various vendors.
  • Experience working with hierarchical reference data models.
  • Proven expertise in handling high-throughput, real-time market data streams.
  • Familiarity with distributed computing frameworks such as Apache Spark.
  • Operational experience supporting real-time systems.

Equal Opportunity Workplace

We are proud to be an equal opportunity workplace. We do not discriminate based upon race, religion, color, national origin, sex, sexual orientation, gender identity/expression, age, status as a protected veteran, status as an individual with a disability, or any other applicable legally protected characteristics.


