
Data Engineer

Vintage Cash Cow
Morley
5 days ago
Applications closed


Department: Technology & Data


Employment Type: Full Time


Location: Trimble Offices, Morley


Description

About the team:


At Vintage Cash Cow and Vintage.com, technology is how we scale impact. Every customer journey, from sending in pre‑loved items to getting paid, is powered by the systems we design, the products we build, and the data we unlock.


Our Technology & Data team is at the heart of this transformation, driving greenfield product development, experimenting with fresh ideas, and bringing innovative solutions to life. We’re building modern, scalable, and customer‑focused platforms that set the standard for the re‑commerce industry.


This is a team where curiosity meets craft: blending creativity, technical excellence, and a product mindset to deliver experiences that feel simple, rewarding, and future‑proof.


About the role:


We’re looking for a hands‑on, detail‑loving Data Engineer to help us level up our data foundations and set us up for bigger, bolder analytics. This is our second data hire, which means you won’t just be maintaining something that already exists: you’ll be helping define how it should work end‑to‑end.


You’ll own the pipes and plumbing: designing and building robust data pipelines, shaping our warehouse models, and keeping data clean, reliable, and ready for decision‑making. A big part of your focus will be digital marketing and CRM data (HubSpot especially), so our Growth, Finance, and Product teams can move fast with confidence.


If you’re excited by building a modern stack, balancing build vs buy, and creating the kind of data platform that people actually love using, you’ll fit right in.


This role can be based in either the UK or the Netherlands.


Getting Started…

  • Get familiar with our current data setup and BI stack (Snowflake, dbt, FiveTran, Sigma, SQL, and friends).
  • Meet your key partners across Growth/Marketing, Finance, Operations, and Product to understand the metrics that matter most.
  • Explore our core data sources (Adalyser, Meta Ads, Google Ads, HubSpot, Aircall, and internal platforms) and how they flow today.
  • Spot early wins: where pipelines can be simplified, data quality boosted, or reporting made faster and smarter.

Establishing Your Impact…

  • Build and optimise reliable, scalable ELT/ETL pipelines into Snowflake.
  • Create clean, reusable models that make downstream analytics simple and consistent.
  • Put proactive monitoring and validation in place so teams can trust what they see.
  • Reduce manual work across reporting and data movement through automation.

Driving Excellence…

  • Help define and evolve our data architecture as we scale into new markets.
  • Champion best practice: documentation, governance, naming conventions, testing, and performance.
  • Partner closely with stakeholders so data engineering solves real commercial problems (not just technical ones).
  • Keep one eye on what’s next: smarter tooling, AI‑assisted analytics, and ways to make our stack even more self‑serve.

Key Responsibilities

Key Goals & Objectives:



  • Build and maintain a modern, scalable data platform that supports growth and decision‑making.
  • Ensure data is accurate, consistent, and trusted across the business.
  • Improve speed, reliability, and automation of data pipelines and reporting workflows.
  • Enable high‑quality self‑serve analytics by delivering well‑modelled, well‑documented data sets.
  • Support digital performance and CRM insight through strong marketing data foundations.

Key Responsibilities:


Data architecture & pipeline development



  • Design, implement, and maintain robust data pipelines across multiple systems.
  • Ensure smooth, well‑governed flow of data from source → warehouse → BI layers (see the sketch after this list).
  • Support end‑to‑end warehouse design and modelling as our stack grows.
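
To make that layering concrete, here is a minimal, illustrative dbt‑style staging model of the kind this role would own. It is a sketch only: it assumes a declared "hubspot" source with a "deal" table, and the column names are hypothetical rather than our actual schema.

```sql
-- models/staging/stg_hubspot__deals.sql
-- Illustrative staging model: renames and types raw connector output so
-- downstream marts and BI tools see one consistent shape.
-- Source, table, and column names are hypothetical.

with source as (

    select * from {{ source('hubspot', 'deal') }}

),

renamed as (

    select
        deal_id,
        deal_name,
        cast(amount as number(18, 2))     as deal_amount,
        lower(deal_stage)                 as deal_stage,
        cast(created_at as timestamp_ntz) as created_at
    from source

)

select * from renamed
```

Mart models and the BI layer would then build on staging models like this via ref(), which is what keeps reporting consistent as sources grow.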

Data integration



  • Integrate and manage a wide range of data sources within Snowflake, including:

    • Adalyser
    • Meta Ads
    • Google Ads
    • HubSpot
    • Aircall
    • Performance tracking data
    • Product imagery + metadata from bespoke internal platforms

  • Maintain consistency and quality across the ecosystem as new sources come online (an illustrative example follows this list).
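
As a flavour of what that consistency work looks like in practice, the sketch below unions paid‑media spend from two connectors into a single grain for reporting. It is purely illustrative: every table and column name is an assumption, since the real landing schemas depend on how each connector delivers data.

```sql
-- Illustrative only: align two ad platforms' spend to one consistent grain
-- (date x channel x campaign) for downstream reporting.
-- Table and column names are assumptions, not real landing schemas.
select
    report_date,
    'meta_ads'   as channel,
    campaign_name,
    spend_gbp
from raw.meta_ads.campaign_daily

union all

select
    report_date,
    'google_ads' as channel,
    campaign_name,
    cost_gbp     as spend_gbp
from raw.google_ads.campaign_daily
```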

Data quality & validation



  • Build automated checks to monitor accuracy, completeness, and freshness (a simple example is sketched after this list).
  • Run regular audits and troubleshoot issues quickly and calmly.
  • Create clear ownership and definitions for key data sets.
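
For example, a first‑pass freshness check in plain Snowflake SQL might look like the sketch below; in practice this would more likely live as a dbt source‑freshness config or test. The table name, the sync‑timestamp column (assumed here to be the one the connector adds), and the 24‑hour threshold are all illustrative assumptions.

```sql
-- Illustrative freshness check: flag a source table whose latest loaded row
-- is older than an agreed SLA. Table, timestamp column, and threshold are
-- hypothetical.
with freshness as (
    select max(_fivetran_synced) as last_loaded_at
    from raw.hubspot.deal
)
select
    'raw.hubspot.deal' as table_name,
    last_loaded_at,
    datediff('hour', last_loaded_at, current_timestamp()) as hours_since_load
from freshness
where datediff('hour', last_loaded_at, current_timestamp()) > 24
```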

Optimisation & automation



  • Identify opportunities to streamline pipelines, improve performance, and reduce cost.
  • Automate repetitive workflows to free teams up for higher‑value analysis.
  • Improve reliability and speed of reporting inputs.

Collaboration



  • Work closely with teams across Growth, Finance, Ops, and Product to understand KPIs and reporting needs.
  • Translate those needs into smart, scalable data solutions.
  • Communicate clearly with both technical and non‑technical folks: no jargon fog.

Documentation & best practices



  • Document architecture, pipelines, models, and workflows so everything is clear and easy to pick up.
  • Contribute to data standards and governance as we build out the function.
  • Share knowledge openly and help shape how data engineering is done at Vintage.com and Vintage Cash Cow.

Skills, Knowledge and Expertise

Essential Skills & Experience:



  • Strong Snowflake experience: loading, querying, optimising, and building views/stored procedures.
  • Solid SQL skills: confident writing complex queries over large datasets.
  • Hands‑on pipeline experience using tools like dbt, FiveTran, Airflow, Coalesce, HighTouch, Rudderstack, Snowplow, or similar.
  • Data warehousing know‑how and a clear view of what “good” looks like for scalable architecture.
  • Analytical, detail‑focused mindset: you care about quality, reliability, and root‑cause fixes.
  • Great communication: able to explain technical concepts in a simple, useful way.
  • Comfortable working in a small, high‑impact team where you’ll shape the roadmap.

Nice to have:



  • Experience working with HubSpot data (ETL into a warehouse, understanding the schema, reporting context).
  • Digital marketing analytics background: ads platforms, attribution, funnel performance, campaign measurement.
  • Familiarity with CRMs/marketing automation tools (HubSpot, Marketo, Salesforce, etc.).
  • Python or R for automation, data wrangling, or pipeline support.
  • Understanding of A/B testing or experimentation frameworks.
  • Exposure to modern data governance/catalogue tooling.





For many years, data engineering in the UK meant designing pipelines, moving data between systems, and ensuring analysts had what they needed. Today, the field is expanding. With cloud platforms, machine learning, real-time analytics and the explosion of sensitive personal data, employers expect data engineers to do much more. Modern data engineering is no longer just about code and storage. It requires legal awareness, ethical judgement, psychological insight, linguistic clarity and human-centred design. These disciplines shape how data is collected, processed, explained and trusted. In this article, we’ll explore why data engineering careers in the UK are becoming more multidisciplinary, how law, ethics, psychology, linguistics & design now influence job descriptions, and what job-seekers & employers must do to thrive.