Data Engineer

Vintage Cash Cow & Arcavindi
Morley
2 days ago

Description


About the team

At Vintage Cash Cow and Vintage.com, technology is how we scale impact. Every customer journey, from sending in pre-loved items to getting paid, is powered by the systems we design, the products we build, and the data we unlock.


Our Technology & Data team is at the heart of this transformation, driving greenfield product development, experimenting with fresh ideas, and bringing innovative solutions to life. We’re building modern, scalable, and customer‑focused platforms that set the standard for the re‑commerce industry.


This is a team where curiosity meets craft: blending creativity, technical excellence, and a product mindset to deliver experiences that feel simple, rewarding, and future‑proof.


About the role

We’re looking for a hands‑on, detail‑loving Data Engineer to help us level up our data foundations and set us up for bigger, bolder analytics. This is our second data hire, which means you won’t just be maintaining something that already exists; you’ll be helping define how it should work end‑to‑end.


You’ll own the pipes and plumbing: designing and building robust data pipelines, shaping our warehouse models, and keeping data clean, reliable, and ready for decision‑making. A big part of your focus will be digital marketing and CRM data (HubSpot especially), so our Growth, Finance, and Product teams can move fast with confidence.


If you’re excited by building a modern stack, balancing build vs buy, and creating the kind of data platform that people actually love using, you’ll fit right in.


This role can be based in either the UK or the Netherlands.


Getting Started…

  • Get familiar with our current data setup and BI stack (Snowflake, dbt, Fivetran, Sigma, SQL, and friends).
  • Meet your key partners across Growth / Marketing, Finance, Operations, and Product to understand the metrics that matter most.
  • Explore our core data sources (Adalyser, Meta Ads, Google Ads, HubSpot, Aircall, and internal platforms) and how they flow today.
  • Spot early wins: where pipelines can be simplified, data quality boosted, or reporting made faster and smarter.


Establishing Your Impact…

  • Build and optimise reliable, scalable ELT / ETL pipelines into Snowflake.
  • Create clean, reusable models that make downstream analytics simple and consistent.
  • Put proactive monitoring and validation in place so teams can trust what they see.
  • Reduce manual work across reporting and data movement through automation.
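In our stack, most of this modelling work would live in dbt and SQL, but the idea behind "clean, reusable models" can be shown in miniature. The sketch below normalises ad‑spend rows from two platforms into one shared shape; all field names (`campaign_name`, `cost_micros`, etc.) are invented for illustration, not taken from the real platform exports.

```python
from dataclasses import dataclass

@dataclass
class AdSpend:
    """One shared downstream shape: platform / campaign / day / spend."""
    platform: str
    campaign: str
    date: str        # ISO date
    spend_gbp: float

def from_meta(row: dict) -> AdSpend:
    # Hypothetical Meta Ads export: spend arrives as a string.
    return AdSpend("meta", row["campaign_name"], row["date_start"],
                   float(row["spend"]))

def from_google(row: dict) -> AdSpend:
    # Hypothetical Google Ads export: cost is reported in micros.
    return AdSpend("google", row["campaign"], row["day"],
                   row["cost_micros"] / 1_000_000)

meta_rows = [{"campaign_name": "brand", "date_start": "2024-05-01", "spend": "120.50"}]
google_rows = [{"campaign": "search", "day": "2024-05-01", "cost_micros": 98_760_000}]

# Downstream analytics only ever sees the unified shape.
unified = [from_meta(r) for r in meta_rows] + [from_google(r) for r in google_rows]
for rec in unified:
    print(rec.platform, rec.campaign, rec.date, round(rec.spend_gbp, 2))
```

The point is the pattern, not the code: each source gets one well‑tested mapping into a shared schema, so Growth and Finance reporting never has to know which platform a number came from.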

Driving Excellence…

  • Help define and evolve our data architecture as we scale into new markets.
  • Champion best practice : documentation, governance, naming conventions, testing, and performance.
  • Partner closely with stakeholders so data engineering solves real commercial problems (not just technical ones).
  • Keep one eye on what’s next : smarter tooling, AI‑assisted analytics, and ways to make our stack even more self‑serve.

Key Responsibilities
Key Goals & Objectives:

  • Build and maintain a modern, scalable data platform that supports growth and decision‑making.
  • Ensure data is accurate, consistent, and trusted across the business.
  • Improve the speed, reliability, and automation of data pipelines and reporting workflows.
  • Enable high‑quality self‑serve analytics by delivering well‑modelled, well‑documented data sets.
  • Support digital performance and CRM insight through strong marketing data foundations.

Data architecture & pipeline development

  • Design, implement, and maintain robust data pipelines across multiple systems.
  • Ensure smooth, well‑governed flow of data from source → warehouse → BI layers.
  • Support end‑to‑end warehouse design and modelling as our stack grows.

Data integration

Integrate and manage a wide range of data sources within Snowflake, including:



  • Adalyser
  • Meta Ads
  • Google Ads
  • HubSpot
  • Aircall
  • Performance tracking data
  • Product imagery + metadata from bespoke internal platforms

Maintain consistency and quality across the ecosystem as new sources come online.

Data quality & validation

  • Build automated checks to monitor accuracy, completeness, and freshness.
  • Run regular audits and troubleshoot issues quickly and calmly.
  • Create clear ownership and definitions for key data sets.
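In practice these checks would likely be dbt tests or scheduled Snowflake tasks, but the logic is simple enough to sketch in plain Python. The example below checks freshness (newest row within an SLA) and completeness (null rate under a threshold); the `_loaded_at` column and HubSpot‑style rows are hypothetical.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(rows, ts_field, max_age_hours):
    """Pass only if the newest row is within the agreed SLA."""
    if not rows:
        return False
    newest = max(datetime.fromisoformat(r[ts_field]) for r in rows)
    return datetime.now(timezone.utc) - newest <= timedelta(hours=max_age_hours)

def check_completeness(rows, required_fields, max_null_rate=0.01):
    """Pass only if each required field's null/empty rate stays under the threshold."""
    if not rows:
        return False
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if nulls / len(rows) > max_null_rate:
            return False
    return True

# Hypothetical contact rows as they might land in the warehouse.
rows = [
    {"email": "a@example.com", "_loaded_at": datetime.now(timezone.utc).isoformat()},
    {"email": "b@example.com", "_loaded_at": datetime.now(timezone.utc).isoformat()},
]
print(check_freshness(rows, "_loaded_at", max_age_hours=24))
print(check_completeness(rows, ["email"]))
```

Wired into a scheduler and an alerting channel, checks like these are what let teams trust what they see instead of re‑verifying numbers by hand.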

Optimisation & automation

  • Identify opportunities to streamline pipelines, improve performance, and reduce cost.
  • Automate repetitive workflows to free teams up for higher‑value analysis.
  • Improve the reliability and speed of reporting inputs.

Collaboration

  • Work closely with teams across Growth, Finance, Ops, and Product to understand KPIs and reporting needs.
  • Translate those needs into smart, scalable data solutions.
  • Communicate clearly with both technical and non‑technical folks: no jargon fog.

Documentation & best practices

  • Document architecture, pipelines, models, and workflows so everything is clear and easy to pick up.
  • Contribute to data standards and governance as we build out the function.
  • Share knowledge openly and help shape how data engineering is done at Vintage.com and Vintage Cash Cow.

Skills, Knowledge and Expertise
Essential Skills & Experience:

  • Strong Snowflake experience: loading, querying, optimising, and building views / stored procedures.
  • Solid SQL skills: confident writing complex queries over large datasets.
  • Hands‑on pipeline experience using tools like dbt, Fivetran, Airflow, Coalesce, Hightouch, RudderStack, Snowplow, or similar.
  • Data warehousing know‑how and a clear view of what “good” looks like for scalable architecture.
  • Analytical, detail‑focused mindset: you care about quality, reliability, and root‑cause fixes.
  • Great communication: able to explain technical concepts in a simple, useful way.
  • Comfortable working in a small, high‑impact team where you’ll shape the roadmap.

Nice to have:

  • Experience working with HubSpot data (ETL into a warehouse, understanding the schema, reporting context).
  • Digital marketing analytics background : ads platforms, attribution, funnel performance, campaign measurement.
  • Familiarity with CRMs / marketing automation tools (HubSpot, Marketo, Salesforce, etc.).
  • Python or R for automation, data wrangling, or pipeline support.
  • Understanding of A/B testing or experimentation frameworks.
  • Exposure to modern data governance / catalogue tooling.

Equal Opportunities

At Vintage.com, we’re committed to creating an inclusive environment where everyone feels heard, respected, and able to bring their authentic self to work. We believe that diversity fuels innovation, creativity, and success.


We welcome applicants from all backgrounds, perspectives, and experiences, and we work hard to ensure equitable opportunities for all. If you're excited about this role but don’t meet every requirement, we still encourage you to apply; your unique skills and experiences might be exactly what we need, now or in the future.


If you need any adjustments or accommodations during the hiring process, just let us know and we’ll do our best to support you.


Your personal data will be handled in accordance with our Privacy Notice.


Vintage Trading Introduction

Vintage Trading is a fast‑growing circular economy business on a mission to make it easy and rewarding for people to declutter responsibly. We operate two leading brands: Vintage Cash Cow in the UK and Arcavindi across Europe.


With over 300 employees in the UK and a growing team in Europe, targeting 120 team members this year, we're building a unique operation powered by hundreds of in‑house experts in vintage, antiques, and second‑hand goods. Together, we’re redefining how people value and recycle the past, helping customers turn unwanted items into money while promoting sustainability and reuse.


As we continue to scale, we’re always on the lookout for talented, passionate individuals to join us in shaping the future of the vintage and re‑commerce world!


