Data Engineer

Vintage Cash Cow
Morley
1 month ago
Applications closed


Department: Technology & Data


Employment Type: Full Time


Location: Trimble Offices, Morley


Description

About the team: At Vintage Cash Cow and Vintage.com, technology is how we scale impact. Every customer journey, from sending in pre‑loved items to getting paid, is powered by the systems we design, the products we build, and the data we unlock.


Our Technology & Data team is at the heart of this transformation, driving greenfield product development, experimenting with fresh ideas, and bringing innovative solutions to life. We’re building modern, scalable, and customer‑focused platforms that set the standard for the re‑commerce industry.


This is a team where curiosity meets craft: blending creativity, technical excellence, and a product mindset to deliver experiences that feel simple, rewarding, and future‑proof.


About the role: We’re looking for a hands‑on, detail‑loving Data Engineer to help us level up our data foundations and set us up for bigger, bolder analytics. This is our second data hire, which means you won’t just be maintaining something that already exists; you’ll be helping define how it should work end‑to‑end.


You’ll own the pipes and plumbing: designing and building robust data pipelines, shaping our warehouse models, and keeping data clean, reliable, and ready for decision‑making. A big part of your focus will be digital marketing and CRM data (HubSpot especially), so our Growth, Finance, and Product teams can move fast with confidence.


If you’re excited by building a modern stack, balancing build vs buy, and creating the kind of data platform that people actually love using, you’ll fit right in.


This role can be based in either the UK or the Netherlands.


Getting Started…

  • Get familiar with our current data setup and BI stack (Snowflake, dbt, Fivetran, Sigma, SQL, and friends).
  • Meet your key partners across Growth/Marketing, Finance, Operations, and Product to understand the metrics that matter most.
  • Explore our core data sources (Adalyser, Meta Ads, Google Ads, HubSpot, Aircall, and internal platforms) and how they flow today.
  • Spot early wins: where pipelines can be simplified, data quality boosted, or reporting made faster and smarter.

Establishing Your Impact…

  • Build and optimise reliable, scalable ELT/ETL pipelines into Snowflake.
  • Create clean, reusable models that make downstream analytics simple and consistent.
  • Put proactive monitoring and validation in place so teams can trust what they see.
  • Reduce manual work across reporting and data movement through automation.

Driving Excellence…

  • Help define and evolve our data architecture as we scale into new markets.
  • Champion best practice: documentation, governance, naming conventions, testing, and performance.
  • Partner closely with stakeholders so data engineering solves real commercial problems (not just technical ones).
  • Keep one eye on what’s next: smarter tooling, AI‑assisted analytics, and ways to make our stack even more self‑serve.

Key Responsibilities

Key Goals & Objectives:

  • Build and maintain a modern, scalable data platform that supports growth and decision‑making.
  • Ensure data is accurate, consistent, and trusted across the business.
  • Improve speed, reliability, and automation of data pipelines and reporting workflows.
  • Enable high‑quality self‑serve analytics by delivering well‑modelled, well‑documented data sets.
  • Support digital performance and CRM insight through strong marketing data foundations.

Key Responsibilities:

Data architecture & pipeline development

  • Design, implement, and maintain robust data pipelines across multiple systems.
  • Ensure smooth, well‑governed flow of data from source → warehouse → BI layers.
  • Support end‑to‑end warehouse design and modelling as our stack grows.
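As an illustration of one pattern robust pipelines depend on, here is a minimal idempotent upsert sketch in Python. Everything here is hypothetical (the record shape, the `merge_batch` helper, the sample data): in practice this job would be handled by a dbt incremental model or a Snowflake MERGE, not hand-rolled code, but the same "newest record per key wins, reruns are safe" logic applies.

```python
from datetime import datetime, timezone

def merge_batch(target: dict, batch: list[dict]) -> dict:
    """Idempotent upsert: the newest record per key wins, so re-runs are safe.

    `target` maps primary key -> record; `batch` is a list of records,
    each carrying an `id` and an `updated_at` timestamp. Re-loading the
    same batch twice leaves `target` unchanged, which is what makes a
    pipeline safe to retry after a partial failure.
    """
    for record in batch:
        key = record["id"]
        existing = target.get(key)
        if existing is None or record["updated_at"] >= existing["updated_at"]:
            target[key] = record
    return target

# Hypothetical example: two loads with an overlapping, updated record.
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
t1 = datetime(2024, 1, 2, tzinfo=timezone.utc)

warehouse: dict = {}
merge_batch(warehouse, [{"id": 1, "email": "a@x.com", "updated_at": t0}])
merge_batch(warehouse, [{"id": 1, "email": "a@y.com", "updated_at": t1},
                        {"id": 2, "email": "b@x.com", "updated_at": t1}])
```

After the second load, record 1 reflects the later update and record 2 is inserted; replaying either batch changes nothing.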

Data integration

  • Integrate and manage a wide range of data sources within Snowflake, including:

    • Adalyser
    • Meta Ads
    • Google Ads
    • HubSpot
    • Aircall
    • Performance tracking data
    • Product imagery + metadata from bespoke internal platforms

  • Maintain consistency and quality across the ecosystem as new sources come online.



Data quality & validation

  • Build automated checks to monitor accuracy, completeness, and freshness.
  • Run regular audits and troubleshoot issues quickly and calmly.
  • Create clear ownership and definitions for key data sets.

Optimisation & automation

  • Identify opportunities to streamline pipelines, improve performance, and reduce cost.
  • Automate repetitive workflows to free teams up for higher‑value analysis.
  • Improve reliability and speed of reporting inputs.

Collaboration

  • Work closely with teams across Growth, Finance, Ops, and Product to understand KPIs and reporting needs.
  • Translate those needs into smart, scalable data solutions.
  • Communicate clearly with both technical and non‑technical folks: no jargon fog.

Documentation & best practices

  • Document architecture, pipelines, models, and workflows so everything is clear and easy to pick up.
  • Contribute to data standards and governance as we build out the function.
  • Share knowledge openly and help shape how data engineering is done at Vintage.com and Vintage Cash Cow.

Skills, Knowledge and Expertise
Essential Skills & Experience

  • Strong Snowflake experience: loading, querying, optimising, and building views/stored procedures.
  • Solid SQL skills: confident writing complex queries over large datasets.
  • Hands‑on pipeline experience using tools like dbt, Fivetran, Airflow, Coalesce, Hightouch, RudderStack, Snowplow, or similar.
  • Data warehousing know‑how and a clear view of what "good" looks like for scalable architecture.
  • Analytical, detail‑focused mindset: you care about quality, reliability, and root‑cause fixes.
  • Great communication: able to explain technical concepts in a simple, useful way.
  • Comfortable working in a small, high‑impact team where you’ll shape the roadmap.

Nice to have:

  • Experience working with HubSpot data (ETL into a warehouse, understanding the schema, reporting context).
  • Digital marketing analytics background: ads platforms, attribution, funnel performance, campaign measurement.
  • Familiarity with CRMs/marketing automation tools (HubSpot, Marketo, Salesforce, etc.).
  • Python or R for automation, data wrangling, or pipeline support.
  • Understanding of A/B testing or experimentation frameworks.
  • Exposure to modern data governance/catalogue tooling.




Industry Insights

Discover insightful articles, expert tips, and curated resources from across the industry.

How Many Data Engineering Tools Do You Need to Know to Get a Data Engineering Job?

If you’re aiming for a career in data engineering, it can feel like you’re staring at a never-ending list of tools and technologies — SQL, Python, Spark, Kafka, Airflow, dbt, Snowflake, Redshift, Terraform, Kubernetes, and the list goes on. Scroll job boards and LinkedIn, and it’s easy to conclude that unless you have experience with every modern tool in the data stack, you won’t even get a callback.

Here’s the honest truth most data engineering hiring managers will quietly agree with: 👉 They don’t hire you because you know every tool — they hire you because you can solve real data problems with the tools you know. Tools matter. But only in service of outcomes. Jobs are won by candidates who know why a technology is used, when to use it, and how to explain their decisions.

So how many data engineering tools do you actually need to know to get a job? For most job seekers, the answer is far fewer than you think — but you do need them in the right combination and order. This article breaks down what employers really expect, which tools are core, which are role-specific, and how to focus your learning so you look capable and employable rather than overwhelmed.

What Hiring Managers Look for First in Data Engineering Job Applications (UK Guide)

If you’re applying for data engineering jobs in the UK, the first thing to understand is this: Hiring managers don’t read every word of your CV. They scan it. They look for signals of relevance, credibility, delivery and collaboration — and if they don’t see the right signals quickly, your application may never get a second look.

In data engineering, hiring managers are especially focused on whether you can build and operate reliable, scalable data systems, handle real-world data challenges and work effectively with analytics, BI, data science and engineering teams. This guide breaks down exactly what they look at first in your application — and how to shape your CV, portfolio and cover letter so you stand out.

The Skills Gap in Data Engineering Jobs: What Universities Aren’t Teaching

Data engineering has quietly become one of the most critical roles in the modern technology stack. While data science and AI often receive the spotlight, data engineers are the professionals who design, build and maintain the systems that make data usable at scale.

Across the UK, demand for data engineers continues to rise. Organisations in finance, retail, healthcare, government, media and technology all report difficulty hiring candidates with the right skills. Salaries remain strong, and experienced professionals are in short supply.

Yet despite this demand, many graduates with degrees in computer science, data science or related disciplines struggle to secure data engineering roles. The reason is not academic ability. It is a persistent skills gap between university education and real-world data engineering work.

This article explores that gap in depth: what universities teach well, what they consistently miss, why the gap exists, what employers actually want, and how jobseekers can bridge the divide to build successful careers in data engineering.