Data Engineer

WPP
City of London
4 months ago
Applications closed


WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients, and communities.


Working at WPP means being part of a global network of more than 100,000 talented people dedicated to doing extraordinary work for our clients. We operate in over 100 countries, with corporate headquarters in New York, London and Singapore.


WPP is a world leader in marketing services, with deep AI, data and technology capabilities, global presence and unrivalled creative talent. Our clients include many of the biggest companies and advertisers in the world, including approximately 300 of the Fortune Global 500.


Our people are the key to our success. We're committed to fostering a culture of creativity, belonging and continuous learning, attracting and developing the brightest talent, and providing exciting career opportunities that help our people grow.


Why we're hiring:

As a Data Engineer in the WPP Enterprise Data Group, you will play a key role in maintaining and enhancing one of our client’s reporting solutions. This includes building and supporting data pipelines that deliver accurate, timely, and auditable information, enabling stakeholders to make informed business decisions and meet their reporting obligations.


You will design and implement scalable data solutions, focusing on ingestion, transformation, and delivery of client datasets. This role requires building and maintaining robust, high‑quality data pipelines to ensure reporting outputs are reliable, consistent, and aligned with contractual commitments.


As part of the team, your work will centre on developing and optimising reporting platforms using Azure Databricks and other Azure‑based services, covering batch and streaming data workloads. You will collaborate closely with data architects, analysts, and business stakeholders to ensure reporting solutions are fit‑for‑purpose and scalable.


You will be joining a group of Data Engineers passionate about delivering high‑impact data products for clients and committed to a culture of collaboration, knowledge sharing, and continuous improvement.


What you’ll be doing:

  • Design and build data ingestion pipelines from diverse sources (including APIs) to support reporting requirements.
  • Develop and maintain processing pipelines using PySpark/SQL within Azure Databricks to transform raw datasets into structured, reportable formats.
  • Design and deliver data warehousing solutions and core data engineering workstreams.
  • Ensure data pipelines and products are reliable, high‑quality, and accessible for reporting and analysis.
  • Maintain and optimise pipelines to ensure scalability, performance, and resilience.
  • Facilitate the secure exposure of data to third‑party platforms when required.
  • Propose technical designs and develop integrations to support evolving client reporting needs.
  • Collaborate closely with cross‑functional teams (Data Analysts, Business Analysts, PMO, and stakeholders) to deliver accurate and timely client reporting solutions.
  • Be a persuasive leader who can guide and influence outcomes, adapting your approach to the audience and stakeholder group.
  • Exhibit strong stakeholder management skills, setting expectations clearly and delivering outcomes against them.
  • Make effective decisions using judgement, evidence and expert knowledge to provide responsive solutions in a timely manner.
  • Keep up‑to‑date with industry developments and anticipate future opportunity and risk implications for your work and the wider organisation.
  • Be an ambassador for your profession and team by acting as a subject matter expert and trusted advisor.
  • Deliver long‑term, sustainable solutions that offer value for money and use best commercial and procurement practices.

What you’ll need:

  • Experience in Python and SQL
  • Experience using Databricks
  • Experience with Microsoft Azure data services – ADLS Gen2, Azure Key Vault, Azure Data Factory
  • Proven experience with API integrations for data ingestion
  • Ideally, experience with Delta Lake and PySpark
  • Exposure to data science / ML is a plus
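In practice, "API integrations for data ingestion" usually means handling pagination and transient failures. A minimal sketch, assuming a page-based API and using a stubbed fetch function in place of a real HTTP client (the interface is hypothetical; real ingestion would call a concrete endpoint):

```python
import time

def ingest_paginated(fetch_page, max_retries=3):
    """Collect all records from a page-based API.

    `fetch_page(page)` returns a list of records, empty when exhausted.
    Transient ConnectionErrors are retried with exponential backoff.
    Hypothetical interface for illustration only.
    """
    records, page = [], 1
    while True:
        for attempt in range(max_retries):
            try:
                batch = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # back off before retrying
        if not batch:
            return records
        records.extend(batch)
        page += 1

# Stub standing in for a real API: two pages of data, then an empty page.
def fake_fetch(page):
    data = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}]}
    return data.get(page, [])

print(ingest_paginated(fake_fetch))
# → [{'id': 1}, {'id': 2}, {'id': 3}]
```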

Who you are:

You're open: We are inclusive and collaborative; we encourage the free exchange of ideas; we respect and celebrate diverse views. We are open‑minded: to new ideas, new partnerships, new ways of working.


You're optimistic: We believe in the power of creativity, technology and talent to create brighter futures for our people, our clients and our communities. We approach all that we do with conviction: to try the new and to seek the unexpected.


You're extraordinary: We are stronger together; through collaboration we achieve the amazing. We are creative leaders and pioneers of our industry; we provide extraordinary every day.


What we’ll give you:

Passionate, inspired people – We aim to create a culture in which people can do extraordinary work.


Scale and opportunity – We offer the opportunity to create, influence and complete projects at a scale that is unparalleled in the industry.


Challenging and stimulating work – Unique work and the opportunity to join a group of creative problem solvers. Are you up for the challenge?


#LI‑Onsite


We believe the best work happens when we're together, fostering creativity, collaboration, and connection. That's why we’ve adopted a hybrid approach, with teams in the office around four days a week. If you require accommodations or flexibility, please discuss this with the hiring team during the interview process.


WPP is an equal opportunity employer and considers applicants for all positions without discrimination or regard to particular characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers.


