Data Engineer – GCP/DSS

Hammersmith Broadway
2 months ago
Applications closed


Job Title: Data Engineer – GCP/DSS

Department: Enabling Functions

Location: Hybrid, London

Type: Both Contract (Inside IR35) and Permanent roles available

Salary: Competitive; depends on experience and open to discussion

Purpose of Job

What you will be working on

While our broker platform is the core technology driving our success, this role will focus on supporting the middle- and back-office operations that lay the foundations for sustained growth.

We're a multi-disciplinary team, bringing together expertise in software and data engineering, full-stack development, platform operations, algorithm research, and data science. Our squads focus on delivering high-impact solutions, and we favour a highly iterative, analytical approach.

You will be designing and developing complex data processing modules and reporting using BigQuery and Tableau. You will also work closely with the Infrastructure/Platform Team, which is responsible for architecting and operating the core of the Data Analytics platform.

Principal Accountabilities

Work with business teams (initially finance and actuarial), data scientists and engineers to design, build, optimise and maintain production-grade data pipelines and reporting from an internal data warehouse solution based on GCP/BigQuery.

Work with finance, actuaries, data scientists and engineers to understand how we can make best use of new internal and external data sources.

Work with our delivery partners at EY/IBM to ensure robustness of the design and engineering of the data model/MI and reporting, so that they can support our ambitions for growth and scale.

BAU ownership of data models, reporting and integrations/pipelines.

Create frameworks, infrastructure and systems to manage and govern data assets.

Produce detailed documentation to allow ongoing BAU support and maintenance of data structures, schemas, reporting, etc.

Work with the broader Engineering community to develop our data and MLOps capability infrastructure.

Ensure data quality, governance, and compliance with internal and external standards.

Monitor and troubleshoot data pipeline issues, ensuring reliability and accuracy.

Regulatory Conduct and Rules

  1. Act with integrity

  2. Act with due skill, care and diligence

  3. Be open and co-operative with Lloyd’s, the FCA, the PRA, and other regulators

  4. Pay due regard to the interests of customers and treat them fairly

  5. Observe proper standards of market conduct

Education, Qualifications, Knowledge, Skills and Experience

  • Experience designing data models and developing industrialised data pipelines.

  • Strong knowledge of database and data lake systems.

  • Hands-on experience with BigQuery, dbt and GCP Cloud Storage.

  • Proficient in Python, SQL and Terraform.

  • Knowledge of Cloud SQL, Airbyte, Dagster.

  • Comfortable with shell scripting in Bash or similar.

  • Experience provisioning new infrastructure in a leading cloud provider, preferably GCP.

  • Proficient with Tableau Cloud for data visualization and reporting.

  • Experience creating DataOps pipelines.

  • Comfortable working in an Agile environment, actively participating in approaches such as Scrum or Kanban.

Desirable Skills

  • Experience of streaming data systems and frameworks would be a plus.

  • Experience working in a regulated industry, especially financial services, would be a plus.

  • Experience creating MLOps pipelines would be a plus.

The applicant must also demonstrate the following skills and abilities:

  • Excellent communication skills (both oral and written).

  • Pro-active, self-motivated and able to use own initiative.

  • Excellent analytical and technical skills.

  • Ability to quickly comprehend the functions and capabilities of new technologies.

  • Ability to offer a balanced opinion regarding existing and future technologies.

How to Apply

If you are interested in the Data Engineer – GCP/DSS position, please apply here.
