Data Architect - Bristol Opportunity

Hays Technology
Bristol, Bristol (county), United Kingdom
Salary: £80,000 – £88,000 pa
Work Location: Hybrid
Posted: 2 Apr 2026 (2 weeks ago)

Your new company

They are a specialist insurance and risk solutions provider, supporting clients with tailored coverage and expert advice across a range of sectors. The business is known for its client‑focused approach, strong market relationships and commitment to delivering practical, dependable solutions.

With a collaborative culture and a focus on professional development, they offer a supportive environment where people are trusted, valued and encouraged to grow their careers within a forward‑thinking organisation.

Your new role

As a Data Architect, you'll play a key role in shaping how data is designed, managed and used across the business. You'll set the architectural direction for our data estate - from the point data first lands on the platform, through the Bronze, Silver and Gold layers of our Medallion Architecture, and all the way to analytics, AI and self-service reporting.

Working within the Microsoft Azure and Databricks ecosystem, you'll help build a data platform that's scalable, flexible and built to last. Your work will directly support high‑impact use cases, including advanced analytics, pricing models, AI/ML solutions and regulatory reporting - ensuring teams across the business can trust and use data with confidence.
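The Bronze/Silver/Gold flow described above can be sketched in plain Python. This is a hedged illustration of the Medallion pattern only, not the company's actual pipeline: in practice each layer would be a Delta Lake table processed with Databricks/Spark, and the field names (`policy_id`, `premium`, `region`) are invented for the example.

```python
# Minimal Medallion (Bronze/Silver/Gold) sketch in plain Python.
# Records and field names are illustrative assumptions only.

# Bronze: raw records exactly as they land, including bad rows.
bronze = [
    {"policy_id": "P-001", "premium": "1200.50", "region": "SW"},
    {"policy_id": "P-002", "premium": "not_a_number", "region": "SW"},
    {"policy_id": "P-003", "premium": "800.00", "region": "NE"},
]

def to_silver(rows):
    """Silver: cleaned and typed; rows that fail parsing are dropped."""
    out = []
    for r in rows:
        try:
            out.append({"policy_id": r["policy_id"],
                        "premium": float(r["premium"]),
                        "region": r["region"]})
        except (ValueError, KeyError):
            continue  # in a real platform, quarantine bad records instead
    return out

def to_gold(rows):
    """Gold: a business-level aggregate ready for reporting or ML."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["premium"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'SW': 1200.5, 'NE': 800.0}
```

The point of the layering is that each stage has a single job: Bronze preserves the source faithfully, Silver enforces types and quality, and Gold shapes data for consumers.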

Data Architecture & Modelling

Define and own the architectural principles, standards and policies governing SBG's data estate from the landing zone through to the Gold layer.

Design and govern the Medallion Architecture (Bronze / Silver / Gold), ensuring every layer is built for analytics, AI/ML and self-service consumption.

Own data modelling standards - conceptual, logical and physical - and ensure models are fit for both regulatory reporting and AI-driven insight.

Define Unity Catalog structure, metadata standards and data lineage governance across the estate.

Data Ingestion & Processing

Define ingestion standards and data contracts for data arriving from the landing zone into the Bronze layer, working in partnership with the Development and Application Management team.

Design and optimise ETL/ELT pipeline frameworks using Databricks, Delta Lake and Azure Data Factory.

Ensure Silver and Gold layer data products are fit for purpose for analytics, pricing, AI and ML model consumption.

Optimise data pipelines for efficiency, cost-effectiveness and high performance, leveraging Databricks for big data processing and machine learning.
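As a hedged illustration of the data-contract idea in the bullets above, an ingestion contract between a source delivery team and the Bronze layer might be checked like this. The schema, field names and rules here are invented for the example, not taken from the role:

```python
# Sketch of a data contract for landing-zone -> Bronze ingestion.
# Schema, field names and rules are illustrative assumptions only.

CONTRACT = {
    "claim_id":   str,    # required: source system's claim reference
    "amount_gbp": float,  # required: claim amount
    "event_date": str,    # required: ISO-8601 date string from the source
}

def validate(record, contract=CONTRACT):
    """Return a list of contract violations for one incoming record."""
    errors = []
    for field, expected_type in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors

ok = {"claim_id": "C-42", "amount_gbp": 950.0, "event_date": "2026-03-01"}
bad = {"claim_id": "C-43", "amount_gbp": "950"}  # wrong type, missing date
print(validate(ok))   # []
print(validate(bad))  # two violations
```

In a real estate this kind of check would typically be expressed as a Delta Lake schema plus expectations enforced in the pipeline, but the contract idea is the same: the source team commits to a shape, and ingestion rejects or quarantines records that break it.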

Governance & Standards

Act as the architectural authority for the data estate - reviewing designs, enforcing standards and preventing platform fragmentation as SBG scales.

Ensure all data architecture decisions align with regulatory requirements - FCA, GDPR, Solvency II, IFRS 17 and BCBS 239.

Define and maintain data architecture policies and guidelines ensuring long-term scalability and sustainability.

Analytics & AI Enablement

Design the Gold layer to ensure data products are structured, documented and accessible for self-service analytics and AI/ML model consumption.

Collaborate with ML Ops and Data Science teams to define data product standards and feature engineering patterns.

Evaluate and lead adoption of emerging Azure and Databricks capabilities - including Microsoft Fabric, OneLake and DirectLake - where they advance the data architecture.

Drive innovation by evaluating and implementing emerging cloud-based data technologies to enhance SBG's competitive advantage.

What you'll need:

Strong stakeholder management across business, IT and compliance teams.

Excellent communication, collaboration and influencing skills at all levels of an organisation.

Experience leading data architecture and engineering teams in an enterprise environment.

Ability to define and implement a data strategy aligned with business objectives.

Proven track record of delivering enterprise-scale data solutions with a focus on performance, security and scalability.

Experience in regulated financial services, ensuring compliance with industry standards.

Deep expertise in data modelling - conceptual, logical and physical.

Data warehousing and data lake architecture for high-performance analytics.

ETL/ELT pipeline development and optimisation to support large-scale data processing.

Data integration across structured and unstructured sources, ensuring high availability.

Metadata management and governance to maintain data quality and lineage.

Experience defining data contracts and ingestion standards between source delivery teams and the data estate.

Deep expertise in Microsoft Azure cloud services - ADF, ADLS, Synapse, Purview.

Databricks - Delta Lake architecture, optimisation and advanced data processing.

Apache Spark for large-scale distributed computing and performance tuning.

Microsoft Fabric - OneLake and DirectLake integration.

Azure Synapse Analytics for enterprise-scale data warehousing.

Infrastructure-as-Code (Terraform or Azure Bicep) to automate cloud deployments.

CI/CD pipelines with Azure DevOps or GitHub Actions for automated deployment of data pipelines.

MLOps best practices - MLflow, Databricks Model Serving, Feature Store.

Knowledge of IFRS 17, BCBS 239, UK Data Protection Act and Solvency II compliance.

Experience with pricing models, claims processing and fraud detection in the insurance sector.

Strong problem-solving skills and ability to translate business needs into technical solutions.

Ability to document and present complex data architectures to technical and non-technical stakeholders.

What you'll get in return

Hybrid working - 2 days in the office and 3 days working from home

25 days annual leave, rising to 27 days over 2 years' service and 30 days after 5 years' service. Plus bank holidays!

Discretionary annual bonus

Pension scheme - 5% employee contribution, 6% employer contribution - and many more benefits

What you need to do now

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV.

If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and as an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at (url removed)
