Data Engineer

Above & Beyond - Climate Tech Recruitment
City of London
3 days ago

Data Engineer

Remote or Hybrid

Based in London or Nairobi (must have right to work)

London - £80,000 - £100,000

Nairobi - KES 10-15M


Above and Beyond Recruitment is proud to be partnering with ONE Data to recruit a Data Engineer to join their mission to build the world's first public finance and development data tool.


Who are we?

ONE Data is an initiative of The ONE Campaign focused on transforming how public finance and development data is accessed and used.


Our vision is a world where information asymmetries are collapsed and high-quality, evidence-based decisions lead to greater economic opportunity and healthier lives.


Our mission is to organise the world’s public finance and development data and make it universally accessible and useful - collapsing the time from raw data to actionable insight. By building open, interoperable data infrastructure and intuitive analytical tools, ONE Data strengthens transparency, accountability, and more effective investment in development.


In a system where data is fragmented, delayed, and difficult to interpret, ONE Data integrates disparate sources into trusted, policy-relevant insights that empower decision-makers, advocates, journalists, researchers, and partners globally.


The opportunity:

We are looking for a Data Engineer to help build the data infrastructure that powers ONE Data's products, Knowledge Graph, APIs, and analytical platforms. This is a role with real ownership. You will shape foundational systems, help make architectural decisions, and see your work directly enable better policy decisions, research and analysis.


ONE Data works with complex, fragmented public finance and development datasets, from aid flows and budget data to debt statistics and policy indicators. The Data Engineer designs the pipelines, models, and quality frameworks that transform these disparate sources into trusted, interoperable data that researchers, policymakers, and advocates can rely on.


The successful candidate will help shape a working foundation into a mature, well-documented, well-tested data platform. They will contribute to architectural decisions alongside the Senior Director for Data & Product, help establish engineering standards, and coordinate with external service providers for specialised data modelling and engineering work when the scope requires it.


You will focus on:

In the coming months, priorities will include:

  • Building the Development Finance Observatory, designing and shipping the ETL pipelines and tools that integrate development finance datasets (e.g. OECD, IATI, World Bank, IMF, WHO) into a unified knowledge graph.
  • Scaling the Knowledge Graph, including schema design, data integration, and optimisations.
  • Developing the data quality framework, implementing provenance tracking, quality indicators, coverage metrics, and automated testing so that every data point in our systems is trustworthy and well documented.
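To give candidates a feel for the work, the ingest-and-validate step described above can be sketched in Python with pandas and pydantic (both appear in the tech stack). The `AidFlowRecord` schema and its field names are purely illustrative assumptions, not ONE Data's actual data model:

```python
import pandas as pd
from pydantic import BaseModel, ValidationError

class AidFlowRecord(BaseModel):
    """One normalised development-finance observation (illustrative schema)."""
    donor: str
    recipient: str
    year: int
    amount_usd: float
    source: str  # e.g. "OECD", "IATI", "World Bank"

def normalise(raw: list[dict], source: str) -> pd.DataFrame:
    """Validate raw rows against the schema, tagging each with its source."""
    valid, rejected = [], []
    for row in raw:
        try:
            valid.append(AidFlowRecord(**row, source=source).model_dump())
        except ValidationError:
            rejected.append(row)  # in practice, route to a quarantine table for review
    return pd.DataFrame(valid)

raw_rows = [
    {"donor": "GBR", "recipient": "KEN", "year": 2023, "amount_usd": 1.2e8},
    {"donor": "USA", "recipient": "KEN", "year": "not-a-year", "amount_usd": 5e7},
]
df = normalise(raw_rows, source="OECD")  # the malformed second row is rejected
```

Validating at ingest time, rather than downstream, is what lets disparate sources (each with its own quirks) land in a unified graph with consistent types and provenance tags.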


And will also contribute to:

  • Shipping open-source data infrastructure, building pipelines and tools that the broader development data community can use and extend.
  • Designing APIs for data access, including RESTful APIs and an MCP server to provide programmatic access to our data.
  • Coordinating with specialist partners and external data engineering service providers for deep domain work like concept modelling or high-volume data integration.


Tech stack:

  • Languages: Python (pandas, httpx, sdmx, pydantic, FastAPI, FastMCP, ADK), SQL (ISO Graph Query Language would be a plus)
  • Cloud: Google Cloud Platform (Cloud Run, Cloud Build, BigQuery, Spanner Graph, Cloud SQL, Cloud Storage)
  • Other: DuckDB, Terraform, Git


The infrastructure runs primarily on Google Cloud Platform, with the Knowledge Graph built on Spanner through the Data Commons infrastructure, alongside BigQuery for internal analytical workloads and MySQL for supporting services.



Key responsibilities:

Data infrastructure and pipelines

  • Design, build, and maintain open-source ETL/ELT pipelines that ingest, clean, transform, and deliver development finance data from multiple sources.
  • Contribute to data modelling and schema design across ONE Data's infrastructure.
  • Help design, build and maintain APIs for structured data access, serving both internal products and external users.
  • Implement and maintain Infrastructure-as-Code for deployment, scaling, and monitoring.
  • Establish and maintain data lineage documentation across all systems.
  • Design and implement data quality frameworks, automated testing, and monitoring systems.
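A data quality framework of the kind listed above typically starts with simple, automatable indicators. This sketch assumes nothing about ONE Data's actual framework; the metrics and column names are illustrative:

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Compute simple coverage and quality indicators for a dataset.

    Illustrative only; a production framework would add provenance
    tracking, thresholds, and alerting on top of metrics like these.
    """
    return {
        "rows": len(df),
        "null_share": float(df.isna().mean().mean()),   # overall missingness
        "duplicate_rows": int(df.duplicated().sum()),   # exact-duplicate count
        "year_coverage": sorted(df["year"].dropna().unique().tolist()),
    }

df = pd.DataFrame({
    "donor": ["GBR", "USA", "USA"],
    "year": [2022, 2023, 2023],
    "amount_usd": [1.0, None, 2.0],
})
report = quality_report(df)
```

Wiring checks like these into CI (e.g. via automated tests that fail a pipeline run when coverage drops) is what turns ad-hoc cleaning into a quality framework.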


Knowledge graph and data architecture

  • Contribute to the development and evolution of ONE Data's deployment of the Data Commons Knowledge Graph on Spanner Graph, including schema design, data integration, and query optimisation.
  • Work within and extend the Data Commons infrastructure to support ONE Data's analytical and product needs.
  • Ensure interoperability and consistency across ONE Data’s systems, tools and products.


Collaboration and delivery

  • Support policy researchers, partners, and clients with data access and integration needs.
  • Help coordinate external data engineering service providers for specialised or high-volume data modelling work.
  • Participate in sprint planning, technical design reviews, and agile delivery cycles.
  • Contribute to open-source tooling and documentation.



Qualifications:

Education & Experience

  • Bachelor's degree (or higher) in computer science, data engineering, software engineering, or a related field.
  • 5+ years of experience in data engineering, back-end development, or a related technical role.
  • Experience working with open data, public finance, or international development datasets, including navigating the challenges of fragmented sources, inconsistent standards, and incomplete coverage that characterise this domain.
  • Experience contributing to data infrastructure decisions, with a desire to grow into architectural ownership.


Technical Expertise

  • Strong Python and SQL expertise for data engineering.
  • Experience designing and building scalable ETL/ELT pipelines and data architectures.
  • Experience with Google Cloud Platform services (BigQuery, Cloud Storage, Spanner, Cloud Run, etc).
  • Experience with API design and development for data access.
  • Familiarity with Infrastructure-as-Code (Terraform or similar), or willingness to learn.
  • Familiarity with graph databases or Knowledge Graph technologies strongly preferred. Willingness to learn and develop expertise in this area is essential.
  • Familiarity with data quality frameworks, automated testing, and monitoring.
  • Strong understanding of data modelling, schema design, and data governance principles.


Other attributes and culture fit:

  • Commitment to ONE Data's mission of making public finance and development data universally accessible and useful.
  • Belief that well-engineered data infrastructure is a public good.
  • Ability to operate effectively within a global matrix organisation.
  • Highly organised, analytical and self-motivated.
  • Collaborative mindset with strong interpersonal skills.
  • Comfortable navigating ambiguity and fast-moving priorities.
  • Remains positive under pressure and in high-stakes environments.
  • Independent problem solver with sound judgement.
  • Action-oriented and results focused.
  • Flexible and resourceful approach to delivery.
  • Commitment to transparency, accountability and equity in development.


Languages:

Fluency in English required. Proficiency in additional languages relevant to ONE’s work (such as French or German) is a plus.


Travel:

Travel requirements vary by role but may include occasional domestic and international travel (up to 10%) to attend partner meetings, conferences, or team convenings.


Work environment:

Hybrid or remote work environment depending on location. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.



ONE is an equal opportunity employer and does not discriminate in its selection and employment practices. All qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, political affiliation, sexual orientation, gender identity, marital status, disability, protected veteran status, genetic information, age, or other legally protected characteristics.
