Data Engineer

IBM
Blackwood
1 week ago
Introduction

At IBM Research, we are the innovation engine of IBM, exploring what’s next in computing and shaping the technologies the world will rely on tomorrow. From advancing AI and hybrid cloud to pioneering practical quantum computing, we anticipate challenges and unlock new opportunities for clients, partners, and society. Working in Research means joining a team that accelerates discovery at the intersection of high-performance computing, AI, quantum, and cloud. You’ll collaborate with leading scientists, engineers, and visionaries to push boundaries and turn ideas into reality. With a culture built on curiosity, creativity, and collaboration, IBM Research offers the opportunity to grow your career while contributing to breakthroughs that transform industries and change the world.


Your Role And Responsibilities

IBM Quantum is building the world’s leading quantum computing systems, software, and cloud services. The Data Engineer in this role will design and operate the data pipelines that power insight into quantum hardware performance, system reliability, user workloads, and platform operations. You will work closely with quantum hardware, firmware, cloud, and product teams to turn diverse technical datasets into trusted analytics assets that guide decision‑making across IBM Quantum’s roadmap.


Preferred Education

Master's Degree


Required Technical And Professional Expertise

  • Design, build, and maintain scalable, reliable data pipelines supporting analytics, operational dashboards, and hardware performance insights for IBM Quantum systems.
  • Develop and operate ETL/ELT workflows with a focus on data quality, accuracy, timeliness, and continuous improvement.
  • Apply advanced SQL skills using PostgreSQL and Presto to support analytical workloads, including complex queries and performance tuning.
  • Build and operate orchestration workflows in Apache Airflow, including dependency management, retries, backfills, monitoring, and operational reliability.
  • Implement data transformations and validations using Python (e.g., pandas and related libraries).
  • Support large‑scale batch processing for high‑volume, heterogeneous datasets, including system telemetry, experiment metadata, cloud operations data, and device performance metrics.
  • Work with streaming platforms such as Apache Kafka or IBM Event Streams to consume event‑driven data from distributed quantum systems and services.
  • Apply streaming architecture concepts including topics, partitions, consumer groups, and schema evolution.
  • Integrate multiple technical data sources—quantum hardware telemetry, calibration data, experiment logs, job execution data, user activity, system health metrics—into trusted analytical datasets.
  • Collaborate with quantum hardware, software, product, SRE, and analytics teams to translate requirements into robust, production‑ready data solutions.
  • Use Git-based version control, contribute via code reviews, and follow industry-standard software engineering best practices.
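The analytical SQL work described above can be illustrated with a small, self-contained sketch. The table and column names below are made up for illustration (they are not IBM Quantum's schema), and SQLite stands in for PostgreSQL/Presto, since all three support the same window-function pattern with minor dialect differences:

```python
import sqlite3

# Hypothetical device-telemetry table; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device_metrics (
    device_id   TEXT,
    recorded_at TEXT,
    error_rate  REAL
);
INSERT INTO device_metrics VALUES
    ('qpu_a', '2024-01-01', 0.021),
    ('qpu_a', '2024-01-02', 0.019),
    ('qpu_a', '2024-01-03', 0.017),
    ('qpu_b', '2024-01-01', 0.034),
    ('qpu_b', '2024-01-02', 0.030);
""")

# Rolling average of error rate per device via a window function --
# a typical analytical query over hardware performance metrics.
rows = conn.execute("""
    SELECT device_id,
           recorded_at,
           AVG(error_rate) OVER (
               PARTITION BY device_id
               ORDER BY recorded_at
               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
           ) AS rolling_avg_error
    FROM device_metrics
    ORDER BY device_id, recorded_at
""").fetchall()

for r in rows:
    print(r)
```

In PostgreSQL or Presto the same `PARTITION BY ... ORDER BY ... ROWS BETWEEN` frame applies; performance tuning would then involve indexing (or partitioning) on `device_id, recorded_at`.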
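The orchestration concerns listed above (retries, backfills, idempotency) can be sketched in plain Python. A real Airflow DAG needs a running Airflow deployment, so this sketch only mimics the behaviour that Airflow's `retries`/`retry_delay` settings and `dags backfill` command provide; all task and date values are hypothetical:

```python
from datetime import date, timedelta
import time

def load_partition(run_date, attempts):
    """Hypothetical idempotent load for one logical date.
    Fails twice for 2024-01-02 to demonstrate retry behaviour."""
    attempts[run_date] = attempts.get(run_date, 0) + 1
    if run_date == date(2024, 1, 2) and attempts[run_date] < 3:
        raise RuntimeError("transient failure")
    return f"loaded {run_date.isoformat()}"

def run_with_retries(task, run_date, attempts, max_retries=3, base_delay=0.0):
    # Airflow's task-level retry settings behave like this loop.
    for attempt in range(1, max_retries + 1):
        try:
            return task(run_date, attempts)
        except RuntimeError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

def backfill(start, end, attempts):
    # A backfill replays one idempotent run per logical date, much as
    # Airflow does for each execution date in the requested window.
    results = []
    d = start
    while d <= end:
        results.append(run_with_retries(load_partition, d, attempts))
        d += timedelta(days=1)
    return results

attempts = {}
results = backfill(date(2024, 1, 1), date(2024, 1, 3), attempts)
print(results)
```

The key design point is idempotency: because each run writes exactly one date partition, retries and backfills can safely re-execute without duplicating data.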
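A minimal sketch of the pandas-based transformation and validation work, assuming a made-up experiment-metadata batch (the column names and quality rules are illustrative, not a real IBM Quantum schema):

```python
import pandas as pd

# Illustrative batch of experiment metadata; columns are hypothetical.
df = pd.DataFrame({
    "job_id":   ["j1", "j2", "j2", "j3"],
    "backend":  ["qpu_a", "qpu_a", "qpu_a", None],
    "fidelity": [0.97, 0.95, 0.95, 1.42],  # 1.42 is out of range
})

def validate(df):
    """Return counts of data-quality issues found in the batch."""
    return {
        "duplicate_job_ids":     int(df["job_id"].duplicated().sum()),
        "missing_backend":       int(df["backend"].isna().sum()),
        "fidelity_out_of_range": int((~df["fidelity"].between(0.0, 1.0)).sum()),
    }

issues = validate(df)
print(issues)

# A pipeline step might quarantine bad rows rather than fail outright:
clean = df[~df["job_id"].duplicated()
           & df["backend"].notna()
           & df["fidelity"].between(0.0, 1.0)]
```

Emitting the issue counts as pipeline metrics (rather than silently dropping rows) is what makes quality regressions visible over time.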
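The streaming concepts above (topics, partitions, consumer groups, keyed ordering) can be simulated without a broker. This sketch mimics, in plain Python, how Kafka spreads a topic's partitions across a consumer group and routes keyed messages; Kafka's real assignors and murmur2 key hashing differ in detail, and the topic and worker names are hypothetical:

```python
import zlib

def assign_partitions(num_partitions, consumers):
    # Each partition of a topic is owned by exactly one consumer in the
    # group, so the group collectively reads every event exactly once.
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

def partition_for(key, num_partitions):
    # Keyed messages: the same device_id always hashes to the same
    # partition, preserving per-device ordering (CRC32 stands in for
    # Kafka's murmur2 here).
    return zlib.crc32(key.encode()) % num_partitions

# Six partitions of a hypothetical "device-telemetry" topic shared by
# a consumer group of two workers.
assignment = assign_partitions(6, ["worker-1", "worker-2"])
print(assignment)
```

Schema evolution sits on top of this: producers and consumers agree on a versioned event schema (often via a schema registry) so that partition consumers can keep reading as fields are added.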

Preferred Technical And Professional Experience

  • Experience with Lakehouse solutions and architectures, including IBM watsonx.data.
  • Experience with distributed analytics engines such as Presto/Trino or Apache Spark.
  • Familiarity with data modeling techniques for analytical and reliability engineering use cases.




