Software Engineer

Hinckley
5 days ago

Join a pioneering startup using cutting-edge data technology to drive real-world impact on climate and infrastructure, where your code helps power the path to net zero.

Indeximate is a rapidly growing, VC-backed startup focused on reducing the barriers to net zero using the wealth of data that fibre optic sensing can provide. We permanently instrument the subsea power cables that carry our vital electricity supplies, and we use this data to reduce the risk of those cables failing.

Our data has a myriad of other uses: monitoring the environment and the weather, tracking seabed mobility, monitoring marine mammals, detecting vessels and much more. One of our core goals is liberating these multiple measurements and delivering low-cost sensing as a service direct to the desktop.

With proprietary IP in data compression and analytics at the heart of our technology, we are now looking for a talented Software Engineer to help accelerate our growth and bring this ambitious vision to life.

The Role

In this role you will work on the cloud software infrastructure that processes uploaded data and turns it into actionable information for clients, including implementing data-processing algorithms in Python. Our target deployment environment is Google Cloud Platform, using BigQuery and Cloud Run.
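By way of illustration only (this is a hypothetical sketch, not Indeximate's actual pipeline, algorithms, or APIs), cloud-side processing of this kind often reduces raw sensing time-series into compact per-channel summaries before loading them into a warehouse such as BigQuery:

```python
from math import sqrt
from statistics import mean


def summarise_channels(readings: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Reduce raw per-channel time-series into compact summary rows.

    `readings` maps a channel id (e.g. a position along a cable) to its
    samples for one batch. The returned summaries are small enough to be
    loaded into a warehouse table for downstream analytics.
    """
    summary = {}
    for channel, samples in readings.items():
        if not samples:
            continue  # skip channels with no data in this batch
        summary[channel] = {
            "mean": mean(samples),
            "rms": sqrt(mean(s * s for s in samples)),
            "peak": max(abs(s) for s in samples),
            "n": float(len(samples)),
        }
    return summary
```

In a Cloud Run service, a function like this would typically run per uploaded batch, with the resulting rows streamed into a BigQuery table; the names and structure here are purely illustrative.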

The successful candidate will be a key member of the fast-growing Indeximate team, working with Data Scientists and Software Engineers to turn the terabytes of data generated each day into information with a huge range of applications. Sensing data covering thousands of kilometres of assets worldwide needs to be stored and processed efficiently.

Key Accountabilities:

Data flow management and database optimisation
Development of scalable cloud-based Python implementations of cable health risk algorithms
Development of system monitoring and alerting for internal purposes
Ensure web security protocols are implemented and tightly adhered to
Testing of algorithm implementations against test datasets

Your Experience & Qualifications

You will be a UK citizen holding a degree or postgraduate qualification in a relevant subject (Computer Science, Data Science, Software Engineering, etc.), backed by at least three years of post-degree experience in a commercial cloud computing environment, using Python and working with large scientific datasets.

We welcome applications from part-time and full-time workers. The role will involve regular but infrequent travel.

Your Skills

We are a cloud-based data science company, and this role sits at the heart of that work, so we expect candidates to have a clearly demonstrated skillset in implementing cloud-based solutions. In addition, we'd love to hear from candidates with:

Ability to implement cloud-based analytics solutions (Google BigQuery preferred; essential)
Skilled in Python, data science computing, and cloud integration (essential)
Demonstrable knowledge of key cloud security requirements and protocol implementation
Ability to assess and integrate new technologies
Self-motivated with a desire to improve products and technology
Ability to work independently as well as within a small team
Rigorous approach to testing and code quality
Comfortable with remote working

Salary and Benefits:

Competitive salary (£60,000 – £70,000 DOE)
Company share ownership in a fast-growing startup
Company life insurance policy
25 days annual leave
Remote working
Travel, food and drinks are fully covered for all team meet ups

Apply directly to express your interest! We look forward to hearing from you.
