Data Engineer

Cortech Talent Solutions Ltd
Glasgow
1 day ago

Role: Data Engineer

Location: Glasgow

Hybrid Work: 3 days per week in the office, 2 days WFH

Salary: Competitive, around £50,000 - £60,000

Tech stack: Python, AWS, CI/CD, ETL, Data Warehousing


We CANNOT sponsor or accept anyone on a PSW or Graduate Visa.


**This Role is Exclusive to Cortech so you MUST apply via this advert**



We are looking for an AI engineer who is capable of deploying and maintaining infrastructure:


  • Python (must)
  • API design and frameworks - FastAPI ideally (Flask or Django, in that order of preference, would also be a good indication)
  • Experience with AWS ideally (Azure or GCP would also be a good indication)
  • Experience with Infrastructure as Code - CDK ideally (Terraform or Serverless also considered)


Great to have but can be taught:


  • Data and stream processing - AWS Firehose and ETL platforms
  • Experience with authentication frameworks
  • CI/CD - GitHub Actions (GitLab, TeamCity, CircleCI)




This is data-related software development: you will be responsible for the whole lifecycle, including developing endpoints for the API data pipeline.
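To give a flavour of the extract-transform-load pattern behind this kind of API data pipeline, here is a minimal, hypothetical sketch using only the Python standard library. All function and field names are illustrative, not taken from the role itself:

```python
import json

# Hypothetical minimal ETL sketch: extract raw JSON records,
# transform them (clean and filter), and load them into a store.

def extract(raw_lines):
    """Parse newline-delimited JSON into dicts, skipping malformed rows."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # in production you would log or dead-letter this row
    return records

def transform(records):
    """Keep only records with a positive 'value', normalising the fields."""
    return [
        {"id": r["id"], "value": float(r["value"])}
        for r in records
        if "id" in r and float(r.get("value", 0)) > 0
    ]

def load(records, store):
    """Append transformed records to an in-memory store (a stand-in for a warehouse)."""
    store.extend(records)
    return len(records)

raw = ['{"id": 1, "value": "3.5"}', 'not json', '{"id": 2, "value": "-1"}']
store = []
loaded = load(transform(extract(raw)), store)
```

In the real role the extract stage would read from a service such as AWS Firehose or S3 and the load stage would write to a data warehouse, but the shape of the pipeline is the same.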


You must be able to write infrastructure code in Terraform or AWS CDK, though this can be taught.


We design and develop across a full stack of disciplines – Mechanical, Electronic, Electrical and Software Engineering. Within the Digital team we develop software for IoT edge devices, cloud services, frontend UI, AI/ML models in computer vision, and data analysis.


We are seeking a talented and enthusiastic Data Engineer to join our AI/ML team. We are a medium-sized enterprise, so you will be working closely with everyone in the business. If this kind of direct visibility, and the opportunity to shine through your collaboration and merit, appeals to you, this is the place for you.



As a Data Engineer, you will have the opportunity to work closely with experienced professionals and gain valuable hands-on experience across the entire product development lifecycle.




Responsibilities of the role


• Develop, deploy and validate high-performing machine learning models for computer vision applications, such as image classification, object detection, image segmentation, and video analysis.

• Conduct thorough data analysis, feature engineering, and model selection to optimize model performance and accuracy.

• Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to translate business requirements into technical specifications and deliver impactful solutions.

• Develop and maintain robust and scalable machine learning pipelines using AWS services (e.g., SageMaker, EC2, S3, Lambda) and other relevant technologies.

• Stay abreast of the latest advancements in computer vision and machine learning research and explore new opportunities to apply these innovations to our business.

• Contribute to the development and improvement of our machine learning infrastructure and best practices.
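As one small, hypothetical illustration of the feature-engineering work listed above, min-max scaling is a common preprocessing step for numeric features such as pixel intensities. This sketch is illustrative only and does not reflect the company's actual codebase:

```python
# Hypothetical example of one common feature-engineering step:
# min-max scaling a numeric feature into the [0, 1] range.

def min_max_scale(values):
    """Scale a list of numbers linearly to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: scaling is undefined, return zeros
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# 8-bit pixel intensities, as might feed a computer-vision model
pixel_intensities = [0, 51, 102, 255]
scaled = min_max_scale(pixel_intensities)
```

In practice this would be done with a library such as NumPy or scikit-learn over whole arrays, but the underlying arithmetic is exactly this.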





Experience & Skills:



• Master's or Ph.D. in Computer Science, Computer Engineering, or a related field with a strong focus on machine learning.

• Solid understanding of deep learning concepts and architectures (e.g., CNNs, RNNs, Transformers) and their practical applications.

• Proficiency in Python and experience with popular machine learning libraries (e.g., TensorFlow, PyTorch).

• Strong experience with AWS services, including SageMaker, EC2, S3, Lambda, etc.

• Experience with cloud-native development and deployment methodologies.

• Ability to work independently and as part of a collaborative team.

• A strong passion for machine learning and a desire to continuously learn and grow.




General Skills

• Excellent problem-solving skills and the ability to think creatively to overcome technical challenges.

• A passion for learning and staying updated with the latest industry trends and best practices.

• Strong communication and teamwork skills, with the ability to collaborate effectively with cross-functional teams; openness and transparency should be your default.

• Desire to take the initiative and self-start when necessary.

• Flexibility: we pride ourselves on doing what is necessary to make the whole organisation successful.




Bonus Points:


• Knowledge of MLOps principles and best practices.

• Experience with distributed computing and large-scale data processing.




How to apply?


Please send a CV to
