Senior Data Engineer (Big Data/ Hadoop/ Spark) (Financial Services)

4757777
  • Job type

    Contract
  • Location

    London
  • Working Pattern

    Flexible Working, Full-time
  • Specialism

    Data & Advanced Analytics
  • Industry

    Banking & Financial Services
  • Pay

    £700-850 Per Day (Inside IR35)
  • Closing date

    27 Feb 2026

Senior Data Engineer (Big Data/ Hadoop/ Spark) Required For Renowned Financial Organisation

Your new company
Working for a renowned financial services organisation

Your new role
We’re looking for a Senior Data Engineer to design and deliver scalable, high-quality on-prem data solutions that power analytical and business insights, spanning everything from high-level platform architecture to low-level technical design. This is a hands-on role suited to someone with strong data engineering and big data expertise, ideally gained within financial services.
Joining a leading commodities, metals, trading, and exchange group, you will support a strategic metals initiative focused on reducing on-prem platform costs and modernising legacy ETL processes.

You’ll help design and build a new on‑prem data platform aligned to the metals strategy while developing and maintaining scalable data pipelines and analytics infrastructure. Using Hadoop, Big Data, and Spark technologies, you will ensure data quality through automated validation, monitoring, and testing. You will also enable seamless integration across data warehouses and data lakes, contributing to a robust, scalable, and resilient enterprise data ecosystem.

What you'll need to succeed

  • Extensive data engineering expertise with Big Data technologies.
  • Experience designing and building on‑prem data platforms, from high‑level architecture to detailed technical design.
  • Hands‑on experience configuring multi‑node Hadoop clusters, including resource management, security, and performance tuning.
  • Strong Big Data engineering background using Apache Airflow, Spark, dbt, Kafka, and Hadoop ecosystem tools.
  • Knowledge of RDBMS systems (PostgreSQL, SQL Server) and familiarity with NoSQL/distributed databases such as MongoDB.
  • Proven delivery of streaming pipelines and real‑time data processing solutions.
  • A track record of improving job efficiency and reducing runtimes through Apache Spark optimisation and development.
  • Some experience with containerisation (Docker, Kubernetes) and CI/CD pipelines.
  • Experience replacing legacy ETL tools (e.g., Informatica) with modern data engineering pipelines and platform builds.
  • Proven background working within financial services environments.

What you'll get in return
Flexible working options available.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.

Apply for this job

Talk to Dimitri Lynch, the specialist consultant managing this position

Located in London-City, 5th Floor, 107 Cheapside, Telephone 020 3465 0080
Click here to access our Privacy Policy, which provides detailed information on how we use and protect your personal information, and your rights in relation to this.