Data Engineer (Data Modeling)

Job reference: 4772781
  • Job type: Contract
  • Location: Cambridge
  • Working pattern: Full-time
  • Specialism: Data & Advanced Analytics
  • Industry: Engineering
  • Pay: Flexible

We have a contract opportunity for a Data Engineer (Data Modeling) with our leading client.

Job Overview:
We are seeking a motivated and detail-oriented Data Engineer with a passion for designing and delivering high-quality data solutions in Databricks. You will be responsible for building and optimising data pipelines that form the foundation of our reporting and analytics ecosystem.

Using your technical expertise, you will design and maintain efficient ETL processes to integrate data from systems such as ServiceNow, JIRA, and Dynatrace, ensuring accuracy, performance, and scalability. You will thrive in a fast-paced, collaborative environment and be eager to learn, innovate, and continuously improve our data platform.
If you are passionate about building reliable data foundations and enabling insights that drive smarter decisions — this is the ideal role for you!

Contract – 6 months
Location – Cambridge (2 days onsite)
Rate – Flexible (inside IR35)

Responsibilities:
  • Design, build, and maintain scalable ETL pipelines in Databricks to integrate data from multiple business systems such as ServiceNow, JIRA, and ADO.
  • Optimise data workflows for performance, scalability, and reliability.
  • Implement data validation and quality checks to ensure trustworthy reporting in downstream tools such as Power BI.
  • Develop backend data models and schema designs to support analytics and reporting.
  • Collaborate with the Staff Data Engineer and Visualisation Developer to align technical delivery with reporting needs.
  • Manage code in Git and support CI/CD processes for Databricks deployments.
  • Contribute to data lineage documentation, standards, and governance best practices.

Required Skills and Experience:
  • Proven experience in building and maintaining ETL pipelines using Databricks.
  • End-to-end data modelling experience.
  • Strong knowledge of SQL and Python for data transformation and automation.
  • Proven experience in data modelling, schema design, and data quality validation.
  • Solid understanding of data performance tuning and pipeline optimisation in Databricks.
  • Experience working with Git-based version control and collaborative development workflows.
  • Strong analytical and problem-solving skills, with an eye for efficiency and accuracy.
  • Excellent communication and collaboration skills, comfortable working with both technical and non-technical partners.

“Nice To Have” Skills and Experience:
  • Familiarity with Agile delivery methods and iterative development practices.
  • Knowledge of data governance and data lineage documentation standards.
  • Exposure to automation and CI/CD frameworks within Databricks.


What you need to do now

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.

If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.


Talk to Anushka Aggarwal, the specialist consultant managing this position

Located in Leicester, 1st & 2nd Floor, 2 Colton Square. Telephone: +44 333 010 7430
Our Privacy Policy provides detailed information on how we use and protect your personal information, and your rights in relation to this.