
Principal Azure Data Engineer (Databricks)

Capco UK - London
On-site | Permanent



About the job

Lead Principal Azure Data Engineer (Databricks)

Location: London (Hybrid) | Practice Area: Technology & Engineering | Type: Permanent

Shape the Future of Financial Services with Data Innovation

The Role

As a Lead Principal Azure Data Engineer specializing in Databricks at Capco, you will spearhead the architectural design and implementation of cutting-edge enterprise data solutions within the Azure ecosystem. Your expertise will drive the creation of both streaming and batch data pipelines, enabling our clients to advance their data capabilities. Collaborating with diverse teams, you will provide strategic guidance on best practices and technical approaches, ensuring adherence to the highest quality and security standards throughout all deployments.

What You’ll Do

  • Lead comprehensive delivery of Databricks-driven solutions within Azure environments.

  • Design secure, scalable data pipelines utilizing Delta Lake, Spark Structured Streaming, and Unity Catalog.

  • Establish and uphold engineering best practices across the data lifecycle.

  • Collaborate with clients to design solution strategies and robust data governance frameworks.

  • Mentor engineering teams and actively participate in internal capability enhancement programs.
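The pipeline work described above can be illustrated with a minimal sketch of a Spark Structured Streaming job landing data in Delta Lake. This is not code from the posting; the paths, schema, and table locations are illustrative placeholders.

```python
# Minimal sketch of a streaming Delta Lake pipeline of the kind this role
# describes. Paths and the event schema are placeholders, not real details.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-stream-sketch").getOrCreate()

# Read a stream of JSON events from a landing zone (placeholder path).
events = (
    spark.readStream
    .format("json")
    .schema("event_id STRING, amount DOUBLE, ts TIMESTAMP")
    .load("/mnt/landing/events")
)

# Light transformation: drop malformed rows, stamp ingestion time.
cleaned = (
    events
    .where(F.col("event_id").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())
)

# Write to a Delta table; the checkpoint gives exactly-once delivery.
(
    cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start("/mnt/delta/events")
)
```

In a Databricks workspace the Delta output path would typically be a Unity Catalog-governed table rather than a raw storage path.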

What We’re Looking For

  • Demonstrated proficiency in the Databricks platform, including Unity Catalog, Delta Lake, and job orchestration.

  • Advanced skills in Python, PySpark, and distributed data processing frameworks.

  • Extensive experience developing CI/CD pipelines with tools such as Azure DevOps, Jenkins, and GitHub Actions.

  • Thorough understanding of data lakehouse concepts, data modeling, and GDPR-compliant design methodologies.

  • Proven track record in creating robust, production-grade data pipelines from ingestion to serving.

Bonus Points For

  • Strong client-facing and commercial acumen to support pre-sales and RFP engagements.

  • Experience in coaching and mentoring engineering teams.

  • Development skills in Scala or Java.

  • Familiarity with handling PII and sensitive data in compliance with regulations.

About Capco

Capco is a global technology and consulting firm dedicated to the financial services industry. We strive to innovate and transform our clients' businesses through cutting-edge technology solutions and expert guidance. Our collaborative culture and commitment to excellence empower our team members to make a significant impact in the world of finance.
