
Senior Azure Data Engineer (Databricks)

Capco UK - London
On-site, Full-time




Qualifications

Proven experience with Databricks and pipeline orchestration; strong Python, PySpark, and SQL skills; expertise in data pipeline development; familiarity with lakehouse architecture; and knowledge of DevOps tools.

About the job

Join Capco as a Senior Azure Data Engineer (Databricks)

Location: London (Hybrid) | Practice Area: Technology & Engineering | Type: Permanent

Transform data engineering within a prestigious consulting firm

Your Role

As a Senior Azure Data Engineer (Databricks) at Capco, you will architect, develop, and deploy secure, scalable data pipelines on the Databricks platform on Azure. You'll work closely with clients to understand their data requirements and craft innovative solutions that drive business transformation. You will also help shape best practices and foster continuous improvement across multidisciplinary engineering teams.

Key Responsibilities

  • Design and implement resilient data pipelines leveraging Delta Lake, Spark Structured Streaming, and Unity Catalog

  • Develop real-time event-driven architectures employing tools like Kafka and Azure Event Hubs

  • Utilize DevOps methodologies to construct CI/CD pipelines via Azure DevOps, Jenkins, or GitHub Actions

  • Collaborate with clients and stakeholders to translate their data needs into strategic technical solutions

  • Promote clean coding practices, data lifecycle optimization, and adherence to software engineering best practices

Ideal Candidate Profile

  • Demonstrated hands-on experience with the Databricks platform and orchestration techniques

  • Proficiency in Python, PySpark, and SQL, alongside a solid understanding of distributed data systems

  • Expertise in crafting full lifecycle data pipelines encompassing ingestion, transformation, and serving stages

  • Familiarity with data lakehouse architecture, schema design, and GDPR-compliant strategies

  • Practical knowledge of DevOps tools and CI/CD workflows

Preferred Qualifications

  • Experience with Scala or Java development

  • Knowledge of Cloudera, Hadoop, Hive, and the Spark ecosystem

  • Understanding of data privacy laws, including GDPR, and familiarity with handling sensitive information

  • Ability to swiftly learn and adapt to emerging technologies to fulfill business objectives

About Capco

Capco is a global technology and consulting firm dedicated to the financial services industry. We are committed to harnessing technology to transform businesses and drive innovation.
