Lead Big Data Engineer - DBT/Redshift (Contract)
Two Circles
Contract | CA$450/day - CA$500/day | On-site | Vancouver, British Columbia, Canada

Join Two Circles, a pioneering Sports & Entertainment Marketing firm dedicated to enhancing audience engagement and revenue growth. Our deep understanding of fan behavior allows us to help clients leverage insights to expand their reach and increase their revenues, both directly to consumers and through business-to-business channels. With a client roster of over 1,000 global brands, including the English Premier League, Red Bull, UEFA, VISA, the NFL, Nike, and Amazon, we pride ourselves on delivering exceptional results across the sports and entertainment sectors. Our team of over 1,000 professionals operates from 15 offices worldwide.

We are currently seeking a Lead Data Engineer for a 12-month contract. The role requires strong streaming experience and offers a competitive daily rate of $450 - $500 CAD.

ROLE OVERVIEW

As a Lead Data Engineer, you will be integral to a client-focused data pod, responsible for delivering large-scale data engineering solutions in a cloud-native AWS environment. This is a hands-on role: you will shape the streaming architecture while actively participating in its implementation. The stack is AWS-centric (Redshift, S3, Glue, Step Functions, Lambda, EMR), with DBT serving as the transformation framework.
We are in the process of integrating streaming data from GCP sources into our AWS data platform. Your responsibilities will include defining engineering standards for data modeling, DBT implementation, testing, CI/CD, and production resiliency, in close collaboration with the client's data team.

WHAT YOU'LL BE DOING

Streaming Architecture & Distributed Systems
- Lead the architectural design for streaming data ingestion from GCP into AWS.
- Develop robust ingestion frameworks incorporating error handling, retry strategies, monitoring, and failure isolation.
- Implement distributed processing pipelines using Spark/PySpark or similar technologies.

Data Warehousing & DBT Leadership
- Build and maintain scalable data warehouses and ETL/ELT processes using DBT models in Amazon Redshift.
- Design and execute DBT projects, including macros, tests, documentation, and reusable modeling patterns.
- Optimize Redshift queries and DBT performance to improve warehouse efficiency and cost-effectiveness.

Engineering Standards & Quality
- Establish and uphold best practices for:
  - Data modeling
  - Version control (Git workflows)
  - CI/CD pipelines for DBT deployments
  - Automated testing at the model, transformation, and pipeline levels
- Embed comprehensive testing into every DBT model (schema tests, custom tests, data validation checks).
Mar 2, 2026