About the job
We are looking for an exceptional Data Engineer: a technical leader who thrives on hard problems and excels at coding. If you are someone who:
Acts as the definitive technical authority within your team
Solves complex technical problems with ease
Delivers intricate features at remarkable speed
Writes code that exemplifies best practices and clarity
Is dedicated to enhancing the overall quality of the codebase
Then we want to hear from you!
We are not looking for just anyone; we want developers who are confident in their skills and have proven their excellence.
What you will be responsible for:
Designing, optimizing, and expanding data pipelines and infrastructure using Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
Creating, operationalizing, and monitoring data ingestion and transformation workflows, including DAGs, alerting, retries, SLAs, lineage, and cost management.
Partnering with platform and AI/ML teams to streamline ingestion, validation, and real-time compute workflows; contributing to the development of a feature store.
Incorporating pipeline health metrics into engineering dashboards to ensure complete visibility and observability.
Modeling data and executing efficient, scalable transformations within Snowflake and PostgreSQL.
Establishing reusable frameworks and connectors to standardize internal data publishing and consumption.
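To give candidates a concrete flavor of the workflow-reliability work above (this sketch is illustrative only, not part of the role description): orchestrators such as Airflow retry failed tasks with a delay between attempts. A minimal stdlib-only Python sketch of that retry-with-backoff pattern, with hypothetical names (`with_retries`, `ingest_batch`), might look like:

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.1):
    """Retry a flaky task with exponential backoff between attempts,
    similar to what a workflow scheduler does for failed task runs."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # out of retries: surface the failure for alerting
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# Hypothetical flaky ingestion step: fails twice, then succeeds.
calls = {"n": 0}

@with_retries(max_attempts=3, base_delay=0.01)
def ingest_batch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient source error")
    return "loaded"

result = ingest_batch()
```

In a real pipeline the retry count, delay, and SLA would be declared on the orchestrator's task definition rather than hand-rolled like this.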

