
Senior Data Engineer (Python/PySpark/AWS Glue/Athena)

wizdaa | Remote | Full-time


Experience Level

Senior

Qualifications

Required Qualifications:

  • 4+ years of hands-on experience in production data engineering.

  • Extensive knowledge of Apache Airflow, AWS Glue, PySpark, and Python-based data pipelines.

  • Strong SQL expertise and practical experience with PostgreSQL in production settings.

  • In-depth understanding of cloud-native data workflows (preferably AWS) and pipeline observability (metrics, logging, tracing, alerting).

  • Demonstrated ability to manage pipelines from inception to deployment: design, implementation, testing, monitoring, and improvements.

Preferred Qualifications:

  • Experience with Snowflake performance tuning and cost optimization strategies.

  • Familiarity with real-time processing (e.g., streaming ingestion, incremental models, CDC).

  • Hands-on experience with a backend TypeScript framework (e.g., NestJS) is highly advantageous.

  • Knowledge of data quality frameworks and schema management tools.

  • Background in building internal developer platforms or data components.

About the job

We are seeking a highly skilled and innovative Data Engineer to join our dynamic team. As a key technical leader, you will:

  • Be the go-to expert in your team, guiding projects with your technical acumen.

  • Conquer complex challenges that others find daunting.

  • Deliver complex features quickly and reliably.

  • Produce exceptionally clean and maintainable code.

  • Enhance the quality of our entire codebase.

If you're an exceptional developer with a proven track record, we want to hear from you! This role requires a unique blend of skills and experience, designed for the best in the field.

Responsibilities:

  • Develop, optimize, and scale data pipelines and infrastructure utilizing technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.

  • Design, operationalize, and oversee ingestion and transformation workflows, including DAGs, alerting, retries, SLAs, lineage, and cost controls.

  • Partner with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, contributing towards a feature store.

  • Integrate pipeline health and metrics into engineering dashboards for enhanced visibility and observability.

  • Model data and execute efficient, scalable transformations using Snowflake and PostgreSQL.

  • Create reusable frameworks and connectors to standardize internal data publishing and consumption.

About wizdaa

At wizdaa, we pride ourselves on being at the forefront of technology. We foster an environment that not only promotes innovation but also encourages collaboration among exceptional talents. Join us to work with cutting-edge tools, tackle challenging problems, and make a meaningful impact.
