
Data Engineer LATAM - Expert in Python, PySpark, AWS Glue

pridelogic · Remote · Full-time




Experience Level

Mid to Senior

Qualifications

Required Qualifications:

  • A minimum of 4 years of experience in data engineering.

  • Extensive hands-on experience with Apache Airflow, AWS Glue, PySpark, and Python-based data pipelines.

  • Strong SQL proficiency and operational experience with PostgreSQL in active environments.

  • A solid understanding of cloud-native data workflows, preferably within AWS, and expertise in pipeline observability (including metrics, logging, tracing, and alerting).

  • Demonstrated experience managing data pipelines end-to-end: from design and implementation to testing, deployment, monitoring, and iterative improvement.

Preferred Qualifications:

  • Experience with Snowflake performance tuning, including warehouses, partitions, clustering, and query profiling.

  • Familiarity with real-time or near-real-time processing techniques such as streaming ingestion, incremental models, and Change Data Capture (CDC).

  • Hands-on experience with backend TypeScript frameworks, particularly NestJS, is highly desirable.

  • Knowledge of data quality frameworks, contract testing, or schema management (e.g., Great Expectations, dbt tests, OpenAPI/Protobuf/Avro).

  • Experience building internal developer platforms or components for data platforms, including connectors, SDKs, and CI/CD processes.

About the job

Are you an exceptional Data Engineer with a flair for problem-solving and a passion for optimizing data processes? At pridelogic, we are on the lookout for a technical powerhouse to join our innovative team. If you pride yourself on being the technical leader who consistently delivers complex features ahead of schedule, and you write code that stands as an example for others, we want to hear from you!

This position is designed for those who know they are extraordinary in their field. We seek developers with a proven track record of success in data engineering.

Your Responsibilities:

  • Develop, optimize, and scale data pipelines and infrastructure utilizing technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.

  • Design, implement, and monitor data ingestion and transformation workflows including DAGs, alerting systems, retries, SLAs, lineage, and cost management.

  • Collaborate with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, working toward a feature store.

  • Enhance engineering dashboards with pipeline health metrics and observability features for comprehensive insight.

  • Model data and execute efficient, scalable transformations in Snowflake and PostgreSQL.

  • Create reusable frameworks and connectors to standardize internal data publishing and consumption processes.
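To give candidates a concrete sense of the retry and alerting work described above: a minimal, hypothetical retry-with-backoff helper is sketched below. This is illustrative only (names like `run_with_retries` are not part of our stack; in production, Airflow's built-in `retries` and `retry_delay` task parameters cover this), but it reflects the kind of failure-handling reasoning the role involves.

```python
import time


def run_with_retries(task, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Run a pipeline task callable, retrying with exponential backoff.

    Hypothetical helper for illustration: `task` is any zero-argument
    callable; `sleep` is injectable so the backoff can be tested.
    """
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                # Out of retries: re-raise so monitoring/alerting can fire.
                raise
            # Exponential backoff between attempts: base, 2x, 4x, ...
            sleep(base_delay * (2 ** attempt))
```

In an orchestrated pipeline, the equivalent knobs live in the scheduler configuration rather than in application code; the point is the same trade-off between transient-failure tolerance and fast SLA breach detection.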

About pridelogic

At pridelogic, we are committed to leveraging cutting-edge technology to enhance data-driven decision-making. Our remote-first culture fosters innovation and collaboration, allowing you to work from anywhere while contributing to transformative projects. Join us and be part of a team that values your expertise and encourages continuous growth.
