About the job
About Nelo
Nelo is a leading consumer fintech and e-commerce company in Mexico, with over $500 million in annualized GMV and more than $75 million in annual revenue. Our vision is to increase the purchasing power of consumers across Latin America by building a modern alternative to traditional credit cards.
Nelo has raised more than $40 million in venture capital from investors including Homebrew, Two Sigma Ventures, and Susa Ventures, and has a $100 million asset credit facility from Victory Park Capital.
Our small team is made up of experienced professionals from leading technology companies such as Uber, Amazon, Rappi, and DiDi. We pride ourselves on speed, intellectual rigor, and operational efficiency.
Nelo operates offices in both Mexico City and New York City.
About the Role
We are looking for a Senior Data Engineer to design, build, and operate the core data platform that powers analytics, machine learning, and decision-making at Nelo. This is a hands-on role for an experienced engineer who works across the entire data lifecycle, from ingestion and transformation to reliability, scalability, and machine learning enablement.
In this role, you will work closely with Analytics, Product, Engineering, Marketing, Risk, and Machine Learning teams to ensure our data infrastructure is reliable, scalable, and easy to use as Nelo grows.
Key Responsibilities
Own and evolve the data platform: Design, build, and maintain scalable, reliable data pipelines and datasets that power analytics, reporting, and machine learning across the company.
Build and operate ETL/ELT pipelines: Develop production-grade pipelines that bring data from transactional systems, third-party providers, and event streams into our data warehouse and feature store.
Enable analytics and business teams: Partner with Data Analytics and stakeholders to ensure data is well-modeled, documented, and accessible for self-service analysis.
Support machine learning workflows: Build and maintain feature pipelines and feature stores that support model training, validation, and online and offline inference.
Ensure data quality and reliability: Implement data quality checks, monitoring, alerting, and SLAs to maintain trust in our data products.

