Key Responsibilities

Legacy Data Discovery & Data Model Transformation
- Engage in methodical system inventory efforts to document:
  - Legacy file-based storage structures
  - SAS dataset dependencies
  - Subsystem data flows
  - Manual gating and handoff processes
- Evaluate legacy storage models and design future-state data models aligned with AWS Cloud Native architecture.
- Transition file-driven batch dependencies to:
  - API-based ingestion
  - Event-driven workflows
  - Database-backed storage (e.g., Aurora/Postgres)
- Establish canonical data schemas and transformation standards.

Cloud-Native Data Architecture Design
- Construct scalable AWS data pipelines utilizing services such as S3, Glue, Lambda, EventBridge, SNS/SQS, Aurora/Postgres, Batch, and Athena.
- Develop data ingestion, staging, transformation, and validation workflows.
- Implement schema management, versioning, and data lineage practices.
- Optimize data storage for performance, scalability, and cost efficiency.
- Support serverless and container-based data processing architectures.

Expert Python-Based Data Engineering
- Design and develop sophisticated Python-based data transformation and validation pipelines.
- Create modular, reusable data processing components.
- Enhance large-scale data manipulation for distributed execution.
- Develop high-performance ETL/ELT frameworks.
- Integrate automated validation checks within data pipelines.
- Expert-level Python proficiency is required, particularly for:
  - High-volume data processing
  - Data validation logic
  - Modular data engineering frameworks

Data Accuracy, Validation & Visibility
- Design and implement automated data validation processes to ensure...
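As a rough illustration of the "modular, reusable" validation components mentioned above, a minimal Python sketch might look like the following. The names (`ValidationResult`, `require_columns`) and the row-dict record shape are purely hypothetical, not part of the role's actual codebase:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a small, reusable validation check that could run
# inside a pipeline stage. All names here are illustrative assumptions.
@dataclass
class ValidationResult:
    passed: bool
    errors: list = field(default_factory=list)

def require_columns(records, required):
    """Flag any record missing one of the required columns."""
    errors = []
    for i, row in enumerate(records):
        missing = required - row.keys()
        if missing:
            errors.append(f"row {i}: missing {sorted(missing)}")
    return ValidationResult(passed=not errors, errors=errors)

# Example usage: the second record lacks the "amount" column.
result = require_columns(
    [{"id": 1, "amount": 9.5}, {"id": 2}],
    required={"id", "amount"},
)
```

Keeping each check as a standalone function returning a uniform result type is one common way to make checks composable across ETL/ELT stages.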
About the job
As the Lead Data Engineer, you will spearhead the design and development of advanced, scalable data architectures to facilitate the transition of outdated, file-based analytical systems to modern AWS Cloud Native environments. This pivotal role emphasizes transforming legacy SAS-based data storage models—including flat files, batch outputs, and subsystem-specific data artifacts—into structured, governed, and scalable data frameworks optimized for cloud-native processing.
You will ensure data integrity, performance, and visibility across a comprehensive modernization initiative while providing technical leadership in data modeling, ingestion patterns, validation frameworks, and transparency reporting.
Expert-level proficiency in Python and substantial experience in architecting AWS-based data solutions are essential for this role.
About ignite-33
ignite-33 is at the forefront of data engineering, specializing in transforming legacy systems into modern, cloud-native solutions. We prioritize innovation, scalability, and data integrity in all our projects, and we are committed to empowering our clients with cutting-edge data architectures.