Experience Level
Manager
Qualifications
Proven experience in data engineering with significant expertise in Python and AWS. Experience with data orchestration tools like Airflow. Hands-on experience with Snowflake for data warehousing solutions. Strong leadership skills and experience managing a technical team. Excellent problem-solving abilities and a passion for data.
About the job
We are seeking a talented and experienced Data Engineering Manager to lead our data engineering team at dev2. In this hybrid role, you will be responsible for overseeing the development and implementation of robust data pipelines and architectures using Python, AWS, Airflow, and Snowflake. Your expertise will help drive data-driven decision-making across the organization.
About dev2
At dev2, we are dedicated to leveraging innovative technology solutions to empower businesses. Our team thrives on collaboration, creativity, and a commitment to excellence. Join us and be part of a forward-thinking company that values your contributions and fosters growth.
Similar jobs
Senior Data Engineer (Python/PySpark/AWS Glue/Amazon Athena/SQL/Apache Airflow)
We are on the lookout for an exceptional Data Engineer, a technical leader who thrives on challenges and excels in coding. If you are the person who: acts as the definitive technical authority within your team; solves complex technical problems with ease; delivers intricate features at a remarkable speed; writes code that exemplifies best practices and clarity; and is dedicated to enhancing the overall quality of the codebase, then we want to hear from you! We are not looking for just anyone; we want developers who are confident in their skills and have proven their excellence. What you will be responsible for: Designing, optimizing, and expanding data pipelines and infrastructure leveraging Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake. Creating, operationalizing, and monitoring data ingestion and transformation workflows including DAGs, alerting mechanisms, retries, SLAs, lineage, and cost management. Partnering with platform and AI/ML teams to streamline ingestion, validation, and real-time compute workflows; contributing towards the development of a feature store. Incorporating pipeline health metrics into engineering dashboards to ensure complete visibility and observability. Modeling data and executing efficient, scalable transformations within Snowflake and PostgreSQL. Establishing reusable frameworks and connectors to standardize internal data publishing and consumption.
We are in search of a highly skilled and innovative Data Engineer to join our dynamic team. As a pivotal technical leader, you will: be the go-to expert in your team, guiding projects with your technical acumen; conquer complex challenges that others find daunting; deliver intricate features at an unparalleled pace; produce exceptionally clean and maintainable code; and enhance the quality of our entire codebase. If you're an exceptional developer with a proven track record, we want to hear from you! This role requires a unique blend of skills and experience, designed for the best in the field. Responsibilities: Develop, optimize, and scale data pipelines and infrastructure utilizing technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake. Design, operationalize, and oversee ingestion and transformation workflows, including DAGs, alerting, retries, SLAs, lineage, and cost controls. Partner with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, contributing towards a feature store. Integrate pipeline health and metrics into engineering dashboards for enhanced visibility and observability. Model data and execute efficient, scalable transformations using Snowflake and PostgreSQL. Create reusable frameworks and connectors to standardize internal data publishing and consumption.
We are seeking a top-tier Data Engineer to join our team at wizdaa. If you are a developer who excels in: leading your team with technical expertise; resolving complex challenges that others find difficult; delivering intricate features at an accelerated pace; creating exceptionally clean and maintainable code; and enhancing our codebase with pride and diligence, your skills and experience will help us drive efficiency and innovation in data processing. Key Responsibilities: Develop, enhance, and scale data pipelines and infrastructure utilizing Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake. Design, implement, and monitor data ingestion and transformation workflows, ensuring optimal performance and reliability. Work collaboratively with platform and AI/ML teams to automate data workflows and develop a comprehensive feature store. Integrate health metrics into engineering dashboards for enhanced visibility and operational insight. Model data and execute scalable transformations in Snowflake and PostgreSQL. Create reusable frameworks and connectors to streamline internal data processes.
Are you an exceptional Data Engineer with a flair for problem-solving and a passion for optimizing data processes? At pridelogic, we are on the lookout for a technical powerhouse to join our innovative team. If you pride yourself on being the technical leader who consistently delivers complex features ahead of schedule, and you write code that stands as an example for others, we want to hear from you! This position is designed for those who know they are extraordinary in their field. We seek developers with a proven track record of success in data engineering. Your Responsibilities: Develop, optimize, and scale data pipelines and infrastructure utilizing technologies such as Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake. Design, implement, and monitor data ingestion and transformation workflows including DAGs, alerting systems, retries, SLAs, lineage, and cost management. Collaborate with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows, aiming towards a feature store. Enhance engineering dashboards with pipeline health metrics and observability features for comprehensive insight. Model data and execute efficient, scalable transformations in Snowflake and PostgreSQL. Create reusable frameworks and connectors to standardize internal data publishing and consumption processes.
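The four listings above all lean on the same pipeline-operations vocabulary: DAGs, retries, SLAs, alerting, lineage. For orientation, here is a minimal sketch of where each of those concepts lives in an Airflow DAG, assuming Airflow 2.4 or later with SMTP configured; the DAG name, task names, and alert address are invented for illustration and are not taken from any listing.

```python
# Hypothetical Airflow 2.4+ DAG illustrating the DAG/retry/SLA/alerting
# vocabulary used in the listings above. All names are invented.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    """Placeholder extract step; a real task would pull from a source system."""
    print("extracting orders...")


def load_to_warehouse():
    """Placeholder load step; a real task would write to Snowflake/Postgres."""
    print("loading to warehouse...")


with DAG(
    dag_id="orders_pipeline",                 # the DAG: a named, scheduled graph of tasks
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 2,                         # retries: rerun a failed task before alerting
        "retry_delay": timedelta(minutes=5),
        "email_on_failure": True,             # alerting: notify once retries are exhausted
        "email": ["data-alerts@example.com"], # hypothetical address, assumes SMTP is set up
        "sla": timedelta(hours=1),            # SLA: flag tasks that finish too late
    },
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load                           # task-level lineage: extract feeds load
```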
We are seeking a highly skilled Manager of Data Engineering to lead our data engineering projects at dev2. This is a unique opportunity to work in a hybrid environment where you can collaborate with a talented team while having the flexibility to work remotely. You will be responsible for designing, building, and maintaining our data architecture using technologies such as Python, AWS, Airflow, and Snowflake. Your leadership will guide the successful execution of data strategies and ensure the availability of high-quality data for analysis and decision making.
dev2 is seeking an experienced and innovative Data Engineering Manager to lead our data engineering team in a hybrid work environment. In this role, you will be responsible for overseeing the design, development, and implementation of scalable data pipelines and architecture using technologies such as Python, AWS, Airflow, and Snowflake. Your leadership will drive the data strategy and empower the team to enhance data availability and quality.
Join dev2 as a Manager of Data Engineering, where you will lead a talented team in designing and implementing data solutions using cutting-edge technologies such as Python, AWS, Airflow, and Snowflake. This hybrid role allows you to enjoy a flexible work environment while collaborating closely with stakeholders to drive data initiatives that enhance business operations. Your expertise will help shape data strategies that promote efficiency and innovation within our organization. If you're passionate about data engineering and leadership, we want to hear from you!
Join dev2 as a Data Engineering Manager, where you will lead a dynamic team in harnessing the power of data to drive strategic decisions and innovations. In this hybrid role, you will be responsible for overseeing data engineering projects, ensuring the implementation of best practices in data management and cloud solutions using technologies such as Python, AWS, Airflow, and Snowflake. Your leadership will guide the team in developing scalable data pipelines, optimizing data architecture, and enhancing data quality. Collaborate with cross-functional teams to align data solutions with business objectives, while mentoring junior engineers and fostering a culture of continuous learning.
Join dev2 as a Manager of Data Engineering where you will lead a talented team in designing and implementing data solutions leveraging Python, AWS, Airflow, and Snowflake. This hybrid role offers the flexibility of working remotely while collaborating with your team on-site. In this dynamic position, you will be responsible for guiding the data engineering team through complex data challenges, ensuring optimal performance, and driving innovation in our data architecture. Your leadership will be crucial in fostering a collaborative environment that encourages creativity and efficiency.
Join our innovative team at dev2 as a Data Engineering Manager where you'll lead initiatives in data architecture and engineering. In this hybrid role, you will harness your expertise in Python, AWS, Airflow, and Snowflake to drive strategic projects and enhance our data infrastructure. Your leadership will guide a talented team as we tackle complex data challenges and deliver high-quality solutions that empower our business decisions.
Role overview Captivation seeks a Software Engineer III in Annapolis Junction, MD. This position centers on designing and refining scalable data pipelines and analytics tools. The team values collaboration and works together to overcome technical challenges, delivering solutions that enable data-driven decision making. What you will do Develop and maintain systems using Linux, Bash, and Python Orchestrate data workflows with Apache Airflow Query and manage data with SQL Enhance analytics features using Jupyter Notebook and NumPy Process and integrate data in JSON format Requirements Hands-on experience with Linux, Bash, and Python Familiarity with Apache Airflow, SQL, Jupyter Notebook, NumPy, and working with JSON data Comfortable collaborating with others Interest in solving technical problems Motivated by technology and working with data
A Bit About Us EnterpriseDB (EDB) is at the forefront of providing a data and AI platform that empowers organizations to fully leverage the capabilities of Postgres for transactional, analytical, and AI workloads across any cloud environment. Our mission is to help enterprises mitigate risks, manage costs, and scale efficiently in a data-driven world. With over 1,500 global clients and as a key contributor to the dynamic PostgreSQL community, EDB partners with leading government agencies, financial services, media, and IT companies. Our innovative solutions modernize legacy systems and eliminate data silos while harnessing the power of enterprise-grade open-source technologies. EDB ensures high availability of up to 99.999% with mission-critical capabilities that include robust security, compliance controls, and observability. Learn more at www.enterprisedb.com. Candidate Note: This role is 100% remote for candidates based in EST or CST only. We are excited to welcome a Solutions Engineer to our engineering team. This dynamic position requires a blend of full-stack web development, data pipeline engineering, and cloud infrastructure management. You will play a pivotal role in designing, building, and maintaining applications that include both user-facing interfaces and sophisticated back-end automation. The ideal candidate is a well-rounded technical generalist, adept in multiple programming languages (JavaScript and Python) and enthusiastic about embracing new technologies as our stack evolves. Your Contributions Will Include: Developing and maintaining responsive web applications using Next.js and Node.js.
Full-time|Hybrid|Pennsylvania, Pennsylvania, United States
Role Overview qodeworld is hiring a Senior Data Engineer with deep experience in Informatica and PySpark. This role is based in Pennsylvania and requires working onsite three days each week. What You Will Do Apply 8–10 years of hands-on experience in data engineering and data analysis. Design, develop, and optimize ETL processes using Informatica PowerCenter and Informatica Data Quality (IDQ). Build and maintain large-scale data processing and analytics solutions with advanced PySpark skills. Work with Hadoop technologies, including HDFS, Hive, Sqoop, and MapReduce. Develop streaming and batch data pipelines using Python and Kafka. Use strong knowledge of database concepts, data modeling, and ETL workflows to support data architecture and design. Manage the full ETL lifecycle: data extraction, ingestion, quality checks, normalization, and loading. Contribute to Agile projects, using Jira for tracking and delivery. Engage directly with clients, coordinating across the software development lifecycle and communicating clearly with stakeholders. Preferred Qualifications Experience with AWS data services and analytics tools. Familiarity with machine learning models and AI concepts. Knowledge of data modeling tools such as Erwin.
Full-time|Hybrid|Pennsylvania, Pennsylvania, United States
About the Role qodeworld is hiring a Senior Data Engineer with deep experience in Informatica and PySpark. This position focuses on building and optimizing data solutions for large-scale analytics. The role is based in either Cleveland, OH or Pittsburgh, PA and requires onsite work three days per week. Main Responsibilities 8–10 years of experience in data engineering and data analysis. Hands-on expertise with Informatica PowerCenter and Informatica Data Quality (IDQ) for ETL design, development, and optimization. Advanced proficiency with PySpark for processing, transforming, and analyzing large datasets. Strong knowledge of Hadoop technologies, including HDFS, Hive, Sqoop, and MapReduce. Solid programming skills in Python and Kafka to build both streaming and batch data pipelines. Thorough understanding of database concepts, data modeling, data design, and ETL workflows. Experience across all phases of the ETL lifecycle: data extraction, ingestion, quality checks, normalization, and loading. Comfort working in Agile environments, using tools such as Jira to support project delivery. Proven background in client-facing roles, with strong communication and leadership skills to manage the software development lifecycle. Preferred Skills Familiarity with AWS data components and analytics. Understanding of machine learning models and AI concepts. Experience with data modeling tools like Erwin. Qualifications Bachelor’s or Master’s degree in Computer Science or a related discipline. Strong problem-solving skills and a collaborative approach to cross-functional teamwork.
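Both qodeworld listings above call for advanced PySpark across the full ETL lifecycle (extraction, ingestion, quality checks, normalization, loading). As a hedged illustration of that kind of batch job, here is a minimal PySpark sketch; the paths, schema, and column names are hypothetical and not drawn from the listings.

```python
# Minimal PySpark batch transformation of the kind the qodeworld listings
# describe. Paths, schema, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

# Ingest: read raw order events (e.g., landed by an upstream Kafka or Sqoop job).
orders = spark.read.parquet("hdfs:///data/raw/orders/")

# Quality checks + normalization: drop malformed rows, standardize types.
clean = (
    orders
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Transform: daily revenue per customer, a typical analytics aggregate.
daily_revenue = (
    clean
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned output for downstream Hive/warehouse consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_revenue/"
)

spark.stop()
```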
Contract|$90/hr|Remote — Virginia, United States
Please submit your CV in English and indicate your English proficiency level. Toloka AI, working with Mindrift, offers project-based freelance roles for professionals who want to help test, evaluate, and improve artificial intelligence systems for leading technology companies. These are contract assignments, not permanent positions. Role overview The Freelance Data Science Engineer (Python & SQL) works remotely from Virginia, United States, and takes on a variety of AI-related projects. Assignments change from project to project, but the core work centers on designing and validating computational data science challenges that reflect real-world analytical problems across industries like telecommunications, finance, government, e-commerce, and healthcare. Develop data science problems that require Python programming, using libraries such as Pandas, Numpy, Scipy, Sklearn, Statsmodels, Matplotlib, and Seaborn. Ensure tasks are complex enough to need significant computation and cannot be solved manually in a short time. Create scenarios involving advanced data processing, statistical analysis, feature engineering, predictive modeling, and business insight generation. Design deterministic problems with reproducible results, using fixed random seeds if randomness is necessary. Base assignments on real business challenges like customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency. Build end-to-end tasks that cover the data science workflow: data ingestion, cleaning, exploration, modeling, validation, and deployment considerations. Integrate big data scenarios that require scalable computation strategies. Validate solutions using Python, standard data science libraries, and statistical methods. Document each problem clearly, including realistic business contexts and verified solutions. Requirements Minimum 5 years of hands-on data science experience with proven business results. Portfolio of completed projects or publications that highlight practical problem-solving skills. Advanced Python programming for data science, especially with Pandas, Numpy, Scipy, and Scikit-learn. Strong background in statistical analysis and machine learning, including algorithms and real-world applications. Proficiency in SQL and database operations for data analysis. Experience with Generative AI (LLMs, RAG, prompt engineering, vector databases). Understanding of MLOps and model deployment processes. Familiarity with tools such as TensorFlow, PyTorch, and LangChain. Excellent written English skills at C1 level or higher. How to join: apply, pass qualifications, join a project, complete tasks, and receive compensation.
Role overview Captivation seeks a Software Engineer II for a hybrid role in Annapolis Junction, MD. The team works closely together, focusing on practical solutions that drive business operations forward. Expect a hands-on environment where collaboration and real-world problem-solving are part of daily work. What you will do Develop and maintain software using Linux and Bash scripting Enhance data workflows with Apache Airflow and Apache Spark Use Docker and Podman to containerize and deploy applications Manage code and version control through Git Support and optimize systems running on AWS Work with teammates in a hybrid office and remote setup Key technologies Linux Bash Apache Airflow Apache Spark Docker Podman Git AWS Location This position is based in Annapolis Junction, MD and follows a hybrid schedule, combining in-office and remote work.
Capstone Integrated Solutions is hiring a Senior Data Engineer with a focus on AWS. This position is fully remote. Role overview This role centers on designing and building data pipelines using AWS tools and services. The Senior Data Engineer will work with teams across the company to make sure data remains accurate, reliable, and accessible. In addition, the position involves contributing to data architecture decisions and promoting engineering best practices. Collaboration and team approach The team values practical solutions and close collaboration. Colleagues from different functions work together to deliver data systems that help the company meet its goals. What you will do Design and build data pipelines with AWS services Partner with teams to ensure data quality and accessibility Influence data architecture and engineering standards
Contract|$90/hr|Remote — Iowa, United States
Please submit your resume in English and indicate your English proficiency level. Mindrift connects skilled professionals with project-based AI work for leading technology companies. Projects focus on testing, evaluating, and improving AI systems. This is a freelance, project-based position rather than a permanent staff role. Role overview This freelance Data Science Engineer position centers on designing and validating challenging data science problems for real-world business scenarios. Work is fully remote and project-based, with a focus on Python and SQL. What you will do Design data science problems that mirror analytical challenges in industries such as telecommunications, finance, government, e-commerce, and healthcare. Create tasks requiring Python programming, using libraries like Pandas, Numpy, Scipy, scikit-learn, Statsmodels, Matplotlib, and Seaborn. Ensure problems are computationally intensive, with solutions that may require days or weeks to process. Develop scenarios involving advanced data processing, statistical analysis, feature engineering, predictive modeling, and generating business insights. Write deterministic problems with reproducible results by using fixed random seeds or avoiding stochastic elements. Base challenges on real business cases, including customer analytics, risk assessment, fraud detection, forecasting, optimization, and efficiency improvements. Cover the full data science workflow: data ingestion, cleaning, exploratory analysis, modeling, validation, and deployment considerations. Incorporate big data scenarios that require scalable computation strategies. Validate all solutions in Python, using standard data science libraries and statistical techniques. Document each problem clearly within a realistic business context and provide accurate, verified answers. Requirements Minimum 5 years of hands-on data science experience with proven business results. Portfolio of completed projects or publications demonstrating real-world problem solving. Advanced Python skills for data science, including experience with pandas, numpy, scipy, scikit-learn, and statsmodels. Strong background in statistical analysis and machine learning, with deep understanding of algorithms and their practical use. Expertise in SQL and database operations for data analysis and manipulation. Familiarity with Generative AI tools and concepts (LLMs, Retrieval-Augmented Generation, prompt engineering, vector databases). Understanding of MLOps and model deployment workflows. Knowledge of modern frameworks such as TensorFlow, PyTorch, and LangChain. Excellent written English communication skills at C1 level or above. How to join: apply, complete qualifications, join a project, fulfill assigned tasks, and receive compensation. Location: Remote, Iowa, United States
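Both Mindrift listings above require deterministic problems with reproducible results via fixed random seeds. Here is a minimal sketch of what that requirement means in practice, using the pandas/NumPy/scikit-learn stack the listings name; the fraud-detection scenario and all values are invented for illustration.

```python
# Minimal sketch of a deterministic, reproducible data science task of the
# kind the Mindrift listings describe. Scenario and values are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SEED = 42  # fixed seed: every run of this script produces identical results

rng = np.random.default_rng(SEED)

# Synthetic "fraud detection" dataset generated from the seeded RNG.
n = 1_000
df = pd.DataFrame({
    "amount": rng.exponential(scale=100.0, size=n),
    "hour": rng.integers(0, 24, size=n),
})
# Deterministic label rule: the target is a pure function of seeded features.
df["is_fraud"] = ((df["amount"] > 250) & (df["hour"] < 6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["amount", "hour"]], df["is_fraud"],
    test_size=0.25, random_state=SEED,  # seed the split as well
)

model = LogisticRegression(max_iter=1000, random_state=SEED).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")  # identical on every run
```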
Join our dynamic team at usm2 as an AWS SQL Developer. In this role, you will leverage your expertise in SQL and AWS to develop and maintain robust database solutions that support our innovative projects. You will collaborate with cross-functional teams to ensure data integrity, optimize performance, and contribute to the overall success of our cloud initiatives.