Experience Level
Senior
Qualifications
Role Overview
We are seeking a skilled Data Platform Engineer to join our core team at DatologyAI. As an early senior hire, you will collaborate directly with our founders to shape our product and influence critical technical decisions. You will be responsible for leading the development of our core product and data platform, essential components that enable us to process customer data and apply cutting-edge research to identify the most informative data points in extensive datasets. Your work will have a significant impact on our technology, product, and company culture.
About the job
About DatologyAI
At DatologyAI, we believe that the quality of training data is vital for the performance of AI models. Our innovative data curation suite is designed to automatically optimize petabytes of data, ensuring that your models are trained on the most relevant and effective datasets. By utilizing our curated data, users can experience training times that are 7-40 times faster and enhance model performance as if they had trained on more than 10 times the amount of raw data, all while reducing deployment costs significantly.
With $57.5 million raised across two funding rounds, our esteemed investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, and AI pioneers such as Geoff Hinton, Yann LeCun, and Jeff Dean. Our expert team is dedicated to simplifying the complex task of data curation, empowering anyone to train their models effectively on their own data.
This position is located in Redwood City, CA, and we work in-office four days a week.
About DatologyAI
DatologyAI is at the forefront of AI training data optimization. Our mission is to revolutionize the way AI models are trained by providing a state-of-the-art data curation suite. This enables organizations to maximize model performance while minimizing training costs and time.
Similar jobs
Lead Python Engineer - Data Infrastructure

About AscentAI
AscentAI is at the forefront of developing intelligent software solutions tailored for risk and compliance teams within financial institutions. Our innovative platform simplifies complex regulatory information into actionable insights, empowering teams to mitigate risks, enhance operational efficiency, and proactively adapt to changes in global regulations. As a vibrant, mission-driven organization, we are pushing the limits of machine learning and artificial intelligence, combined with human-in-the-loop systems, to tackle some of the most challenging issues in regulatory compliance.

The Role
We are seeking a skilled Python Engineer to join our dynamic team. In this pivotal role, you will lead the design and development of robust, large-scale web scraping platforms that underpin AscentAI's data infrastructure. You will work collaboratively with fellow engineers and analysts to define data requirements, architect efficient data pipelines, and ensure the delivery of reliable, high-quality data at scale. Your expertise will also be critical in advising on scraping strategies, counteracting anti-bot measures, and implementing best practices in data extraction for cross-functional stakeholders in engineering, data science, and product development. This is a significant role that offers ownership and visibility, providing an opportunity to influence our technical architecture and overall business success.

What You’ll Do
Lead the design and development of large-scale web scraping platforms using Python and related frameworks.
Mentor junior developers, providing technical guidance and conducting code reviews to ensure high-quality and maintainable code.
Devise advanced strategies to navigate and overcome sophisticated anti-bot defenses such as CAPTCHAs, Cloudflare, and IP blocking, while adhering to legal and ethical standards and website terms of service.
Collaborate with data analysts and engineers to establish data requirements and facilitate seamless data integration into databases.
Optimize scrapers for performance, speed, and stability; set up real-time monitoring and alert systems to quickly respond to failures or changes in target sites.
Create comprehensive technical documentation and engage effectively with cross-functional teams to ensure alignment and manage expectations.
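To ground the scraping-resilience work described above, here is a minimal Python sketch of a fetch helper with exponential backoff, one of the basic building blocks such a platform rests on. It is illustrative only: the URL, user-agent string, retryable status set, and retry limits are hypothetical, and a production scraper would add proxy rotation, monitoring, and terms-of-service checks.

import time

import requests

RETRYABLE = {429, 500, 502, 503}  # rate limiting and transient server errors

def fetch_with_retries(url: str, max_attempts: int = 4) -> str:
    """GET a page, retrying transient failures with exponential backoff."""
    headers = {"User-Agent": "research-bot/1.0"}  # hypothetical identifier
    for attempt in range(max_attempts):
        try:
            resp = requests.get(url, headers=headers, timeout=10)
            if resp.ok:
                return resp.text
            if resp.status_code not in RETRYABLE:
                resp.raise_for_status()  # permanent error (e.g., 404): fail fast
        except (requests.ConnectionError, requests.Timeout):
            pass  # network hiccup: treat as retryable
        time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
    raise RuntimeError(f"giving up on {url} after {max_attempts} attempts")

html = fetch_with_retries("https://example.com/page")  # placeholder URL

Backing off on 429s and 5xx responses keeps a scraper polite under rate limiting while still failing fast on permanent errors.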
About Zaimler
In a world where AI agents struggle to reason over fragmented data, Zaimler emerges as the solution. Our mission is to unify disparate enterprise data across countless systems, providing a shared context, meaning, and structure. This transformation is essential as we transition from traditional copilots to fully autonomous agents, necessitating a new infrastructure layer that we are dedicated to building.

At Zaimler, we are pioneering context infrastructure for the agentic era—a platform that autonomously discovers domain knowledge, maps intricate relationships, and equips AI agents with the semantic understanding required for precise and scalable operations. Envision knowledge graphs that facilitate real-time inference, tailored for systems that need to reason rather than merely retrieve data.

Founded by industry veterans Biswajit Das (former VP Engineering at Truera and Chief Architect at Visa) and Sofus Macskassy (ex-Director of Engineering at LinkedIn), who notably built one of the largest knowledge graphs in production, Zaimler is a small, senior team at the seed stage, collaborating with major enterprises in sectors like insurance, travel, and technology. If you are passionate about creating the infrastructure that will support the next decade of AI advancements, we are eager to connect with you.

The Role
We are in search of a talented Data Infrastructure Engineer to establish the foundational distributed data layer that will power our semantic platform. In this role, you will be responsible for designing, building, and scaling systems that enable high-throughput data ingestion, transformation, and real-time processing.
Join arch.co as a Lead Infrastructure Engineer, where you will play a pivotal role in designing, implementing, and maintaining our cutting-edge infrastructure systems. Collaborate with cross-functional teams to ensure robust architecture and performance optimization, while mentoring junior engineers and providing technical leadership.
Stage is looking for an Infrastructure Data Engineer to join the team in Boston, MA. This position centers on building and maintaining the systems that move and organize data, making analytics and business intelligence possible across the company.

Role overview
The Infrastructure Data Engineer will design, implement, and support scalable data infrastructure. The work ensures that business needs are met as the company grows, and that analytics teams have reliable data to inform decisions.

What you will do
Design and implement data infrastructure to meet evolving business requirements
Build and maintain data pipelines used for analytics and reporting
Support business intelligence by ensuring data systems are reliable and accessible

Requirements
Stage is seeking candidates who have a strong interest in data engineering and are motivated to make a real impact on team and company goals.
Join our innovative team at alljoined as a Data Infrastructure Engineer where you will play a pivotal role in shaping our data architecture and ensuring the reliability and efficiency of our data systems. You will collaborate with cross-functional teams to design and implement scalable data solutions that empower data-driven decision-making.
Are you passionate about Python and eager to share your expertise with a vast community of developers?

The Real Python tutorial team is renowned for delivering top-tier Python tutorials online. Our mission is to empower Python developers globally to enhance their skills. With over 3 million monthly visitors, we are excited about our journey thus far, but believe we can reach even greater heights!

To elevate our tutorial quality and broaden our content offerings, we are seeking enthusiastic video course instructors who:
Have a deep love for Python and a desire to assist learners in advancing their skills
Understand the significance of clarity and tone in educational video content
Aspire to refine their craft and leverage our comprehensive publishing process
Can commit to producing one or more new recordings each month and adhere to deadlines

This position is fully remote. For more details, visit: realpython.com/jobs/video-course-instructor

Ideal candidates will:
Possess several years of programming experience
Be passionate about teaching programming concepts and have experience recording screencasts. The content you create will primarily be derived from existing written tutorials, so your ability to transform written material into engaging short videos is crucial.
Have the ability to integrate Real Python into their weekly routine, as this role requires a notable time commitment.

Joining the Real Python team comes with numerous benefits:
Continuous Learning: Engage in ongoing learning and enjoy the process, enhancing your skills as a developer, writer, and communicator while forming valuable connections.
Wide Reach: Our website attracts significant traffic—over 3 million visitors monthly and consistently growing. We are frequently highlighted in various Python publications and manage one of the largest email newsletters and social media channels in the community. Our YouTube channel boasts over 150,000 subscribers, ensuring that your published video series garners substantial viewership and appreciation.
Content Enhancement: Upon submission of a video series to Real Python, our dedicated team will work with you to ensure the highest quality output.
Join our dynamic team at Sonsoft, Inc. as an Infrastructure Lead Engineer. In this pivotal role, you will be responsible for leading the design, implementation, and maintenance of our infrastructure systems. You will work closely with cross-functional teams to ensure our infrastructure supports our business objectives efficiently and effectively.

As an Infrastructure Lead Engineer, you will leverage your expertise to create robust infrastructure solutions, troubleshoot complex issues, and drive continuous improvement initiatives. Your leadership will guide a team of engineers, fostering a culture of innovation and excellence.
About Giga
At Giga, we are pioneering the development, construction, and operation of AI energy infrastructure that powers the modern world. Our mission is to transform the flawed energy infrastructure experience permanently. By utilizing a vertically integrated model that combines site origination, development, infrastructure manufacturing, and power market participation, we deliver projects at an unparalleled speed. Founded in Texas, Giga collaborates with AI hyperscalers, data center operators, and various energy infrastructure teams to expedite project delivery.

Why Join Us
The Pace: You will take charge of projects that progress faster than anywhere else in the industry.
The Impact: You will transition ideas into actionable contracts and permits.
The Team: Collaborate with a world-class group of engineers and builders who are shaping the future of AI infrastructure.

Your Responsibilities
As a Lead Mechanical Engineer at Giga, you will spearhead the mechanical design of cutting-edge data centers, essential for supporting the increasing demand for AI infrastructure. This role is an exceptional opportunity to define the mechanical framework of a rapidly expanding infrastructure company. Instead of managing a single project, you will create the Plan of Record (POR) for Giga Data Centers, a standardized product that will be deployed globally.

You will establish benchmarks for high-density AI cooling systems, designing liquid cooling loops, thermal management solutions, and mechanical structures capable of supporting the most demanding computational workloads. The challenge of AI infrastructure is one of the defining industrial challenges of our era, and your contributions will build the foundation that drives it forward. Your innovative designs will be manufactured and deployed swiftly, generating visible impacts across numerous facilities.
Vitol is seeking a skilled and driven Python Data Engineer to enhance our data assets and support our analytical initiatives in a full-time capacity. In this role, you will collaborate closely with traders, analysts, researchers, and data scientists to define requirements and fulfill diverse data-related needs.

Key Responsibilities:
Develop modular, reusable components to facilitate communication between external data sources, internal tools, and databases.
Engage with business stakeholders to clarify requirements for data ingestion and accessibility.
Convert business requirements into actionable technical solutions.
Ensure the integrity and organization of the Vitol Python codebase, adhering to established designs and coding standards.
Enhance our developer tools and Python ETL toolkit through standardization and consolidation of core functionalities.
Effectively coordinate efforts with our global team.
Participate in Vitol’s Python development community and act as a liaison to support our expanding business development initiatives.
Full-time|On-site|Suitland-Silver Hill, Maryland, United States
Position Overview
As the Lead Data Engineer, you will spearhead the design and development of advanced, scalable data architectures to facilitate the transition of outdated, file-based analytical systems to modern AWS Cloud Native environments. This pivotal role emphasizes transforming legacy SAS-based data storage models—including flat files, batch outputs, and subsystem-specific data artifacts—into structured, governed, and scalable data frameworks optimized for cloud-native processing.

You will ensure data integrity, performance, and visibility across a comprehensive modernization initiative while providing technical leadership in data modeling, ingestion patterns, validation frameworks, and transparency reporting.

Expert-level proficiency in Python and substantial experience in architecting AWS-based data solutions are essential for this role.
Full-time|$200K/yr - $275K/yr|On-site|San Francisco, CA
At Peregrine Technologies, a company backed by top-tier Silicon Valley investors, we empower public safety organizations, state and local governments, federal agencies, and private-sector entities to tackle societal challenges with unparalleled speed and precision. Our cutting-edge AI-enabled platform transforms fragmented and isolated data into actionable operational intelligence, delivering crucial insights that enhance decision-making processes and improve outcomes across various scenarios. Currently, we proudly serve hundreds of clients across over 30 states and two countries, impacting more than 125 million lives, and we are poised for further growth as we expand into the enterprise sector and international markets.

Team
We believe that empathy is key to enhancing our solutions. Our engineering team prioritizes understanding how users interact with our products, which guides us in finding the best solutions. You'll have the chance to collaborate closely with our onsite team to explore the diverse use cases that Peregrine addresses. We value both ownership and teamwork. In this role, you will be responsible for significant features while working alongside fellow engineers to bring them to fruition. We hold humility and empathy in high regard as essential traits for crafting effective solutions, and you will engage directly with our deployment team and users to iterate on problem-solving. Creativity and resilience are vital as we pursue our vision.

Role
We are in search of a Staff Data Infrastructure Engineer to join our dynamic team. In this role, you will take full ownership of the data layer that is foundational to all of Peregrine's operations. You will design and construct the systems responsible for ingesting, storing, and serving vast amounts of real-time operational data, empowering our clients to make critical decisions quickly and confidently. This senior individual contributor position is ideal for someone who excels at tackling complex technical challenges and possesses the experience and judgement necessary to influence key infrastructure decisions. You will engage with a variety of intricate challenges, including:
Designing and managing a high-throughput, real-time data integration platform across diverse customer environments.
Architecting a scalable open table format layer to ensure reliable data storage at petabyte scale.
Building and optimizing distributed data processing pipelines using Apache Spark and related streaming technologies.
Enhancing performance, reliability, and cost efficiency across the entire data infrastructure stack.
Collaborating with platform and product engineering teams to define data contracts, schemas, and integration pathways.
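As a rough illustration of the Spark-based streaming pipelines this role involves, the sketch below reads events from a stream, parses them, and lands them in a columnar sink. It is a hypothetical example, not Peregrine's architecture: the Kafka endpoint, topic, schema, and paths are invented, and Parquet stands in for the open table format mentioned above.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("ops-events-ingest").getOrCreate()

# Hypothetical schema for incoming operational events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("source", StringType()),
    StructField("ts", TimestampType()),
    StructField("payload", StringType()),
])

# Read a Kafka topic (assumed), parse the JSON value, and flatten it.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "ops-events")                 # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Continuously append to a columnar store, with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/ops_events")                # hypothetical path
    .option("checkpointLocation", "/chk/ops_events")
    .start()
)
query.awaitTermination()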
Full-time|$129K/yr - $209K/yr|On-site|Waltham, Massachusetts, United States
Join Our Team
We invite you to become a vital part of Evolv as a Senior Data Infrastructure Engineer within our Machine Learning & Sensors organization. This pivotal role entails the design, construction, and maintenance of robust, secure, and scalable data pipelines that drive our AI/ML research and production systems. You will take charge of the complete data lifecycle—from ingestion across thousands to millions of edge devices, through cloud processing, to a centralized data factory that supports model training, evaluation, and ongoing enhancement.

Data is at the core of our mission to revolutionize AI-based weapon detection systems. Your expertise will ensure seamless data flow across various geographies, devices, and cloud systems, while adhering to stringent standards for quality, privacy, security, and scalability. This position is perfect for someone who is passionate about the intersection of distributed systems, cloud pipelines, and ML-driven data requirements.

Success in the Role: Your First Year
In the first 30 days:
Gain an in-depth understanding of our existing edge-to-cloud data pipelines and deployment environments.
Evaluate current data ingestion processes, governance frameworks, and cloud infrastructure.
Identify challenges related to data reliability, quality, and operational scalability.
Establish rapport with AI/ML, data science, field operations, and cloud engineering teams.
Design and prototype both cloud and edge data processing pipelines.

Within the first three months:
Implement enhancements to critical ingestion, validation, and processing pipelines.
Deploy scalable data pipelines using AWS components such as S3, EC2, Lambda, Glue, Step Functions, and SageMaker integrations.
Develop automated validation workflows to identify data corruption, missing metadata, or malformed data.
Create automated model evaluation, training, and improvement pipelines to accelerate experimentation.
Collaborate with field operations to enhance data reliability, observability, and coverage.

By the end of the first year:
Oversee the entire lifecycle of mission-critical data pipelines that support AI/ML research and production.
Architect advanced edge-to-cloud data systems capable of scaling across millions of devices.
Establish and enforce data governance frameworks, including retention, access control, privacy, and lineage.
Enable ML teams to quickly conduct experiments with high-quality, discoverable, versioned datasets.
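The automated validation workflows mentioned above can start as simply as an event-driven check on each arriving object. Below is a minimal, hypothetical AWS Lambda sketch in that spirit; the required metadata fields and the quarantine layout are assumptions for illustration, not Evolv's actual pipeline.

import json

import boto3

s3 = boto3.client("s3")
REQUIRED_FIELDS = {"device_id", "captured_at", "payload"}  # assumed metadata

def handler(event, context):
    """Triggered by S3 object creation; quarantines malformed records."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        try:
            doc = json.loads(body)
            missing = REQUIRED_FIELDS - set(doc) if isinstance(doc, dict) else REQUIRED_FIELDS
        except json.JSONDecodeError:
            missing = REQUIRED_FIELDS  # unparseable counts as fully invalid
        if missing:
            # Move the bad object to a quarantine prefix for later inspection.
            s3.copy_object(
                Bucket=bucket,
                Key=f"quarantine/{key}",
                CopySource={"Bucket": bucket, "Key": key},
            )
            s3.delete_object(Bucket=bucket, Key=key)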
Full-time|$115K/yr - $205K/yr|Remote|New York - Remote
At Angi®, our mission for the past 30 years has been simple: to ensure that jobs are done right. We connect homeowners with trustworthy professionals who possess the necessary skills, while simultaneously linking these pros with homeowners seeking the jobs they desire.

Angi at a glance:
Homeowners have relied on Angi for over 300 million projects.
We cover more than 1,000 home service tasks.
Our team consists of 2,800 dedicated employees worldwide.

Why join Angi:
Angi® is on a mission to redefine the home services industry, fostering an environment where homeowners, professionals, and employees all benefit from a greater number of jobs completed successfully. For homeowners, our platform offers a dependable way to locate skilled professionals. For professionals, we act as a trustworthy business partner, helping them discover the work they want when they want it. For our employees, we provide an exceptional workplace that they can proudly call home. We look forward to welcoming you!

About the team:
We are currently searching for a Senior Data Engineer to join our Data Infrastructure team. This individual will play a pivotal role in constructing and managing the foundational platforms that facilitate data processing, storage, and analytics throughout our organization. The focus of this role will be on advancing our lakehouse architecture, data replication systems, and orchestration frameworks, all while ensuring scalable, reliable, and efficient data workflows.

Please note, although this role is remote, we are a global company seeking candidates located in the Eastern Time Zone to align with our team's working hours.
We are seeking a top-tier Data Engineer to join our team at wizdaa. If you are a developer who excels in:
Leading your team with technical expertise
Resolving complex challenges that others find difficult
Delivering intricate features at an accelerated pace
Creating exceptionally clean and maintainable code
Enhancing our codebase with pride and diligence
Your skills and experience will help us drive efficiency and innovation in data processing.

Key Responsibilities:
Develop, enhance, and scale data pipelines and infrastructure utilizing Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
Design, implement, and monitor data ingestion and transformation workflows, ensuring optimal performance and reliability.
Work collaboratively with platform and AI/ML teams to automate data workflows and develop a comprehensive feature store.
Integrate health metrics into engineering dashboards for enhanced visibility and operational insight.
Model data and execute scalable transformations in Snowflake and PostgreSQL.
Create reusable frameworks and connectors to streamline internal data processes.
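For readers unfamiliar with the orchestration side of this stack, here is a minimal Apache Airflow DAG showing the ingest-then-transform dependency pattern the responsibilities describe. The task names and callables are placeholders, not wizdaa's actual workflows, and the schedule argument assumes Airflow 2.4 or later.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    """Pull raw records from a source system (stub)."""
    print("ingesting raw data")

def transform():
    """Shape the records before loading them to the warehouse (stub)."""
    print("transforming data")

with DAG(
    dag_id="example_ingest_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task  # transform runs only after ingest succeeds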
Part-time|$32/hr|Remote — Iowa, United States
Mindrift brings together professionals from around the world to work on AI projects for major technology companies. The team’s focus is on advancing Generative AI by connecting specialists with real-world expertise.

Role overview
This part-time, remote contract is for a Freelance Python Data Scraping Engineer (AI Pilot) supporting the Tendem project. Candidates must be based in Iowa, United States. The work centers on managing and carrying out web data extraction tasks, collaborating closely with Tendem Agents, and applying critical thinking to ensure the accuracy and relevance of collected data. Quality assurance is a key part of the position.

What you will do
Manage end-to-end data extraction workflows for complex websites, delivering structured datasets with precision and reliability.
Use internal tools such as Apify and OpenRouter, along with custom workflows, to collect, validate, and process data according to project needs.
Adapt scraping methods for dynamic web sources, including handling JavaScript-rendered content and responding to changing site behaviors.
Apply strict data quality standards, running validation checks and systematic verification before delivering results.
Scale operations for large datasets using batching or parallelization, monitor for failures, and maintain stability when site structures change.

Requirements
Minimum 3 years of experience in data engineering, web scraping, automation, or a related field.

Compensation
Earn up to $32 per hour, based on expertise and contribution speed. Actual pay may vary depending on project scope and complexity, and may differ across projects on the Mindrift platform.

How to apply
Submit an application through this posting to be considered for projects that fit your technical background and availability. Work may involve coding, automation, or refining AI outputs, all contributing to AI advancement and practical use cases.
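The batching and parallelization mentioned above often amounts to a bounded worker pool around a fetch function, with a validation gate before results are accepted. The sketch below is illustrative only: the target URLs and the validation rule are placeholders, and a real Tendem workflow would likely route through tools like Apify rather than raw HTTP calls.

from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

URLS = [f"https://example.com/page/{i}" for i in range(100)]  # placeholder targets

def fetch(url: str) -> tuple[str, str | None]:
    """Fetch one page; return (url, text) or (url, None) on failure."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return url, resp.text
    except requests.RequestException:
        return url, None

results, failures = {}, []
# Bounded parallelism keeps load on the target site predictable.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fetch, u) for u in URLS]
    for fut in as_completed(futures):
        url, text = fut.result()
        # Simple validation gate (placeholder rule) before a record is accepted.
        if text and "<html" in text.lower():
            results[url] = text
        else:
            failures.append(url)

print(f"fetched {len(results)} pages, {len(failures)} failures to retry")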
The Bot Company
At The Bot Company, we are on a mission to create an innovative robot that enhances everyday life in homes everywhere.

Located in the heart of San Francisco, our compact team comprises talented engineers, designers, and operators hailing from esteemed organizations such as Tesla, Cruise, OpenAI, Google, and Pixar. With a track record of delivering exceptional products to hundreds of millions of users, we understand the intricacies involved in crafting remarkable experiences.

Our intentionally lean structure fosters swift decision-making while eliminating unnecessary bureaucracy. Each team member operates as an individual contributor, endowed with substantial autonomy, ownership, and accountability. We thrive on a culture of rapid iteration and efficient execution, working collaboratively across the technology stack.
Position Overview
Join OpenEvidence as a Data Infrastructure Software Engineer, where you will engineer comprehensive systems that drive essential product and research operations. Your focus will be on optimizing performance, ensuring scalability, and enhancing accuracy, while enjoying the autonomy to manage the infrastructure that assists healthcare professionals in navigating complex clinical decisions in real time.

We value exceptional creators who thrive in versatile roles. Our engineers engage across various products and projects, taking ownership wherever they can make the most significant impact.

About OpenEvidence
OpenEvidence is the leading medical AI platform globally, utilized by over 40% of clinicians in the U.S. in just over a year through organic product-led growth. As a $12 billion company, our engineering team comprises 30 talented individuals from MIT, Harvard, and Stanford. We believe that groundbreaking products are born from a small group of exceptional builders, driven by focused goals and empowered to take ownership and act swiftly. We are expanding our team to capitalize on an unparalleled opportunity to set the standard for medical AI platforms. If you are a top-tier engineer or scientist eager to push the boundaries and achieve tangible outcomes that affect millions of lives, we want to connect with you.

Our Culture
We expect our work to be performed at an elite level. The journey from concept to execution and scaling is akin to a professional sport, where excellence is non-negotiable. We believe that the creation of innovative technologies is only achievable through complete ownership. Significant achievements happen when individuals take the initiative to see them through.

Your Profile
This role is not for those seeking a 9-to-5 job or merely looking to write papers. If you are ready to dive into the trenches, tackle challenges head-on, and create something from scratch that could impact millions and drive substantial revenue, you might be the perfect fit. We seek brilliant builders who are intelligent, ambitious, resourceful, self-reliant, detail-oriented, driven, hardworking, and humble. Does this sound rare? It is, as we have only found 30 of them so far, and we are eager to discover more.
Salary Range: The compensation for this role is negotiable, ranging from $6,000 to $12,500 per month (Gross, in USD), depending on experience and location.

About Sezzle:
Sezzle is on a mission to financially empower the next generation. We are transforming the shopping experience by integrating innovative technology with seamless, interest-free installment plans that make shopping more efficient and accessible. Our approach not only redefines payments but also enhances the way consumers discover, engage with, and purchase their favorite products, positively impacting merchant sales through increased conversions and larger order values. As we continue to redefine the future of fintech and retail, we are assembling a dynamic and innovative team dedicated to creating an exceptional shopping experience. If you are passionate about pushing the limits of technology and transforming consumer and merchant interactions, we invite you to join us at Sezzle and help shape the future of shopping!

Role Overview:
We are looking for a highly skilled and driven Principal Engineer for our Data Infrastructure team. This is an exceptional opportunity to excel in a fast-paced, dynamic environment within a rapidly expanding organization, offering ample opportunities for career growth. As Sezzle grows, so does our data generation and consumption, and we recognize the immense value of data. Our goal is to empower our business, engineering teams, and the entire organization to read, write, and analyze large volumes of data with speed and efficiency. We heavily utilize MySQL, Postgres, and Redshift, and leverage AWS tools such as Aurora RDS and DMS. We employ DBT-based transformations and similar toolkits to build a fast-growing data lake that supports multiple data warehouses. While we've achieved excellent scalability, we are eager to explore new tools and technologies to further enhance our capabilities.

Key Responsibilities:
Take full responsibility for our database and data warehousing infrastructure, including MySQL, Postgres, CDC capture, and Redshift, while ensuring KPIs and SLAs are met.
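As a simplified stand-in for the CDC capture named in the responsibilities, the sketch below shows a watermark-based incremental extract from Postgres. It is a hypothetical illustration: the DSN, table, and column names are invented, and real change capture (e.g., AWS DMS, as the posting notes) reads the database's write-ahead log instead, which also catches hard deletes.

import psycopg2

SOURCE_DSN = "dbname=app user=etl host=source-db"  # assumed connection string

def extract_changed_rows(last_seen: str) -> list[tuple]:
    """Return rows modified after the previous run's high-water mark."""
    with psycopg2.connect(SOURCE_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, amount, updated_at FROM orders "
                "WHERE updated_at > %s ORDER BY updated_at",
                (last_seen,),
            )
            return cur.fetchall()

rows = extract_changed_rows("2024-01-01T00:00:00Z")
# The max(updated_at) of this batch becomes the watermark for the next run.
print(f"{len(rows)} changed rows to load")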