1 - 20 of 92,316 Jobs

Search for Freelance Data Science Engineer (Python & SQL) – Remote Opportunity


Contract|$90/hr - $90/hr|Remote|Remote — Iowa, United States

Please submit your resume in English and indicate your English proficiency level.

Mindrift connects skilled professionals with project-based AI work for leading technology companies. Projects focus on testing, evaluating, and improving AI systems. This is a freelance, project-based position, not a permanent staff role.

Role overview
This freelance Data Science Engineer position centers on designing and validating challenging data science problems for real-world business scenarios. Work is fully remote and project-based, with a focus on Python and SQL.

What you will do
- Design data science problems that mirror analytical challenges in industries such as telecommunications, finance, government, e-commerce, and healthcare.
- Create tasks requiring Python programming, using libraries like pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn.
- Ensure problems are computationally intensive, with solutions that may take days or weeks to process.
- Develop scenarios involving advanced data processing, statistical analysis, feature engineering, predictive modeling, and generating business insights.
- Write deterministic problems with reproducible results by using fixed random seeds or avoiding stochastic elements.
- Base challenges on real business cases, including customer analytics, risk assessment, fraud detection, forecasting, optimization, and efficiency improvements.
- Cover the full data science workflow: data ingestion, cleaning, exploratory analysis, modeling, validation, and deployment considerations.
- Incorporate big data scenarios that require scalable computation strategies.
- Validate all solutions in Python, using standard data science libraries and statistical techniques.
- Document each problem clearly within a realistic business context and provide accurate, verified answers.

Requirements
- Minimum 5 years of hands-on data science experience with proven business results.
- Portfolio of completed projects or publications demonstrating real-world problem solving.
- Advanced Python skills for data science, including experience with pandas, NumPy, SciPy, scikit-learn, and statsmodels.
- Strong background in statistical analysis and machine learning, with a deep understanding of algorithms and their practical use.
- Expertise in SQL and database operations for data analysis and manipulation.
- Familiarity with Generative AI tools and concepts (LLMs, retrieval-augmented generation, prompt engineering, vector databases).
- Understanding of MLOps and model deployment workflows.
- Knowledge of modern frameworks such as TensorFlow, PyTorch, and LangChain.
- Excellent written English communication skills at C1 level or above.

How to join
1. Apply
2. Complete qualifications
3. Join a project
4. Fulfill assigned tasks
5. Receive compensation

Location: Remote, Iowa, United States
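Several of the bullets above hinge on determinism: a problem must have one reproducible, verifiable answer. As a rough illustration only (not part of the posting; the scenario, field names, and numbers are invented), a seeded local generator makes a synthetic customer-analytics task repeatable run after run:

```python
import random
import statistics

SEED = 42
rng = random.Random(SEED)  # local generator, so no global-state leakage

# Synthetic "customer spend" data with a known linear signal plus noise.
tenure = [rng.uniform(1, 60) for _ in range(1000)]        # months as a customer
spend = [20.0 + 1.5 * t + rng.gauss(0, 5.0) for t in tenure]

# Deterministic fit: closed-form least-squares slope and intercept.
mx, my = statistics.fmean(tenure), statistics.fmean(spend)
sxy = sum((x - mx) * (y - my) for x, y in zip(tenure, spend))
sxx = sum((x - mx) ** 2 for x in tenure)
slope = sxy / sxx
intercept = my - slope * mx

# Fixed seed -> identical data and estimates on every run, so the task
# has a single checkable answer (close to the true values 20 and 1.5).
print(round(intercept, 2), round(slope, 2))
```

Avoiding the global `random` module state is the key design choice: any solver who reuses the stated seed reproduces the dataset, and therefore the answer, exactly.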

Apr 25, 2026
Contract|$90/hr - $90/hr|Remote|Remote — Virginia, United States

Please submit your CV in English and indicate your English proficiency level.

Toloka AI, working with Mindrift, offers project-based freelance roles for professionals who want to help test, evaluate, and improve artificial intelligence systems for leading technology companies. These are contract assignments, not permanent positions.

Role overview
The Freelance Data Science Engineer (Python & SQL) works remotely from Virginia, United States, and takes on a variety of AI-related projects. Assignments change from project to project, but the core work centers on designing and validating computational data science challenges that reflect real-world analytical problems across industries like telecommunications, finance, government, e-commerce, and healthcare.

- Develop data science problems that require Python programming, using libraries such as pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn.
- Ensure tasks are complex enough to need significant computation and cannot be solved manually in a short time.
- Create scenarios involving advanced data processing, statistical analysis, feature engineering, predictive modeling, and business insight generation.
- Design deterministic problems with reproducible results, using fixed random seeds if randomness is necessary.
- Base assignments on real business challenges like customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
- Build end-to-end tasks that cover the data science workflow: data ingestion, cleaning, exploration, modeling, validation, and deployment considerations.
- Integrate big data scenarios that require scalable computation strategies.
- Validate solutions using Python, standard data science libraries, and statistical methods.
- Document each problem clearly, including realistic business contexts and verified solutions.

Requirements
- Minimum 5 years of hands-on data science experience with proven business results.
- Portfolio of completed projects or publications that highlight practical problem-solving skills.
- Advanced Python programming for data science, especially with pandas, NumPy, SciPy, and scikit-learn.
- Strong background in statistical analysis and machine learning, including algorithms and real-world applications.
- Proficiency in SQL and database operations for data analysis.
- Experience with Generative AI (LLMs, RAG, prompt engineering, vector databases).
- Understanding of MLOps and model deployment processes.
- Familiarity with tools such as TensorFlow, PyTorch, and LangChain.
- Excellent written English skills at C1 level or higher.

How to join
1. Apply
2. Pass qualifications
3. Join a project
4. Complete tasks
5. Receive compensation

Apr 25, 2026
Contract|$90/hr - $90/hr|Remote|Remote — Wisconsin, United States

Toloka AI offers project-based freelance roles for experienced Data Science Engineers. This position supports leading technology companies by designing and validating data science challenges that reflect real-world analytics scenarios. The engagement is freelance, not permanent employment.

Role overview
Freelance Data Science Engineers at Toloka AI create computational challenges for industries such as telecom, finance, government, e-commerce, and healthcare. Projects require designing problems that use Python and SQL, focusing on tasks that cannot be solved manually. Each challenge covers the full data science workflow, from data ingestion and cleaning to modeling and deployment considerations. Engineers use libraries such as pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn. Problems must be deterministic, reproducible, and based on realistic business scenarios like customer analytics, risk assessment, fraud detection, forecasting, and operational efficiency. Documentation of each challenge includes business context and a verified solution. Big data processing and scalable computational approaches are often required.

Requirements
- At least 5 years of hands-on data science experience with measurable business outcomes
- Portfolio of completed projects or publications showing real-world problem-solving
- Advanced Python skills for data science, including pandas, NumPy, SciPy, scikit-learn, and statsmodels
- Strong background in statistical analysis and machine learning algorithms
- Proficiency in SQL and database operations for data analysis
- Familiarity with Generative AI tools and concepts (LLMs, RAG, prompt engineering, vector databases)
- Understanding of MLOps practices and model deployment workflows
- Knowledge of frameworks such as TensorFlow, PyTorch, or LangChain
- Excellent written English skills at C1 level or higher

How to apply
1. Submit your CV in English and specify your English proficiency level
2. Complete the qualification process
3. Join a project after qualification
4. Carry out assigned tasks
5. Receive payment upon completion of work

Location: Remote, Wisconsin, United States

Apr 25, 2026
Contract|$90/hr - $90/hr|Remote|Remote — United States

We invite you to submit your resume in English, specifying your level of English proficiency.

At Mindrift, we merge innovation with opportunity, harnessing the power of collective intelligence to ethically shape the future of AI.

About Us
The Mindrift platform serves as a bridge connecting specialists with AI projects from leading tech innovators. Our mission is to unleash the potential of Generative AI by leveraging real-world expertise from around the globe.

Role Overview
As a Data Science AI Trainer, you will play a pivotal role in advancing GenAI models to tackle specialized questions and enhance complex reasoning skills. You will have the chance to collaborate on various unique projects. Your responsibilities may include:
- Crafting original computational data science challenges that replicate real-world analytical processes across sectors such as telecom, finance, government, e-commerce, and healthcare.
- Designing problems that require Python programming solutions (utilizing libraries like pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn).
- Ensuring that the challenges are computationally demanding and cannot be solved manually within feasible timeframes (days/weeks).
- Creating scenarios necessitating sophisticated reasoning in data processing, statistical analysis, feature engineering, predictive modeling, and insight extraction.
- Formulating deterministic problems with reproducible outcomes, avoiding stochastic elements or employing fixed random seeds for exact results.
- Grounding challenges in actual business scenarios focusing on customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
- Designing comprehensive problems that cover the entire data science pipeline (data ingestion → cleaning → EDA → modeling → validation → deployment considerations).
- Integrating big data processing requirements that call for scalable computational strategies.
- Validating solutions through Python using standard data science libraries and statistical techniques.
- Clearly documenting problem statements within realistic business contexts and providing verified correct answers.

Getting Started
To join us, simply apply to this posting, meet the qualifications, and you can contribute to exciting projects that align with your skills, all while working on your own schedule. By creating training prompts and refining model responses, you will help shape the future of AI, ensuring that technology serves everyone.

Nov 27, 2025
Contract|$90/hr - $90/hr|Remote|Remote — Iowa, United States

Please submit your CV in English and mention your English proficiency level.

Mindrift connects experienced professionals with project-based AI assignments for technology companies. The platform focuses on testing, evaluating, and improving AI systems. This is a project-based, non-permanent position.

Role overview
The Freelance Data Scientist (Python & SQL) - AI Trainer designs and validates data science challenges that mirror real-world analytics problems from fields like telecom, finance, government, e-commerce, and healthcare. Projects are remote and based in Iowa, United States.

What you will do
- Create computational data science challenges based on genuine business scenarios, such as customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
- Develop problems that require advanced Python programming, using libraries like pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn.
- Ensure challenges are computationally intensive and cannot be solved by hand.
- Design tasks involving data processing, statistical analysis, feature engineering, predictive modeling, and extracting business insights.
- Produce deterministic problems with reproducible solutions, using fixed seeds or avoiding randomness.
- Build comprehensive challenges that cover the entire data science workflow, from data ingestion to deployment considerations.
- Include big data scenarios that require scalable solutions.
- Validate solutions using Python, standard data science libraries, and statistical methods.
- Document each problem clearly, providing realistic business context and verified answers.

Requirements
- Minimum 5 years of hands-on data science experience with proven business impact.
- Portfolio of completed projects or publications showing real-world problem-solving abilities.
- Advanced Python skills for data science (including pandas, NumPy, SciPy, scikit-learn, statsmodels).
- Strong knowledge of statistical analysis and machine learning, including algorithm applications.
- Proficiency in SQL and database operations for data analysis and manipulation.
- Experience with Generative AI (LLMs, RAG, prompt engineering, vector databases).
- Familiarity with MLOps practices and deploying machine learning models.
- Understanding of frameworks such as TensorFlow, PyTorch, or LangChain.
- Strong written English skills at C1 level or above.

Application process
1. Apply
2. Pass qualifications
3. Join a project
4. Complete tasks
5. Receive compensation

Apr 25, 2026
Contract|$90/hr - $90/hr|Remote|Remote — Wisconsin, United States

Please submit your CV in English and indicate your English proficiency level.

This freelance, project-based contract connects experienced data scientists with AI training assignments for major technology clients. Projects focus on testing, evaluating, and improving AI systems. This is not a permanent employment position.

Role overview
The AI Training Specialist will design advanced computational data science challenges that mirror real-world analytical workflows. Scenarios span industries such as telecom, finance, government, e-commerce, and healthcare. Each challenge should require deep analytical thinking and practical coding skills.

What you will do
- Create data science problems that require Python programming with libraries like pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn.
- Develop computationally intensive tasks that cannot feasibly be solved by hand within days or weeks.
- Formulate challenges involving complex data processing, statistical analysis, feature engineering, predictive modeling, and generating insights.
- Ensure all problems are deterministic, with replicable solutions and fixed random seeds.
- Base scenarios on business needs such as customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
- Cover the full data science pipeline, from data ingestion to deployment considerations.
- Include tasks that require scalable computational approaches and big data processing.
- Validate solutions using standard Python data science libraries and statistical methods.
- Document each problem clearly, providing realistic business context and verified answers.

Requirements
- Minimum 5 years of hands-on data science experience with measurable business results.
- Portfolio of projects or publications demonstrating real-world problem-solving.
- Advanced Python skills for data science (pandas, NumPy, SciPy, scikit-learn, statsmodels).
- Strong background in statistical analysis and machine learning, including practical applications and algorithms.
- Expertise in SQL and database management for data manipulation and analysis.
- Familiarity with Generative AI tools and concepts (LLMs, RAG, prompt engineering, vector databases).
- Understanding of MLOps and model deployment workflows.
- Experience with frameworks such as TensorFlow, PyTorch, and LangChain.
- Excellent written English at C1 level or above.

Application process
1. Apply
2. Pass qualifications
3. Join a project
4. Complete tasks
5. Receive payment

This contract is remote and open to candidates based in Wisconsin, United States.

Apr 25, 2026
Part-time|$32/hr - $32/hr|Remote|Remote — Iowa, United States

Mindrift brings together professionals from around the world to work on AI projects for major technology companies. The team’s focus is on advancing Generative AI by connecting specialists with real-world expertise.

Role overview
This part-time, remote contract is for a Freelance Python Data Scraping Engineer (AI Pilot) supporting the Tendem project. Candidates must be based in Iowa, United States. The work centers on managing and carrying out web data extraction tasks, collaborating closely with Tendem Agents, and applying critical thinking to ensure the accuracy and relevance of collected data. Quality assurance is a key part of the position.

What you will do
- Manage end-to-end data extraction workflows for complex websites, delivering structured datasets with precision and reliability.
- Use internal tools such as Apify and OpenRouter, along with custom workflows, to collect, validate, and process data according to project needs.
- Adapt scraping methods for dynamic web sources, including handling JavaScript-rendered content and responding to changing site behaviors.
- Apply strict data quality standards, running validation checks and systematic verification before delivering results.
- Scale operations for large datasets using batching or parallelization, monitor for failures, and maintain stability when site structures change.

Requirements
- Minimum 3 years of experience in data engineering, web scraping, automation, or a related field.

Compensation
Earn up to $32 per hour, based on expertise and contribution speed. Actual pay may vary depending on project scope and complexity, and may differ across projects on the Mindrift platform.

How to apply
Submit an application through this posting to be considered for projects that fit your technical background and availability. Work may involve coding, automation, or refining AI outputs, all contributing to AI advancement and practical use cases.
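The parallelize-then-validate pattern described above can be sketched in outline. This is a hypothetical stand-in, not Mindrift’s actual tooling: `fetch_record` is a stub standing in for a real scraper call (for example, an Apify actor run), and the record fields are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_record(url: str) -> dict:
    # Stand-in for a real HTTP/scraper call; a production pipeline
    # would fetch and parse the page here.
    return {"url": url, "title": f"Page at {url}", "price": 9.99}

def is_valid(record: dict) -> bool:
    # Systematic verification before delivery: required fields present,
    # types and ranges sane. Extend with cross-source checks as needed.
    return (
        isinstance(record.get("title"), str)
        and record["title"].strip() != ""
        and isinstance(record.get("price"), (int, float))
        and record["price"] >= 0
    )

def scrape_batch(urls: list[str], workers: int = 8) -> tuple[list[dict], list[str]]:
    """Fetch URLs in parallel; return (valid records, URLs that failed validation)."""
    results, failed = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so results line up with urls.
        for url, record in zip(urls, pool.map(fetch_record, urls)):
            if is_valid(record):
                results.append(record)
            else:
                failed.append(url)
    return results, failed

urls = [f"https://example.com/item/{i}" for i in range(20)]
ok, bad = scrape_batch(urls)
print(len(ok), len(bad))
```

Keeping validation separate from fetching, and returning the failed URLs rather than dropping them, is what makes the failure monitoring mentioned in the posting possible: the failed list can feed retries or alerts.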

Apr 24, 2026
Contract|$90/hr - $90/hr|Remote|Remote — Virginia, United States

Please submit your CV in English and indicate your English proficiency level.

Toloka AI, in partnership with Mindrift, offers project-based opportunities for experienced data science professionals. Assignments support technology companies by testing, evaluating, and refining AI systems. All work is temporary and structured around individual projects, not ongoing employment.

Role overview
This freelance position centers on creating and validating computational challenges that mirror real-world data science workflows. Projects span sectors including telecommunications, finance, government, e-commerce, and healthcare. Each challenge is designed to require advanced analytical thinking and practical coding skills.

What you will do
- Develop original data science problems that simulate business scenarios such as customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency.
- Design tasks that require advanced Python programming, using libraries like pandas, NumPy, SciPy, scikit-learn, statsmodels, Matplotlib, and Seaborn.
- Ensure problems are computationally intensive and may take significant time to solve.
- Cover the full data science pipeline: data ingestion, cleaning, exploratory analysis, modeling, validation, and deployment considerations.
- Include big data processing elements that require scalable solutions.
- Write deterministic, reproducible problems by avoiding randomness or using fixed seeds.
- Validate solutions in Python and document both the problem statements and correct answers clearly in a business context.

Requirements
- Minimum 5 years of hands-on data science experience with proven business impact.
- Portfolio of completed projects or publications demonstrating real-world solutions.
- Advanced Python skills for data science, including experience with pandas, NumPy, SciPy, scikit-learn, and statsmodels.
- Strong foundation in statistical analysis and machine learning methods.
- Proficiency in SQL and database operations for data manipulation and analysis.
- Familiarity with Generative AI tools and concepts, such as large language models, retrieval-augmented generation, prompt engineering, and vector databases.
- Understanding of MLOps practices and model deployment workflows.
- Knowledge of frameworks like TensorFlow, PyTorch, and LangChain.
- Excellent written English communication skills at C1 level or above.

How to apply
1. Submit your application.
2. Complete the required qualifications.
3. Join a project.
4. Finish assigned tasks.
5. Receive payment.

Location: Remote, Virginia, United States
Note: All roles are temporary and project-based.

Apr 25, 2026
Part-time|$32/hr - $32/hr|Remote|Remote — New York, United States

Mindrift connects technical experts with AI-driven projects, focusing on the intersection of generative AI and specialized human knowledge. The company partners with technology leaders to deliver real-world solutions powered by collaborative intelligence.

Role overview
This part-time, remote contract position centers on advanced data extraction and processing for the Tendem project. As a Freelance Python Data Scraping Engineer (called "AI Pilot" at Mindrift), the role works within an AI-human hybrid system. Engineers collaborate with Tendem Agents, who handle routine tasks, while focusing on critical thinking and applying domain expertise to produce accurate, actionable data insights.

What you will do
- Run end-to-end workflows to extract data from complex websites, delivering structured datasets with high accuracy.
- Use internal tools such as Apify and OpenRouter, as well as custom-built solutions, to collect, validate, and process data according to project needs.
- Adapt extraction methods for dynamic or JavaScript-rendered content and evolving site structures.
- Ensure data quality through validation checks, cross-source comparisons, formatting standards, and systematic verification before delivery.
- Optimize large-scale scraping with batching or parallelization, monitor for failures, and maintain resilience to minor layout changes.

Requirements
- Minimum of 3 years of experience in data engineering, web scraping, automation, or a closely related technical field.

Compensation
Earn up to $32 per hour on this project, depending on experience and pace. Actual rates may vary by project scope, complexity, and required skills. Other projects on the Mindrift platform may offer different compensation based on their needs.

How to apply
Submit an application to this posting. After qualifying, join projects that fit your technical strengths and work on a flexible schedule. Tasks may include coding, automation, and refining AI outputs, all contributing to the advancement of AI and its real-world uses.

Apr 24, 2026
Part-time|$32/hr - $32/hr|Remote|Remote — Wisconsin, United States

Mindrift connects specialists with AI-driven projects from technology innovators. The company blends generative AI with expertise from contributors worldwide.

Role overview
This part-time, remote position is open to candidates based in Wisconsin, United States. As a Freelance Python Data Scraping Engineer, you will support the Tendem project by executing specialized data scraping workflows within an AI and human collaboration system. Internally, this role is called an AI Pilot. You will work closely with Tendem Agents to tackle repetitive tasks, applying critical thinking and domain knowledge to deliver accurate, actionable data. Consistent quality control and attention to detail are essential in this role.

Main responsibilities
- Manage end-to-end data extraction workflows across complex websites, ensuring thorough coverage and accuracy.
- Use internal tools such as Apify and OpenRouter, along with custom-built workflows, to accelerate data collection, validation, and task completion.
- Adapt extraction strategies for dynamic web sources, including those with JavaScript-rendered content or changing structures.
- Apply validation checks and systematic verification to maintain data quality before delivering results.
- Scale scraping operations for large datasets using efficient methods that remain stable as target sites evolve.

Compensation
Earn up to $32 per hour, depending on experience and pace of contribution. Actual pay may vary by project scope and complexity.

Application process
Submit your application and demonstrate your technical skills to be considered. This freelance role offers the chance to contribute to real-world AI projects while working on a flexible schedule.

Apr 24, 2026
Part-time|$32/hr - $32/hr|Remote|Remote — Virginia, United States

Mindrift seeks a Freelance Python Data Scraping Engineer to support specialized data workflows for the Tendem project. This is a remote, part-time contract based in Virginia, United States. The role centers on complex data extraction, quality control, and collaboration with Tendem Agents, who automate routine tasks.

Role overview
This position involves designing and managing data scraping operations for dynamic and interactive websites. The focus is on delivering structured, reliable datasets that meet project requirements. As an "AI Pilot," the engineer handles the more challenging aspects of extraction while ensuring data quality and actionable outcomes.

What you will do
- Lead end-to-end extraction from complex sites, producing accurate and structured data.
- Use internal tools like Apify and OpenRouter, along with custom Python workflows, for collecting, validating, and processing data.
- Adjust techniques to extract information from JavaScript-heavy or interactive sources.
- Perform validation checks and consistency controls to maintain high data quality before delivery.
- Scale operations for large datasets by batching or parallelizing tasks, monitor for failures, and adapt to changes in website structures.

Compensation
Rates reach up to $32 per hour, based on experience, speed, and project complexity. Actual pay may vary depending on the specific assignment and required skills.

How to apply
Submit an application to this post and complete the qualification steps. Candidates who qualify may join projects that fit their technical background and availability. Assignments include coding, automation, and refining AI-driven outputs, contributing to practical AI solutions.

Apr 24, 2026
Part-time|$32/hr - $32/hr|Remote|Remote — San Antonio, Texas, United States

Mindrift brings together technical specialists and AI-driven projects from major technology innovators. The platform’s goal is to blend generative AI with real-world expertise from a global network.

Role overview
This part-time, contract position focuses on building and maintaining Python-based data scraping workflows for the Tendem project. The role is open to candidates in San Antonio, Texas, or working remotely from anywhere in the United States. As a Freelance Python Data Scraping Engineer, you will work within a hybrid system that combines AI and human input. Internally, this position is known as an AI Pilot, collaborating with Tendem Agents who manage routine tasks. The AI Pilot uses domain expertise and quality assurance skills to deliver reliable, actionable datasets.

What you will do
- Oversee end-to-end data extraction workflows on complex websites, ensuring accurate and well-structured results.
- Use internal tools such as Apify and OpenRouter, along with custom Python scripts, to manage data collection, validation, and processing.
- Adjust extraction methods for dynamic or JavaScript-heavy sites, refining techniques as website behaviors evolve.
- Apply data quality checks, including validation, cross-source consistency, and adherence to formatting standards before delivering datasets.
- Scale up scraping operations for large datasets using batching or parallelization, monitor workflow stability, and address errors from minor site changes.

Requirements
- Minimum of 3 years’ experience in data engineering, web scraping, automation, or a closely related technical field.
- Strong Python programming skills and practical experience with data extraction tools.
- Demonstrated ability to think critically and solve problems.
- High attention to detail and a strong focus on data quality.

Compensation
Earn up to $32 per hour, depending on experience and contribution speed. Actual pay depends on project scope, complexity, and required expertise. Other projects may offer different rates.

How to apply
Submit your application to this post. Qualified candidates may be invited to work on projects that align with their technical skills and availability. Projects include coding, automation, and optimizing AI outputs, contributing directly to practical AI applications.

Apr 24, 2026
Part-time|$76/hr - $76/hr|Remote|Remote — United States

Please submit your CV in English and indicate your English proficiency level.

About Mindrift
Mindrift connects skilled specialists with project-based AI work for leading technology companies. Projects focus on testing, evaluating, and improving AI systems. This is a freelance, project-based role, not permanent employment.

Role Overview
The Material Science Specialist with Python will contribute to a variety of projects. Tasks may include:
- Designing material engineering challenges that reflect real engineering workflows
- Developing problems that require Python programming to solve engineering calculations and simulations
- Ensuring tasks are computationally intensive, involving numerical methods or iterative solutions
- Creating challenges focused on system design, optimization, and analysis
- Basing problems on real-world research or practical engineering scenarios
- Validating solutions using Python and well-established engineering libraries
- Documenting problem statements clearly and providing accurate, verified solutions

Who Should Apply
This freelance role suits material science professionals and engineers with Python skills who are interested in part-time, project-based work. Preferred qualifications include:
- Degree in Material Science or a closely related field
- Proficiency in Python for numerical validation (experience with MATLAB, R, C, SQL, NumPy, pandas, SciPy, or relevant libraries is welcome)
- At least 2 years of relevant experience in applied, research, or teaching roles
- Strong grasp of practical engineering constraints and approximations
- Excellent written English skills (C1 level or higher)

How Projects Work
1. Submit your application
2. Complete qualification steps
3. Join a project
4. Complete assigned tasks
5. Receive payment

Time Commitment
Active project phases typically require 10–20 hours per week. Actual workload may vary with project needs.

Compensation
Earn up to $76 per hour, depending on contribution level and project pace. Compensation varies by project scope, complexity, and expertise required. Different projects may offer different pay rates based on their specific requirements.
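As a loose illustration of the "numerical methods or iterative solutions" this role calls for (the physical setup and every parameter value here are invented, not taken from the posting), one might pose a heat-balance problem with no closed-form answer and solve it by bisection:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def residual(T, P=500.0, h=10.0, A=0.5, eps=0.8, T_amb=293.15):
    """Energy balance for a plate: absorbed power P minus convective
    and radiative losses at surface temperature T (kelvin)."""
    return P - h * A * (T - T_amb) - eps * SIGMA * A * (T**4 - T_amb**4)

def solve_bisection(lo=293.15, hi=1500.0, tol=1e-6):
    """Bisection works here because the residual decreases monotonically
    in T: positive residual means the plate is still heating up."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid  # still absorbing more than it sheds: hotter
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_star = solve_bisection()
print(round(T_star, 2))  # steady-state temperature in kelvin
```

The T^4 radiation term is what blocks an algebraic solution, making this the kind of task that is easy to state, verifiable, and still genuinely computational.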

Apr 16, 2026
Contract|$90/hr - $90/hr|Remote|Remote — United States

Please submit your resume in English and indicate your level of English proficiency.

At Mindrift, innovation and opportunity converge. We harness the power of collective intelligence to ethically shape the future of artificial intelligence.

About Us
The Mindrift platform serves as a bridge connecting specialists with AI projects from leading technology innovators. Our mission is to unlock the potential of Generative AI by leveraging real-world expertise from around the globe.

Role Overview
Generative AI models are evolving rapidly, and one of our key objectives is to enhance their ability to tackle specialized queries and develop advanced reasoning skills. As a Machine Learning Engineer on our platform, you will have the unique opportunity to collaborate on such innovative projects. While each project is distinct, typical responsibilities may include:
- Designing original computational STEM problems that replicate authentic scientific workflows.
- Developing problems that necessitate Python programming for resolution.
- Creating problems that are computationally intensive and unsolvable manually within practical timeframes (days/weeks).
- Formulating problems that require complex reasoning and creative problem-solving techniques.
- Validating solutions using Python with standard libraries (NumPy, pandas, SciPy, scikit-learn).
- Clearly documenting problem statements and providing verified correct answers.

Getting Started
To apply, simply respond to this post, qualify, and seize the opportunity to engage in projects that align with your skills, on your own timetable. By creating training prompts and refining model responses, you will contribute to shaping the future of AI, ensuring that technology serves everyone.

Dec 17, 2025
Apply
Lila Sciences
Full-time|$120K/yr - $192K/yr|On-site|Cambridge, MA USA

Your Contribution at Lila Sciences
Be a part of innovating the scientific landscape! We are on the lookout for a talented software engineer with a background in life sciences to enhance our data science team. In this role, you will collaborate closely with software engineers, laboratory scientists, and machine learning engineers to develop state-of-the-art tools for automated scientific analysis and beyond. Your expertise in web services and data engineering, particularly Python development for scientific applications, will be crucial. If you excel in a collaborative and fast-paced environment while adhering to best practices in git, development workflows, and user-centered design, we encourage you to apply!

Your Responsibilities
Engage in the complete software development life cycle, concentrating on the design, implementation, and maintenance of software services.
Create reusable code and libraries to enhance efficiency and scalability.
Ensure development aligns with strategic objectives, facilitating software that meets broader organizational requirements.
Oversee git repositories, manage the team's Jira board and Notion Hub, advocate for best practices, assist laboratory scientists in utilizing new tools, and cultivate a collaborative development culture.
Collaborate directly with scientists to identify gaps and unmet needs, crafting customized software solutions for data management, LIMS functionality, and data automation.
Advocate for infrastructure as code and devise efficient deployment strategies.
Produce clear, concise documentation for both engineering teams and end users.

Required Qualifications
A minimum of 2 years of software development experience in a commercial environment.
High proficiency in Python programming.
Solid understanding of git best practices.
Strong listening skills and the patience to thoroughly understand user challenges.
Experience implementing scalable software solutions.
Exceptional problem-solving abilities and a team-oriented mindset.
Excellent communication skills for effective collaboration with team members and stakeholders.
A proactive self-starter with independent thinking and keen attention to detail.
A desire to work with highly skilled, dynamic teams in a fast-paced, entrepreneurial technical setting.

Preferred Qualifications
A background in the biological sciences.
Familiarity with data science and machine learning libraries (e.g., pandas, numpy, scipy).
Knowledge of modern developer tools (e.g., pydantic, pyright, uv).
Understanding of Kubernetes, ArgoCD, and GitHub Actions.

Mar 4, 2026
Apply
Nagarro
Full-time|Remote

Join Nagarro as a Staff Engineer specializing in Data Science, where you will have the opportunity to work on cutting-edge projects in a fully remote environment. As a key member of our engineering team, you'll leverage your expertise to build innovative data-driven solutions that empower our clients and enhance their business strategies.

May 19, 2023
Apply
company
Full-time|Remote

About RevenueBase:
At RevenueBase, we are dedicated to building robust data infrastructure that enhances the trustworthiness of AI agents and minimizes errors. We offer freshly updated and verified B2B data crucial for the effective operation of autonomous AI agents and go-to-market workflows. Our company has experienced tremendous growth, tripling in size while achieving 100% gross dollar retention and maintaining positive cash flow. Our data powers AI solutions for major players like Clay, Zoominfo, and Dun & Bradstreet, as well as the next generation of AI-driven GTM tools.

Why This Position is Critical:
As our data platform rapidly scales, we are seeking a skilled engineer to take charge of data pipelines from start to finish, ensuring high data quality and robust reliability during our growth phase. This role is essential for enhancing our data infrastructure, speeding up delivery through automation, and guaranteeing that our B2B clients receive reliable and timely data. You will work on data systems that directly support our customers' workflows, where pipeline dependability and data quality are crucial for client retention.

Your Responsibilities:
Develop and maintain production-ready data pipelines using DBT, Snowflake, and contemporary orchestration tools.
Take ownership of data engineering features from implementation through optimization and deployment.
Troubleshoot and enhance existing pipelines by identifying bottlenecks and improving performance.
Lead automation projects across the data stack to speed up delivery and minimize manual tasks.
Provide secondary support for B2B clients by investigating data discrepancies and clarifying edge cases, ensuring data reliability.
Design and implement new data import pipelines as we broaden our data source coverage.
Enhance data quality by implementing validation, monitoring, and testing processes to ensure accurate data delivery.
Participate in code reviews and architectural discussions, and contribute to data engineering best practices.

Your Profile:
You have over 3 years of professional experience in data engineering. You possess strong skills in SQL, data modeling, Python, and ETL/ELT principles.
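The data-quality work described above (validation, monitoring, testing before delivery) can be sketched minimally as below. The column names, sample rows, and rules are invented for illustration and are not RevenueBase's actual schema or checks.

```python
import pandas as pd

# Hypothetical sketch: lightweight pre-delivery quality checks on a
# B2B contact table. The schema and rules are illustrative assumptions.
records = pd.DataFrame({
    "company": ["Acme", "Globex", "Globex", None],
    "domain":  ["acme.com", "globex.com", "globex.com", "initech.com"],
})

issues = []
if records["company"].isna().any():                          # completeness check
    issues.append("missing company names")
if records.duplicated(subset=["company", "domain"]).any():   # uniqueness check
    issues.append("duplicate company/domain rows")

print(issues)  # both checks fire on this sample
```

In a real pipeline, checks like these would typically run as automated tests on each batch, blocking delivery or alerting when any rule fails.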

Feb 10, 2026
Apply
wizdaa
Full-time|Remote

We are seeking a top-tier Data Engineer to join our team at wizdaa. If you are a developer who excels in:
Leading your team with technical expertise
Resolving complex challenges that others find difficult
Delivering intricate features at an accelerated pace
Creating exceptionally clean and maintainable code
Enhancing our codebase with pride and diligence
Your skills and experience will help us drive efficiency and innovation in data processing.

Key Responsibilities:
Develop, enhance, and scale data pipelines and infrastructure using Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.
Design, implement, and monitor data ingestion and transformation workflows, ensuring optimal performance and reliability.
Work collaboratively with platform and AI/ML teams to automate data workflows and develop a comprehensive feature store.
Integrate health metrics into engineering dashboards for enhanced visibility and operational insight.
Model data and execute scalable transformations in Snowflake and PostgreSQL.
Create reusable frameworks and connectors to streamline internal data processes.

Sep 8, 2025
Apply
company
Full-time|$120K/yr - $160K/yr|On-site|Somerville, Massachusetts, United States

Join VIA Science as a Senior Data Engineer
At VIA Science, we are on a mission to create cleaner, safer, and more equitable communities. By breaking down digital barriers, we provide the most secure and straightforward data and identity protection solutions. As a trusted partner of the U.S. Department of Defense and Fortune 100 companies worldwide, we tackle the toughest challenges in data and identity protection using our innovative Web3, quantum-resistant, passwordless technologies, protected by 19 patents.

In your role as a Senior Data Engineer, you will be instrumental in shaping the future of our solutions, empowering our clients to leverage AI for data-driven decision-making. You will collaborate with a dynamic team of data experts, developers, DevOps, and Client Delivery professionals who are at the forefront of AI innovation. This role is ideal for individuals driven by the challenge of enhancing data accessibility, ensuring high standards of data quality and availability, and improving overall performance.

Jan 26, 2026
Apply
Data Engineer Co-op

Megazone Cloud

Internship|On-site|Rochester, NY

Join Our Team as a Data Engineer Co-op
At Megazone Cloud, we are seeking a passionate and innovative Data Engineer Co-op who thrives on building solutions and tackling real-world customer challenges. If you have a curious mind and a proactive attitude, we want to hear from you! This is your chance to dive into the world of data engineering, transitioning from academic theory to hands-on professional experience. We embrace a versatile skill set, ideal for those ready to engage in a dynamic environment and learn a modern tech stack through practical involvement.

Your Role
Hands-On Contribution: Become an integral member of our technical team, actively participating in the development and maintenance of the data pipelines that drive business and client solutions.
Mentorship Opportunities: Work alongside experienced engineers under a mentorship model that aims to enhance your technical skills.
Development Tasks: Write, refine, and optimize Python and SQL code for process automation and data workflow management.
Project Engagement: Attend team meetings and contribute fresh insights during technical discussions and design sessions.
Tooling Exposure: Gain practical experience with technologies such as AWS, Databricks, and Snowflake.
Collaborative Work: Collaborate with data science and infrastructure teams to understand how high-growth startups scale their operations.

Your Profile (Requirements)
Education: Currently enrolled in a Bachelor's or Master's degree program in Computer Science, Data Science, or a related technical discipline.
Timeline: Anticipated graduation date of December 2026 or later.
Core Skills: Experience with Python and SQL through academic projects or coursework.
Foundational Knowledge: Strong understanding of data structures, algorithms, and database concepts.
Mindset: A proactive builder mentality with a strong desire for technical growth.
Communication Skills: Ability to articulate technical concepts clearly and collaborate effectively within a team setting.
Eligibility: Must be authorized to work in the United States.

Feb 23, 2026
