Experience Level
Senior
Qualifications
Experience with Cloud APIs, SQL, and NoSQL databases. Proficiency in building and optimizing data pipelines processing large volumes of data. Ability to implement clean, incremental, and automated data updates.
About the job
Since its inception, Fivetran has been on a mission to simplify and enhance data accessibility, making it as reliable as electricity. Our platform seamlessly delivers customer data to warehouses in a ready-to-query format, eliminating the need for complex engineering or ongoing maintenance. We take pride in empowering organizations to harness data-driven insights with our technology on a daily basis.
About the Role
We are seeking a talented Senior Software Engineer to join our data pipeline service team. In this role, you will be responsible for developing and maintaining data pipelines that transfer data from various sources to data warehouses. Your responsibilities will encompass a wide range of tasks, from patching existing software to designing and implementing new connectors on our global Kubernetes compute cluster. You will ensure technical excellence within your team and services by actively contributing to code development, conducting code reviews, participating in architectural design, and mentoring junior engineers.
This full-time position is based in our Bangalore office, where our hybrid work model combines remote flexibility with in-person collaboration. Team members are expected to work in the office twice a week to foster connections and teamwork.
About Fivetran
Fivetran specializes in automating data integration, ensuring that businesses can access clean, reliable data effortlessly. Our innovative solutions empower companies to transform into data-driven organizations, leveraging their data for strategic decision-making.
Similar jobs
Teamwork Makes the Stream Work. Roku is Revolutionizing Television Viewing
Roku stands at the forefront as the leading TV streaming platform across the U.S., Canada, and Mexico, with an ambitious goal to power every television worldwide. We initiated the streaming journey for TVs and aim to be the central platform connecting the entire TV ecosystem. Our mission is to connect viewers with their favorite content, empower publishers to grow and monetize large audiences, and provide advertisers with innovative tools to engage effectively.
From your first day at Roku, your contributions will be valued and impactful. We are a rapidly expanding public company where every team member plays a crucial role. Join us in delighting millions of viewers globally while gaining significant experience across diverse disciplines.
About the Team
The Data Insights team is integral to Roku’s Advertising organization, spearheading measurement and analytics efforts that drive strategic decisions within the advertising landscape. We craft and oversee products that yield actionable insights for advertisers while fulfilling the operational and analytical requirements of internal teams. Collaboration is key as we partner with Product Managers, Data Scientists, Ad Sales, Ads Operations, and various groups within Advertising Engineering to deliver high-impact solutions. Looking ahead, we are investigating AI-driven measurement capabilities to enhance advertising campaign effectiveness and bolster internal analytics.
About the Role
We are in search of a talented Senior Software Engineer with extensive expertise in big data technologies, such as Apache Spark and Apache Airflow. This hybrid role merges software engineering and data engineering, necessitating skills in designing, building, and maintaining scalable systems for application development and large-scale data processing.
In this position, you will collaborate with cross-functional teams to architect and manage robust, production-grade data products that fuel essential analytics and measurement capabilities. You will engage with technologies including Apache Spark, Apache Airflow, Trino, Druid, Spring Boot, and StarRocks.
At Databricks, we are driven by our mission to empower data teams in tackling some of the most challenging issues facing the world today. From realizing the future of transportation to expediting medical innovations, we build and maintain the leading data and AI infrastructure platform, enabling our clients to harness deep data insights for business enhancement. Founded by engineers with a relentless focus on customer satisfaction, we eagerly embrace every challenge, whether it's designing next-gen UI/UX for data interaction or scaling our services across millions of virtual machines.
Our Databricks Mosaic AI utilizes a distinctive data-focused approach to develop enterprise-grade Machine Learning and Generative AI solutions, allowing organizations to securely and cost-effectively manage and deploy models trained with their proprietary data. We're excited to expand our team in Bengaluru, India, where we are in the process of launching 14 new teams from scratch!
As a Senior Software Engineer at Databricks India, you will engage with various domains including:
- Backend
- Distributed Data Systems (DDS)
- Full Stack Development
Your Impact:
1. As part of our Backend teams, you will tackle diverse challenges across our core service platforms, including:
- Addressing intricate issues ranging from product development to infrastructure, focusing on distributed systems, large-scale service architecture, monitoring, workflow orchestration, and enhancing developer experience.
- Delivering dependable, high-performance services and client libraries designed for storing and accessing vast amounts of data on cloud storage solutions such as AWS S3 and Azure Blob Store.
- Creating robust, scalable services using technologies like Scala, Kubernetes, and Apache Spark™, supporting an infrastructure that handles millions of cluster-hours daily, while developing product features that empower customers to effortlessly manage and monitor their platform usage.
2. Our DDS team encompasses:
- Apache Spark™
- Data Plane Storage
- Delta Lake
- Delta Pipelines
- Performance Engineering
3. As a Full Stack engineer, you will collaborate closely with your team and product management to deliver an outstanding user experience.
Full-time|On-site|Bengaluru, Karnataka, India, APAC
Join Fivetran as a Senior Developer Content Manager for our Connector SDK team. This is an exciting opportunity to lead the enhancement and management of developer content, focusing on creating engaging and informative resources for our SDKs. You will collaborate with cross-functional teams to ensure that developers have access to the best possible documentation and support.
Teamwork makes the stream work. Join Roku and Transform the Future of TV Streaming!
As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is at the forefront of revolutionizing how audiences engage with television. Our goal is to power every TV worldwide, connecting viewers to their favorite content while empowering publishers and advertisers with innovative solutions.
From day one, your contributions at Roku will be recognized and valued. We are a dynamic, growing public company where every team member plays a crucial role in delighting millions of viewers around the globe while acquiring invaluable experience across diverse fields.
About Our Big Data Team
Roku operates one of the largest data lakes globally, managing over 70 PB of data and executing more than 10 million queries each month. Our Big Data team is responsible for developing and maintaining the platform that makes this possible. We offer tools to acquire, generate, process, monitor, validate, and access data for both streaming and batch processing. Our technologies include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and more. The team actively contributes to the Open Source community and aims to expand its involvement.
Your Role
We are modernizing our Big Data Platform and need your expertise to redefine our architecture to enhance user experience, reduce costs, and boost efficiency. If you are passionate about Big Data technologies and eager to explore Open Source, this position is tailored for you!
Key Responsibilities
Optimize and fine-tune existing Big Data systems and pipelines, while also developing new ones to ensure they operate efficiently and cost-effectively.
Collaboration Fuels Innovation. Join Roku in Revolutionizing Television Viewing
As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to enhance how audiences experience television globally. We pioneered streaming technology and aim to connect consumers with the content they cherish, empower content publishers to grow and monetize their audiences, and offer advertisers unique tools to engage effectively with consumers.
From day one at Roku, your contributions will be meaningful and recognized. As a rapidly expanding public company, we foster an environment where everyone plays a vital role. You’ll have the chance to delight millions of TV streamers worldwide while gaining invaluable experience across diverse disciplines.
Team Overview
The Data Management Platform (DMP) team is pivotal within Roku's Advertising division, spearheading audience management initiatives that drive decision-making across the advertising landscape. Our team develops and oversees products that facilitate advanced audience segmentation and management for advertisers, aligning with internal operational requirements. We collaborate closely with Product Managers, Machine Learning experts, Ad Sales, Ads Operations, and various teams within Advertising Engineering to deliver impactful solutions. Looking ahead, we are investigating AI-driven capabilities to further optimize advertising campaigns and enhance our platform's operational efficiency.
Role Overview
We are in search of a talented Senior Software Engineer skilled in big data technologies such as Apache Spark and Apache Airflow. This hybrid role will bridge software engineering expertise with data management, focusing on developing innovative solutions that enhance our advertising capabilities.
Teamwork Makes the Stream Work. Join Roku in Revolutionizing TV Viewing
Roku is the leading TV streaming platform across the U.S., Canada, and Mexico, and our ambition is to power every television globally. We are at the forefront of streaming technology, connecting consumers to their favorite content, enabling publishers to grow and monetize their audiences, and providing advertisers with innovative ways to connect with viewers.
From your very first day at Roku, you'll be an integral part of our mission. We thrive on a culture of innovation and collaboration, where every employee contributes to our success. You will have the opportunity to enhance the viewing experience for millions of streamers worldwide while gaining invaluable experience across diverse disciplines.
Full-time|Hybrid|Bengaluru, Karnataka, India, APAC
Since its inception, Fivetran has been dedicated to simplifying and ensuring the reliability of data access, likening it to the ease of electricity. Our platform seamlessly integrates customer data into their warehouses, rendering it canonical and ready for analysis, with no engineering overhead or maintenance needed. We take pride in the growing number of organizations that harness our technology daily to become authentically data-driven.
About the Role
We are seeking a Senior Software Engineer to join our Cloud Data Warehouse database connector team, collaborating with other database teams to enhance platform performance, reliability, and analytical capabilities, ultimately providing exceptional data usage experiences for our customers.
This team is responsible for advancing a high-performance extract-load-transform (ELT) data integration system that empowers our connector teams to deliver the Fivetran ELT product through robust abstractions. The position will challenge you to tackle issues within the realms of performance engineering, data security, and cluster orchestration. You need not be a subject matter expert upon joining (you will become one in this role!), but prior experience in high-impact software teams is essential, especially in complex environments where optimal solutions may not be immediately apparent, and where your decisions have significant ramifications.
We value candidates who are open to diverse perspectives and adept at building consensus. Successful individuals will possess the confidence to make decisive choices when necessary and the pragmatism to iterate on systems while they are live in production. A high level of expertise and productivity in contemporary software development environments, particularly with Java, is essential.
We seek engineers who can grasp the key values that elevate our product and implement those values into the many small decisions they make daily as one of our senior engineers.
This position is full-time and based in our Bangalore office. Our hybrid work model allows for a balanced mix of remote work and in-person collaboration, requiring two days in the office each week to foster connections and team-building.
Collaboration Fuels Innovation. Join Roku: Redefining Television
Roku stands as the leading TV streaming platform in the U.S., Canada, and Mexico, with aspirations to empower every television globally. As pioneers in streaming technology, we aim to connect consumers with their cherished content, assist content publishers in expanding and monetizing their audiences, and offer advertisers unique tools to engage effectively with viewers.
From day one at Roku, you will have the opportunity to make significant contributions. As a rapidly expanding public company, we foster an environment where every team member plays a vital role. You will have the chance to delight millions of TV streamers worldwide while acquiring valuable experience across diverse fields.
About Our Team
The Ad Revenue team plays a crucial role within Roku's Advertising organization, focusing on financial automation and data solutions that facilitate informed decision-making across the advertising landscape. We thrive in a dynamic and intricate environment, collaborating closely with Finance, Accounting, Analytics, and various engineering teams to provide timely, impactful insights into business performance. As we progress, we are investing in AI-driven capabilities to enable non-technical stakeholders to effectively interpret our diverse data assets into actionable results.
Role Overview
We are in search of a highly proficient Senior Software Engineer for a hybrid role that merges software and data engineering. This position demands the capability to design, construct, and maintain scalable systems for both application development and extensive data processing. You will be responsible for architecting and overseeing production-grade data products and APIs, utilizing technologies such as Java/Scala, SQL, Spark, Airflow, and Kubernetes to deliver dependable, high-performance solutions.
The ideal candidate will have a documented history of building high-scale data services and pipelines, with a strong commitment to data quality and operational excellence.
P-1385
In today's landscape, businesses are channeling substantial investments into the development and implementation of AI technologies and the data platforms that support them. However, do they truly understand the intricacies of their operations? At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging global issues, from revolutionizing transportation to expediting medical innovations. We achieve this by constructing and maintaining the world's premier data and AI infrastructure platform, enabling our clients to leverage deep data insights to enhance their operations. Founded by engineers who prioritize customer satisfaction, we eagerly embrace every opportunity to confront technical challenges, including designing next-generation UI/UX for data interaction and optimizing our services across millions of virtual machines. And this is just the beginning.
As a Senior Software Engineer on the Customer Foresight Team, you will spearhead the development of tools that empower customers to gain insights into their AI and data workloads, optimize performance, and reduce costs. You will craft data infrastructure capable of processing billions of entries daily, deploy across over 65 cloud regions globally, and support some of the world's most prominent companies in executing their DevOps, FinOps, SecOps, and AIOps operations. This role involves directing technical development throughout product milestones, from requirement refinement to execution, operation, and collaboration with the broader product development and partner teams to guarantee product success.
P-1403
At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world, ranging from transforming transportation to accelerating groundbreaking medical advancements. Our mission is realized through the development and operation of the premier data and AI infrastructure platform, enabling our clients to leverage deep data insights for enhanced business performance.
The ingestion of data into the Lakehouse represents a pivotal investment area for Databricks, serving as a vital enabler for Data and AI processes. The Lakeflow Connect initiative aims to address this challenge by offering intuitive, ready-to-use connectors for a diverse array of sources, including enterprise applications (such as Salesforce, Workday, ServiceNow, SharePoint), databases (e.g., SQL Server), cloud storage, message queues, and local files. In addition to being a crucial component of Lakeflow and Data Engineering, Connect is a fundamental platform capability. Every interface at Databricks (Dashboards, Notebooks, SQL, AI) relies on ingestion functionality, and the leader in this role will collaborate closely with other product teams to integrate Connect into these interfaces.
We are seeking engineers who possess a strong foundation in core database internals to join our Lakeflow Connect team. A significant aspect of Connect involves extracting data from OLTP systems while minimizing the impact on production environments. To achieve this efficiently, we are developing systems that implement techniques such as incremental data capture and log parsing. We are looking for hands-on engineers eager to make a substantial impact on a critical challenge facing the company.
Key Responsibilities:
- Architect, develop, and maintain efficient and scalable batch and stream data processing infrastructures to facilitate day-to-day machine learning operations, including training, serving, evaluation, and experimental systems.
- Create and implement foundational data models, data warehouses, and processing pipelines (both real-time and offline) using technologies such as AWS EMR Spark, Apache Kafka, AWS Athena, Snowflake, Airflow, and Apache HUDI.
- Collaborate closely with machine learning and data science teams to assess their data requirements, influence the data team’s strategic roadmap, and lead the execution of various initiatives.
- Establish a data governance platform to ensure secure and compliant data management, encompassing services for data cataloging, lineage tracking, auditing, data deletion, and masking.
- Develop and manage orchestration platforms utilizing Temporal and Airflow, empowering other teams to create features and workflows.
- Design and enhance platform and data services/APIs to provide data access for diverse stakeholders and customer-facing data products.
P-1348
At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging problems in the world, ranging from security threat detection to the development of cancer drugs. Our mission is to create and manage the leading data and AI infrastructure platform, allowing our customers to concentrate on the critical challenges central to their missions. Our engineering teams are committed to developing innovative technical products that meet real and significant needs globally. We continuously push the limits of data and AI technology while ensuring resilience, security, and scalability to enhance our customers' success on our platform.
We are responsible for the operation of one of the largest scale software platforms, comprising millions of virtual machines that generate terabytes of logs and process exabytes of data on a daily basis. At this scale, we encounter cloud hardware, network, and operating system issues, and our software must effectively shield our customers from these challenges.
As a Senior Software Engineer on the Data Platform team, you will contribute to building the Data Intelligence Platform for Databricks, which aims to automate decision-making across the organization. You will collaborate closely with Databricks Product Teams, Data Science, Applied AI, and more. Your role will involve developing a range of tools for logging, orchestration, data transformation, metric storage, governance platforms, and data consumption layers. You will leverage the latest and most advanced Databricks products and other tools in the data ecosystem. Our team also serves as a substantial in-house customer, using Databricks to inform the future direction of our product.
Your Impact:
- Design and manage the Databricks metrics store, enabling all business units and engineering teams to consolidate and share detailed metrics on a common platform with high quality, introspection capabilities, and query performance.
- Lead the development of the cross-company Data Intelligence Platform, which encapsulates all business and product metrics essential for running Databricks. You will play a pivotal role in balancing data protection with ease of shareability as we transition to a public company.
- Create tools and infrastructure to efficiently manage and operate Databricks on Databricks at scale across multiple clouds, geographies, and deployment types. This includes CI/CD processes, testing frameworks for pipelines and data quality, and infrastructure-as-code tooling.
- Establish the foundational ETL framework utilized by all pipelines developed within the company.
- Collaborate with our engineering teams to provide...
Acceldata is seeking a highly skilled Senior Software Engineer to join our dynamic team specializing in the Open Data Platform (ODP). In this role, you will be instrumental in designing, developing, and maintaining scalable software systems that power our innovative data solutions. You will collaborate with cross-functional teams to deliver high-quality products that exceed client expectations.
Senior Software Engineer
Join Zeta Global's dynamic Data Connectivity POD, where we innovate and build advanced data products that handle large-scale data processing with cutting-edge architecture on AWS, utilizing managed Apache Iceberg tables. Our systems handle hundreds of gigabytes daily and process multi-terabyte data volumes, supporting both batch and streaming pipelines with remarkable sub-second processing speeds. This scalable multi-tenant architecture ensures high data integrity and cost efficiency while delivering real-time updates and personalized customer experiences powered by advanced AI and proprietary data assets.
In this role, you will play a crucial part in the DCon POD team at Zeta, responsible for developing and maintaining data platforms that manage all data flows. You'll design and implement scalable frameworks that enable self-service data pipeline creation, allowing both technical and non-technical users to build robust pipelines effectively. This position involves optimizing services for both batch and real-time streaming data processing, ensuring data availability with minimal latency. You will gain valuable expertise in constructing PetaByte-scale Data Lakes and data engineering frameworks that function at an enterprise level.
As a Senior Software Engineer specializing in Microservices and Data Pipelines, you will drive the development of enterprise-grade data services. We seek hands-on, self-driven professionals eager to explore various technologies and languages to achieve results. Ideal candidates will have experience in creating high-throughput microservices, working with large-scale datasets and databases, developing ETL pipelines, and ensuring compliance in data management, particularly with strong Python skills.
Collaborate to Innovate in Streaming. Join Roku's Vision to Revolutionize Television
As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to power every television globally. We have transformed how people enjoy their favorite shows and movies by connecting consumers to the content they love, assisting publishers in reaching vast audiences, and offering advertisers unique tools to engage effectively.
At Roku, your contributions will be recognized from day one. We are a rapidly expanding public company where every team member plays a pivotal role. This is your chance to impact millions of TV streamers worldwide while gaining valuable experience across diverse fields.
About Our Team
The Ad Data Activation organization is at the forefront of building a reliable, privacy-conscious, and scalable data foundation vital to Roku’s advertising growth. We develop identity systems, device graph pipelines, audience platforms, and insights tools that empower precise targeting, measurement, and reporting across Roku Ads.
We are seeking a Senior Machine Learning Engineer to enhance our systems' intelligence. You will operate at the intersection of large-scale data platforms, applied machine learning, and generative AI, developing functionalities that make Roku's advertising data actionable for both internal teams and advertisers. This role involves creating core generative AI platform features for Roku Advertising, alongside applying traditional ML techniques.
P-1346
At Databricks, we are dedicated to empowering data teams to tackle some of the world's most challenging problems, from transforming transportation to speeding up medical innovations. Our mission revolves around creating and operating the most advanced data and AI infrastructure platform, allowing our clients to harness deep data insights to enhance their businesses. Founded by engineers and driven by a strong customer focus, we eagerly embrace every chance to address technical obstacles, whether it's designing next-generation UI/UX for data interaction or scaling our services and infrastructure across millions of virtual machines.
With Databricks Mosaic AI, we offer a distinctive data-centric methodology for constructing enterprise-grade Machine Learning and Generative AI solutions, enabling organizations to securely and cost-effectively manage ML and Generative AI models, augmented or trained using their enterprise data. As we expand in Bengaluru, India, we are in the process of establishing 14 new teams from the ground up!
As a Senior Software Engineer in the Infrastructure domain at Databricks India, you will have the opportunity to work across Backend (Infrastructure).
Your impact will include:
- Engaging with diverse challenges that bridge product and infrastructure, including distributed systems, large-scale service architecture and monitoring, workflow orchestration, and enhancing developer experience.
- Delivering reliable and high-performance services and client libraries for managing vast amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.
- Building robust, scalable services using technologies like Scala and Kubernetes, alongside data pipelines with Apache Spark™ and Databricks, to support a pricing infrastructure that serves millions of cluster-hours daily while developing product features that allow customers to manage and monitor platform usage effortlessly.
Full-time|Hybrid|Bengaluru, Karnataka, India, APAC
Since its inception, Fivetran has been on a mission to make data access as seamless and dependable as electricity. Our technology ensures that customer data is delivered to their warehouses in a canonical format, ready for querying, without the need for engineering or maintenance. We take pride in enabling organizations to harness the power of data to drive their decisions daily.
About the Role
We are seeking a dedicated and technically proficient Senior Engineering Manager to join our engineering leadership team, with a strong enthusiasm for data movement. In this role, you will collaborate closely with engineers and product managers to shape the future of the Fivetran Data Platform.
As a Senior Engineering Manager, you will lead a team of engineers and coordinate with various stakeholders including product management, support engineers, and sales engineers. You will be responsible for the development of Fivetran’s premier destinations for our broad customer base, a critical function in fostering the long-term growth and success of our company.
You will oversee the entire software development lifecycle for your team, delivering essential capabilities that contribute to the success of our diverse clientele. Your responsibilities will include ensuring the quality, efficiency, and reliability of your team's focus areas while fostering a culture of engagement, fairness, and continuous improvement.
This is a full-time position based in our Bangalore office. Our hybrid work model provides a balance of remote flexibility and in-person collaboration, with two days in the office each week to foster team connection and collaboration.
Technologies You’ll Use
Java, Postgres, GCP, AWS, Azure, SQL, Snowflake, BigQuery, Bazel, BuildKite, Docker, Kubernetes
What You’ll Do
Lead your team through project management, technical design authoring, and reviewing development work.
Join 6sense as a Software Engineer III focusing on Data! In this pivotal role, you will leverage your expertise to build data-driven solutions that propel our innovative platform forward. Collaborate with cross-functional teams to design, develop, and optimize data processes that enhance our offerings and drive business success.
Join Tekion as a Senior Data Platform Engineer, where you will play a crucial role in developing and optimizing our data platform. You will collaborate with cross-functional teams to build scalable data solutions that empower our business decisions and enhance user experience.