Experience Level
Mid to Senior
Qualifications
- Proven experience in software engineering with a strong focus on systems infrastructure.
- Expertise in programming languages such as Java, C++, or Python.
- Solid understanding of distributed systems and cloud architecture.
- Ability to work collaboratively in a team-oriented environment.
- Strong problem-solving skills and a passion for technology.
- Experience with DevOps practices and CI/CD pipelines is a plus.
About the job
As a Senior Staff Software Engineer specializing in Systems Infrastructure at LinkedIn, you will be at the forefront of delivering cutting-edge technology solutions that support our global platform. You will be responsible for designing, developing, and optimizing our infrastructure systems that scale to meet the demands of millions of users worldwide.
Your role requires a deep understanding of systems design, architecture, and an ability to innovate in a fast-paced environment. You will collaborate with cross-functional teams to drive projects from inception through deployment, ensuring the highest standards of quality and performance.
About LinkedIn
LinkedIn is the world's largest professional network, with over 700 million members. Our mission is to connect the world's professionals to make them more productive and successful. We prioritize innovation, diversity, and inclusion in our workforce and are committed to creating a workplace that fosters growth and collaboration.
Similar jobs
Senior Software Engineer, Data Infrastructure
Key Responsibilities:
- Architect, develop, and maintain efficient and scalable batch and stream data processing infrastructures to facilitate day-to-day machine learning operations, including training, serving, evaluation, and experimental systems.
- Create and implement foundational data models, data warehouses, and processing pipelines (both real-time and offline) using technologies such as AWS EMR Spark, Apache Kafka, AWS Athena, Snowflake, Airflow, and Apache HUDI.
- Collaborate closely with machine learning and data science teams to assess their data requirements, influence the data team's strategic roadmap, and lead the execution of various initiatives.
- Establish a data governance platform to ensure secure and compliant data management, encompassing services for data cataloging, lineage tracking, auditing, data deletion, and masking.
- Develop and manage orchestration platforms utilizing Temporal and Airflow, empowering other teams to create features and workflows.
- Design and enhance platform and data services/APIs to provide data access for diverse stakeholders and customer-facing data products.
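A core pattern behind the batch pipelines described above is incremental extraction: each run processes only rows newer than a stored watermark, then advances the watermark. The sketch below is a minimal, dependency-free illustration of that idea; all names are hypothetical, and a real pipeline would read from a source system and persist the watermark durably (e.g. in the orchestrator's metadata store).

```python
# Watermark-based incremental extraction: process only rows that changed
# since the last successful run. Illustrative sketch, not a real connector.
from datetime import datetime, timezone

def extract_incremental(rows, last_watermark):
    """Return rows newer than last_watermark, plus the advanced watermark."""
    fresh = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
fresh, wm = extract_incremental(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
# fresh keeps only row 2; wm advances to 2024-01-03
```

The same shape applies whether the "rows" come from a warehouse query, a Kafka topic offset, or an HUDI commit timeline: the watermark is what makes reruns cheap and idempotent.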
P-1346 At Databricks, we are dedicated to empowering data teams to tackle some of the world's most challenging problems—from transforming transportation to speeding up medical innovations. Our mission revolves around creating and operating the most advanced data and AI infrastructure platform, allowing our clients to harness deep data insights to enhance their businesses. Founded by engineers and driven by a strong customer focus, we eagerly embrace every chance to address technical obstacles, whether it's designing next-generation UI/UX for data interaction or scaling our services and infrastructure across millions of virtual machines.

With Databricks Mosaic AI, we offer a distinctive data-centric methodology for constructing enterprise-grade Machine Learning and Generative AI solutions, enabling organizations to securely and cost-effectively manage ML and Generative AI models, augmented or trained using their enterprise data. As we expand in Bengaluru, India, we are in the process of establishing 14 new teams from the ground up!

As a Senior Software Engineer in the Infrastructure domain at Databricks India, you will work on Backend (Infrastructure). Your impact will include:
- Engaging with diverse challenges that bridge product and infrastructure, including distributed systems, large-scale service architecture and monitoring, workflow orchestration, and enhancing developer experience.
- Delivering reliable and high-performance services and client libraries for managing vast amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.
- Building robust, scalable services using technologies like Scala and Kubernetes, alongside data pipelines with Apache Spark™ and Databricks, to support a pricing infrastructure that serves millions of cluster-hours daily, while developing product features that allow customers to manage and monitor platform usage effortlessly.
About the Role
As a Software Engineer focused on Platform and Data Infrastructure, you will be pivotal in designing and maintaining the foundational elements that drive Galileo's platform. Your expertise will be vital in tackling complex systems challenges at scale, ensuring our infrastructure remains robust, efficient, and responsive. We are on the lookout for a skilled engineer who has hands-on experience in building large-scale real-time infrastructure, crafting services and APIs capable of processing millions of queries, and addressing the unique challenges posed by high-scale systems. Familiarity with optimizing high-volume traffic across SQL and NoSQL databases, time-series databases, and object stores is essential.

What You'll Be Doing
- Design and scale core infrastructure by creating and optimizing distributed systems and APIs that can manage millions of real-time queries while maintaining low latency and high reliability.
- Develop data-rich systems by working with SQL, NoSQL, time-series, and object storage solutions, ensuring that data pipelines and retrieval processes are optimized for maximum throughput and efficiency.
- Enhance performance at scale through profiling and tuning systems for latency, throughput, and cost, ensuring the platform grows in alignment with customer demand.
- Engage in the development of real-time serving systems by designing high-throughput caching layers and efficient data lookup services to provide swift, dependable access to extensive datasets.
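The "high-throughput caching layers" mentioned above usually boil down to a key-value store with time-to-live (TTL) expiry in front of a slower backing store. Below is a deliberately minimal in-process sketch of that idea; production systems would typically use Redis or memcached, sharding, and active eviction rather than this toy class.

```python
# Minimal TTL cache sketch for a real-time serving layer: cache hot
# lookups, lazily expire stale entries. Illustrative only.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # lazy eviction of the expired entry
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=0.05)
cache.put("user:42", {"segment": "premium"})
hit = cache.get("user:42")    # fresh entry: served from cache
time.sleep(0.06)
miss = cache.get("user:42")   # past TTL: treated as a miss
```

The TTL is the knob that trades freshness against backend load: shorter TTLs mean more queries fall through to the database, longer TTLs mean staler answers.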
Collaboration is key to seamless streaming. Join Roku in revolutionizing television viewing.

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to enhance every television experience globally. By pioneering streaming technologies, we aim to connect audiences with their favorite content, empower content creators to grow their reach, and offer advertisers innovative ways to engage.

From day one, your contributions at Roku will be recognized and valued. We are a rapidly expanding public company where every team member plays a vital role. Get ready to engage millions of TV streamers globally while gaining invaluable experience across diverse areas.

About Our Team
The Search & Recommendations (S&R) Platform Engineering team is at the heart of our mission to provide exceptional streaming experiences for millions worldwide. We design and maintain the core infrastructure that enables search, personalization, and content discovery across all Roku platforms. Our diverse and collaborative team emphasizes ownership, transparency, and continuous improvement. We partner with various infrastructure teams to develop high-performance distributed systems and observability tools that facilitate real-time search, ranking, and recommendations. Our projects involve designing and optimizing online inference infrastructure, feature stores, and data pipelines, all seamlessly integrated within the broader platform ecosystem (Kubernetes, Istio, Envoy). We thrive on tackling complex technical challenges that impact user experience.
About AION
AION is revolutionizing the AI landscape by crafting an interoperable AI cloud platform that redefines high-performance computing (HPC) with its decentralized AI cloud. Built for optimal bare-metal performance, AION democratizes access to computing resources and offers managed services, positioning itself as an all-encompassing AI lifecycle platform—guiding organizations from data ingestion to model deployment through its innovative forward-deployed engineering approach.

As AI reshapes businesses globally, the demand for compute resources is at an all-time high. AION aims to become the gateway for dynamic compute workloads, establishing integration bridges with various data centers worldwide, while reinventing the compute stack with cutting-edge serverless technology. We find ourselves at a pivotal moment where enterprises struggle to harmonize AI adoption with robust security measures. At AION, we prioritize enterprise security and compliance, meticulously rethinking every layer of our infrastructure, from hardware and network packets to API interfaces.

Founded by a team of high-caliber professionals with successful prior ventures, AION is well-capitalized by prominent VCs and enjoys strategic global partnerships. With headquarters in the US and a growing global presence, we are currently assembling our core team in India.

Who You Are
You are an experienced platform engineer with a passion for transforming complex infrastructure into deployable, composable, and portable solutions across diverse customer environments. You have successfully led multi-cloud deployment strategies, designed globally distributed systems with stringent data sovereignty requirements, and created automation tools for deploying enterprise platforms in customer VPCs and on-premises setups. You relish the challenge of simplifying sophisticated cloud platforms for seamless customer deployment and operation within their infrastructures.

Your skills in Kubernetes, infrastructure-as-code (Terraform, Pulumi), and multi-cloud architectures are crucial. You possess a deep understanding of private cloud deployments, workload isolation, container security, and compliance demands. You take ownership of the deployment experience, strategically assess customer needs, and aspire for your work to empower enterprises globally to run AION on their own terms.
P-926 At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging global issues—from turning innovative transportation concepts into reality to accelerating groundbreaking medical advancements. Our mission is realized through the creation and management of the world's leading data and AI infrastructure platform, enabling our clients to leverage profound data insights to enhance their business outcomes. Networking Infrastructure forms a fundamental component of this platform, driving all our data and AI offerings. It also underpins essential enterprise security features utilized by clients across our suite of products, facilitating connectivity solutions to cloud resources while safeguarding against data exfiltration.

We are establishing our team from the ground up in Bengaluru, India, and invite experienced Senior Software Engineers with a strong networking background to join our Networking Infrastructure team. As an early member of our Bengaluru site, you will focus on developing new backend connectivity services that support millions of VMs operating on Databricks. You will lead the design and development of innovative services that enhance the connectivity between our control plane and compute plane. This new platform will enable Databricks to scale our compute plane more effectively while optimizing cloud resource utilization.

Collaborating closely with cross-functional teams, including product management, operations, and other engineering groups, you will ensure the delivery of robust, scalable, and efficient networking systems. This position offers a remarkable opportunity for a hands-on leader who excels in a dynamic environment and is eager to tackle complex multi-cloud and distributed systems challenges.
P-1403 At Databricks, we are dedicated to empowering data teams to tackle the most challenging problems in the world—ranging from transforming transportation to accelerating groundbreaking medical advancements. Our mission is realized through the development and operation of the premier data and AI infrastructure platform, enabling our clients to leverage deep data insights for enhanced business performance.

The ingestion of data into the Lakehouse represents a pivotal investment area for Databricks, serving as a vital enabler for Data and AI processes. The Lakeflow Connect initiative aims to address this challenge by offering intuitive, ready-to-use connectors for a diverse array of sources, including enterprise applications (such as Salesforce, Workday, ServiceNow, SharePoint), databases (e.g., SQL Server), cloud storage, message queues, and local files. In addition to being a crucial component of Lakeflow and Data Engineering, Connect is a fundamental platform capability. Every interface at Databricks (Dashboards, Notebooks, SQL, AI) relies on ingestion functionality, and the leader in this role will collaborate closely with other product teams to integrate Connect into these interfaces.

We are seeking engineers who possess a strong foundation in core database internals to join our Lakeflow Connect team. A significant aspect of Connect involves extracting data from OLTP systems while minimizing the impact on production environments. To achieve this efficiently, we are developing systems that implement techniques such as incremental data capture and log parsing. We are looking for hands-on engineers eager to make a substantial impact on a critical challenge facing the company.
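The "incremental data capture and log parsing" described above is the essence of change data capture (CDC): instead of re-reading a whole OLTP table, a connector replays an ordered change log onto a previous snapshot. The sketch below illustrates just the replay step with hypothetical, simplified records; real connectors parse the database's WAL or binlog and handle schema evolution, ordering, and transactions.

```python
# CDC replay sketch: apply an ordered change log (inserts, updates,
# deletes) to a table snapshot. Illustrative only; record shapes are
# hypothetical, not any specific connector's wire format.
def apply_changes(snapshot, changes):
    """snapshot: dict key -> row; changes: ordered list of (op, key, row)."""
    table = dict(snapshot)
    for op, key, row in changes:
        if op in ("insert", "update"):
            table[key] = row          # upsert semantics
        elif op == "delete":
            table.pop(key, None)      # delete is idempotent
        else:
            raise ValueError(f"unknown op: {op}")
    return table

snapshot = {1: {"name": "alice"}, 2: {"name": "bob"}}
log = [
    ("update", 1, {"name": "alicia"}),
    ("delete", 2, None),
    ("insert", 3, {"name": "carol"}),
]
result = apply_changes(snapshot, log)
```

Because upserts and deletes are idempotent here, replaying the same log suffix after a failure converges to the same table, which is what lets CDC pipelines restart safely from a saved log position.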
P-1348 At Databricks, we are dedicated to empowering data teams to tackle some of the most challenging problems in the world, ranging from security threat detection to the development of cancer drugs. Our mission is to create and manage the leading data and AI infrastructure platform, allowing our customers to concentrate on the critical challenges central to their missions. Our engineering teams are committed to developing innovative technical products that meet real and significant needs globally. We continuously push the limits of data and AI technology while ensuring resilience, security, and scalability to enhance our customers' success on our platform.

We are responsible for the operation of one of the largest scale software platforms, comprising millions of virtual machines that generate terabytes of logs and process exabytes of data on a daily basis. At this scale, we encounter cloud hardware, network, and operating system issues, and our software must effectively shield our customers from these challenges.

As a Senior Software Engineer on the Data Platform team, you will contribute to building the Data Intelligence Platform for Databricks, which aims to automate decision-making across the organization. You will collaborate closely with Databricks Product Teams, Data Science, Applied AI, and more. Your role will involve developing a range of tools for logging, orchestration, data transformation, metric storage, governance platforms, and data consumption layers. You will leverage the latest and most advanced Databricks products and other tools in the data ecosystem. Our team also serves as a substantial in-house customer, using Databricks to inform the future direction of our product.

Your Impact:
- Design and manage the Databricks metrics store, enabling all business units and engineering teams to consolidate and share detailed metrics on a common platform with high quality, introspection capabilities, and query performance.
- Lead the development of the cross-company Data Intelligence Platform, which encapsulates all business and product metrics essential for running Databricks. You will play a pivotal role in balancing data protection with ease of shareability as we transition to a public company.
- Create tools and infrastructure to efficiently manage and operate Databricks on Databricks at scale across multiple clouds, geographies, and deployment types. This includes CI/CD processes, testing frameworks for pipelines and data quality, and infrastructure-as-code tooling.
- Establish the foundational ETL framework utilized by all pipelines developed within the company.
- Collaborate with our engineering teams to provide...
Teamwork Makes the Stream Work. Roku is Revolutionizing Television Viewing

Roku stands at the forefront as the leading TV streaming platform across the U.S., Canada, and Mexico, with an ambitious goal to power every television worldwide. We initiated the streaming journey for TVs and aim to be the central platform connecting the entire TV ecosystem. Our mission is to connect viewers with their favorite content, empower publishers to grow and monetize large audiences, and provide advertisers with innovative tools to engage effectively.

From your first day at Roku, your contributions will be valued and impactful. We are a rapidly expanding public company where every team member plays a crucial role. Join us in delighting millions of viewers globally while gaining significant experience across diverse disciplines.

About the Team
The Data Insights team is integral to Roku's Advertising organization, spearheading measurement and analytics efforts that drive strategic decisions within the advertising landscape. We craft and oversee products that yield actionable insights for advertisers while fulfilling the operational and analytical requirements of internal teams. Collaboration is key as we partner with Product Managers, Data Scientists, Ad Sales, Ads Operations, and various groups within Advertising Engineering to deliver high-impact solutions. Looking ahead, we are investigating AI-driven measurement capabilities to enhance advertising campaign effectiveness and bolster internal analytics.

About the Role
We are in search of a talented Senior Software Engineer with extensive expertise in big data technologies, such as Apache Spark and Apache Airflow. This hybrid role merges software engineering and data engineering, necessitating skills in designing, building, and maintaining scalable systems for application development and large-scale data processing.
In this position, you will collaborate with cross-functional teams to architect and manage robust, production-grade data products that fuel essential analytics and measurement capabilities. You will engage with technologies including Apache Spark, Apache Airflow, Trino, Druid, Spring Boot, and StarRocks.
Join Roku, a pioneering company in streaming technology, as a Senior Software Engineer specializing in Cloud Infrastructure and Observability. In this role, you will design and implement robust cloud solutions, focusing on performance, reliability, and scalability. You will collaborate with cross-functional teams to enhance our observability frameworks, ensuring seamless user experiences. If you're passionate about cloud technologies and want to make a significant impact, we want to hear from you!
Join zaimler as a Data Infrastructure Engineer

At zaimler, we understand that AI agents cannot effectively reason over fragmented data. In today's enterprise landscape, data is often scattered across multiple systems without a coherent context, leading to failures in AI applications. As we transition from copilots to fully autonomous agents, we are pioneering a new layer of infrastructure to address this challenge.

Our platform at zaimler serves as the backbone for the agentic era, facilitating the automatic discovery of domain knowledge, mapping of relationships, and providing AI agents with the semantic context they need to operate accurately and efficiently at scale. Imagine knowledge graphs that enable real-time inference, designed for systems that require reasoning, not merely data retrieval.

Founded by industry veterans Biswajit Das and Sofus Macskassy, who have extensive experience in data infrastructure and knowledge graph development, zaimler is a small but dynamic team at the seed stage, collaborating with major enterprises in sectors like insurance, travel, and technology. If you aspire to contribute to the foundational infrastructure that will support the next decade of AI advancements, we would love to connect with you.

Role Overview
We are seeking a skilled Data Infrastructure Engineer to play a pivotal role in developing our foundational distributed data layer that drives our semantic platform. You will be responsible for designing, constructing, and scaling systems that facilitate high-throughput data ingestion, transformation, and real-time processing, ultimately shaping the core of our knowledge layer. As an early member of our Bengaluru office, your expertise will significantly influence the technical direction, culture, and standards of our growing team.
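At its simplest, the knowledge-graph lookup a semantic layer serves to an agent is a bounded multi-hop traversal over typed relationships. The toy sketch below (entirely illustrative entity names; no relation to zaimler's actual system) shows a breadth-first collection of everything reachable from one entity within a hop limit.

```python
# Toy knowledge-graph traversal: adjacency lists with typed edges,
# queried for all entities within max_hops of a starting entity.
# Hypothetical data; real systems use graph stores and typed queries.
from collections import deque

graph = {
    "Order#9": [("placed_by", "Customer#3"), ("contains", "SKU#7")],
    "Customer#3": [("located_in", "Region:EMEA")],
    "SKU#7": [("supplied_by", "Vendor#12")],
}

def neighbors_within(graph, start, max_hops):
    """Breadth-first collection of entities reachable within max_hops."""
    seen, queue = {start}, deque([(start, 0)])
    reachable = set()
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # hop budget exhausted along this path
        for _edge_type, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                reachable.add(target)
                queue.append((target, depth + 1))
    return reachable

ctx = neighbors_within(graph, "Order#9", max_hops=2)
# ctx gathers the customer, SKU, region, and vendor around the order
```

The hop bound is what keeps the context handed to an agent small and relevant rather than the transitive closure of the whole graph.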
Full-time | Hybrid | Bengaluru, Karnataka, India, APAC
Since its inception, Fivetran has been on a mission to simplify and enhance data accessibility, making it as reliable as electricity. Our platform seamlessly delivers customer data to warehouses in a ready-to-query format, eliminating the need for complex engineering or ongoing maintenance. We take pride in empowering organizations to harness data-driven insights with our technology on a daily basis.

About the Role
We are seeking a talented Senior Software Engineer to join our data pipeline service team. In this role, you will be responsible for developing and maintaining data pipelines that transfer data from various sources to data warehouses. Your responsibilities will encompass a wide range of tasks, from patching existing software to designing and implementing new connectors on our global Kubernetes compute cluster. You will ensure technical excellence within your team and services by actively contributing to code development, conducting code reviews, participating in architectural design, and mentoring junior engineers.

This full-time position is based in our Bangalore office, where our hybrid work model combines remote flexibility with in-person collaboration. Team members are expected to work in the office twice a week to foster connections and teamwork.
Join WEKA, where we are revolutionizing the enterprise data stack for the era of reasoning. Our flagship product, NeuralMesh by WEKA, is setting the benchmark for agentic AI data infrastructure through a cloud and AI-native software solution that seamlessly integrates into any environment. Our technology transforms traditional data silos into efficient data pipelines that optimize GPU utilization, accelerate AI model training and inference, and enhance other compute-intensive workloads while reducing energy consumption.

As a pre-IPO, rapidly growing company, WEKA has secured $375M in funding from esteemed venture capitalists and strategic investors. We partner with some of the world's leading enterprises and research organizations, including 12 of the Fortune 50, helping them achieve faster and more sustainable discoveries, insights, and business outcomes. If you're passionate about tackling complex data challenges and driving intelligent innovation, we welcome you to embark on this exciting journey with us.
At Databricks, we are driven by our mission to empower data teams in tackling some of the most challenging issues facing the world today. From realizing the future of transportation to expediting medical innovations, we build and maintain the leading data and AI infrastructure platform, enabling our clients to harness deep data insights for business enhancement. Founded by engineers with a relentless focus on customer satisfaction, we eagerly embrace every challenge, whether it's designing next-gen UI/UX for data interaction or scaling our services across millions of virtual machines.

Our Databricks Mosaic AI utilizes a distinctive data-focused approach to develop enterprise-grade Machine Learning and Generative AI solutions, allowing organizations to securely and cost-effectively manage and deploy models trained with their proprietary data. We're excited to expand our team in Bengaluru, India, where we are launching 14 new teams from scratch!

As a Senior Software Engineer at Databricks India, you will engage with various domains, including:
- Backend
- Distributed Data Systems (DDS)
- Full Stack Development

Your Impact:
1. As part of our Backend teams, you will tackle diverse challenges across our core service platforms, including:
- Addressing intricate issues ranging from product development to infrastructure, focusing on distributed systems, large-scale service architecture, monitoring, workflow orchestration, and enhancing developer experience.
- Delivering dependable, high-performance services and client libraries designed for storing and accessing vast amounts of data on cloud storage solutions such as AWS S3 and Azure Blob Store.
- Creating robust, scalable services using technologies like Scala, Kubernetes, and Apache Spark™, supporting an infrastructure that handles millions of cluster-hours daily, while developing product features that empower customers to effortlessly manage and monitor their platform usage.
2. Our DDS team encompasses:
- Apache Spark™
- Data Plane Storage
- Delta Lake
- Delta Pipelines
- Performance Engineering
3. As a Full Stack engineer, you will collaborate closely with your team and product management to deliver an outstanding user experience.
Full-time | Remote | Bengaluru, India; EMEA Remote; Tel Aviv, Israel
At WEKA, we are pioneering a transformative approach to the enterprise data stack, designed for the era of reasoning. Our flagship product, NeuralMesh by WEKA, exemplifies the forefront of agentic AI data infrastructure, offering a cloud and AI-native software solution that is adaptable to any environment. This innovation converts traditional data silos into dynamic data pipelines, significantly boosting GPU utilization and enhancing the speed, efficiency, and energy consumption of AI model training, inference, and other high-compute workloads.

As a pre-IPO, growth-stage enterprise, WEKA is experiencing remarkable growth, having secured $375 million in funding from prominent venture capital and strategic investors. We collaborate with some of the largest and most innovative organizations worldwide, including 12 of the Fortune 50, to accelerate their discoveries, insights, and sustainable business outcomes. Our team is driven by a commitment to address our customers' most intricate data challenges and to foster intelligent innovation and business value. If this resonates with you, we welcome you to embark on this exciting journey with us.
P-1394 At Databricks, we are dedicated to empowering data teams to tackle some of the world's most significant challenges—from revolutionizing transportation to accelerating medical advancements. Our mission is fulfilled by creating and managing the industry's leading data and AI infrastructure platform, enabling our customers to harness deep data insights to enhance their operations. Founded by engineers with a customer-centric mindset, we eagerly embrace every challenge, whether it's creating cutting-edge UI/UX for data interaction or scaling our infrastructure across millions of virtual machines.

Our engineering teams design technical products that address real, critical needs in today's world. We continuously push the limits of data and AI technology while ensuring the security and scalability essential for our customers' success on our platform. We operate one of the largest software platforms globally, comprising millions of virtual machines that generate terabytes of logs and process exabytes of data daily. At this scale, we monitor cloud hardware, network, and operating system faults, and our software is designed to shield our customers from these issues seamlessly.

As a Senior Software Engineer on the Observability team, you will develop observability solutions that provide critical insights into the health and performance of our products and infrastructure.
Teamwork makes the stream work. Join Roku and Transform the Future of TV Streaming!

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is at the forefront of revolutionizing how audiences engage with television. Our goal is to power every TV worldwide, connecting viewers to their favorite content while empowering publishers and advertisers with innovative solutions.

From day one, your contributions at Roku will be recognized and valued. We are a dynamic, growing public company where every team member plays a crucial role in delighting millions of viewers around the globe while acquiring invaluable experience across diverse fields.

About Our Big Data Team
Roku operates one of the largest data lakes globally, managing over 70 PB of data and executing more than 10 million queries each month. Our Big Data team is responsible for developing and maintaining the platform that makes this possible. We offer tools to acquire, generate, process, monitor, validate, and access data for both streaming and batch processing. Our technologies include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and more. The team actively contributes to the Open Source community and aims to expand its involvement.

Your Role
We are modernizing our Big Data Platform and need your expertise to redefine our architecture to enhance user experience, reduce costs, and boost efficiency. If you are passionate about Big Data technologies and eager to explore Open Source, this position is tailored for you!

Key Responsibilities
- Optimize and fine-tune existing Big Data systems and pipelines, while also developing new ones to ensure they operate efficiently and cost-effectively.
P-1490 At Databricks, we process vast amounts of data, managing petabytes and billions of transaction events each day. Our infrastructure is critical; every cluster launch, query executed, and dollar billed must function flawlessly. With stringent accuracy requirements of 99.999% for billing transactions and the ability to ingest terabytes of data per second across over 100 regions, the stakes are high. A mere five-minute outage can lead to significant revenue loss and erode customer trust. Therefore, our infrastructure is not just important—it is essential for our survival.

As we scale, the next phase of our growth demands that we establish disaster recovery systems that ensure reliability, not just hope for it. We need testing frameworks that can identify production-scale issues before they affect our users, correctness guarantees that eliminate billing errors, and automation that scales operations efficiently alongside growth.

In this pivotal leadership role, you will spearhead the development of the data infrastructure organization that underpins Databricks' ongoing expansion. You will lay the groundwork for teams in Bengaluru, responsible for the foundational systems that assure billing accuracy, operational resilience, and zero-downtime recovery across our monetization stack. This encompasses multi-region data ingestion, developer platforms, and deployment automation that streamline operations at petabyte scale. Your mission transcends mere maintenance; it involves architecting the infrastructure that allows Databricks to grow while alleviating operational burdens. You will define the standards for world-class infrastructure that will serve data platforms for the next decade.

In your role as a founding technical leader in our rapidly growing engineering hub, you will collaborate closely with global infrastructure leaders. Beyond building exceptional teams, you will influence architectural decisions that resonate throughout the organization and advocate for an infrastructure-as-product mindset that transforms infrastructure into a global force multiplier. You will thrive in an engineering culture rooted in Apache Spark and open source, where technical expertise is highly valued and infrastructure engineers are regarded as skilled artisans.

The ideal candidate has previously built infrastructure organizations in environments where achieving five nines was not merely a goal but a reality, where petabyte-scale operations were a daily expectation, and where the technical strength of the infrastructure team determined business scalability. You possess the technical expertise to engage in discussions about data architecture, the strategic insight to shape multi-year platform roadmaps, and the leadership ability to create teams that attract top-tier engineers. Most importantly, you believe that effective data infrastructure not only supports the business but also defines its potential.
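The billing-correctness guarantees described in this posting usually rest on a simple mechanism: every usage event carries an idempotency key, so redelivered events can be recognized and skipped instead of double-billed. The sketch below is a minimal, illustrative version of that deduplication step (hypothetical record shapes, not Databricks' actual pipeline); real systems persist the seen-key set durably and window it by time.

```python
# Idempotency-key deduplication: a building block of exactly-once
# billing aggregation. Replaying the same event must not double-bill.
def ingest_events(events):
    """Sum usage per account, ignoring replays of the same event_id."""
    seen, totals = set(), {}
    for e in events:
        if e["event_id"] in seen:
            continue  # duplicate delivery: already billed, skip
        seen.add(e["event_id"])
        totals[e["account"]] = totals.get(e["account"], 0) + e["usage"]
    return totals

events = [
    {"event_id": "a1", "account": "acme", "usage": 10},
    {"event_id": "a1", "account": "acme", "usage": 10},  # at-least-once replay
    {"event_id": "b2", "account": "acme", "usage": 5},
]
totals = ingest_events(events)
# acme is billed 15, not 25: the replayed a1 event is ignored
```

Deduplication like this is what turns an at-least-once delivery pipeline into effectively-once accounting, which is why the posting pairs it with disaster-recovery replay: after a failover, the same log can be reprocessed safely.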
Collaboration Fuels Innovation. Join Roku in Revolutionizing Television Viewing

As the leading TV streaming platform in the U.S., Canada, and Mexico, Roku is on a mission to enhance how audiences experience television globally. We pioneered streaming technology and aim to connect consumers with the content they cherish, empower content publishers to grow and monetize their audiences, and offer advertisers unique tools to engage effectively with consumers.

From day one at Roku, your contributions will be meaningful and recognized. As a rapidly expanding public company, we foster an environment where everyone plays a vital role. You'll have the chance to delight millions of TV streamers worldwide while gaining invaluable experience across diverse disciplines.

Team Overview
The Data Management Platform (DMP) team is pivotal within Roku's Advertising division, spearheading audience management initiatives that drive decision-making across the advertising landscape. Our team develops and oversees products that facilitate advanced audience segmentation and management for advertisers, aligning with internal operational requirements. We collaborate closely with Product Managers, Machine Learning experts, Ad Sales, Ads Operations, and various teams within Advertising Engineering to deliver impactful solutions. Looking ahead, we are investigating AI-driven capabilities to further optimize advertising campaigns and enhance our platform's operational efficiency.

Role Overview
We are in search of a talented Senior Software Engineer skilled in big data technologies such as Apache Spark and Apache Airflow. This hybrid role will bridge software engineering expertise with data management, focusing on developing innovative solutions that enhance our advertising capabilities.
Mar 5, 2026