Experience Level
Mid to Senior
Qualifications
Key Responsibilities
Develop and manage ingestion pipelines to reliably and accurately transfer paid media data from sources including Meta and Google Ads to our data warehouse
Collaborate closely with Analytics Engineers to ensure data is structured and trustworthy for easy modeling
Assist in optimizing our Dagster setup, including scheduling and monitoring, to ensure smooth operations
Contribute to our lightweight, maintainable ingestion strategies using dlt
Leverage AI development tools in your daily workflow for efficient development and debugging
Desired Qualifications
A few years of experience in data engineering or a related field
Proficiency in Python, as most of our systems are built in this language
Experience building and maintaining data pipelines in cloud-based warehouse environments
A focus on data quality and reliability, beyond just data delivery
Ability to work collaboratively with Analytics Engineers to understand their data needs
Preferred Qualifications
Experience with data ingestion from advertising platform APIs (e.g., Meta, Google Ads)
Familiarity with Dagster, dlt, or similar modern orchestration and ingestion tools
Knowledge of Redshift or other columnar warehouses
About the Role
Better Collective is looking for a mid-level Data Engineer focused on Paid Media to join the Data Engineering team in Belgrade, Serbia. This position centers on building and maintaining the data pipelines that support campaign performance analysis across advertising platforms.
What You Will Do
Ingest data efficiently from advertising platforms, including Meta and Google Ads
Clean and transform incoming data for use in analytics and reporting
Prepare datasets that enable Analytics Engineers to measure and improve campaign performance
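As a sketch of the clean-and-transform step above: ad platforms report the same metrics under different field names and units, so ingestion code typically maps each source onto one common schema. The field names and sample rows here are hypothetical, not actual Meta or Google Ads API responses:

```python
from datetime import date

# Hypothetical raw records; real Meta / Google Ads payloads differ.
META_ROW = {"ad_id": "m-1", "spend": "12.50", "date_start": "2025-06-01"}
GOOGLE_ROW = {"ad_group_id": "g-9", "cost_micros": 12_500_000, "segments_date": "2025-06-01"}

def normalize(row: dict, source: str) -> dict:
    """Map a platform-specific record onto one common schema."""
    if source == "meta":
        return {
            "source": "meta",
            "entity_id": row["ad_id"],
            "spend_usd": float(row["spend"]),
            "day": date.fromisoformat(row["date_start"]),
        }
    if source == "google_ads":
        return {
            "source": "google_ads",
            "entity_id": row["ad_group_id"],
            "spend_usd": row["cost_micros"] / 1_000_000,  # micros -> currency units
            "day": date.fromisoformat(row["segments_date"]),
        }
    raise ValueError(f"unknown source: {source}")
```

Normalizing early like this keeps the downstream modeling layer free of per-platform special cases.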
Tech Stack
Dagster
dlt
Redshift
AI-assisted development tools
Your work will lay the groundwork for data-driven decisions in our Paid Media efforts, helping the analytics team deliver clear insights.
About Better Collective
Better Collective is a leading sports betting media group that provides innovative content and technology solutions. We are committed to helping users make informed betting decisions through high-quality data and analytics.