Infrastructure Engineer II
About PPRO
PPRO is a leading FinTech company dedicated to simplifying access to local payment methods and fostering global commerce. With a diverse team from over 50 nationalities and a commitment to innovation, we strive to connect businesses with customers worldwide.
Similar jobs
Intercom, based in Berlin, Germany, builds AI-powered tools that help businesses deliver customer service at scale. Nearly 30,000 companies worldwide rely on Intercom's products, including Fin, an AI agent that handles customer queries around the clock. When integrated with Intercom's Helpdesk, Fin becomes part of a broader suite designed to support both automated and human-assisted service.

About the Role
The AI Models Infrastructure Team sits within Intercom's AI Group. As Engineering Manager, you will lead a team focused on designing and maintaining the infrastructure that powers Intercom's AI models. This work is central to how Intercom's AI capabilities evolve and scale.

What You Will Do
- Guide a team of engineers responsible for building and supporting the systems behind Intercom's AI models.
- Apply technical expertise to help the team navigate a fast-changing field.
- Foster a culture of continuous technical learning and growth.
- Collaborate with other teams to ensure the AI infrastructure meets product needs.

What You Bring
- Strong technical background in AI or related fields.
- Experience leading engineering teams.
- Commitment to staying current with new developments in AI infrastructure.

Learn More
Explore Intercom's engineering culture: intercom.engineering
Read about Fin: fin.ai
Intercom builds tools that help businesses deliver better customer service, powered by artificial intelligence. Nearly 30,000 companies worldwide use Intercom's products, including Fin, an AI customer service agent that works seamlessly with the Intercom Helpdesk as part of the Customer Service Suite. Since 2011, Intercom has focused on innovation and delivering value for clients.

Role overview
The AI Models Infrastructure Team within Intercom's AI Group develops and maintains the core infrastructure that supports the company's proprietary AI models. Intercom is hiring an Engineering Manager to lead this team. This is a technical leadership position working at the intersection of infrastructure and artificial intelligence.

What you will do
- Guide a team of engineers building and operating the infrastructure for Intercom's AI models
- Work closely with other teams to ensure the reliability and scalability of AI systems
- Support ongoing technical growth within the team and stay current with advances in AI

Who we're looking for
- Experience managing technical teams, ideally in AI or related fields
- Strong background in infrastructure or platform engineering
- Commitment to continuous technical development

Locations
Berlin, Germany
Dublin, Ireland
London, England

Learn more
About engineering at Intercom: https://intercom.engineering
About Fin: http://fin.ai
About Us
Helsing is a pioneering defense AI company dedicated to safeguarding democracies. Our mission is to achieve technological leadership, ensuring that open societies can maintain sovereignty over their decisions and ethical standards.

As advocates for democratic values, we recognize our profound responsibility in the thoughtful advancement and application of transformative technologies like AI. We take this commitment seriously.

Our team consists of passionate engineers, AI specialists, and customer-oriented program managers. We are looking for mission-driven individuals to join our European teams and apply their skills to the most complex and impactful challenges. We foster an open and transparent culture that encourages healthy discussion about the use of technology in defense, its advantages, and its ethical considerations.

The Role
At Helsing, we are revolutionizing perception by developing foundational intelligence for the physical world. You will research, design, and train large-scale Foundational Models that convert complex multimodal sensor data into new autonomous capabilities.

We are looking for someone who operates at the intersection of AI research and machine learning engineering, with a proven background in LLM/VLM (multimodal) architectures. Your core responsibility will be training and fine-tuning Vision-Language Models on our proprietary datasets to enhance our diverse product offerings. You will oversee the entire model lifecycle, from data curation and training to evaluation.
Intercom builds AI-powered customer service tools for businesses worldwide. Our flagship AI agent, Fin, helps companies deliver responsive support at any hour. Combined with our Helpdesk, these tools form the Intercom Customer Service Suite, which blends AI automation with human expertise for more complex questions. Since 2011, nearly 30,000 businesses have relied on Intercom to improve their customer experience. Our team values fast iteration, continuous learning, and delivering real value to clients.

Role overview
Intercom is hiring Senior AI Infrastructure Engineers in Berlin to design and scale the systems behind our next generation of AI products. The AI Infrastructure team works across the stack, from GPU-level programming to the user-facing agents that handle millions of support conversations each month. This group builds and maintains training pipelines and inference systems for custom models like Fin Apex, which are central to our AI offerings. Collaboration with a tight-knit, highly skilled team is part of the role.

What you will do
- Develop and scale training pipelines for large transformer and LLM models, including data ingestion, preprocessing, distributed training, and evaluation.
- Build and optimize inference services for low-latency, reliable customer experiences, covering autoscaling, request routing, and fallback mechanisms.
- Improve GPU-level performance by tuning kernels, increasing utilization, and identifying bottlenecks across both training and inference stacks.
- Work closely with machine learning scientists to implement new training and inference techniques.

What we look for
- Demonstrated experience in model training or inference at scale, with strong skills in low-level GPU programming (such as CUDA or Triton).
- Experience across multiple areas is a bonus.
Prior Labs
About Us
At Prior Labs, we are pioneers in developing foundation models that effectively comprehend tabular data, the cornerstone of fields ranging from science to business. While foundation models have revolutionized the processing of text and images, structured data remains a largely untapped resource. Our mission is to address this $600 billion opportunity and fundamentally transform the way organizations engage with scientific, medical, financial, and business data.

Our Achievements
We are the leading organization in structured-data machine learning. Our groundbreaking TabPFN v2 model has been featured in Nature, establishing a new benchmark in tabular machine learning. Since its release, we have significantly enhanced our model capabilities, achieving over 2.5 million downloads and more than 5,500 stars on GitHub. Adoption is growing rapidly in both research and industry as we build the next generation of tabular foundation models and commercialize them with enterprises across Europe and the United States.

Our Team
Our team is a highly selective group of over 20 engineers and researchers, chosen from a pool of more than 5,000 applicants, with backgrounds at industry leaders such as Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN. We are led by the creators of TabPFN and advised by eminent AI researchers including Bernhard Schölkopf and Turing Award winner Yann LeCun.

What's Next
Supported by top-tier investors and leaders from Hugging Face, DeepMind, and Silo AI, we are on a rapid growth trajectory. This is an exceptional time to join us and help shape the future of structured-data AI.

Core Areas of Impact
As a member of our engineering team, you will contribute to the development of a novel class of AI models. Our latest innovation, TabPFN, surpasses existing methods by orders of magnitude, and we are just getting started. This is a unique opportunity to:
- Engage in groundbreaking advancements in AI, rather than incremental enhancements.
- Influence how organizations globally manage their most critical data.
- Join at an opportune moment: we have secured substantial funding, achieved strong initial traction, and are expanding swiftly.

At Prior Labs, we prioritize collaboration and the integration of research into practical applications. Our Research Engineers play a critical role in bridging the gap between innovative research and real-world implementation.
Mirelo AI
Join Mirelo AI, where we are pioneering the future of creative tools by transforming silent video content into immersive sound, speech, and music. Our team develops advanced generative AI models that bring video content to life, enabling creators across gaming and video platforms to enhance their storytelling. We recently secured a strong $41 million Seed funding round, led by firms including Andreessen Horowitz and Index Ventures, fueling rapid expansion across Product, Engineering, Go-to-Market, and Growth.

About the Role
As a Training Infrastructure Engineer, you will play a crucial role in optimizing our training stack. Your responsibilities will include profiling GPU behavior, debugging training pipelines, improving throughput, selecting optimal parallelism strategies, and building robust infrastructure for efficient model training at scale. You will collaborate on cluster management, model training, and the development of efficient data pipelines for video and audio processing.
Location: Remote / Berlin (office available) | Language requirement: Fluent German

At Deepslate, we are pioneering advanced speech-to-speech Voice AI models that closely emulate human speech, and we believe in making this technology accessible to everyone.

While industry leaders like OpenAI and Google have excelled at text and image processing, voice remains a complex challenge, with its multitude of languages, dialects, accents, and nuanced intonations. This is where Deepslate steps in.

Supported by top-tier investors from the technology and AI sectors, alongside a significant German VC fund, we are well funded and poised for rapid growth. Join us in shaping the future of communication.

We are not merely creating another standalone platform; we are the intelligence engine behind a multitude of applications. Our models are embedded in various systems, whether integrated by CRM providers, used in other Voice AI platforms, or implemented directly in enterprise solutions through our partners.

Your Role
As a pivotal member of our team, you will research, train, and enhance our proprietary Voice AI model. Your objective is to build a speech-to-speech model that captures the intricate nuances of human communication, tackling challenges such as real-time emotion recognition and natural speech patterns to construct a superior voice model from the ground up.

What You'll Do
Your contributions will blend AI research with scalable software engineering, setting the benchmark for how well our models perform in real-world applications.
- Research & Train: Design, implement, and evaluate our proprietary deep learning models. You will spearhead the creation of new features, including emotion detection, and incorporate them into our core system architecture.
- High-Performance ML Systems: Optimize model performance for speed and efficiency, ensuring the capability to process large datasets and real-time tasks at scale.
- Software Engineering: Write clean, production-ready code. We seek researchers who excel at software engineering and can turn complex research into reliable architectures.
- Collaboration: Work closely with cross-functional teams to align research objectives with practical applications and ensure cohesive integration of advances.

This role is for those passionate about pushing the boundaries of AI and voice technology.
About Langfuse
Langfuse builds an open-source LLM Engineering Platform that helps teams develop AI applications. The platform includes tools for tracing, evaluation, and prompt management. As part of ClickHouse, Langfuse focuses on improving AI observability with a strong data-driven approach.

Langfuse is the largest open-source solution in its field. The platform is trusted by 19 of the Fortune 50, has over 2,000 customers, and handles more than 26 million SDK downloads each month. The infrastructure processes over a billion trace events monthly, supporting high reliability and performance for clients.

Role Overview: Senior Cloud Infrastructure Engineer
This position focuses on maintaining and improving the cloud infrastructure that supports Langfuse's services. The role involves managing operations on AWS ECS Fargate and ClickHouse Cloud, with attention to performance and cost efficiency. The Senior Cloud Infrastructure Engineer will help scale systems to match customer growth. In addition to cloud operations, the role covers public self-hosted infrastructure, giving teams flexibility in how they deploy Langfuse.

Location
Remote within Europe.
At sensmore, we are revolutionizing the automation of the world's largest machinery with cutting-edge intelligence. Our innovative Physical AI empowers heavy machinery, such as wheel loaders, to adapt instantly to changing environments and undertake new tasks without prior training.

By integrating advanced robotics into our platform, we are launching intelligence and automation products that significantly enhance productivity and safety for our clients in the mining, construction, and related sectors. Join us in redefining the automation landscape for heavy industries and making a substantial impact.
sensmore builds automation systems for heavy machinery, applying intelligent robotics to help equipment such as wheel loaders adapt to changing tasks and environments. Their Physical AI platform connects robotics with real-world industrial needs, aiming to boost productivity and safety across sectors like mining and construction. This PhD Research Internship centers on advancing industrial automation, blending research and engineering in a practical setting. The position is based in Berlin or Potsdam and focuses on Vision-Language Models (VLM) and Vision-Language-Action (VLA) systems for robotics.

Role overview
The internship targets general-purpose AI, with an emphasis on developing scalable VLA systems that enable robots to perceive, reason, and act in complex industrial environments. The work combines multi-modal perception, including video, radar, and lidar, with practical robotics. Interns will contribute to embodied AI research for heavy industry, working at the intersection of method development and hands-on engineering. There are opportunities to publish research and influence the direction of industrial autonomy at sensmore.

Key responsibilities
Research and method development:
- Design and implement new approaches for Vision-Language-Action systems in industrial contexts.
- Investigate scalable architectures for multi-modal reasoning and action generation.
- Advance methods in embodied AI and robotic autonomy.
Multi-modal learning and data systems:
- Lead the design and evaluation of large-scale multi-modal datasets, including video, radar, lidar, and sensor fusion.
- Develop self-supervised or weakly supervised pipelines for generating VLA datasets.
- Explore data-centric strategies to improve robustness and generalization.
Model development and optimization:
- Build, adapt, and extend advanced models to achieve project objectives.
Join sensmore as a Robotics Engineer Intern, where you'll engage in cutting-edge projects focused on VLM and VLA models. This is an incredible opportunity to apply your academic knowledge in a practical setting, contributing to innovative robotics solutions.

Your role will involve collaborating with a talented team of engineers and researchers, using advanced technologies to develop and refine robotic systems. You'll gain hands-on experience that will enhance your skills and prepare you for a successful career in robotics.
Mirelo AI
Mirelo AI is at the forefront of innovation, crafting the future of creative tools through advanced technology that generates realistic sound, speech, and music from video. Our pioneering foundational generative AI models bring silent video content to life, producing custom, hyper-realistic audio tailored for gaming, video platforms, and content creators, empowering storytellers worldwide to transform their narratives.

After securing a $41 million Seed round co-led by Andreessen Horowitz and Index Ventures, with additional support from Atlantic, we are rapidly scaling our teams across Product, Engineering, Go-to-Market, and Growth.

Role Overview
As a Research Scientist at Mirelo, you will be central to the development of next-generation multimodal video-to-audio models. This is a hands-on research and development position, supported by an impressive H100/H200-per-engineer ratio, allowing you to explore new multimodal models and extend the frontiers of music, sound, and speech generation. You will collaborate across research and engineering as you conduct focused ablation studies, derive actionable insights, and guide the team with clear next steps. From data curation to deployment, you will actively shape the entire lifecycle of the models that drive our products and partnerships.

Key Responsibilities
- Design, implement, and train large-scale multimodal generative models for audio generation, including diffusion and autoregressive models.
- Investigate innovative modeling concepts for audio generation across music, sound, and speech, drawing inspiration from the language and image domains.
- Develop and test new capabilities through post-training methods (e.g., fine-grained control, in/out-painting, editing).
- Conduct thorough ablation studies, derive actionable insights, and communicate findings effectively to guide future research directions.
- Engage in all stages of model development, including data curation, experimentation, evaluation, and deployment.

Ideal Candidate Profile
- Proven hands-on experience training large-scale generative models in a dynamic research environment.
- In-depth understanding of advanced methods and machine learning research in at least one domain: image, language, video, or audio (specific audio expertise is a plus).
- Strong proficiency in PyTorch and transformer architectures, alongside comprehensive knowledge of the modern deep learning ecosystem.
- Solid grasp of statistical methods, data analysis, and model evaluation techniques.
Doctolib
Join Our Mission at Doctolib!

Your Role
At Doctolib, we thrive in a dynamic engineering environment, creating pioneering products and features designed to improve the lives of doctors and patients alike. We are looking for an inspiring Engineering Manager - AI to join our tech teams in Berlin. You will be part of a dedicated team of innovators committed to revolutionizing healthcare, actively contributing to the growth of our fast-evolving company from day one! For more insight into our work, check out our technical blog.

About Our Technology Environment
Our solutions are built on a fully cloud-native platform that supports web and mobile app interfaces and multiple languages, and is tailored to the specific requirements of various countries and healthcare specialties. We are modularizing our platform within a distributed architecture built on reusable components. Our technology stack includes Rails, TypeScript, Java, Python, Kotlin, Swift, and React Native. We apply AI ethically across our products to empower both patients and healthcare professionals.

Who You Are
If this job description aligns with your skills, even if you don't meet every requirement listed, we warmly encourage you to apply. In this role, you will be both a technical leader and a people manager, coaching and supporting your team members while helping build an exceptional engineering organization at Doctolib. Your responsibilities will include:
- Leading a high-performing, collaborative team of 5+ members to achieve business objectives together.
- Partnering with talent and people teams to attract, develop, and retain top talent.
- Building, leading, and empowering a user-centric team that prioritizes exceptional internal and external user satisfaction.
- Collaborating with your product partner to define a mission and vision for your team, aligning on a clear roadmap to realize it.
- Taking ownership of the end-to-end engineering process and ensuring the team meets its objectives.
Superhuman Platform Inc.
Superhuman embraces a dynamic hybrid working model for this position, offering team members a balance of focused work and in-person collaboration that nurtures trust, innovation, and a vibrant team culture.

About Superhuman
Superhuman is at the forefront of AI productivity, empowering individuals to reach their superhuman potential. As the proud home of Grammarly, our suite of applications integrates seamlessly with over 1 million platforms, enhancing productivity through intelligent features. Our offerings include Grammarly's writing assistance, Coda's collaborative spaces, and Go, an AI assistant that proactively provides contextual support. Since our inception in 2009, we have transformed the workflows of more than 40 million users, 50,000 organizations, and 3,000 educational institutions globally. Discover more at superhuman.com.

The Opportunity
In pursuit of our ambitious goals, we are seeking a Site Reliability Engineer (SRE) to strengthen our infrastructure team. This pivotal role involves developing software to improve the reliability of our backend systems, collaborating closely with engineers, and planning for future scalability. You will work with our existing production engineering teams in the EU as we transition away from the "you build it, you own it" approach.

Engineers and researchers at Superhuman are given the freedom to innovate and drive breakthroughs that shape our product roadmap. As we expand our interfaces, algorithms, and infrastructure, the complexity of our technical challenges continues to grow. Learn more on our technical blog.

As an SRE, your responsibilities will include:
- Scaling our Kubernetes-based control plane, which processes billions of events daily.
- Enhancing our automation systems that respond to workload demands.
- Deploying machine learning systems company-wide.
About Anthropic
Anthropic is committed to building AI systems that are reliable, interpretable, and steerable. The team's mission centers on making AI safe and beneficial for both users and society. Researchers, engineers, policy experts, and business leaders collaborate closely to advance this goal as the company grows.

Role Overview: Infrastructure Engineer (Sandboxing Team)
The Infrastructure Engineer will join the Sandboxing team within Anthropic's Research organization in Berlin. This role focuses on designing and scaling the systems that let researchers safely run and experiment with AI-generated code and interactions inside controlled environments. As Anthropic's AI models become more capable, secure execution infrastructure grows in importance. The work involves building distributed systems that must operate reliably at scale while maintaining strict security boundaries. Contributions in this position directly support Anthropic's mission to develop safe, beneficial, and trustworthy AI systems.

Key Responsibilities
- Design, build, and maintain distributed backend systems for secure sandboxed execution environments.
- Scale infrastructure to support expanding research and product needs, focusing on reliability and performance.
- Implement and manage serverless architectures and container orchestration systems.
- Collaborate with research teams to gather requirements and translate them into effective infrastructure solutions.
- Develop monitoring, alerting, and observability systems to ensure high operational standards.
- Participate in on-call rotations and incident response to help maintain system reliability.
- Enhance infrastructure automation and tooling to improve developer productivity.
- Work with security teams to ensure sandboxing infrastructure meets required isolation standards.

What Makes a Strong Candidate
- More than 5 years of experience building and managing backend infrastructure at scale.
- Deep understanding of distributed systems design and implementation.
- Strong operational background, including troubleshooting complex production issues.
- Proficiency with cloud platforms, especially GCP/GCS; experience with AWS or Azure is a plus.
- Familiarity with containerization technologies and practices.
Join praxipal, where we're revolutionizing healthcare administration with our AI-powered workforce, tackling the global shortage of medical staff.

We believe medical assistants should focus on patient care rather than routine tasks like scheduling, messaging, or invoicing. Enter Luna, our AI receptionist, who automates communication and is already used by over 100 medical practices. As we expand Luna's capabilities to manage all aspects of patient communication, we're on a mission to streamline administrative processes directly within existing practice systems.

Backed by one of Europe's leading investors and recognized as one of Germany's fastest-growing healthtech startups, our team includes talent from organizations such as Palantir, Amazon, and SAP.

We are seeking a Senior Software Engineer (Infra & Platform) to build the reliable, secure, and scalable infrastructure that powers Luna.

The Role
Luna operates in real time, automating critical workflows involving sensitive patient data. Reliability, latency, and security are therefore not just goals; they are essential features of our product.

In this hands-on technical role, you will establish and maintain the platform that keeps Luna reliable in daily practice operations. This includes managing cloud infrastructure, CI/CD processes, identity and access management, observability, and Luna's real-time voice stack. Your work will help minimize risk, streamline deployments, speed up failure diagnosis, and embed security as a core principle.

This position is on-site in Berlin and requires fluency in German.
GetYourGuide
Transform the Travel Experience
Join GetYourGuide in reshaping the way people explore the world. We connect millions with unique and memorable activities, driven by our dedication to making every journey extraordinary, including yours. Are you ready to unlock your potential alongside a community of fellow adventurers? Explore your next opportunity at our Berlin headquarters or one of our global offices, from New York to Bangkok. Visit getyourguide.careers to take the first step.

Team Mission
At the AI Platform team, our goal is to empower Mission Teams across GetYourGuide to independently build and operate robust LLM features, democratizing AI development while ensuring the product quality our users expect.

As the Engineering Manager for our AI Platform team, you will drive the vision, strategy, and execution needed to enable and scale AI-powered capabilities across our product range. You will oversee the development and evolution of a centralized AI infrastructure designed to make LLM development systematic, reliable, and scalable. Your team will create evaluation systems, abstraction layers, and safety measures, enabling Mission Teams to deploy AI features with confidence and speed and ensuring GetYourGuide delivers trustworthy, high-quality AI experiences to travelers worldwide.

Your Responsibilities
- Lead hiring and performance management for your engineering team, empowering your engineers to excel in this dynamic, rapidly evolving field.
- Drive impactful projects and outcomes by managing the team's roadmap and portfolio allocation.
- Tackle complex technical challenges that span multiple services and business domains, ensuring Mission Teams can create production-grade LLM features confidently.
- Promote best practices for LLM development across the technology landscape.
- Balance the pace of innovation with production standards, so the platform supports speed without compromising quality, security, or reliability.
- Establish strategic partnerships with key stakeholders across product and engineering leadership, as well as vendor relationships, to align platform capabilities with business objectives.

Your Skill Set
- You possess a strong sense of ownership, ensuring that technical and product plans align with stakeholder interests, team goals, and organizational objectives.
- You approach complex problems with an open mind and a collaborative spirit.
Join PPRO, where our mission is to facilitate seamless access to local payment methods, empowering global commerce. We envision a world where anyone can purchase goods and services using their preferred payment method. Our partnerships with industry leaders like Ant Group, PayPal, and Stripe enable us to tap into new markets and connect with a broader customer base, accelerating growth for all.

Our strength is rooted in our diverse and multicultural team, comprising over 50 nationalities across more than 10 international locations. United by a common goal, we strive to deliver exceptional products and services to our partners and customers. While we focus on innovating global commerce, we embody the principles of #chooseaction, #beopen, #thinkcustomer, #gofurther, and #wintogether.

Purpose of the Role
As an Infrastructure Engineer II, you will join the Developer Enablement domain, tasked with ensuring that all cross-functional development teams have access to robust, scalable, reliable, cost-effective, and highly secure infrastructure for their applications. Your expertise will be pivotal in implementing and maintaining our infrastructure framework, paving the way for development teams to thrive. At PPRO, we operate as a cloud-native FinTech, utilizing AWS for our payment services and GCP for analytics.
About the Role
We are looking for two highly skilled backend engineers to join our dynamic team in Berlin. You will be instrumental in advancing the development and implementation of our technology, strengthening our capacity to build core backend functionality and empowering other teams to build on that backend for their projects. If you have a passion for creating exceptional products, thrive in a fast-paced environment, and are eager to contribute to the success of an ambitious startup, this is the perfect opportunity for you.

Our Tech Stack
- Python / FastAPI
- PostgreSQL / MongoDB / Elastic
- Cloud LLM providers + bare-metal GPUs
- GCP

As an early team member, you will take on significant responsibility from day one, with the chance to shape our technological architecture and infrastructure. Join one of the most ambitious AI companies today, collaborate with a world-class team of experts in their fields, and enjoy an incredibly stimulating work environment.

In This Role You Will
- Lead the delivery of complex projects, coordinating with cross-functional teams and ensuring timely completion of essential components.
- Develop a deep understanding of Reliant's vision and reflect it in your day-to-day architectural and process decisions.
- Offer insights to refine UI/UX concepts and assist in planning backend and infrastructure development.
- Take proactive responsibility for long-term code quality and architecture.

About You
The ideal candidate has a proven track record in backend or full-stack development, with a focus on rapid innovation and effective systems design that supports growth for both the product and the team. To thrive in this role, you should bring the following skills and experience:
- 5+ years of demonstrated experience building backend services or distributed systems.
- Ability to design, develop, test, and maintain backend services and APIs that support product features, including writing robust, production-quality code.
- Engagement in system and service design: contributing to architectural discussions and making informed trade-offs regarding scalability, latency, cost, and reliability.
- Comfort navigating ambiguity: translating vague or high-level requirements into actionable implementation plans.
- Collaboration with cross-functional partners, including product management, frontend, UX, data, and operations, to define requirements, plan implementations, and deliver features.
- Commitment to high code quality: writing, reviewing, and improving unit/integration tests, and actively participating in code reviews.
At SumUp, our mission is to empower small businesses by providing them with accessible and user-friendly tools for payments, finance, and customer relationships. We are a dynamic team grounded in our core values: Founder's Mentality, Team First, and We Care. We foster an organization that is people-oriented, disciplined, and consistently innovating from within. Agility is at the heart of what we do, and we strive to create an environment that nurtures it.

As part of our Global Operations team, you will play a pivotal role in enhancing the customer experience in fintech. The Senior AI/ML Engineer position is essential for the development and deployment of cutting-edge AI solutions that streamline support processes and significantly enhance the experience for our merchants and support agents. You will be a central figure in building and maintaining the ML infrastructure and tools that support our core products utilizing advanced AI/ML models. This role is vital for sustaining our growth, particularly as we expand complex applications such as our AI Assistant and translation tools.