About the job
Be a Part of the Revolution in E-Commerce with Whatnot!
Whatnot stands as the leading live shopping platform across North America and Europe, where you can buy, sell, and explore the items you cherish. We are transforming the landscape of e-commerce by merging community engagement, shopping, and entertainment into a unique experience tailored just for you. As a remote-first team, we are driven by innovation and firmly rooted in our core values. With operational hubs in the US, UK, Germany, Ireland, and Poland, we are collaboratively crafting the future of online marketplaces.
From fashion and beauty to electronics and collectibles like trading cards, comic books, and live plants, our live auctions cater to a diverse audience.
And this is just the beginning! As one of the fastest-growing marketplaces, we are on the lookout for innovative, forward-thinking problem solvers in all areas of our business. Stay updated with the latest from Whatnot through our news and engineering blogs, and join us in empowering individuals to transform their passions into successful ventures while fostering community through commerce.
The Role
We are seeking passionate builders: intellectually curious, entrepreneurial engineers ready to pioneer the future of AI and ML at Whatnot. You will design and scale the foundational infrastructure that supports machine learning and self-hosted large language model applications across the organization. Collaborating closely with machine learning scientists, you will bring cutting-edge models into production and enable entirely new product experiences. Your work will involve building systems that make advanced machine learning reliable and efficient at scale, from low-latency model serving to distributed training and high-throughput GPU inference.
Your Responsibilities:
Lead the infrastructure that powers AI and ML models across vital business domains, including growth, trust and safety, fraud detection, seller tools, and more.
Prototype, deploy, and operationalize innovative ML architectures that significantly influence user experience and marketplace dynamics.
Design and scale inference infrastructure capable of serving large models with low latency and high throughput.
Build distributed training and inference pipelines on GPUs, applying model and data parallelism.
Push the boundaries of your expertise and explore new technologies and methodologies.

