Join Our Team!

Toss Securities is growing rapidly under the mission of "Innovating every investment experience for our customers," with over 7.4 million registered users and 4 million monthly active users, and we currently lead in foreign stock transaction volume. Our diverse range of investment products, including stocks, bonds, and options, expands our customers' choices. At the core of this growth is a robust and trustworthy data platform. Toss Securities is at a pivotal moment: we need to design a data architecture capable of handling large-scale real-time trading data, user behavior data, and regulatory compliance data.

We are seeking a Head of Data Engineering to design and lead this structural transformation. This role is not just operational; it is key to defining the data future of Toss Securities and enabling our organization to work in a data-driven way.

Your Responsibilities:
- Design and build an on-premise distributed architecture aligned with our mid-to-long-term business strategy, resolving data silos through Hadoop enhancements and transitioning to a Kafka-centered, streaming-first approach.
- Build and operate large-scale batch and streaming pipelines based on Spark/Flink and Kafka, ensuring reliable data processing through high-availability ETL/ELT design and performance optimization.
- Establish and manage data standards (layering, naming, permissions), quality management (DQ rules, SLAs, lineage), and regulatory compliance frameworks based on metadata and personal information (PII).
- Coordinate data interests across the service, risk, accounting, AI, and backend teams, establishing and executing a comprehensive data strategy, including integration strategies and ownership definitions.
- Design the goals, structure, and processes of the data organization, leading a growing team through coaching and decision-making while addressing technical debt and fostering a trust-based environment.
- Oversee the design and operation of ML platforms and infrastructure for LLM/recommendation services, collaborating with service teams to build model deployment, operation, and monitoring standards as well as automated pipelines.

Ideal Candidate:
- 10+ years of experience in data engineering or platform architecture.
- Experience designing and operating large-scale clusters (Hadoop, Kafka).
- Proficiency in designing real-time streaming and batch data processing architectures.
- Experience building data governance, quality, and permission management systems.
- Leadership experience in engineering organizations (5-50 team members) is preferred.
- Excellent coordination and communication skills across diverse organizational stakeholders.

Additional Preferred Qualifications:
- Experience with financial data in securities, banking, or fintech.
- Experience handling data within regulatory environments (e.g., PIPA, Financial Transaction Act).
- Experience building semantic layers or data meshes.
- Experience with real-time transaction or advertising/shopping service data.
- Experience designing and operating large-scale model training, serving, and MLOps/LLMOps pipelines (e.g., Kubeflow, Argo, H100/H200 GPU clusters, vLLM/Triton).
- Experience with feature stores for real-time recommendations, model optimization and profiling (e.g., BentoML, ONNX, TorchServe), and LLM fine-tuning/RAG operations.
Mar 9, 2026