
Head of Data Engineering

On-site Full-time


Experience Level

Senior Level Manager

Qualifications

10+ years of experience in data engineering or platform architecture, with a strong background in designing and operating large-scale clusters, including Hadoop and Kafka. Proven ability to design real-time streaming and batch data processing architectures and to build data governance and quality management systems. Leadership experience managing engineering teams of 5-50 members is preferred, along with excellent coordination and communication skills across organizational departments.

About the job

Join Our Team!

  • Toss Securities is growing rapidly under the mission of "Innovating every investment experience for our customers," with over 7.4 million registered users and 4 million monthly active users, and we currently lead in foreign stock transaction volume. Our diverse range of investment products, including stocks, bonds, and options, expands our customers' choices. At the core of this growth is a robust and trustworthy data platform. Toss Securities is at a pivotal moment: we need to design a data architecture capable of handling large-scale real-time trading data, user behavior data, and regulatory compliance data.
  • We are seeking a Head of Data Engineering to design and lead this structural transformation. This role goes beyond operations; it is key to defining the data future of Toss Securities and enabling our organization to work in a data-driven way.

Your Responsibilities:

  • Design and build an on-premise distributed architecture aligned with our mid-to-long-term business strategy, resolving data silos through Hadoop enhancements and transitioning to a Kafka-centered streaming-first approach.
  • Construct and operate large-scale batch and streaming pipelines based on Spark/Flink and Kafka, ensuring reliable data processing through high-availability ETL/ELT design and performance optimization.
  • Establish and manage data standards (layering, naming, permissions), quality management (DQ rules, SLA, lineage), and regulatory compliance frameworks based on metadata and personal information (PII).
  • Coordinate data interests across services, risk, accounting, AI, and backend teams, establishing and executing a comprehensive data strategy including integration strategies and ownership definitions.
  • Design the goals, structure, and processes of the data organization, leading a growing team through coaching and decision-making while addressing technical debt and fostering a trust-based environment.
  • Oversee the design and operation of ML platforms and infrastructure for LLM/recommendation services, collaborating with service teams to establish standards for model deployment, operation, and monitoring, along with automated pipelines.

Ideal Candidate:

  • 10+ years of experience in data engineering or platform architecture.
  • Experience designing and operating large-scale clusters (Hadoop, Kafka).
  • Proficiency in real-time streaming and batch data processing architecture design.
  • Experience in building data governance, quality, and permission management systems.
  • Leadership experience in engineering organizations (5-50 team members) is preferred.
  • Excellent coordination and communication skills across diverse organizational stakeholders.

Additional Preferred Qualifications:

  • Experience with financial data in securities, banking, or fintech.
  • Experience handling data within regulatory environments (e.g., PIPA, Financial Transaction Act).
  • Experience in building semantic layers or data meshes.
  • Experience with real-time transaction or advertising/shopping service data.
  • Experience designing and operating large-scale model training, serving, and MLOps/LLMOps pipelines (e.g., Kubeflow, Argo, H100/H200 GPU clusters, vLLM/Triton).
  • Experience with feature stores for real-time recommendations, model optimization and profiling (e.g., BentoML, ONNX, TorchServe), and LLM fine-tuning/RAG operations.

About Toss Securities

Toss Securities is dedicated to transforming the investment landscape for our customers. With a vision to innovate every investment experience, we have achieved significant growth, including over 7.4 million registered users. As we continue to expand our services and offerings, the backbone of our operation is a powerful data platform that supports complex trading, regulatory, and risk management needs.
