Experience Level
Entry Level
Qualifications
- Proficiency in programming languages such as Python or Java.
- Experience with data warehousing solutions and ETL processes.
- Familiarity with big data technologies including Hadoop, Spark, or Kafka.
- Strong understanding of SQL and database management.
- Ability to work in a collaborative team environment.
- Excellent problem-solving skills and attention to detail.
About the job
Join our dynamic team as a Data Engineer specializing in log platforms. In this role, you will work with cutting-edge technologies to design, build, and maintain robust data pipelines. Your expertise will help us manage and analyze extensive log data, enabling improved decision-making and operational efficiency.
About TossCareers
TossCareers is a leading company in the tech industry, dedicated to innovation and excellence. We strive to create an inclusive and engaging work environment that fosters growth and creativity. Join us and be part of a team that is shaping the future through technology.
# Join Us and Engage in Exciting Work!
After completing a comprehensive onboarding process to familiarize yourself with the Toss data environment, you will join the Data Warehouse Team and take on the following responsibilities:
- Develop a data quality platform that improves table consistency, advances DQ rules, and establishes health-check metrics. We aim to build a reliability management platform that lets every data user work without asking, "Can I trust this data?"
- Enhance the GraphRAG pipeline. Build a knowledge-graph construction pipeline that extracts entities by parsing ontology YAML, SQL, and code, then applies vector embedding to index them in Elasticsearch, making Toss's data assets easy for everyone to navigate.
- Design and operate MSA architectures. Split the services the ontology platform needs into microservices, and ensure each is designed, implemented, and operated reliably.
- Develop AI agent infrastructure. Create a multi-agent workflow execution environment based on open-source agent frameworks such as CrewAI. Establish an MCP Tool Registry and build integration infrastructure for external MCP servers.
- Build an early-warning platform. Create a monitoring system that detects anomalies in data lineage, code, and trends, and automatically alerts on and analyzes them so issues are caught before they escalate.
- Develop a lineage tracking engine. Build a system that parses SQL to extract column-level influence relationships and automatically determines how far a change propagates (see the sketch below).
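To make the lineage-engine idea concrete, here is a minimal sketch of column-level lineage extraction. It assumes the open-source `sqlglot` parser purely for illustration; the posting does not say which parser the team uses, and a real engine would also resolve aliases across CTEs and multi-statement pipelines.

```python
# A minimal sketch of column-level lineage extraction, assuming the
# open-source sqlglot parser (an illustrative choice, not Toss's stack).
import sqlglot
from sqlglot import exp

def column_lineage(sql: str) -> dict[str, list[str]]:
    """Map each output column of a SELECT to the source columns it reads."""
    select = sqlglot.parse_one(sql).find(exp.Select)
    lineage: dict[str, list[str]] = {}
    for projection in select.expressions:
        out_name = projection.alias_or_name
        sources = [col.sql() for col in projection.find_all(exp.Column)]
        lineage[out_name] = sources
    return lineage

if __name__ == "__main__":
    sql = """
        SELECT u.user_id AS uid,
               SUM(t.amount) AS total_spend
        FROM transactions t
        JOIN users u ON u.user_id = t.user_id
        GROUP BY u.user_id
    """
    print(column_lineage(sql))
    # {'uid': ['u.user_id'], 'total_spend': ['t.amount']}
```

Walking the mapping transitively across tables is what lets such a system answer "how far does a change to `t.amount` propagate?"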
# About the Team You Will Join
- The Product Owner for Toss Securities' AI Data Platform focuses on generating diverse investment information content through AI, enabling customers to make informed investment decisions.
- Your team is dedicated to gathering all types of data and creating a pipeline that allows AI to generate content tailored to customer needs.
- We collaborate with various internal silos to build the infrastructure needed for seamless data and machine learning service delivery, fundamentally transforming the investment experience through better search and recommendation capabilities.
- You will also belong to a PO/PM chapter where you can share and solve product management challenges with other Product Owners and Managers.

# Your Responsibilities
- Define challenges in the customer investment journey and hypothesize AI-driven solutions.
- Plan AI-based content and experiences that help customers become more comfortable with investing.
- Refine and structure diverse data sources, including investment data, to make them usable.
- Design products that assist customer investment decision-making using the latest AI technologies, including RAG, LLMs, and ML modeling.
- Collaborate with ML Engineers and Data Engineers to develop prototypes, improve model performance, and bring products to market.

# What We Are Looking For
- No specific years of experience required; we value depth of experience over years worked.
- Experience developing or planning data- and AI-driven products (search/recommendation/ML/unstructured data/personalization/ETL) is essential.
- Experience simplifying and standardizing complex data pipelines to supply needed data quickly is highly preferred.
- Ability to clearly define customer problems and connect them with technical solutions.
- Strong communication skills to collaborate effectively with MLEs and DEs.

# Resume Tips
- For each product, service, or project, clearly outline the flow from problem definition through solution derivation, stakeholder collaboration, and the resulting outcomes.
- Include insights and achievements from the process rather than just listing tasks.

# The Journey to Joining Toss Securities
- Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Negotiation > Final Offer and Onboarding

# Please Note
- If any false information is found in your resume, or if disciplinary actions are confirmed in your employment history, hiring may be canceled.
- Hiring may be canceled for candidates who fall under Toss Securities' hiring restrictions or disqualification criteria.
# Join Our Dynamic Team
- The Data Engineer (AI) position is part of the AI Data Platform Team at Toss Securities.
- The AI Data Platform Team comprises Data Engineers, Machine Learning Engineers, Server Engineers, and Product Operation Managers, fostering collaboration across roles.
- Our mission is to build a unique data moat for Toss Securities by integrating diverse securities-domain data with AI technologies, providing essential insights for investors.
- We use external LLMs, train and evaluate our internally developed models, and leverage a range of data platform technologies.

# Your Responsibilities
- Proactively identify and lead projects that solve business challenges at Toss Securities, owning the entire process from data architecture design through development and operation.
- Build and manage a securities data platform that integrates, processes, and serves global market data.
- Establish and maintain a knowledge graph platform for real-time domain data.
- Create and operate the data pipelines that underpin AI service products.
- Develop and manage a feature store for real-time personalized recommendation services.
- Ensure data integrity by designing, developing, and operating data quality verification and monitoring systems.

# We Seek Candidates Who
- Have over 5 years of experience in data engineering.
- Can understand requirements and analyze technical trade-offs to determine the optimal data architecture for a given environment.
- Have a solid understanding of, and experience with, large-scale distributed processing and data platforms.
- Have shared knowledge with peers and junior engineers, contributing to the technical growth of the whole team.
- Are interested in leveraging AI beyond mere tooling, understanding its principles to improve engineering productivity.
- Can coordinate with colleagues across functions and provide constructive feedback.
- Are eager to take on new challenges and proactively learn and grow.

# Preferred Experiences
- Kafka-based stream processing and large-scale distributed data processing (Hadoop/ClickHouse/Elasticsearch).
- Building and operating data pipelines with Airflow, Docker, and Kubernetes.
- Monitoring and managing data integrity and quality.
- Staying current with the latest AI/data technology trends, with an interest in automation and productivity enhancement.
# Join Our Team
- The AI Platform team is on a mission to build a platform that lets everyone use AI technology quickly and reliably. We technically support the use of AI across Toss.
- We develop the tools and platforms needed for new AI systems such as Retrieval-Augmented Generation, Agents, and Assistants to be rapidly experimented with and reliably operated.
- The platform we create is not a simple toolkit; it is designed with scalability in mind, so that more teams can adopt AI technology effectively.
- Because we tackle unresolved problems, collaboratively defining and structuring technical directions is crucial.
- **Want to learn more about Toss's data organization?** [→ *Toss Data Division Wiki*](https://recruit-data-division.oopy.io/)

# Responsibilities
- Integrate LLM-based components such as Retrieval, Generation, and Vector Search into a platform that various teams can reuse (a minimal retrieval sketch follows below).
- Provide features that integrate both SaaS and self-hosted LLMs, and ensure stable operations.
- Design the foundation for creating and experimenting with Agent systems more easily, including prompts, tools, and context configurations.
- After experiments, systematize and tool the serving and operational flow of RAG and Agents so they run stably in production.
- Build a foundation to quantitatively evaluate Agent performance and quality, and provide it as a platform.
- Design a common environment and experience that enables rapid experimentation and adoption of AI systems across other teams, not just within ours.
- Structure unstructured technical elements and chart directions that can expand into broader problems.

# We Are Looking For
- Individuals with experience applying technologies such as LLMs, RAG, and Agents to real-world problems.
- Those who can technically define unstructured problems and solve them systematically.
- Candidates who have collaborated with multiple teams to develop and operate technology as if it were a product.
- Those who adapt quickly to new technology trends and integrate them naturally within the team.
- Individuals interested in simplifying complex AI systems into a consistent, straightforward user experience.

# Preferred Qualifications
- Experience independently designing RAG components such as Retrieval, Generation, and Vector Search, and integrating them at the system level.
- Experience selecting and operating various LLM serving setups (OpenAI API, HuggingFace, vLLM) to fit the service situation.
- Experience structuring purpose-driven Agents and applying them in service operations is highly welcomed.
- Experience proactively designing experimental environments or tools around the requirements of platform users (internal developers, model engineers, etc.).
- Experience creating and operating common platforms or RAG-based systems that extend across multiple projects or domains is especially desirable.

# Resume Tips
- If your past projects had significant organizational impact, please describe them in detail.
- Rather than just listing languages, platforms, frameworks, or technologies, give context: the project's objectives, the methods you employed, and how you solved the problems.
- Include experiences where you resolved critical issues during platform operations or optimized performance and resources.
- If you have contributed to open source by fixing bugs or enhancing functionality, please share those experiences.

# The Journey to Join Toss
- Application submission > Job interview > Cultural fit interview > Reference check > Compensation negotiation > Final acceptance
- The job interview features in-depth technical interviews focused on ML system design.

# A Message for Future Colleagues
> "We don't just serve rapidly evolving AI models; we build systems that ensure these models operate stably and are continuously improved."
- The AI Platform team runs the serving, experimentation, and operations infrastructure for AI technologies such as LLM-based services, RAG systems, and search infrastructure, ensuring they run smoothly in production.
- We efficiently manage GPU resources and clusters, using vLLM, Triton, a Model Registry, and more to automate experiments and deployments.
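To make the reusable Retrieval component concrete, here is a minimal, self-contained sketch. The in-memory index and bag-of-characters embedding are deliberate stand-ins; a production platform would plug in a real vector store and an embedding model (SaaS or self-hosted), and the names `EmbedFn` and `VectorIndex` are illustrative, not Toss's API.

```python
# A minimal sketch of a pluggable retrieval component: an in-memory vector
# index with cosine-similarity search. Everything here is illustrative.
from typing import Callable

import numpy as np

EmbedFn = Callable[[str], np.ndarray]

class VectorIndex:
    def __init__(self, embed: EmbedFn):
        self.embed = embed
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(self.embed(text))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = self.embed(query)
        mat = np.stack(self.vectors)
        # Cosine similarity between the query and every stored vector.
        scores = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

def toy_embed(text: str) -> np.ndarray:
    # Stand-in embedding: a bag-of-characters vector. Swap in a real
    # embedding model in practice.
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    return vec

index = VectorIndex(toy_embed)
for doc in ["Kafka streams logs", "Airflow schedules DAGs", "vLLM serves models"]:
    index.add(doc)
print(index.search("which tool serves LLMs?", k=1))
```

Keeping the embedding function pluggable is what lets one retrieval interface serve many teams, which is the reuse this posting describes.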
# Join Our Dynamic Team!
- The Data Engineer for the Workflow Platform is an integral member of Toss Bank's Data Division, specifically the Data Platform team.
- The team covers three key areas: Data Infrastructure & Hadoop, Streaming Platform, and Workflow Platform.
- We operate various data platforms, including Hadoop, Kafka, CDC, and Airflow.
- Our mission is to ensure the reliability and scalability of the enterprise data infrastructure, so that all data is securely collected and processed.

# Your Responsibilities
- Design and operate a large-scale data workflow execution platform in an on-premise Kubernetes environment (a minimal Airflow DAG sketch follows below).
- Optimize resources so that large workflows run stably across the various data organizations, improving platform performance and reliability.
- Collaborate with enterprise data engineers to improve the execution quality of the overall data pipeline and the developer experience.
- Monitor workflow execution status; design and improve systems for automated fault detection, alerting, and recovery.
- Safely manage workflow executions in accordance with financial-sector internal control standards, advancing a systematic history management system.
- Continuously review and adopt new technologies and open-source solutions to improve the workflow platform's performance and scalability.

# We Are Looking For
- Experience operating an Airflow-based workflow orchestration system, with proven improvements in stability, scalability, and execution efficiency.
- Background in developing Python-based data workflows and platform services.
- Understanding of container technologies (Docker, Kubernetes, etc.) and experience automating service deployment and configuration with tools like Helm.
- Ability to understand the company environment and communicate effectively with various teams during service development.
- A keen interest in operational efficiency and optimization in large-scale workflow environments.
- A desire to improve the platform user experience so in-house data engineers can develop and operate pipelines more easily and safely.
- A proactive approach to analyzing, modifying, and improving open-source code to solve issues.

# Resume Submission Tips
- Clearly outline impactful projects from your career.
- Focus on experiences with data platforms, particularly Airflow, Kubernetes, and Python.
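For context, here is a minimal sketch of the kind of Airflow DAG such a platform executes at scale, using the TaskFlow API (Airflow 2.4+). The task names, schedule, and data are illustrative assumptions, not Toss Bank's actual pipelines.

```python
# A minimal Airflow DAG sketch: daily extract -> transform -> load.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def example_log_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice this would read from Kafka/CDC sources.
        return [{"user": "a", "amount": 10}, {"user": "b", "amount": 20}]

    @task
    def transform(rows: list[dict]) -> int:
        return sum(r["amount"] for r in rows)

    @task
    def load(total: int) -> None:
        print(f"daily total: {total}")

    load(transform(extract()))

example_log_pipeline()
```

The platform team's job is everything around DAGs like this: scheduling them on Kubernetes, isolating their resources, and detecting and recovering from their failures automatically.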
# About the Team You'll Join
- The Data Analytics Engineer at Toss Securities is part of the Data Warehouse Team within the Data Division. Your work focuses on Data Platform and Data Mart tasks; while your primary focus will vary, you will also engage in cross-functional projects.
- Platform work involves maintaining and optimizing ETL/pipeline tools to manage the DW mart tables effectively. You will explore and implement new ways to reduce DW operation time with limited resources. Our goal is to maximize data utilization across the organization using the tables the DW team manages.
- The team currently has approximately 7 members, with experience ranging from 2 to 14 years and diverse backgrounds including portals, banking, gaming, and startups.

# Curious About the Data Division?
The Data Division at Toss Securities aims to become the world's leading securities firm in data handling, contributing through data technology, services, and data-driven decision-making. We foster close collaboration among data professionals and enjoy our work. Regular Tech Weekly sessions let us share expertise, and you can freely engage with other teams to learn from each other.

# Your Responsibilities
- Experience and contribute to an efficient DW environment within a rapidly growing agile organization.
- Design data marts and develop and automate DW data workflows based on the Hadoop ecosystem and open-source solutions.
- Identify and implement methods for structuring and automating the many DW/mart tables.
- Process large volumes of data swiftly and effectively to create and manage various features.
- Establish data quality checks and governance within the data marts.
- Experience deriving and establishing system requirements for large-scale data processing and analysis is a plus.

# Ideal Candidate
- At least 5 years of experience as a Data Engineer is essential.
- A fundamental understanding of RDBMS, the Hadoop ecosystem, and data warehousing.
- Proven experience leading the design, construction, and operation of data marts.
- Able to install, operate, and troubleshoot Airflow, DBT, and Django, and to modify open-source tools to build features the securities DW needs.
- Experience simplifying complex problems or automating repetitive tasks with data models is critical.
- Extensive experience processing big data efficiently with Spark is highly desirable.
- Intermediate proficiency in Python and advanced skills in SQL are required.

# Resume Tips
- If you resolved critical issues while operating platforms, or optimized performance and system resource usage, please include those experiences.
- Be specific about impactful projects you have worked on.
- If you fixed bugs or issues in open-source tools, or developed or enhanced features, please detail those experiences.
- Highlight the results of improvements made in actual services, quantified where possible (exclude sensitive information as needed).

# Join Toss Securities
- Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Negotiation > ...
# Join Our Dynamic Team
- As a Data Engineer at Toss, you will be part of the Data Platform team.
- The Data Platform team sits within the Data Division, supporting and managing the data and platforms that all Toss services rely on.
- The team comprises members with 2 to 18 years of experience from diverse backgrounds, including portals, banking, gaming, and startups.
- We encourage team members to pursue varied interests and collaborate freely on skills and knowledge sharing.

# Your Responsibilities
- Develop and maintain stable, efficient data pipelines (ingestion, loading, streaming).
- Contribute to data-driven Toss services through real-time distributed processing of large datasets.
- Create and manage tools that give colleagues a reliable, efficient environment for data experimentation and analysis.
- Develop various data service applications for data analysis and platform operation.

# Who You Are
- You have experience developing data pipelines for large-scale data processing (collection, processing, analysis).
- You are familiar with large-scale distributed systems (Hadoop, HBase, Kafka, Spark, Flink, etc.).
- You have the software development skills for data application development (Java, Scala, Python, etc.).
- You have intermediate or advanced programming skills (web/client/server programming).
- Experience developing recommendation/advertising/machine learning services is a plus.

# Please Highlight These Experiences in Your Resume
- Detail the projects you worked on, the technologies you used, and how you solved challenges, rather than just listing languages or frameworks.
- Experience with platforms similar to Toss's is beneficial, but we prioritize growth potential and problem-solving ability over specific technologies.
- Include any experience resolving critical failures while operating platforms, or optimizing for performance and resource usage.
- Share experiences where you identified and fixed bugs in open-source software or contributed enhancements.

# Your Journey to Joining Toss
- Application submission > Technical interview > Cultural fit interview > Reference check > Compensation discussion > Final offer
# Role Overview
TossCareers is hiring an AIOps Platform Engineer in Seoul. The role focuses on building and improving AIOps platforms that support IT operations, centering on the integration of AI with operations to improve performance and reliability.

# What You Will Do
- Work with teams from different disciplines to design and develop AIOps solutions.
- Implement platforms that help automate and optimize IT operations.
- Contribute technical expertise to projects that connect AI tools with operational workflows.
- Support the stability and efficiency of IT systems through AI-driven approaches.

# Location
This position is based in Seoul.
# Join Our Data Platform Team!
- As a Data Engineer at Toss, you will be part of our Data Platform Team.
- The team consists of Data Engineers and Data Analytics Engineers.
- We build the platforms and data pipelines essential for analyzing the services Toss provides.

# Your Responsibilities
- Develop and operate OLAP (Online Analytical Processing) based data pipelines.
- Design and optimize systems for the reliable operation of large-scale data analysis and real-time/batch data pipelines.
- Develop and manage batch and streaming pipelines that load the various types of data generated at Toss.
- Continuously improve data models and processing logic based on service requirements.

# We Are Looking for Candidates Who Have
- Experience operating services in a Kubernetes (K8s) based environment.
- Experience designing and operating data streaming pipelines using Kafka and Kafka Connect (a minimal Spark-on-Kafka sketch follows below).
- Experience processing large volumes of data using Apache Spark (batch/Structured Streaming).

# Additional Skills That Would Be a Plus
- Experience operating and tuning MPP/OLAP engines like StarRocks or ClickHouse.
- Experience building data lakehouses using open table formats such as Apache Iceberg, Hudi, or Delta Lake.
- Experience in real-time data processing using streaming frameworks like Kafka Streams or Apache Flink.
- Experience designing and operating Airflow-based ETL pipelines.

# Your Journey to Join Toss
- Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Negotiation > Final Offer
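Here is a minimal sketch of a Structured Streaming pipeline reading from Kafka, the combination this posting describes. The broker address, topic name, schema, and sink paths are illustrative assumptions, not Toss's actual pipeline.

```python
# A minimal Spark Structured Streaming sketch: Kafka -> JSON parse -> Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
    StructField("ts", LongType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed brokers
    .option("subscribe", "service-events")                # assumed topic
    .load()
    # Kafka delivers raw bytes; decode and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/events")              # assumed sink
    .option("checkpointLocation", "/tmp/ckpt")  # enables fault-tolerant restarts
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes such a pipeline restartable without data loss, which is the "reliable operation" requirement above.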
# Job Responsibilities
- Stay abreast of cutting-edge AI research trends, including LLMs, Agentic AI, Diffusion, and inference acceleration, and explore integration opportunities with our proprietary NPU.
- Lead groundbreaking AI research aimed at publication in top-tier academic conferences.
- Forge research collaborations with prestigious global institutions and manage the associated projects.
# About the Team You'll Join
- The ML Engineer (Platform) at Toss Securities is part of the ML Platform Team within the Product Division.
- The team's mission is to build an optimal machine learning platform that enables efficient, stable development and operation of the various AI/ML services at Toss Securities.

# Your Responsibilities Upon Joining
- Develop and enhance the Gateway system, the entry point for ML services. Develop and operate a FastAPI-based Gateway that handles enterprise-level LLM API requests. Design and implement authentication, routing, traffic control, fault isolation (circuit breaker, fallback), large-scale TPS handling, and load-balancing strategies from both application and infrastructure perspectives (a minimal gateway sketch follows below).
- Manage and serve ML services. Operate a machine learning model serving system in a Kubernetes environment. Design and improve the LLM serving architecture to run stably under heavy traffic. Monitor latency, error rates, and resource usage; analyze and resolve operational issues for the models in service. Identify root causes of failures and implement structural improvements, including operational policies and architecture.
- Develop and manage a company-wide ML platform. Build and manage a common Kubeflow-based platform for efficiently training and serving internal ML/LLM models. Continuously monitor and optimize the performance and resource usage of workloads running on the platform.
- Build infrastructure for LLM-based services. Operate LLM services using serving frameworks such as vLLM, SGLang, and Triton. Manage the environment so training and serving workloads run stably on high-performance GPU clusters like H100/B300. Build and operate a large-scale training environment for finance-domain-specific LLMs.

# We Are Looking for Candidates Who
- Are proficient in one or more programming languages such as Python, Go, Java, or Kotlin, with experience designing and developing API servers in production environments.
- Have experience developing or operating API Gateways (Nginx, Kong, etc.) or LLM routers (LiteLLM, Envoy AI Gateway, etc.), with a background in handling high-volume traffic and incident response.
- Have experience operating serving logs and event pipelines integrated with Kafka, Elasticsearch, and Kibana.
- Have experience defining monitoring metrics for model serving and building and operating dashboards with Prometheus and Grafana.
- Have experience operating ML/LLM model serving with KServe, BentoML, vLLM, SGLang, etc.
- Have experience directly managing MLOps components (Kubeflow, KServe, Airflow, Argo CD, MLflow, etc.) in Kubernetes environments, including debugging and resolving issues.
- Can go beyond short-term fixes, designing and applying long-term improvements through root-cause analysis of issues that arise during service operation.

# Additional Preferred Experience
- Experience in related fields or technologies is a plus.
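To ground the gateway responsibilities, here is a minimal sketch of a FastAPI-based LLM gateway with naive routing, fallback, and a circuit breaker. The upstream URLs, thresholds, and breaker policy are illustrative assumptions, not the team's actual design.

```python
# A minimal FastAPI gateway sketch: route to healthy upstreams, trip a
# breaker after repeated failures, fall back to the next upstream.
import time

import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
UPSTREAMS = ["http://llm-a:8000/v1/chat", "http://llm-b:8000/v1/chat"]  # assumed

class CircuitBreaker:
    """Open after `threshold` consecutive failures; retry after `cooldown` s."""
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, 0.0

    def available(self) -> bool:
        if self.failures < self.threshold:
            return True
        return time.monotonic() - self.opened_at > self.cooldown

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            self.opened_at = time.monotonic()

breakers = {url: CircuitBreaker() for url in UPSTREAMS}

@app.post("/v1/chat")
async def route_chat(payload: dict):
    # Try each healthy upstream in order; skip open breakers, fall back on error.
    for url in UPSTREAMS:
        if not breakers[url].available():
            continue
        try:
            async with httpx.AsyncClient(timeout=10.0) as client:
                resp = await client.post(url, json=payload)
            resp.raise_for_status()
            breakers[url].record(ok=True)
            return resp.json()
        except httpx.HTTPError:
            breakers[url].record(ok=False)
    raise HTTPException(status_code=503, detail="all upstreams unavailable")
```

A production gateway would add authentication, per-client rate limits, and connection pooling, but the breaker-plus-fallback loop is the fault-isolation core the posting names.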
Join RWS as an AI Data Specialist in vibrant Seoul, South Korea. In this pivotal role, you will be responsible for managing and curating high-quality datasets to support our innovative AI projects. Your expertise will directly contribute to enhancing our AI capabilities and driving impactful solutions.
Join our innovative team as a Frontend Platform Engineer at Toss, where we revolutionize the way people manage their finances through cutting-edge technology and seamless user experiences.
Join our innovative team as a Frontend Platform Engineer, where you will play a pivotal role in developing and enhancing our cutting-edge frontend technologies. You'll collaborate closely with cross-functional teams to design, implement, and maintain high-quality user interfaces that elevate our platform's performance and user experience. If you're passionate about creating seamless web applications and have a knack for problem-solving, we want to hear from you!
# About the Team
- The Data Mart Platform Team is dedicated to building a standardized Data Warehouse for the various Toss products, aiming to prevent data silos and raise data maturity across the organization.
- Responsibilities include enhancing centralized DW quality management processes, standard monitoring, integrating product data with the enterprise data mart, designing efficient pipelines, and creating standardized marts.
- **Interested in learning more about Toss's Data Organization?** [→ *Toss Data Division Wiki*](https://recruit-data-division.oopy.io/)

# Responsibilities
- After completing an onboarding process to familiarize yourself with Toss's DW standards, you will work as part of the Data Mart Platform Team.
- Maintain and manage an agile, manageable enterprise DW standard, taking responsibility for DW quality management from an enterprise perspective in collaboration with DAEs (Domain Analytics Engineers) from the various product domains (development and execution of standard management monitoring).
- Plan and execute systems and processes that improve data reliability: improving table consistency, advancing DQ rules, and establishing health-check metrics (a minimal DQ-check sketch follows below).
- Develop enterprise-level marts, managing the integration of standard marts from different domains and driving efficient data pipeline improvements.
- Identify and execute tasks that improve data discoverability across the organization.
- Develop a platform to measure data maturity across Toss domains, and initiate projects that raise DAE productivity.
- The data development environment is based on Hadoop, Airflow, Python, and SQL (Impala).

# Desired Qualifications
- Understanding of database normalization and the fundamental characteristics of Data Warehouses (Subject-Oriented, Integrated, Non-Volatile, Time-Variant).
- Ability to clearly define key concepts as a DW data modeler and propose efficient data structures from diverse data perspectives.
- High-level understanding of DW standard management, with the capability to propose and lead improvement initiatives at the enterprise level.
- Strong grasp of data governance, including data quality and compliance, with the ability to suggest actionable plans.
- Proficiency in SQL, with the ability to write efficient, readable queries.
- Basic Python skills (enough to work with Airflow) are acceptable, but the ability to understand modules and PySpark code written by others is preferred.
- Experience with large-scale data processing and designing metrics from an AARRR perspective is a plus.

# Application Tips
- Specify any relevant experience with DW construction projects and mart design, detailing your contributions.
- Mention specific challenges you have addressed regarding data maturity.
- Outline your contributions and lessons learned while solving data-related issues.

# Joining Toss
- Application Submission > Job Interview > Cultural Fit Interview > Reference Check > Compensation Discussion > Final Acceptance and Onboarding

# A Note to Future Colleagues
> "Our team strives for better service every day."
- I was drawn to the thrilling risks of financial data and saw that my growth could contribute to the company's success, which is why I joined Toss.
- The most stressful part of my previous company was being led by predetermined objectives; Toss offers more autonomy than I expected, along with a dedicated, ambitious team focused on "better service every day."
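To illustrate the DQ-rule pattern, here is a minimal sketch where each rule is a SQL predicate whose violation count should be zero. The standard library's `sqlite3` stands in for the team's Impala environment so the sketch runs anywhere; the table and rules are illustrative assumptions.

```python
# A minimal rule-based data quality check: each rule counts violating rows.
import sqlite3

DQ_RULES = {
    "user_id is never null": "SELECT COUNT(*) FROM orders WHERE user_id IS NULL",
    "amount is non-negative": "SELECT COUNT(*) FROM orders WHERE amount < 0",
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("u1", 10.0), (None, 5.0), ("u2", -3.0)])

def run_dq_checks(conn: sqlite3.Connection) -> dict[str, int]:
    """Return the violation count per rule; non-zero means the rule failed."""
    return {name: conn.execute(sql).fetchone()[0] for name, sql in DQ_RULES.items()}

for rule, violations in run_dq_checks(conn).items():
    status = "OK" if violations == 0 else f"FAILED ({violations} rows)"
    print(f"{rule}: {status}")
```

Scheduling checks like these per table (via Airflow, in this team's stack) and trending the counts over time is one common way to build the health-check metrics the posting describes.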
# Key Responsibilities
- Manage and operate the testing infrastructure for our software and hardware solutions.
- Conduct operations and maintenance on the bare-metal equipment used for evaluations.
- Support the evaluation infrastructure to ensure successful product validation.
# About the Team You Will Join
- The Data Engineer (Search) at Toss Securities is part of the AI Tribe within the AI Intelligence Silo. The Silo is a collaborative team of Data Engineers, Machine Learning Engineers, Server Engineers, Frontend Engineers, Product Owners, and Product Designers, all working to create AI-driven information services built on securities-domain data.
- Our focus is not just presenting information but rapidly experimenting with how to process and present data so it genuinely helps investors.
- Search is the primary entry point connecting the various securities data and AI services, and the Data Engineer (Search) owns the search/indexing functionality that AI-based data services can build on.
- The role centers on designing and operating the data and indexing layers behind our search services: stably designing and managing the data flow and infrastructure for search indexing, rather than tuning algorithms or ranking models as at larger portals or e-commerce companies.

# Key Responsibilities
- Design and manage the indexing pipeline for Toss Securities search services, including stocks, autocomplete, news, and community features (a minimal indexing sketch follows below).
- Architect and reliably operate real-time/big-data pipelines for search indexing.
- Build insight into Elasticsearch-based search indexes, and improve indexing structure and performance from a data perspective.
- Collaborate on data integrity management and re-indexing strategies to ensure stable data delivery for search.
- Gradually expand beyond search into areas such as graph search and ingestion of new data sources.

# Who We Are Looking For
- A candidate with over 3 years of experience in data engineering.
- Strong programming skills are preferred.
- Experience designing or operating real-time or batch data pipelines is a plus.
- Experience collecting and processing diverse data sources for service use is beneficial.
- Familiarity with big data processing platforms such as Spark, Hadoop, or Impala is an advantage.
- A passion and curiosity for learning new domains and technologies are highly valued.
- A preference for collaborative environments where feedback and growth are encouraged is ideal.

# Additional Preferred Experience
- Experience using or managing search infrastructure such as Elasticsearch, Lucene, or Solr is advantageous.
- Genuine interest or experience in search domains such as search engines, recommendations, or ranking systems is a plus.

# Resume Tips
- Detail your experience designing/developing/operating data pipelines, ETL, streaming, etc.
- Highlight your role in projects and what you learned and improved through those experiences.
- If you have experience with search or Elasticsearch, even at a smaller scale, describe the problems you solved.

# Your Journey to Joining Toss Securities
- Application submission > Job interview > Cultural fit interview > Reference check > Compensation discussion > Final acceptance and onboarding
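Here is a minimal sketch of an indexing step using the official Elasticsearch Python client (8.x) and its bulk helper. The cluster address, index name, mapping, and documents are illustrative assumptions, not the actual Toss Securities pipeline.

```python
# A minimal search-indexing sketch: create an autocomplete-capable index
# and bulk-load documents with idempotent IDs.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")  # assumed cluster address

INDEX = "stocks-autocomplete"
if not es.indices.exists(index=INDEX):
    es.indices.create(index=INDEX, mappings={
        "properties": {
            "ticker": {"type": "keyword"},
            "name": {"type": "search_as_you_type"},  # powers autocomplete
        }
    })

docs = [
    {"ticker": "AAPL", "name": "Apple Inc."},
    {"ticker": "TSLA", "name": "Tesla Inc."},
]

# Using the ticker as _id makes re-runs idempotent, which matters for the
# re-indexing strategies mentioned above.
actions = ({"_index": INDEX, "_id": d["ticker"], "_source": d} for d in docs)
ok, errors = bulk(es, actions)
print(f"indexed {ok} docs, errors: {errors}")
```

In a real pipeline the `docs` generator would be fed by the streaming or batch sources described in the responsibilities, with integrity checks between source counts and index counts.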
# Key Responsibilities
- Devise and execute comprehensive verification strategies for block/IP/SoC, establishing test benches that enable effective verification at each level.
- Create and implement functional testing protocols based on the established verification test plans.
- Lead the design verification process to completion, adhering to defined metrics for functional and code coverage.
- Analyze, troubleshoot, and fix functional discrepancies in the design, working closely with the design team.
- Collaborate with cross-disciplinary teams, including Design, Modeling, Emulation, and Silicon Validation, to ensure superior design quality.
# Who We Are
Join us in setting global standards for video understanding AI! Twelve Labs is dedicated to developing cutting-edge AI models specifically for video content, enabling efficient processing of vast amounts of video data. Our technology offers advanced capabilities for search, analysis, summarization, and generating insights from video.

Our models are used by the largest sports leagues worldwide to quickly and accurately select highlights from extensive game footage, providing a hyper-personalized viewing experience. In South Korea, integrated control centers partner with us to analyze CCTV footage efficiently for rapid crisis response. Major broadcasters and studios across the globe use our models to create content for billions of viewers.

Headquartered in San Francisco with an office in Seoul, Twelve Labs is a deep-tech startup recognized for four consecutive years as one of the Top 100 AI Startups by CB Insights. We have secured over $110 million in funding from leading venture capital firms and corporations, including NVIDIA, NEA, Index Ventures, Databricks, and Snowflake. Our AI models are uniquely available through Amazon Bedrock, and we thrive on innovation and collaboration with exceptional colleagues worldwide.

At Twelve Labs, our core values include:
- Honesty and reflection about ourselves and our teams
- Resilience and humility, embracing failure and feedback
- A commitment to continuous learning and enhancing team capabilities

If you enjoy tackling challenging problems and growing through the journey, the opportunity awaits you at Twelve Labs.

# About the Team
Our ML Data team operates on the belief that data determines AI model performance. We build high-quality data for training and evaluating multimodal AI models end-to-end. This includes gathering, filtering, processing, and labeling multimodal data such as video, images, and audio. We collaborate with diverse teams to design datasets that unlock new model capabilities and develop evaluation datasets that reflect real user experiences. We also develop and continually enhance internal tools to perform these processes efficiently.

The ML Data team plays a pivotal role in the development of Twelve Labs' world-class video understanding models through a meticulously designed data pipeline.

# About the Role
As a Software Engineer specializing in Data, you will design and develop pipelines for multimodal (video, image, audio) data that fundamentally improve model performance through data quality. If you have experience designing and operating distributed systems that handle unstructured multimodal datasets, you can make a significant impact in this position. The rigorously refined and accurately labeled data forms the foundation of all model development at Twelve Labs, and you will have the opportunity to influence model quality more than any other engineering role. We are looking for someone to help us build data infrastructure that elevates our video understanding technology to the next level.

# In This Role, You Will
- Build data engines capable of collecting, preprocessing, refining, filtering, and labeling large multimodal (video, image, audio) datasets for LLM/VLM training (a minimal filter-stage sketch follows below).
- Design and develop data systems that efficiently manage and visualize petabyte-scale video, image, and audio data.
- Create libraries and services that deliver tangible impact beyond eye-catching features.
- Collaborate closely with various teams to define project priorities and goals, leading technical initiatives from planning through development and operations.
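As a small illustration of one stage in such a data engine, here is a sketch of a quality filter that drops video records unsuitable for labeling. The record schema and thresholds are illustrative assumptions, not Twelve Labs' actual pipeline.

```python
# A minimal filter stage for a multimodal data engine: keep only clips
# that pass basic quality gates before they reach labeling.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class VideoRecord:
    path: str
    duration_s: float
    width: int
    height: int
    has_audio: bool

def quality_filter(records: Iterable[VideoRecord],
                   min_duration_s: float = 2.0,
                   min_height: int = 360) -> Iterator[VideoRecord]:
    """Drop clips that are too short or too low-resolution to label."""
    for rec in records:
        if rec.duration_s >= min_duration_s and rec.height >= min_height:
            yield rec

raw = [
    VideoRecord("a.mp4", 12.0, 1280, 720, True),
    VideoRecord("b.mp4", 0.5, 1920, 1080, False),  # too short: dropped
    VideoRecord("c.mp4", 30.0, 320, 240, True),    # too small: dropped
]
print([r.path for r in quality_filter(raw)])  # ['a.mp4']
```

At petabyte scale the same stage would run as a distributed job over metadata extracted from the media files, but the gate-per-record structure stays the same.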