
Senior Data Engineer - Real-Time Streaming

gsstech-group · Chennai, Tamil Nadu, India
On-site · Full-time

Experience Level

Senior

Qualifications

To excel in this role, candidates should have a strong professional background in data engineering with an emphasis on real-time data processing, a solid understanding of distributed systems and event-driven architecture, and the ability to collaborate effectively in a dynamic environment.

About the job

We are seeking a talented Senior Data Engineer with extensive knowledge of real-time data streaming and distributed data processing to architect, develop, and enhance state-of-the-art data platforms. This pivotal role is essential for advancing event-driven architecture and real-time analytics within critical banking systems, particularly in risk and compliance domains.

In this position, you will work closely with data architects, platform engineers, and business stakeholders to build low-latency, high-throughput data pipelines that power advanced analytics and informed decision-making.

Key Responsibilities

  • Design, develop, and maintain robust real-time streaming pipelines utilizing Apache Kafka, PySpark, and Flink
  • Construct scalable and fault-tolerant event-driven data architectures
  • Handle high-volume streaming data ensuring low latency and high reliability
  • Integrate diverse data sources into centralized data platforms (Data Lake / Lakehouse)
  • Enhance data pipelines for performance, scalability, and cost-effectiveness
  • Uphold data quality, governance, and compliance in line with banking regulations
  • Collaborate with cross-functional teams to convert business needs into technical solutions
  • Monitor and debug streaming jobs and production pipelines
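To illustrate the kind of work the first few responsibilities describe, here is a minimal sketch of a tumbling-window aggregation, the pattern behind many real-time streaming pipelines. It uses only the Python standard library (no Kafka or Spark dependency), and the event fields and window size are hypothetical:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical payment events: (epoch seconds, transaction type)
events = [(0, "wire"), (10, "card"), (59, "wire"), (60, "card"), (125, "wire")]
print(tumbling_window_counts(events))
```

In a production pipeline the same windowing logic would be expressed through Spark Structured Streaming or Flink operators, which additionally handle late-arriving data, state checkpointing, and fault tolerance.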

Required Skills & Experience

  • 5+ years of experience in Data Engineering
  • Demonstrated proficiency in:
    • PySpark / Spark Streaming
    • Apache Kafka (Producers, Consumers, Kafka Streams)
    • Apache Flink or other real-time processing frameworks
  • Proven experience in building real-time / near real-time data pipelines
  • Strong understanding of distributed systems and event-driven architecture
  • Proficiency in Python / Java / Scala
  • Experience with data lakes, ETL/ELT pipelines, and big data ecosystems
  • Familiarity with cloud platforms (AWS / Azure / GCP) is advantageous
  • Knowledge of banking, risk, or compliance data systems is highly preferred
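The Kafka producer/consumer decoupling listed above can be sketched in-process with standard-library primitives, where a thread-safe queue stands in for a topic; names and messages are illustrative, not a real Kafka client API:

```python
import queue
import threading

def produce(topic: queue.Queue, messages):
    """Publish messages to the topic, then signal completion."""
    for msg in messages:
        topic.put(msg)
    topic.put(None)  # sentinel: no more messages

def consume(topic: queue.Queue, results: list):
    """Read messages until the sentinel, applying a trivial transform."""
    while True:
        msg = topic.get()
        if msg is None:
            break
        results.append(msg.upper())

topic = queue.Queue()
results = []
consumer = threading.Thread(target=consume, args=(topic, results))
consumer.start()
produce(topic, ["deposit", "withdrawal", "transfer"])
consumer.join()
print(results)
```

The key property this models is that producers and consumers never call each other directly; they share only the topic, which is what lets event-driven systems scale each side independently.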

Preferred Qualifications

  • Experience in the financial services or banking domain
  • Exposure to data governance, regulatory reporting, or compliance systems
  • Understanding of CI/CD pipelines and DevOps practices for data platforms

About gsstech-group

Gsstech-group is a leading technology solutions provider specializing in advanced data engineering and analytics for the banking sector. Our mission is to empower clients through innovative data-driven solutions that enhance operational efficiency and decision-making.
