
Data Engineer (PySpark) at gsstech-group | Bengaluru, Karnataka

gsstech-group · Bengaluru, Karnataka, India
On-site · Full-time

Skills & Qualifications

  • Ability to work in a collaborative team environment.
  • Strong communication skills for effective data storytelling.
  • A detail-oriented approach to ensure data accuracy.

About the job

Join our dynamic team at gsstech-group as a talented Data Engineer within the Data Engineering Chapter, working closely with the Group Operations team at ENBD. In this role, you will be instrumental in developing scalable data pipelines, conducting data analyses, and providing top-tier data solutions that align with our enterprise data models.

Key Responsibilities

  • Engage with the Group Operations Team daily to clarify business and data requirements.
  • Execute Impact Assessments for both new and existing data modifications.
  • Carry out Technical Data Mapping and Data Profiling.
  • Design, implement, and sustain ETL pipelines for efficient data extraction, transformation, and loading.
  • Create data solutions that integrate with the AECB application following established data models.
  • Enhance and maintain data pipelines utilizing PySpark on contemporary data platforms.
  • Ensure the quality, consistency, and integrity of data across various systems.
  • Conduct unit testing, debugging, and deployment of data solutions.
  • Utilize modern tools and AI technologies (e.g., Claude) to boost development efficiency and minimize operational errors.
  • Collaborate effectively with cross-functional teams including business analysts, architects, and QA.

Required Skills & Qualifications

  • Proven expertise in PySpark and distributed data processing.
  • Experience with Informatica BDM (Big Data Management) development.
  • Deep understanding of ETL/ELT concepts and data pipeline architecture.
  • Proficiency in data mapping, data profiling, and impact analysis.
  • Experience with large-scale data systems and cloud/data platforms.
  • Strong SQL skills and a solid grasp of data warehousing principles.
  • Familiarity with the banking/financial domain is a plus.
  • Knowledge of AI-assisted development tools (e.g., Claude) is advantageous.
  • Excellent problem-solving and analytical abilities.

Preferred Qualifications

  • Experience with AECB data/reporting systems.
  • Exposure to big data ecosystems (Hadoop/Spark clusters).
  • Understanding of data governance and compliance standards.

About gsstech-group

gsstech-group is a pioneering tech company dedicated to delivering innovative data solutions for enterprises. We are committed to fostering a collaborative work environment and empowering our employees to push the boundaries of technology.
