Data Modeler
About Capco
Capco, a Wipro company, is a leading global technology and management consulting firm renowned for its innovative solutions in the financial services sector. Recognized as the Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022, Capco prides itself on its commitment to diversity and excellence. With a presence in 32 cities worldwide, we partner with over 100 clients, delivering transformative projects that drive change in banking, finance, and energy industries.
Similar jobs
Job Title: Data Analyst (Python + PySpark)

About Us
Capco, a Wipro company, is a leading global technology and management consulting firm. Recognized as Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount, we operate in over 32 cities worldwide, supporting more than 100 clients in the banking, finance, and energy sectors. We pride ourselves on our capability to execute and deliver transformative solutions.

Why Join Capco?
Engage in dynamic projects with leading international and local banks, insurance companies, payment service providers, and other pivotal industry players. Contribute to projects that are set to revolutionize the financial services landscape.

Make an Impact
Bring innovative thinking and delivery excellence to help our clients transform their businesses. Together with our clients and industry partners, we create disruptive advancements that are reshaping the energy and financial services sectors.

#BeYourselfAtWork
Capco fosters an inclusive and open culture that values diversity and creativity.

Career Advancement
At Capco, we promote a flat organizational culture that empowers everyone to take charge of their career development.

Diversity & Inclusion
We believe our competitive edge stems from a diverse workforce and varied perspectives.

Job Description
Role: Data Analyst / Senior Data Analyst
Location: Bangalore/Pune
Responsibilities include defining and sourcing the data necessary for delivering insights and use cases, determining data mapping across multiple datasets, and creating processes to highlight key trends.
Capco
Position: Senior Data Analyst (Python + PySpark)

About Us
Capco, a subsidiary of Wipro, is a premier global technology and management consulting firm. We have been honored with the British Bank Award for Consultancy of the Year and recognized as one of the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in over 32 cities worldwide, we serve more than 100 clients in the banking, financial services, and energy sectors. Our reputation is built on our expertise in delivering transformative solutions.

WHY JOIN CAPCO?
You will engage in dynamic projects with leading international and local banks, insurance companies, payment service providers, and other influential industry stakeholders. These projects are set to revolutionize the financial services landscape.

MAKE AN IMPACT
At Capco, you will utilize innovative thinking, excellence in delivery, and thought leadership to drive business transformation for our clients. Collaborating with industry partners, we deliver groundbreaking work that reshapes the energy and financial services sectors.

#BEYOURSELFATWORK
We foster a diverse, inclusive, and creative work environment at Capco.

CAREER ADVANCEMENT
Capco offers a flat organizational structure, presenting every employee with opportunities for growth and career development.

DIVERSITY & INCLUSION
We strongly believe that a diverse workforce enhances our competitive edge.

Job Description
Role: Senior Data Analyst
Location: Bangalore/Pune
Responsibilities:
- Define and acquire source data necessary for effective insights and use cases.
- Establish data mapping to integrate multiple datasets from diverse sources.
- Develop methods to visualize and communicate analytical findings.
About the Role Capco is seeking a Data Engineer with expertise in Scala to join the team in Bengaluru, India. This position focuses on building and maintaining data solutions for clients in the financial services sector. What You Will Do Design and implement data pipelines using Scala. Develop solutions that support business decision-making and add measurable value. Collaborate with colleagues and stakeholders to deliver reliable data systems. About Capco Capco is a global management and technology consultancy dedicated to serving the financial services industry.
Join Capco, a leading global technology and management consulting firm, as a Data Expert specializing in Data Analysis and Data Modeling. In this role, you will engage in transformative projects with major banks, insurance companies, and payment service providers, driving innovation in the financial services industry. Your work will not just be about data; it will be about making an impact by delivering excellence and thought leadership, helping our clients navigate their business transformations effectively. We promote a diverse, inclusive, and creative work environment where you can be your authentic self and take charge of your career development.
As a Data Business Analyst at Capco, you will play a crucial role in harnessing data to drive business decisions and strategy. You will collaborate with various teams to analyze complex data sets, derive insights, and present findings to stakeholders. Your analytical skills will empower the organization to optimize performance and enhance operational efficiency.
As a Solution Architect at Capco, you will play a pivotal role in designing innovative technology solutions that address our clients’ unique challenges. You will collaborate with cross-functional teams to deliver exceptional results, ensuring that our solutions are scalable, efficient, and aligned with business goals. Your expertise in various technologies and architectures will guide the development process, and your leadership will inspire teams to achieve excellence.
Join our innovative team as a Software Developer where you will leverage your expertise in Python, PySpark, and AWS to build cutting-edge solutions. You will be working on exciting projects that challenge your skills and foster your professional growth.
Role Overview Capco is hiring a Java Backend Developer in Bengaluru. This role focuses on building and maintaining backend systems that support financial products and services. Collaboration with colleagues from various disciplines is a regular part of the work. What You Will Do Design, develop, and maintain Java-based backend applications Work closely with cross-functional teams to deliver solutions that address client needs Contribute technical expertise to improve and support financial platforms Location Bengaluru, India. Other Capco offices in Pune and Noida may also be relevant for this role.
About Us
Capco, a Wipro company, is a leading global technology and management consulting firm. We have been honored as Consultancy of the Year at the British Bank Awards and recognized as one of the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities worldwide, we serve over 100 clients in the banking, financial services, and energy sectors. Our expertise lies in delivering profound transformation and execution excellence.

WHY JOIN CAPCO?
Join us to work on exciting projects with leading banks, insurance firms, payment service providers, and influential industry players that are set to revolutionize the financial services landscape.

MAKE AN IMPACT
We encourage innovative thinking, deliver excellence, and provide thought leadership to assist our clients in transforming their businesses. Together with our clients and industry partners, we are delivering transformative solutions that are reshaping the energy and financial services sectors.

#BEYOURSELFATWORK
Capco fosters a respectful, open culture that cherishes diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
At Capco, we believe in meritocracy, allowing everyone the opportunity to grow alongside the company and take charge of their career trajectory.

DIVERSITY & INCLUSION
We understand that a diverse workforce and varied perspectives provide us with a competitive edge.

Job Description
We are looking for a skilled Technical Lead – Full Stack Java Developer with substantial experience to spearhead the design and implementation of scalable, high-performance software solutions. This role requires hands-on expertise in full-stack Java development, robust leadership skills, and a proven success record in delivering intricate enterprise applications.
Join Capco, a global management and technology consultancy dedicated to the financial services industry, as a Java Full Stack Developer. You will be part of a dynamic team that drives innovative solutions and enhances development processes. Your expertise in both front-end and back-end technologies will be pivotal in delivering robust applications that meet client needs. Collaborate with cross-functional teams to design, develop, and implement high-quality software solutions.
gsstech-group
Join our innovative team as a Data Engineer within the Data Engineering Chapter, supporting the Group Operations Team. As a vital contributor, you will partner with both business and technical stakeholders to define data needs, execute impact assessments, and create robust, scalable data pipelines utilizing cutting-edge technologies such as PySpark.

Key Responsibilities
- Engage with the Group Operations Team to capture and evaluate data requirements
- Conduct impact assessments, technical data mapping, and data profiling
- Architect and implement data extraction, transformation, and loading (ETL) pipelines
- Enhance and fine-tune data pipelines using PySpark as part of the bank's modern technological ecosystem
- Craft data solutions that align with AECB application data models
- Guarantee data quality, integrity, and consistency across all systems
- Participate in unit testing, deployment, and production support
- Utilize advanced AI tools (e.g., Claude) to boost development efficiency and minimize operational errors
- Collaborate in an agile environment to foster continuous improvement initiatives
Join our dynamic Group Risk team as a Data Engineer, where you will play a pivotal role in constructing and managing comprehensive data pipelines essential for IFRS9 reporting. This position requires close collaboration with business stakeholders to identify data requirements, perform impact assessments, and deliver exceptional data solutions utilizing cutting-edge technologies like PySpark and Informatica BDM.

Key Responsibilities
- Work alongside the Group Risk Team to gather and comprehend business and data needs.
- Conduct impact assessments and technical data mapping for both new and existing data sources.
- Perform data profiling to guarantee data quality, consistency, and completeness.
- Design, develop, and sustain ETL pipelines utilizing PySpark and Informatica BDM.
- Create scalable data transformation workflows in alignment with IFRS9 data models.
- Ensure precise data extraction, transformation, and loading (ETL) into reporting systems.
- Engage in unit testing, validation, and deployment of data pipelines.
- Enhance data processing performance and resolve production issues.
- Utilize modern tools (e.g., AI-assisted tools like Claude) to boost productivity, minimize errors, and refine development workflows.
- Maintain thorough documentation for data flows, mappings, and processes.

Required Skills & Qualifications
- Proven experience in PySpark for extensive data processing.
- Hands-on experience with Informatica BDM (Big Data Management).
- Strong grasp of ETL concepts, data warehousing, and data modeling.
- Experience in data profiling, data mapping, and impact analysis.
- Familiarity with IFRS9 or the Risk/Banking domain is highly advantageous.
- Knowledge of distributed data processing frameworks and big data ecosystems.
- Excellent SQL skills and experience with relational databases.
- Sound understanding of data quality and governance principles.

Preferred Skills
- Experience with cloud platforms (AWS / Azure / GCP).
- Familiarity with AI-assisted development tools (e.g., Claude, GitHub Copilot).
- Understanding of CI/CD pipelines in data engineering workflows.

Soft Skills
- Strong analytical and problem-solving abilities.
- Exceptional communication and stakeholder management skills.
- Capacity to thrive in a fast-paced, collaborative environment.
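The data-profiling responsibility above (checking quality, consistency, and completeness) can be sketched in plain Python. This is only an illustrative stand-in for what a PySpark or Informatica profiling job would compute at scale; the record layout and column names are hypothetical, not taken from any actual IFRS9 model.

```python
def profile_column(rows, column):
    """Basic profile for one column: total, null count, distinct count, completeness ratio."""
    values = [row.get(column) for row in rows]
    total = len(values)
    nulls = sum(1 for v in values if v is None)
    distinct = len({v for v in values if v is not None})
    return {
        "column": column,
        "total": total,
        "nulls": nulls,
        "distinct": distinct,
        "completeness": (total - nulls) / total if total else 0.0,
    }

# Hypothetical sample records standing in for a risk source extract.
records = [
    {"account_id": "A1", "exposure": 1000.0},
    {"account_id": "A2", "exposure": None},
    {"account_id": "A3", "exposure": 250.0},
]

print(profile_column(records, "exposure"))
```

A real profiling pass would run the same aggregates (counts, nulls, distincts) as distributed operations over a Spark DataFrame rather than an in-memory list.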
Join us at gsstech-group as a Data Engineer, where your expertise in PySpark and Informatica BDM will play a crucial role in enhancing our Risk & Compliance platforms within a transformative Trade Transformation initiative. Collaborate with a dedicated Data Engineering team to deliver scalable, high-quality data solutions that meet our strategic objectives.

Key Responsibilities
- Engage with the Data Engineering team to evaluate the impacts of transformation initiatives on Risk & Compliance platforms.
- Convert business requirements and impact assessments into actionable data engineering tasks.
- Architect, develop, and implement robust data solutions that adhere to enterprise architecture standards.
- Build and optimize data pipelines leveraging PySpark and Informatica BDM.
- Ensure prompt delivery of development tasks while maintaining exceptional quality and compliance with SLAs.
- Collaborate with cross-functional teams, including business, risk, and compliance stakeholders.
- Utilize cutting-edge tools and technologies, including AI-assisted development tools, to drive productivity.
- Participate in code reviews, testing, and deployment activities.
Join our dynamic team at gsstech-group as a talented Data Engineer within the Data Engineering Chapter, working closely with the Group Operations team at ENBD. In this role, you will be instrumental in developing scalable data pipelines, conducting data analyses, and providing top-tier data solutions that align with our enterprise data models.

Key Responsibilities
- Engage with the Group Operations Team daily to clarify business and data requirements.
- Execute impact assessments for both new and existing data modifications.
- Carry out technical data mapping and data profiling.
- Design, implement, and sustain ETL pipelines for efficient data extraction, transformation, and loading.
- Create data solutions that integrate with the AECB application following established data models.
- Enhance and maintain data pipelines utilizing PySpark on contemporary data platforms.
- Ensure the quality, consistency, and integrity of data across various systems.
- Conduct unit testing, debugging, and deployment of data solutions.
- Utilize modern tools and AI technologies (e.g., Claude) to boost development efficiency and minimize operational errors.
- Collaborate effectively with cross-functional teams including business analysts, architects, and QA.

Required Skills & Qualifications
- Proven expertise in PySpark and distributed data processing.
- Experience with Informatica BDM (Big Data Management) development.
- Deep understanding of ETL/ELT concepts and data pipeline architecture.
- Proficiency in data mapping, data profiling, and impact analysis.
- Experience with large-scale data systems and cloud/data platforms.
- Strong SQL skills and a solid grasp of data warehousing principles.
- Familiarity with the banking/financial domain is a plus.
- Knowledge of AI-assisted development tools (e.g., Claude) is advantageous.
- Excellent problem-solving and analytical abilities.

Preferred Qualifications
- Experience with AECB data/reporting systems.
- Exposure to big data ecosystems (Hadoop/Spark clusters).
- Understanding of data governance and compliance standards.
Job Title: Data Engineer (PySpark)

About the Role
We invite you to join our dynamic data engineering team as a proficient Data Engineer specializing in PySpark and the Cloudera Data Platform (CDP). In this pivotal role, you will be tasked with architecting, developing, and sustaining robust data pipelines that guarantee exceptional data quality and accessibility throughout the organization. Your expertise in big data ecosystems, cloud-native technologies, and sophisticated data processing methodologies is essential.

The ideal candidate will possess extensive hands-on experience in data ingestion, transformation, and optimization on the Cloudera Data Platform, complemented by a strong history of applying data engineering best practices. You will collaborate closely with fellow data engineers to devise solutions that foster significant business insights.

Key Responsibilities
- Design and develop scalable ETL pipelines using PySpark on CDP, ensuring data integrity.
- Manage data ingestion processes from diverse sources (e.g., relational databases, APIs, file systems) to the data lake or warehouse on CDP.
- Utilize PySpark for processing, cleansing, and transforming vast datasets to meet analytical and business needs.
- Optimize performance by fine-tuning PySpark code and Cloudera components to enhance resource utilization.
- Establish data quality checks and validation routines to maintain data accuracy throughout the pipeline.
- Automate workflows using orchestration tools like Apache Oozie or Airflow within the Cloudera ecosystem.
- Monitor pipeline performance, troubleshoot issues, and maintain the Cloudera Data Platform and associated processes.
- Collaborate with data engineers, analysts, product managers, and other stakeholders to understand data requirements.
- Document data engineering processes, code, and pipeline configurations thoroughly.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related discipline.
- 3+ years of experience as a Data Engineer, focusing on PySpark and the Cloudera Data Platform.
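The ETL-with-validation responsibility described above has a common shape: extract, transform, run quality checks, then load. The following plain-Python sketch mirrors that shape without requiring a Spark cluster; in practice each stage would be a PySpark DataFrame operation, and all record fields and rules here are illustrative assumptions.

```python
def extract(source):
    """Extract: read raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Transform: cleanse and reshape records for the target model."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # drop records that fail the not-null rule
        cleaned.append({"id": row["id"], "amount": round(float(row["amount"]), 2)})
    return cleaned

def validate(rows):
    """Quality gate: fail fast if a basic invariant is violated."""
    assert all(r["amount"] >= 0 for r in rows), "negative amounts found"
    return rows

def load(rows, target):
    """Load: append validated records to the target store."""
    target.extend(rows)
    return len(rows)

raw = [
    {"id": 1, "amount": "10.50"},
    {"id": 2, "amount": None},   # will be dropped by the cleansing rule
    {"id": 3, "amount": 7},
]
warehouse = []
loaded = load(validate(transform(extract(raw))), warehouse)
print(loaded)  # number of records loaded
```

Keeping the quality gate as its own stage, rather than burying checks inside the transform, is what lets an orchestrator such as Airflow or Oozie fail the run (and alert) before anything reaches the reporting layer.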
Capco
Join Capco, a leading global technology and consulting firm, as a Hadoop Administrator. In this role, you will be responsible for managing and maintaining our Hadoop ecosystem, ensuring optimal performance and reliability. You will collaborate with cross-functional teams to support data processing needs and help drive data-driven decision-making across the organization.
Join Capco as a Learning & Development Analyst or Senior Analyst, where you will play a pivotal role in shaping the learning experience within our organization. Your primary focus will be on designing, implementing, and evaluating effective training programs that foster professional growth and enhance organizational performance.
Capco
Join Capco as a Data Modeler and leverage your expertise to transform data into actionable insights. In this role, you'll collaborate with top-tier clients in banking, finance, and energy sectors, shaping the future of financial services through innovative data models. You will be integral to developing strategies that enhance data usability and drive decision-making. Your analytical prowess will contribute to projects that redefine industry standards and foster digital transformation.
Role Overview Capco is hiring a Senior Data Privacy & Security Specialist in Bengaluru, India. This role focuses on developing and maintaining data protection strategies that align with regulatory requirements. The position involves working closely with teams across the organization to help secure sensitive information and support compliance efforts. What You Will Do Shape and refine data privacy and security practices to meet evolving regulations. Collaborate with colleagues from different departments to implement effective safeguards. Advise on compliance matters and help protect confidential data throughout the company. Who We’re Looking For Deep knowledge of data privacy and security principles. Experience working with regulatory frameworks and compliance standards. Strong communication skills and a collaborative approach. Capco values professionals who care about privacy and data protection, and who are ready to make a meaningful impact.
Capco
Role overview Capco seeks a Hadoop Administrator based in Bengaluru, India. This position is responsible for managing and enhancing the company’s big data infrastructure. The main focus is on keeping Hadoop clusters stable, available, and running efficiently. What you will do Deploy and configure Hadoop clusters to support business operations Monitor cluster health and track performance metrics Troubleshoot technical issues to reduce downtime Collaborate with teams to ensure the data ecosystem aligns with business requirements Impact This role supports Capco and its clients in using big data analytics to uncover insights and drive new solutions.

