68 Job openings found

2 Opening(s)
5.0 Year(s) To 8.0 Year(s)
1.00 LPA TO 20.00 LPA
Please find the details below

Position: Sr. Data Engineer
Location: Permanent work from home
Experience requirement: 5-8 years
Joining date: Immediate to 15 days

Job Description:
- Strong hands-on coding experience creating data pipelines in Python (5+ years)
- Experience creating data pipelines using PySpark in Databricks (2+ years)
- Strong hands-on coding experience in SQL (2+ years)
- Experience with managing and ...
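For context on the day-to-day work this posting describes, here is a minimal sketch of a PySpark pipeline as it might run on Databricks. The paths, table and column names are hypothetical placeholders, not taken from the posting.

```python
# Minimal sketch of a PySpark data pipeline of the kind this role describes.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw data from a (hypothetical) landing zone
raw = spark.read.parquet("/mnt/landing/orders/")

# Basic cleaning and a derived date column
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to a daily summary and write it out
# (Delta format is the usual choice on Databricks, where it is available by default)
daily = cleaned.groupBy("order_date").agg(
    F.count("order_id").alias("order_count"),
    F.sum("order_amount").alias("total_amount"),
)
daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_orders/")
```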
2 Opening(s)
2.0 Year(s) To 6.0 Year(s)
15.00 LPA TO 40.00 LPA
Job Summary:
Hands-on experience writing Spark or MapReduce jobs and a proficient understanding of distributed computing principles. Proficient in more than one of Python, Java, Scala & shell scripting.

Job Responsibilities:
- Understand & provide innovative solutions to business and product requirements using Big Data architecture
- Take ownership of the end-to-end data pipeline, including system design and integrating ...
4 Opening(s)
5.0 Year(s) To 10.0 Year(s)
1.00 LPA TO 17.00 LPA
Please find the details below

Position: Data Engineer
Location: Bangalore (hybrid; in office Tue and Thu)
Experience requirement: 5+ years
Joining date: Immediate to 30 days

Job Description:
- 5+ years of experience delivering solutions using the Microsoft BI stack (SSIS, SQL)
- 5+ years of experience working in a data warehouse environment and a strong understanding of dimensional data modelling concepts
- Additional ...
5 Opening(s)
7.0 Year(s) To 10.0 Year(s)
Not Disclosed by Recruiter
SKILL TITLE: Azure Databricks
KEY SKILLS (MANDATORY): Azure Databricks (minimum relevant experience 5+ yrs)
JOB DESCRIPTION (DETAILED): Should have 10+ years of experience, with a minimum of 8+ years in data engineering and analytics on Microsoft cloud technologies. Expertise in implementing data warehousing solutions. Experience as a Data Engineer in Azure Databricks, including design and solutioning. Strong programming experience ...
3 Opening(s)
4.0 Year(s) To 15.0 Year(s)
Not Disclosed by Recruiter
- Experience in data engineering and Azure data services.
- Proficiency in coding and building ETL pipelines in Synapse, Azure Data Factory, and Spark.
- Experience with APIM/API, Dataverse, Logic Apps, Function Apps, SQL, Python, and C#.
- Proficiency in writing notebooks in Python and/or PySpark.
- Proficiency in using Azure DevOps or similar tools for managing work backlogs.
- Experience ...
4 Opening(s)
5.5 Year(s) To 7.5 Year(s)
11.00 LPA TO 15.00 LPA
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements. ...
2 Opening(s)
4.0 Year(s) To 10.0 Year(s)
0.00 LPA TO 25.00 LPA
Must Have
• We are looking for a resource with very good experience in Azure Data Lake, Azure Data Factory, Azure Databricks (PySpark) and Azure Synapse
• The resource should have knowledge of source systems such as Salesforce, Oracle ERP, SAP Ariba, etc., and should have worked on the integrations with ...
5 Opening(s)
3.0 Year(s) To 7.0 Year(s)
Not Disclosed by Recruiter
Data Engineer

Job Description:
- Experience in analysis, design, development, testing, customisation, bug fixes, enhancement, support and implementation using Python and Spark programming.
- Worked in AWS environments with Lambda, serverless applications, EMR, Athena, Glue, IAM policies and roles, S3, CFT and EC2.
- Developed Python and PySpark programs for data analysis. ...
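As an illustration of the serverless pattern this posting touches on (S3, Lambda and Glue together), here is a small sketch of a Lambda handler that kicks off a Glue job when a file lands in S3. The bucket, job and argument names are hypothetical, not from the posting.

```python
# Illustrative sketch only: a Lambda handler that starts a (hypothetical) Glue job
# when a new object lands in S3. Job name, bucket and argument names are placeholders.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # S3 put-event structure: pull out the bucket and object key that triggered us
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Hand the new file to a Glue ETL job via job arguments
    response = glue.start_job_run(
        JobName="curate-raw-orders",  # hypothetical Glue job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_job_run_id": response["JobRunId"]}
```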
7 Opening(s)
12.0 Year(s) To 22.0 Year(s)
40.00 LPA TO 60.00 LPA
Experience: 12-22 years
Job Location: Any Tech Mahindra location, India
No. of Positions: 7 to 8
Notice Period: 0-60 days only
Budget: up to 60 LPA

Job Description:
- 12 to 22 years of experience in data analytics spread across on-premise & cloud implementations
- Experience in one of the cloud platforms – Google Cloud, Azure, AWS or Snowflake ...
1 Opening(s)
2.0 Year(s) To 3.0 Year(s)
8.00 LPA TO 12.00 LPA
- Analyse and organize raw data
- Build data systems and pipelines
- Combine raw information from different sources
- Explore ways to enhance data quality and reliability
- Identify opportunities for data acquisition
- Develop, implement and optimize stored procedures and functions using T-SQL
- Analyse existing SQL queries for performance improvements
- Develop procedures and scripts for data migration
- Provide timely scheduled management reporting
- Research ...
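For the T-SQL and data-migration duties listed above, a typical pattern is a small Python script that invokes a stored procedure over ODBC. The sketch below assumes a hypothetical server, database, procedure and parameter name; none of these come from the posting.

```python
# Sketch of calling a (hypothetical) T-SQL stored procedure from Python via pyodbc,
# the kind of scheduled migration/reporting script this role describes.
# Server, database, procedure and parameter names are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reporting-db.example.com;"
    "DATABASE=SalesDW;"
    "Trusted_Connection=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Execute a parameterised stored procedure that loads one day of data
    cursor.execute("EXEC dbo.usp_LoadDailySales @LoadDate = ?", "2024-01-31")
    conn.commit()
```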
