2 Opening(s)
2.0 Year(s) To 6.0 Year(s)
15.00 LPA TO 40.00 LPA
Job Summary
Hands-on experience writing Spark or MapReduce jobs and a proficient understanding of distributed computing principles. Proficient in more than one of Python, Java, Scala and shell scripting.
Job Responsibilities:
Understand & provide innovative solutions to business and product requirements using Big Data Architecture
Take ownership of end-to-end data-pipeline including system design and integrating ...
4 Opening(s)
5.0 Year(s) To 10.0 Year(s)
1.00 LPA TO 17.00 LPA
Please find the details below:
Position: Data Engineer
Location: Bangalore (hybrid; in office Tue and Thu)
Experience requirement: 5+ years
Joining date: immediate to 30 days
Job Description
5+ years of experience delivering solutions using the Microsoft BI stack (SSIS, SQL)
5+ years of experience working in a data warehouse environment, with a strong understanding of dimensional data modelling concepts
Additional ...
2 Opening(s)
4.0 Year(s) To 5.0 Year(s)
Not Disclosed by Recruiter
Job Location: Remote
Company Profile:
Blue Rose Technologies (BRT) is a premier global consulting, outsourcing and IT solutions company with over 13 years of experience, helping more than 50 clients succeed in a converging world. With operations in more than 15 countries, we go the extra mile for our clients and accelerate their ...
1 Opening(s)
7.0 Year(s) To 12.0 Year(s)
Not Disclosed by Recruiter
Languages and libraries: Python, SQL, Spark, Pandas
Orchestration: Airflow, Azure Data Factory or equivalent
Cloud: Azure (preferred), AWS or GCP
Databases: PostgreSQL, MS SQL Server, NoSQL (MongoDB, etc.)
Storage and warehousing: S3, Delta Lake, Snowflake, Redshift, BigQuery
Monitoring: Prometheus, Grafana, ELK, etc.
Cloud Technologies: Familiarity with cloud platforms such as AWS, Azure, or GCP.
Programming Languages: Proficiency in Python
Problem-Solving: Strong analytical and problem-solving skills.
Communication: Excellent communication and collaboration skills.
1 Opening(s)
5.0 Year(s) To 15.0 Year(s)
20.00 LPA TO 25.00 LPA
Responsibilities:
• Responsible for the architecture planning, design, and core code implementation of the company's monitoring system platform, including product technology pre-research, technology architecture selection, core code writing, and design document writing
• Participate in the R&D of the company's big data computing platform; responsible for data quality and stability management
• Responsible ...
50 Opening(s)
3.0 Year(s) To 10.0 Year(s)
18.00 LPA TO 25.00 LPA
Experience: 3+ years | Shift: 2–10 pm | Package: based on experience and current CTC
Visits to the client office may be required.
Requirement 1: GCP Data Engineer (with ETL)
Experience : 4+ Years
Tech stack: BigQuery , any ETL tool (Informatica, Talend, DataStage), Dataflow, Dataproc
• 3–5 years of experience in Data ...
5 Opening(s)
3.0 Year(s) To 7.0 Year(s)
Not Disclosed by Recruiter
Data Engineer
Job Description:
- Experience in analysis, design, development, testing, customisation, bug fixes, enhancement, support and implementation using Python and Spark programming.
- Worked on AWS services such as Lambda, serverless applications, EMR, Athena, Glue, IAM policies and roles, S3, CFT and EC2.
- Developed Python and PySpark programs for data analysis. ...
2 Opening(s)
4.0 Year(s) To 10.0 Year(s)
0.00 LPA TO 25.00 LPA
Must-have:
• Strong hands-on experience with Azure Data Lake, Azure Data Factory, Azure Databricks (PySpark) and Azure Synapse
• Knowledge of source systems such as Salesforce, Oracle ERP and SAP Ariba, and experience working on the integrations with ...
3 Opening(s)
4.0 Year(s) To 8.0 Year(s)
13.00 LPA TO 15.00 LPA
Scala, Spark, Apache Sqoop, Cloudera, Unix Shell scripting, CAAS.
Experience: 4-6 years
At least four years of data engineering experience working with sophisticated data pipelines using Spark, Scala, Apache Sqoop, Cloudera, Unix shell scripting and CAAS.
1 Opening(s)
2.0 Year(s) To 3.0 Year(s)
8.00 LPA TO 12.00 LPA
Analyse and organize raw data
Build data systems and pipelines
Combine raw information from different sources
Explore ways to enhance data quality and reliability
Identify opportunities for data acquisition
Develop, implement and optimize stored procedures and functions using T-SQL
Analyse existing SQL queries for performance improvements
Develop procedures and scripts for data migration
Provide timely scheduled management reporting
Research ...