1 Opening(s)
5.0 Year(s) To 12.0 Year(s)
0.00 LPA TO 0.00 LPA
Key Responsibilities:
- Design, construct, install, test, and maintain highly scalable and robust data management systems.
- Understand and apply data warehousing concepts to design and implement robust data warehouse tables in line with business requirements.
- Build complex ETL/ELT processes for large-scale data migration and transformation across platforms and Enterprise systems such as: Oracle ERP, ...
1 Opening(s)
9.0 Year(s) To 11.0 Year(s)
28.00 LPA TO 32.00 LPA
Notice period
Immediate to 15 days
Shift
1 PM to 9 PM
Mode of Work
Hybrid
Mandatory skill combination: Kafka, multiple Kafka connectors, streaming data processing.
In some cases, Databricks and PySpark knowledge is required.
JD:
Hands-on experience working on Kafka connect using schema registry in a very high-volume environment.
Complete understanding of Kafka config properties (acks, timeouts, buffering, ...
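To make the tuning properties named above concrete, here is a minimal sketch of a producer configuration for a high-volume environment. Property names follow kafka-python's `KafkaProducer` conventions (an assumption; the official Java client uses dotted names such as `buffer.memory`), and the broker address is hypothetical. No broker is contacted.

```python
# Sketch of Kafka producer tuning covering acks, timeouts, and buffering.
producer_config = {
    "bootstrap_servers": ["broker1:9092"],   # hypothetical broker address
    "acks": "all",                # wait for all in-sync replicas before acking
    "retries": 5,                 # retry transient send failures
    "request_timeout_ms": 30000,  # fail a request after 30 s with no response
    "linger_ms": 20,              # hold records up to 20 ms to batch them
    "batch_size": 64 * 1024,      # per-partition batch buffer (bytes)
    "buffer_memory": 64 * 1024 * 1024,  # total client-side send buffer (bytes)
}

print(producer_config["acks"], producer_config["batch_size"])
```

`acks="all"` trades latency for durability, while `linger_ms`/`batch_size` trade latency for throughput, which is usually the deciding factor at high volume.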
3 Opening(s)
7.0 Year(s) To 8.0 Year(s)
1.00 LPA TO 24.00 LPA
Please find the details below
Position : Data Scientist
Location : Bangalore
Experience requirement : 7-8 Years
Joining Date : Immediate to 15 days
Job Description
Develop, maintain, and optimize forecasting Machine Learning (ML) models for better accuracy. Collaborate with other data scientists and data engineers to ensure that specifications are translated into flexible, scalable, and maintainable solutions.
Research and stay ...
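As a toy illustration of the forecasting-and-accuracy loop this role describes, the sketch below uses a naive moving-average forecaster and mean absolute error. This is a stand-in only; the posting does not specify the models, and real work would use a proper library (e.g. statsmodels or Prophet) with backtesting.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

def mae(actuals, forecasts):
    """Mean absolute error: a common accuracy metric to optimize against."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

history = [100, 104, 101, 107, 110, 108]  # hypothetical demand series
print(moving_average_forecast(history, window=3))  # mean of last 3 points
```

Tuning `window` against the MAE on held-out data is the simplest version of "optimizing a forecasting model for better accuracy."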
2 Opening(s)
8.0 Year(s) To 10.0 Year(s)
20.00 LPA TO 21.00 LPA
Responsibilities
Roles And Responsibilities Of Cloudera & Spark Developer
Cloudera Data Platform delivers secure data management and portable cloud-native data analytics in an open, hybrid data platform. Whether you're powering business-critical AI applications or real-time analytics at scale, it enables your business to do anything with your data, anywhere, securely.
What is Cloudera ...
2 Opening(s)
5.0 Year(s) To 8.0 Year(s)
1.00 LPA TO 20.00 LPA
Please find the details below
Position : Sr. Data Engineer Developer
Location : Permanent Work From Home
Experience requirement : 5-8 Years
Joining Date : Immediate to 15 days
Job Description
Strong hands-on coding experience creating data pipelines in Python (5+ years)
Experience in creating data pipelines using PySpark in Databricks (2+ years)
Strong hands-on coding experience in SQL (2+ years)
Experience with managing and ...
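The extract-transform-load shape of the pipelines listed above can be sketched in plain Python. This is a hedged, stdlib-only stand-in: the posting's actual pipelines run in PySpark on Databricks, and the sample data and function names here are invented for illustration.

```python
import csv
import io

# Hypothetical raw input, standing in for a real source system.
RAW = """id,amount
1,10.5
2,abc
3,7.0
"""

def extract(text):
    """Read CSV rows from a string (stand-in for a source extract)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Keep rows with a parseable amount and cast types (basic cleaning)."""
    out = []
    for r in rows:
        try:
            out.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except ValueError:
            continue  # drop malformed records
    return out

def load(rows):
    """Aggregate instead of writing to a warehouse, to keep this runnable."""
    return sum(r["amount"] for r in rows)

print(load(transform(extract(RAW))))  # 17.5
```

In PySpark the same steps would become `spark.read.csv`, a `filter`/`withColumn` chain, and a write to a Delta table, but the pipeline structure is identical.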
4 Opening(s)
3.5 Year(s) To 5.5 Year(s)
7.00 LPA TO 11.00 LPA
Project Role : Application Developer
Project Role Description : Design, build and configure applications to meet business process and application requirements.
Must Have Skills : Microsoft Azure Analytics Services
Good to Have Skills : No Technology Specialization
Job Requirements :
Key Responsibilities :
A: Function as the Junior Data Architect ...
4 Opening(s)
5.5 Year(s) To 7.5 Year(s)
11.00 LPA TO 15.00 LPA
Project Role : Application Developer
Project Role Description : Design, build and configure applications to meet business process and application requirements. ...
1 Opening(s)
13.0 Year(s) To 17.0 Year(s)
80.00 LPA TO 90.00 LPA
Requirements: 12 to 15 years of experience in developing platforms in the Data Engineering space. Minimum of 4-5 years of experience managing Data Engineering teams. Hands-on experience with SQL and ETL pipelines; able to diagnose problems across the entire technical stack. Design and develop a real-time events pipeline for data ingestion ...
1 Opening(s)
5.0 Year(s) To 12.0 Year(s)
Not Disclosed by Recruiter
Experience Range: 5 to 12 yrs (min 4 years relevant)
Responsibilities:
Enable model tracking, model experimentation, and model automation
Develop ML pipelines to support
Develop MLOps components in the machine learning development life cycle using:
Model Repository (either of): MLFlow, Kubeflow Model Registry
Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS.
Work ...
1 Opening(s)
6.0 Year(s) To 10.0 Year(s)
Not Disclosed by Recruiter
Data Analyst having -
7+ years of experience
Unstructured problem-solving mindset.
Willing to work in US shift (2 PM – 11 PM).
SQL (strong)
Banking knowledge & SAS (preferred)
Credit Risk knowledge
Python (basics)
PySpark (optional)
People who have worked on Risk data will have an advantage.
Writing Proc SQL queries to fetch required data from database tables (Teradata, MySQL, Snowflake) as per requirements. ...
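The kind of query described above can be illustrated with Python's stdlib `sqlite3` in place of SAS Proc SQL against Teradata/MySQL/Snowflake. The table, columns, and values below are invented purely for illustration of a typical risk-data pull.

```python
import sqlite3

# In-memory stand-in for a warehouse table of risk data (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (cust_id INTEGER, balance REAL, risk_band TEXT)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(1, 1200.0, "high"), (2, 300.0, "low"), (3, 950.0, "high")],
)

# Typical pull: aggregate balances by risk band, as per requirements.
rows = conn.execute(
    "SELECT risk_band, COUNT(*) AS n, SUM(balance) AS total "
    "FROM loans GROUP BY risk_band ORDER BY risk_band"
).fetchall()
print(rows)  # [('high', 2, 2150.0), ('low', 1, 300.0)]
```

In SAS the same query would sit inside a `PROC SQL; ... QUIT;` block with a pass-through connection to the source database, but the SQL itself is largely unchanged.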