Lead I - Software Engineering - Spark & Databricks Developer

Openings: 2
Job Code: 118269
Employment Type: Full Time
Experience: 5.0 to 8.0 Year(s)
Salary: 18.00 LPA to 20.00 LPA
Industry: IT Software - Client Server / IT-Hardware/Networking

Experience – 5 to 8 Years
Salary – 18 to 20 LPA
Notice Period – 0 to 30 Days
Location – Noida, India
Job Description:
We are seeking skilled and motivated Spark & Databricks Developers to join our dynamic
team for a long-term project. The ideal candidate will have strong hands-on experience
in Apache Spark, Databricks, and GitHub-based development workflows.
Key Responsibilities:
  • Design, develop, and optimize big data pipelines using Apache Spark.
  • Build and maintain scalable data solutions on Databricks.
  • Collaborate with cross-functional teams for data integration and transformation.
  • Manage version control and code collaboration using GitHub.
  • Ensure data quality, performance tuning, and job optimization.
  • Participate in code reviews, testing, and documentation activities.
Must-Have Skills:
  • 5–8 years of experience in Data Engineering or related roles
  • Strong hands-on expertise in Apache Spark (Batch & Streaming)
  • Proficiency in Databricks for developing and managing data workflows
  • Experience with GitHub (version control, pull requests, branching strategies)
  • Good understanding of Data Lake and Data Warehouse architectures
  • Strong SQL scripting skills and in-depth knowledge of Python programming
Good-to-Have Skills:
  • Experience with Azure Data Lake, AWS S3, or GCP BigQuery
  • Familiarity with Delta Lake and Databricks SQL
  • Exposure to CI/CD pipelines and DevOps practices
  • Experience with ETL tools or data modeling
  • Understanding of data governance, security, and performance tuning best practices

Key Skills: Apache Spark, Databricks, Python, SQL, GitHub
Company Profile

A B2B online platform where curated talent-sourcing firms fulfill open requisitions posted by enterprises.

Apply Now

  • Interested candidates are requested to apply for this job.
  • Recruiters will evaluate your candidature and get in touch with you.
