
Lead Data Engineer (Lead Software Engineer)

Posted: April 9, 2026

We are seeking a highly skilled Lead Data Engineer to drive the design, development, and optimization of large-scale data platforms. This role involves leading complex, enterprise-wide initiatives, building modern data pipelines, and shaping best practices for data engineering across the organization.

Key Skills

Data Engineering, Python, SQL, Apache Spark, Hadoop, Airflow, AWS, Azure, GCP, REST APIs, CI/CD, Docker, Kubernetes, Data Lakehouse, ETL/ELT, Agile

Key Responsibilities

Technical Leadership & Architecture

  • Lead large-scale, high-impact technology initiatives across teams
  • Define and implement best practices for data engineering and platform architecture
  • Review and evaluate complex system designs aligned with business and enterprise goals
  • Mentor team members and provide technical leadership

Data Engineering & Pipeline Development

  • Design, build, and maintain scalable data pipelines (ETL/ELT) for structured and unstructured data
  • Develop metadata-driven ingestion frameworks, validation layers, and reusable components (a minimal sketch follows this list)
  • Ensure high performance, reliability, and scalability of data systems
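
To make the metadata-driven ingestion idea concrete, here is a minimal PySpark sketch in which a config list drives batch loads; the source names, paths, formats, and target tables are hypothetical placeholders, not part of the posting.

    # Minimal metadata-driven ingestion sketch (PySpark).
    # All source names, paths, formats, and targets are hypothetical examples.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

    # Ingestion metadata: in practice this would live in a config store or table.
    SOURCES = [
        {"name": "orders",    "format": "json",    "path": "s3://raw/orders/",    "target": "bronze.orders"},
        {"name": "customers", "format": "parquet", "path": "s3://raw/customers/", "target": "bronze.customers"},
    ]

    for src in SOURCES:
        df = spark.read.format(src["format"]).load(src["path"])
        # Reusable validation layer: reject empty loads before writing.
        if df.limit(1).count() == 0:
            raise ValueError(f"No rows read for source {src['name']}")
        df.write.mode("append").saveAsTable(src["target"])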

Distributed Computing & Lakehouse Engineering

  • Build and optimize Apache Spark pipelines for batch and streaming workloads
  • Work with modern data lakehouse technologies (Iceberg, Delta Lake, Hudi)
  • Implement Medallion architecture (Bronze/Silver/Gold layers)
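
As a sketch of the Medallion layering named above, the following PySpark snippet promotes raw Bronze records to a cleaned Silver table; it assumes Delta Lake is configured on the Spark session, and all paths and column names are hypothetical.

    # Bronze -> Silver promotion sketch (Medallion architecture).
    # Assumes Delta Lake is available on the session; names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

    # Bronze: raw events landed as-is.
    bronze = spark.read.format("delta").load("s3://lake/bronze/events/")

    # Silver: deduplicated, typed, and filtered records.
    silver = (
        bronze
        .dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .filter(F.col("event_id").isNotNull())
    )
    silver.write.format("delta").mode("overwrite").save("s3://lake/silver/events/")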

Data Quality & Observability

  • Implement data quality frameworks (e.g., Great Expectations, Deequ), as sketched after this list
  • Build monitoring systems with SLAs/SLOs, anomaly detection, and lineage tracking
  • Ensure robust validation during migrations and onboarding processes
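
Rather than assume the exact API of Great Expectations or Deequ, here is a hand-rolled PySpark null-rate check that illustrates the kind of validation those frameworks formalize declaratively; the column name and threshold are hypothetical.

    # Hand-rolled data-quality check sketch (frameworks like Great
    # Expectations or Deequ provide richer, declarative versions of this).
    from pyspark.sql import DataFrame, functions as F

    def check_null_rate(df: DataFrame, column: str, max_rate: float = 0.01) -> None:
        """Fail the pipeline if the null rate of `column` exceeds `max_rate`."""
        total = df.count()
        nulls = df.filter(F.col(column).isNull()).count()
        rate = nulls / total if total else 0.0
        if rate > max_rate:
            raise ValueError(f"{column}: null rate {rate:.2%} exceeds {max_rate:.2%}")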

API & Microservices Development

  • Develop RESTful APIs using Python frameworks (FastAPI, Flask), as sketched after this list
  • Enable secure and governed data access across platforms
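
A minimal FastAPI sketch of governed data access over REST; the header name, API key, and in-memory dataset are hypothetical stand-ins for a real secrets store and data catalog.

    # Minimal FastAPI data-access sketch; the key check and the
    # in-memory "dataset" are hypothetical placeholders.
    from fastapi import FastAPI, Header, HTTPException

    app = FastAPI()
    DATASETS = {"orders": [{"id": 1, "amount": 42.0}]}  # toy data store

    @app.get("/datasets/{name}")
    def read_dataset(name: str, x_api_key: str = Header(...)):
        # FastAPI maps x_api_key to the "x-api-key" request header.
        if x_api_key != "demo-key":   # real services would use a secrets store
            raise HTTPException(status_code=401, detail="invalid API key")
        if name not in DATASETS:
            raise HTTPException(status_code=404, detail="unknown dataset")
        return DATASETS[name]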

Cloud & DevOps

  • Design and deploy pipelines on cloud platforms (AWS, Azure, GCP)
  • Build CI/CD pipelines using tools like Jenkins, GitHub Actions, or Azure DevOps
  • Implement infrastructure as code (Terraform, Helm) and secure engineering practices

Orchestration & Workflow Management

  • Build and manage workflows using tools like Airflow or Autosys
  • Design resilient pipelines with retries, alerts, and dependency handling
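
A minimal Airflow DAG sketch showing the retries, failure alerting, and dependency handling described above; it assumes Airflow 2.4+ (for the schedule parameter) with e-mail alerting configured, and the schedule, address, and task bodies are placeholders.

    # Minimal Airflow DAG sketch with retries, alerting, and a dependency.
    # Schedule, e-mail address, and task bodies are hypothetical placeholders.
    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    default_args = {
        "retries": 3,                         # retry transient failures
        "retry_delay": timedelta(minutes=5),
        "email_on_failure": True,             # alert once retries are exhausted
        "email": ["data-alerts@example.com"],
    }

    with DAG(
        dag_id="daily_ingest",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=lambda: print("extract"))
        load = PythonOperator(task_id="load", python_callable=lambda: print("load"))
        extract >> load                       # dependency handling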

Collaboration & Delivery

  • Work with cross-functional Agile teams including Product, Architecture, and Business stakeholders
  • Analyze requirements, propose solutions, and contribute to technical roadmaps
  • Independently deliver complex engineering solutions

Required Qualifications

  • Bachelor’s degree in Engineering, Computer Science, or related field
  • 5+ years of experience in software/data engineering or equivalent practical experience

Technical Skills & Experience

Core Skills

  • Strong hands-on experience with Python, SQL, and Bash scripting
  • Experience with big data technologies: Apache Spark, Hadoop, Hive
  • Expertise in building scalable data pipelines and distributed systems

Data Platforms & Storage

  • Experience with data lakehouse architectures and storage formats (Parquet, ORC)
  • Knowledge of optimization techniques (partitioning, clustering, compaction, Z-ordering)
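
As one concrete instance of these techniques, the following PySpark sketch writes Parquet partitioned by a date column so that date-filtered queries prune unneeded files; paths and column names are hypothetical. Compaction and Z-ordering are engine-specific (for example, Delta Lake exposes OPTIMIZE ... ZORDER BY).

    # Partitioned Parquet write sketch: queries filtering on `dt`
    # read only matching partitions (partition pruning). Names hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitioning_demo").getOrCreate()
    events = spark.read.parquet("s3://lake/silver/events/")

    (events.write
        .partitionBy("dt")        # one directory per date value
        .mode("overwrite")
        .parquet("s3://lake/gold/events_by_day/"))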

Streaming & Advanced Processing

  • Experience with streaming frameworks such as Spark Structured Streaming or Apache Flink
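
A minimal Spark Structured Streaming sketch, reading from Kafka and writing to a console sink; it assumes the Kafka connector package is available on the cluster, and the broker address and topic name are hypothetical.

    # Minimal Spark Structured Streaming sketch (Kafka source -> console sink).
    # Broker address and topic name are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream_demo").getOrCreate()

    stream = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load())

    # Kafka values arrive as bytes; cast to string for downstream parsing.
    parsed = stream.selectExpr("CAST(value AS STRING) AS payload")

    query = parsed.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()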

APIs & Integration

  • Working knowledge of REST APIs, object storage, and data access layers

Cloud & DevOps

  • Hands-on experience with AWS, Azure, or GCP
  • Familiarity with CI/CD tools, containerization (Docker, Kubernetes), and automation

Data Governance & Quality

  • Experience with governance tools (Collibra, Alation, Purview)
  • Understanding of compliance standards (SOX, PCI) and data validation practices

Additional (Good to Have)

  • Experience with GenAI applications in data engineering (metadata extraction, anomaly detection, automation)
  • Domain exposure to financial services, treasury, or risk management

Key Competencies

  • Strong problem-solving and analytical thinking
  • Leadership and mentoring capabilities
  • Excellent communication and stakeholder management skills
  • Ability to work in fast-paced Agile environments

Education

  • UG: B.Tech / B.E. or equivalent in any specialization
  • PG: Any Postgraduate (preferred)

Technology: AWS, Azure, Data Engineering, ETL/ELT, GCP, Hadoop, Python, REST APIs
Job Type: Full Time
Job Location: Ahmedabad, Bangalore, Gurgaon, Mumbai, Pune
Work Mode: Onsite
Experience: 5 to 8 Years
Work Shift: India

