Weekday Ai

Senior Data Engineer

Salary: $75,000 - $120,000

Job Description

This role is for one of Weekday's clients.

Minimum Experience: 5 years

Location: Remote (India)

Job Type: Full-time

We are looking for a hands-on Data Engineer with strong expertise in PySpark, Databricks, SQL, and Microsoft Azure to design, build, and optimize scalable data platforms. In this role, you will be responsible for developing robust ETL/ELT pipelines that support both batch and real-time data processing, enabling reliable analytics and business intelligence across the organization. You will work closely with data architects, analysts, and business stakeholders to understand evolving data needs and translate them into efficient, cloud-native solutions. The ideal candidate combines deep technical capability in distributed data processing with a strong understanding of performance optimization, governance, and modern data architecture principles. This role requires ownership, problem-solving ability, and a commitment to building scalable, secure, and high-quality data systems within an Azure ecosystem.

Requirements

Key Responsibilities

  • Design, develop, and optimize scalable ETL/ELT pipelines using PySpark and Databricks
  • Build and maintain data ingestion frameworks for batch and real-time streaming workflows
  • Write high-performance SQL queries for data transformation and optimization
  • Implement and manage data solutions on Azure Cloud (Azure Data Lake, Azure Data Factory, Azure Synapse, etc.)
  • Develop and maintain Delta Lake architectures within Databricks
  • Collaborate with cross-functional teams to gather and translate data requirements
  • Ensure adherence to data quality, governance, and security standards
  • Monitor pipeline performance and troubleshoot data processing issues
  • Implement CI/CD practices and version control for data engineering workflows
  • Participate in architecture discussions and recommend scalable design improvements
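Purely as an illustration of the kind of batch ETL/ELT transformation work the responsibilities above describe (not part of the posting), here is a minimal sketch using Python's standard-library sqlite3 as a stand-in for a warehouse engine such as Databricks SQL or Azure Synapse; the table and column names are hypothetical:

```python
import sqlite3

# Illustrative only: a tiny batch "extract-load-transform" step.
# sqlite3 stands in for the real engine; the raw_events table is a
# made-up example of a landed "bronze" dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INTEGER, amount REAL, status TEXT);
    INSERT INTO raw_events VALUES
        (1, 10.0, 'ok'), (1, 5.0, 'ok'), (2, 7.5, 'error'), (2, 2.5, 'ok');
""")

# Transform: filter out bad rows and aggregate per user -- the shape of a
# typical cleaned-to-curated aggregation in a layered lake architecture.
rows = conn.execute("""
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    WHERE status = 'ok'
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 15.0), (2, 2.5)]
```

In the real stack described here, the same filter-and-aggregate pattern would be expressed in PySpark over Delta Lake tables, with the engine distributing the work across a Databricks cluster.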

What Makes You a Great Fit

  • Strong hands-on experience with PySpark and distributed data processing frameworks
  • Advanced SQL development skills with performance tuning expertise
  • Practical experience working with Databricks (notebooks, workflows, jobs, Delta Lake)
  • Solid understanding of Azure cloud services for data engineering
  • Experience designing and managing scalable data lakes and modern data pipelines
  • Strong knowledge of ETL concepts, data modeling, and transformation strategies
  • Familiarity with Git, CI/CD processes, and Agile methodologies
  • Strong analytical and troubleshooting skills
  • Effective collaboration and communication abilities