Data Engineer

  • Weekday

Job Description

This role is for one of Weekday's clients.

We are seeking a Data Engineer to design, develop, and optimize scalable data pipelines, ensuring seamless data flow across our systems. You will collaborate with data scientists, analysts, and engineers to build robust data solutions that drive decision-making and business growth.

Key Responsibilities

  • ETL Development: Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines for efficient data processing.
  • Data Architecture: Optimize data architectures for reliability, scalability, and performance.
  • Collaboration: Work closely with data scientists, analysts, and software engineers to integrate data solutions into products and services.
  • Data Quality & Monitoring: Implement monitoring and validation processes to ensure data integrity and quality.
  • Cloud & Databases: Work with cloud-based data platforms (AWS, GCP, Azure) and database technologies (SQL, NoSQL, data warehouses).
  • Documentation: Develop and maintain comprehensive documentation for data workflows and processes.
  • Best Practices: Contribute to the continuous improvement of data engineering best practices.

Requirements

Required Qualifications

  • Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Experience: 3+ years as a Data Engineer or in a similar role.
  • Programming: Proficiency in SQL, Python, or Scala for data processing.
  • Data Pipeline Tools: Hands-on experience with Apache Airflow, Kafka, or dbt.
  • Cloud Expertise: Knowledge of cloud-based data warehousing solutions such as Amazon Redshift, Google BigQuery, or Azure Synapse.
  • Data Modeling: Strong understanding of data modeling, warehousing, and database optimization.
  • Project Management: Ability to work independently and manage multiple projects in a fast-paced startup environment.

Preferred Qualifications

  • Experience working in a startup or consulting environment.
  • Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
  • Exposure to machine learning pipelines and data science workflows.
  • Understanding of data privacy and security best practices.