Principal Data Engineer / Architect

Job Description

What You'll Do:
As a pivotal member of the team, you will lead the design and development of a robust data architecture that sets the standards for data modeling, integration, processing, and delivery, enabling modern data product development at Scribd.

You will also serve as a data and analytics solution architect, leading architecture initiatives encompassing data warehousing, data pipeline development, data integrations, and data modeling. You will shape Scribd’s data strategy, guiding stakeholders in how they consume and act on data.

We’re looking for someone with proven experience architecting, designing, and developing batch and real-time streaming infrastructure and workloads. Your expertise will help establish standards for data modeling, integration, processing, and delivery, and help translate business requirements into technical specifications.

At Scribd, we leverage deep data insights to inform every aspect of our business, from product development and experimentation to understanding subscriber engagement and tracking key performance indicators. You'll join a data engineering team tackling complex challenges within a rich domain spanning three distinct brands – Scribd, Everand, and Slideshare – all serving a massive user base with over 200 million monthly visitors and 2 million paying subscribers. You'll have the opportunity to make a real impact: we are investing heavily in improving our core data layer, and this new role puts you at the forefront of that initiative.

Depending on the project, this might involve cross-functional work with the Data Science, Analytics, and other Engineering and Business teams to design cohesive data models, database schemas, data storage solutions, and consumption strategies and patterns. Almost everything you work on will aim to increase satisfaction for the internal customers of Scribd data.

Required Skills:
• 7+ years of experience in data engineering, with a strong background in data architecture, data modeling, and data management, building and scaling robust data systems for complex business domains.
• Expertise in Scala or Python, with deep, hands-on Spark experience designing, optimizing, and scaling large-scale data processing pipelines, and proficiency in at least one SQL dialect.
• Experience with data lake technologies (e.g., Databricks, Delta Lake), data storage formats (Parquet, Avro), query engines (such as Photon, Spark SQL), and both real-time streaming and batch processing, or equivalent technologies and frameworks (a minimal batch sketch follows this list).
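To make the day-to-day concrete, here is a minimal sketch of the kind of batch pipeline this role designs and scales, assuming a PySpark environment with Delta Lake available. All paths, table names, and columns are hypothetical, not Scribd's actual schema.

```python
# Minimal batch-pipeline sketch: read raw Parquet events, aggregate,
# and write a partitioned Delta table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-engagement-rollup").getOrCreate()

# Read raw events stored as Parquet (hypothetical location).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Roll up daily engagement per subscriber.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "subscriber_id")
    .agg(F.count("*").alias("event_count"))
)

# Persist as a Delta table partitioned by date for downstream consumers.
(
    daily.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-bucket/curated/daily_engagement/")
)
```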

Desired Skills:
• Experience with and working knowledge of streaming platforms, typically built around Kafka (see the streaming sketch after this list).
• Strong grasp of AWS data platform services and their strengths/weaknesses.
• Hands-on experience implementing data pipelines for data ingestion and transformation to support analytics and ML pipelines.
• Strong experience communicating asynchronously using collaboration tools like Jira, Slack, etc.
• Experience using automation and CI/CD tooling like Git, GitHub, Docker, Jenkins, Terraform, etc.
• Experience developing standards for database design, and implementing strategic data architecture initiatives around data quality, data management policies/standards, data governance, privacy, and metadata management.
• Working experience integrating with BI frameworks like Qlik, ThoughtSpot, Looker, Tableau, etc.
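On the streaming side, a minimal sketch of Kafka ingestion via Spark Structured Streaming might look like the following, assuming the Kafka connector package is on the classpath. Broker addresses, the topic name, the event schema, and all paths are hypothetical.

```python
# Minimal streaming-ingestion sketch: consume JSON events from Kafka and
# append them to a Delta table. Brokers, topic, schema, and paths are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-stream-ingest").getOrCreate()

event_schema = StructType([
    StructField("subscriber_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "user-events")                 # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; parse the JSON payload into typed columns.
parsed = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", event_schema).alias("e"))
    .select("e.*")
)

# Append to a Delta table, with checkpointing for fault tolerance.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .outputMode("append")
    .start("s3://example-bucket/bronze/events/")
)
query.awaitTermination()
```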