PointClickCare

Principal Software Data Engineer

Job Description

PointClickCare is searching for a Principal Software Data Engineer who will contribute to all phases of the software development life cycle and play a crucial role in designing, developing, and maintaining a large-scale data platform and data pipelines built on a microservices architecture, while championing day-to-day technical excellence across an empowered team.
This is a hands-on leadership role, requiring the ability to enhance and implement batch and real-time data solutions already in progress, mentor other team members, and deliver on both business and technical objectives amid ambiguity and uncertainty.
This is an opportunity to shape the future of our data ecosystem. You’ll work with a passionate team and modern technologies to drive innovation that impacts the entire organization. The ideal candidate thrives as an individual contributor, while making a significant technical impact and elevating the team’s capabilities.

To succeed as a Principal Software Data Engineer at PointClickCare, you need to be collaborative, adventurous, and passionate. Collaborative means that you’re enthusiastic about jumping in to help achieve the team’s top priorities; no self-promoting politicians allowed. Adventurous means that you’re not afraid to dive into uncharted technical territory and get your hands dirty while supporting and driving delivery of complex features through a dedicated Scrum team. Passionate means that you’re eager to learn and share knowledge that drives the team forward, and excited to be part of a movement that is positively impacting the lives of seniors and their caregivers all over North America.

What your day-to-day will look like:
- Lead and guide the design and implementation of scalable distributed systems based on Java microservices
- Engineer and optimize data pipelines using solutions like Apache Hudi, Apache Trino, and Azure ADLS
- Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
- Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
- Drive adoption of best practices in data governance, observability, and performance tuning for data workloads
- Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment (a minimal illustration follows this list)
- Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response (see the second sketch below)
- Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
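
To make the data quality bullet concrete, here is a minimal sketch of the kind of batch-level gate such pipelines might run before deployment, written in Java to match the stack named above. The class name, the record_id field, and the freshness and volume thresholds are hypothetical illustrations, not PointClickCare’s actual checks:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** A minimal pre-deployment data quality gate; all names and thresholds are illustrative. */
public class BatchQualityGate {

    // Hypothetical SLA values; in practice these would come from pipeline configuration.
    private static final Duration MAX_STALENESS = Duration.ofHours(1);
    private static final long MIN_ROW_COUNT = 1_000;

    /** Returns the failed assertions for one batch; an empty list means the batch may ship. */
    public static List<String> check(List<Map<String, Object>> rows, Instant latestEventTime) {
        List<String> failures = new ArrayList<>();

        // Freshness: the newest event must fall within the allowed staleness window.
        if (Duration.between(latestEventTime, Instant.now()).compareTo(MAX_STALENESS) > 0) {
            failures.add("freshness: latest event older than " + MAX_STALENESS);
        }

        // Completeness: the batch must meet a minimum expected volume.
        if (rows.size() < MIN_ROW_COUNT) {
            failures.add("completeness: " + rows.size() + " rows, expected >= " + MIN_ROW_COUNT);
        }

        // Accuracy: a required field must be present and non-null on every row.
        for (int i = 0; i < rows.size(); i++) {
            if (rows.get(i).get("record_id") == null) { // "record_id" is a hypothetical key
                failures.add("accuracy: null record_id at row " + i);
                break;
            }
        }
        return failures;
    }
}
```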
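
Likewise, the observability bullet might translate into instrumentation along the following lines. This sketch assumes the Micrometer metrics library; the metric names and the five-minute latency SLO are invented for illustration:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.time.Duration;

/** Minimal pipeline instrumentation; metric names and the SLO threshold are illustrative. */
public class PipelineMetrics {

    // Hypothetical SLO: flag any batch whose end-to-end latency exceeds five minutes.
    private static final Duration LATENCY_SLO = Duration.ofMinutes(5);

    private final Counter recordsProcessed;
    private final Counter recordsFailed;
    private final Timer batchLatency;

    public PipelineMetrics(MeterRegistry registry) {
        recordsProcessed = Counter.builder("pipeline.records.processed").register(registry);
        recordsFailed = Counter.builder("pipeline.records.failed").register(registry);
        batchLatency = Timer.builder("pipeline.batch.latency").register(registry);
    }

    /** Records one batch's outcome; a real streaming job would call this from its processing loop. */
    public void recordBatch(long processed, long failed, Duration latency) {
        recordsProcessed.increment(processed);
        recordsFailed.increment(failed);
        batchLatency.record(latency);
        if (latency.compareTo(LATENCY_SLO) > 0) {
            // In production this would fire an alert through the monitoring backend, not just log.
            System.err.println("SLO breach: batch latency " + latency + " exceeds " + LATENCY_SLO);
        }
    }

    public static void main(String[] args) {
        PipelineMetrics metrics = new PipelineMetrics(new SimpleMeterRegistry());
        metrics.recordBatch(10_000, 3, Duration.ofMinutes(6)); // triggers the SLO warning
    }
}
```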

What qualifications we’re looking for:
- At least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on data pipelines (batch and streaming)
- Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
- Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
- Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
- Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
- Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
- Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
- Proven experience with Lakehouse architectures and related technologies, including Apache Hudi, Azure ADLS Gen2, HDFS, and other big data technologies (Trino, Databricks, Spark)
- Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and organization