Kpler

Senior Data Engineer

Job Description

Kpler is seeking a highly skilled data engineer to play a critical role in the evolution of our maritime analytics platform. This role is focused on enriching and scaling our maritime models by migrating and refactoring complex business logic from database-centric implementations into robust, production-ready streaming applications. You will own the end-to-end delivery of these migrations — from translating domain logic and validating functional parity, to performance tuning and operational readiness — while establishing repeatable engineering patterns that improve data quality, system reliability, and long-term maintainability across the platform.


Responsibilities
  • Analyse, understand and decompose complex database-centric business logic into well-defined, scalable components.
  • Design and build event-driven architectures and streaming workflows to replace legacy database processing.
  • Implement, deploy and operate production-grade streaming jobs using technologies such as Kafka, Spark and Flink (Scala/Java); an illustrative sketch of this kind of work follows this list.
  • Improve the scalability, reliability and maintainability of data processing pipelines through thoughtful design and performance tuning.
  • Collaborate closely with domain experts and other stakeholders to validate outputs and ensure a smooth, low-risk migration away from the current datastore.
  • Ensure functional parity and high data quality by validating streaming outputs against legacy implementations, including testing, reconciliation and backfills where required.
  • Contribute to engineering excellence through code reviews, clear documentation, and the establishment of best practices and reusable components for streaming development.
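
To give a flavour of this work, the sketch below re-expresses a stored-procedure-style hourly aggregation as a Spark Structured Streaming job in Scala, reading vessel position events from Kafka. It is a minimal, purely illustrative example: the topic name, schema, field names and watermark interval are assumptions made for the sketch and do not describe Kpler's actual data model or pipelines.

  // Purely illustrative: topic, schema and field names are hypothetical.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions._
  import org.apache.spark.sql.types._

  object PositionsHourlyAggregation {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder
        .appName("positions-hourly-aggregation")
        .getOrCreate()
      import spark.implicits._

      // Hypothetical schema for vessel position events.
      val schema = new StructType()
        .add("vesselId", StringType)
        .add("speed", DoubleType)
        .add("eventTime", TimestampType)

      // Consume raw events from Kafka and parse the JSON payload.
      val positions = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092") // placeholder
        .option("subscribe", "vessel-positions")          // hypothetical topic
        .load()
        .select(from_json($"value".cast("string"), schema).as("p"))
        .select("p.*")

      // The streaming equivalent of a scheduled "GROUP BY vessel, hour"
      // stored procedure, with a watermark to bound late-arriving events.
      val hourly = positions
        .withWatermark("eventTime", "15 minutes")
        .groupBy($"vesselId", window($"eventTime", "1 hour"))
        .agg(avg($"speed").as("avgSpeed"), count("*").as("positionCount"))

      hourly.writeStream
        .format("console") // stand-in for a real downstream sink
        .outputMode("update")
        .start()
        .awaitTermination()
    }
  }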


Required skills & experience
  • A minimum of 5 years’ relevant industry experience in data engineering or distributed systems.
  • Strong experience designing and operating large-scale data pipelines and processing high-volume datasets in production environments.
  • Hands-on experience with event streaming and distributed processing technologies such as Kafka, Spark and/or Flink.
  • Strong programming skills in Scala and/or Java, with solid proficiency in SQL.
  • Experience building, operating and scaling data platforms in cloud environments, with working knowledge of AWS services such as IAM and S3.
  • Exposure to running services in Kubernetes and delivering changes through GitOps-style workflows, including pull request–based deployments, environment promotion and reproducible releases.
  • Working knowledge of software engineering best practices, including test-driven development, clean code principles, pair programming and common design patterns.
  • Comfortable working in an Agile environment, with a focus on iterative delivery, collaboration and continuous improvement.
  • Ability to communicate technical concepts and risks clearly to non-technical stakeholders.
  • Proficiency in written and spoken English to support effective collaboration in a global team.
  • Bachelor’s degree in Computer Science, Data Engineering or a related field, or equivalent practical experience.


Nice to have
  • Experience migrating from stored procedures or RDBMS-centric business logic to distributed, event-driven or streaming-based systems.
  • Experience applying test-driven development practices when building streaming or event-driven applications.
  • Familiarity with domain modelling for maritime or shipping data.