Mandatory English proficiency level: C1 or above (a high level of English is required, as you will have frequent contact with stakeholders and clients in English).
We are seeking a skilled Senior Data Engineer to join our AI Enablement team. The ideal candidate will have extensive experience designing and implementing data pipelines with Databricks, as well as working with GCP BigQuery. This role is crucial to ensuring that our data architecture supports our AI initiatives and enables data-driven decision-making across the organization.
Responsibilities:
- Design, develop, and maintain data pipelines using Databricks (PySpark or Scala).
- Work with GCP, especially BigQuery, for data modeling and optimization.
- Ensure data ingestion, transformation, and quality for AI models.
- Collaborate with data scientists and analysts to provide clean, reliable data.
- Document processes and architectural decisions.
- Apply familiarity with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch).
- Communicate in excellent English (C1 or above).
Required Qualifications:
- Proven experience as a Data Engineer.
- Strong expertise in Databricks, PySpark, and SQL.
- Hands-on experience with BigQuery and data warehousing concepts.
- Solid knowledge of ETL, data governance, and data quality.
- Excellent communication skills and the ability to work in collaborative, English-speaking environments.
Nice to Have:
- Experience with Delta Lake (lakehouse architecture).
- Knowledge of vector search, embeddings, and change data capture (CDC).
- Understanding of data security and compliance.
- Experience with data visualization tools (Looker, Tableau, Power BI).