Dlocal

Senior Data Engineer

Job Description

What’s the opportunity? 

We are looking for an accountable, versatile Senior Data Engineer to support the operation of
our data solutions. This role is responsible for maintaining and improving the performance,
scalability, and processes of our Business Intelligence platform. Throughout this work, you will
collaborate with coworkers to ensure that your approach meets the needs of each request.


What will I be doing?
  • Maintain and improve our BI Platform
  • Boost Amazon Redshift performance
  • Design and build the Medallion Architecture, optimizing data pipelines across tiers (Bronze, Silver, and Gold) to support efficient data processing and analysis
  • Implement and maintain data pipelines, considering internal and external schemas, and ensuring data quality and consistency
  • Detect and correct errors in our BI Platform and processes to guarantee reliability
  • Proactively address future BI Platform needs
  • Develop and maintain documentation of the BI Platform architecture, processes, and procedures
  • Ensure timeliness, resilience, availability and accessibility of data present in our BI Platform
  • Evaluate and test new tools with a focus on efficiency, cost reduction and quality
  • Remain up to date with industry standards and technological advancements that will improve the quality of our deliverables
  • Contribute to the definition of standards and best practices for the development of data solutions
  • Work closely with the Data teams to provide better solutions
  • Support the team with new tools adoption
  • Support ad-hoc requests related to data extraction
  • Work with leadership in defining the company's BI data strategy and in actions of Data Governance

What skills do I need?
  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field
  • Proven experience as a Data Engineer, Data Platform Engineer, or in a similar role
  • Proficiency in SQL and Python, plus one or more additional programming languages (Java, C++, etc.)
  • Strong knowledge of Amazon Redshift optimization/tuning techniques
  • Familiarity with Hadoop or suitable equivalent
  • Strong analytical and problem-solving skills
  • Experience with Big Data and infrastructure tools and platforms, such as Hadoop, Spark, Docker, Kubernetes, EKS, Terraform, and DevOps CI/CD
  • Background in big data solutions on cloud infrastructures such as AWS, Azure, or Google Cloud
  • Experience with data warehousing tools such as Redshift
  • Experience with AWS services (Lambda, EKS, EC2, EMR/Spark, S3, DynamoDB, Glue, Athena, AWS Cost Monitoring)
  • Solid knowledge of process orchestration tools such as Airflow
  • Knowledge of transformation tools such as dbt (Data Build Tool) and Pentaho Data Integration
  • Ability to work independently
  • Team player
  • Excellent communication (written and spoken English) and collaboration skills