Blue Coding

Senior Data Engineer (VM)

Job Description

What are we looking for?

In this opportunity, we are looking for a Senior Data Engineer to work with one of our foreign clients, a market leader in the consultation, design, procurement, implementation, and ongoing managed services of technology solutions for mid-size to large global enterprises. As a telecom managed services company, they partner with over 300 service providers globally to help customers design their technology and find the best solutions to meet their needs.

Our client is building a modern, cost-efficient data warehouse on Amazon Redshift, AWS Glue, and S3—replacing legacy ETL processes. We need a hands-on Data Engineer who can design, build, and optimize our cloud data pipelines while guiding best practices for a small team. This role is critical in migrating from SQL Server to Redshift, implementing scalable ETL with AWS Glue, and enabling self-service analytics. You’ll work closely with BI and business teams to turn raw data into reliable insights.

If you are independent, a great communicator, a problem solver, and have strong attention to detail, this is a great fit for you! Our jobs are fully remote – as long as you have the skills and can get the work done well, you can work from anywhere in the listed countries.


Here are some of the exciting day-to-day challenges you will face in this role:
  • Design & implement a modern data warehouse using Amazon Redshift, ensuring scalability and performance.
  • Develop and maintain ETL pipelines using AWS Glue (Python/PySpark), moving data from SQL Server (RDS) and other sources.
  • Structure our S3 data lake for efficient storage, partitioning, and integration with Redshift.
  • Define data models (star schema, dimensional modeling) to support reporting and analytics.
  • Establish data governance—documentation, quality checks, and monitoring for pipelines.
  • Collaborate with BI teams to ensure the warehouse meets reporting needs (CRM, dashboards).
  • Optimize costs by tuning Redshift clusters, managing Glue job efficiency, and automating workflows.
  • Troubleshoot data infrastructure issues and optimize performance across systems.
  • Contribute to the long-term vision for the data platform, including new tools, workflows, and process automation.
  • Maintain documentation of data systems, flows, and integration strategies.
  • Support project planning and estimation in coordination with business and technical teams.

You will shine if you have:
  • Bachelor's degree in Computer Science or a related field, or equivalent work experience.
  • 5+ years of related work experience in data engineering, data architecture, or a similar role.
  • Strong SQL & Python skills (PySpark is a plus).
  • Experience building ETL pipelines (preferably in AWS Glue, Airflow, or similar).
  • Knowledge of data modeling (e.g., star schema, slowly changing dimensions).
  • Ability to lead technical decisions without heavy oversight (small team environment).
  • Budget-conscious mindset—experience optimizing cloud costs (Redshift, Glue). 
  • Excellent problem-solving and analytical skills.
  • Strong organizational skills and attention to detail.
  • Effective time management skills.

Here are some of the perks we offer you:
  • Salary in USD
  • Long-term position
  • Flexible schedule (within US time zones)
  • 100% Remote