Job Description
Data Engineer
$100k - $140k
Who We Are.
Wynd Labs is an early-stage startup that is on a mission to make public web data accessible for AI through contributions to Grass.
Grass is a network sharing application that allows users to share their unused bandwidth. Effectively, this is a residential proxy network that directly rewards individual residential IPs for the bandwidth they provide. Grass will route traffic equitably among its network and meter the amount of data that each node provides to fairly distribute rewards.
In non-technical terms: Grass unlocks everyone's ability to earn rewards by simply sharing their unused internet bandwidth on personal devices (laptops, smartphones).
This project is for those who lead with initiative, challenge themselves, and thrive on curiosity.
We operate with a lean, highly motivated team who revel in the responsibility that comes with autonomy. We have a flat organizational structure: the people making decisions are also the ones implementing them. We are driven by ambitious goals and a strong sense of urgency. Leadership is given to those who show initiative, consistently deliver excellence, and bring out the best in those around them. Join us if you want to set the tone for a fair and equitable internet.
The Role.
We are seeking a Data Engineer with expertise in building and maintaining robust data pipelines and integrating scalable infrastructure. You will join a small, talented team and play a critical role in designing and optimizing our data systems, ensuring seamless data flow and accessibility. Your contributions will directly support our mission to position Grass as a key player in the evolution of data-driven innovation on the internet.
Who You Are.
- Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related technical field.
- Extensive experience with database systems such as Redshift, Snowflake, or similar cloud-based solutions.
- Advanced proficiency in SQL and experience with optimizing complex queries for performance.
- Hands-on experience with building and managing data pipelines using tools such as Apache Airflow, AWS Glue, or similar technologies.
- Solid understanding of ETL (Extract, Transform, Load) processes and best practices for data integration.
- Experience with infrastructure automation tools (e.g., Terraform, CloudFormation) for managing data ecosystems.
- Knowledge of programming languages such as Python, Scala, or Java for pipeline orchestration and data manipulation.
- Strong analytical and problem-solving skills, with an ability to troubleshoot and resolve data flow issues.
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes) technologies for data infrastructure deployment.
- Collaborative team player with strong communication skills to work with cross-functional teams.
What You'll Be Doing.
- Designing, building, and optimizing scalable data pipelines to process and integrate data from various sources in real-time or batch modes.
- Developing and managing ETL/ELT workflows to transform raw data into structured formats for analysis and reporting.
- Integrating and configuring database infrastructure, ensuring performance, scalability, and data security.
- Automating data workflows and infrastructure setup using tools like Apache Airflow, Terraform, or similar.
- Collaborating with data scientists, analysts, and other stakeholders to ensure efficient data accessibility and usability.
- Monitoring, troubleshooting, and improving the performance of data pipelines and infrastructure to ensure data quality and flow consistency.
- Working with cloud infrastructure (AWS, GCP, Azure) to manage databases, storage, and compute resources efficiently.
- Implementing best practices for data governance, data security, and disaster recovery in all infrastructure designs.
- Staying current with the latest trends and technologies in data engineering, pipeline automation, and infrastructure as code.
Why Work With Us.
- Opportunity. We are at the forefront of developing a web-scale crawler and knowledge graph that allows ordinary people to participate in the process and share in the benefits of AI development.
- Culture. We’re a lean team working together to achieve a very ambitious goal of improving access to public web data and distributing the value of AI to the people. We prioritize low ego and high output.
- Compensation. You’ll receive a competitive salary and equity package.
- Resources and growth. We’re well-capitalized, with backing from leading venture funds like Polychain, Tribe, NLH, Hack, BH Digital, and more. We keep a lean team, and this is a rare opportunity to join. You’ll learn a lot and grow as our company scales.