We are looking for a seasoned Principal MLOps Engineer to architect, build, and optimize our ML inference platform. The role demands significant expertise in machine learning engineering and infrastructure, with an emphasis on building ML inference systems; proven experience building and scaling ML inference platforms in production is essential. This remote position calls for exceptional communication skills and a knack for independently tackling complex challenges with innovative solutions.
What you will be doing:
Architect and optimize our existing data infrastructure to support cutting-edge machine learning and deep learning models.
Collaborate closely with cross-functional teams to translate business objectives into robust engineering solutions.
Own the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art LLMs.
Provide technical leadership and mentorship to foster a high-performing engineering team.
Requirements:
Proven track record in designing and implementing cost-effective and scalable ML inference systems.
Hands-on experience with leading deep learning frameworks such as TensorFlow or Keras, and with distributed ML libraries such as Spark MLlib.
Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
Strong grasp of fundamental computer science concepts including algorithms, distributed systems, data structures, and database management.
Ability to tackle complex challenges and devise effective solutions, approaching problems from multiple angles with critical thinking and proposing innovative approaches.
Experience working effectively in a remote setting, with strong written and verbal communication skills; able to collaborate with team members and stakeholders to ensure a clear, shared understanding of technical requirements and project goals.