Extreme Networks

Edge AI Staff Engineer (9745)

Job Description

We are seeking a talented Edge AI Staff Engineer with specialized expertise in GPU/TPU acceleration to join our team. The ideal candidate will have extensive hands-on experience with local Large Language Model (LLM) inference on embedded GPU/TPU architectures. As a Staff Engineer specializing in Edge AI, you will play a crucial role in shaping our future Edge AI solutions, leveraging the power of GPU/TPU acceleration and enterprise-grade, large-scale edge compute.
 
The successful candidate will combine technical excellence with effective leadership, creating a positive impact on both projects and team dynamics.



Key Responsibilities
  • High-Level Design and Architecture:
  • Influence the Edge AI strategy by providing expert advice on design and architecture.
  • Make critical decisions regarding technical directions, scalability, and system performance.
  • Develop and optimize AI inference models for deployment on edge devices with embedded GPU/TPU accelerators, focusing on local Large Language Model (LLM) inference.
  • Implement and fine-tune low-latency model inference pipelines to meet real-time performance requirements.
  • Collaborate with cross-functional teams to integrate AI inference solutions into edge computing platforms and applications.
  • Collaborate with the GPU Hardware Design Team to design and optimize GPUs that power next-generation devices.
  • Conduct performance profiling and optimization to maximize the efficiency of GPU/TPU acceleration for local LLM inference.
  • Work on micro-architecture development, ensuring efficient execution of graphics, compute, and AI workloads within energy and area constraints.
  • Stay current with advancements in GPU/TPU technologies and edge AI frameworks, incorporating them into solution designs as appropriate.
  • Provide technical expertise and support to project teams, ensuring successful implementation and deployment of edge AI solutions.

  • Team Leadership:
  • Lead and inspire a team of engineers, providing guidance, setting goals, and ensuring collaboration.
  • Oversee project planning, execution, and delivery, ensuring alignment with business objectives.
  • Manage all phases of technical projects, from conception to completion.
  • Develop project specifications, track progress, and control costs.
  • Foster a positive work environment, encouraging professional growth and knowledge sharing.

  • Qualifications:
  • Bachelor’s degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
  • 5+ years of hands-on experience in AI model development and deployment, with a focus on edge computing and local LLM inference.
  • Strong programming skills in languages such as Python and C++.
  • Proficiency in LLM frameworks (e.g., vLLM, Text Generation Inference, OpenLLM, Ray Serve, and Hugging Face Transformers) and deep learning libraries.
  • Extensive experience with GPU/TPU acceleration for AI inference, including optimization techniques (tensor, pipeline, data, and sharded data parallelism) and performance tuning.
  • Hands-on experience with one or more GPU frameworks: CUDA, Vulkan, or OpenCL.
  • Deep knowledge of GPU memory layout and familiarity with NVIDIA Jetson, ARM Mali, or relevant SoC configurations.
  • Knowledge of parallel computation, memory scheduling, and structural optimization.
  • Excellent problem-solving and analytical skills, with a passion for innovation and continuous learning.

  • Additional Skills (Preferred):
  • Experience with edge device hardware and software integration.
  • Familiarity with edge computing architectures and IoT platforms.
  • Experience with edge AI applications in domains such as robotics, autonomous vehicles, or industrial automation.