Apogee Global RMS

Senior AI Engineer


Job Description

Apogee Global RMS is looking for a Senior AI Engineer to lead the design, development, and deployment of advanced GenAI and LLM‑powered solutions across cloud platforms. This role requires deep expertise in modern AI architectures, hands‑on experience with large‑scale model development, and the ability to build secure, scalable, cloud‑native systems. The successful candidate will collaborate closely with engineering, data, and product teams to deliver high‑impact AI capabilities that accelerate innovation across the organization.

Key Responsibilities:

  • Architect and implement GenAI and LLM‑based systems, including fine‑tuning, evaluation, and optimization
  • Build scalable pipelines for data processing, model training, inference, and continuous improvement
  • Deploy and manage AI workloads across AWS, Azure, or GCP using cloud‑native services
  • Integrate embeddings, vector search, and RAG pipelines into production environments
  • Optimize GPU/accelerator utilization for distributed training and high‑performance inference
  • Develop automation for CI/CD, MLOps, observability, and model lifecycle management
  • Ensure cloud security, compliance, and cost‑efficient resource allocation
  • Evaluate emerging AI frameworks, tools, and cloud services to drive innovation
  • Partner with cross‑functional teams to translate business needs into technical solutions

Requirements

  • 5–8+ years of experience in AI/ML engineering or cloud‑based AI development
  • Strong proficiency with Python and AI/ML frameworks (PyTorch, TensorFlow, JAX)
  • Hands‑on experience with LLMs, transformers, embeddings, and fine‑tuning techniques
  • Expertise with cloud platforms (AWS, Azure, or GCP) and services supporting AI workloads
  • Experience with vector databases (Pinecone, Weaviate, FAISS) and RAG architectures
  • Strong understanding of MLOps, CI/CD, containers, and orchestration (Docker, Kubernetes)
  • Familiarity with GPU optimization, distributed compute, and model performance tuning

Preferred Skills:

  • Experience building GenAI‑powered applications and automation systems
  • Knowledge of data engineering, ETL pipelines, and feature stores
  • Familiarity with observability tools (Prometheus, Grafana, CloudWatch)
  • Understanding of cloud security best practices and zero‑trust principles
  • Background in real‑time inference and scalable microservices

Benefits

What We Offer:

  • High‑impact role shaping the future of GenAI and cloud‑native AI systems
  • Opportunity to work with cutting‑edge LLMs, cloud platforms, and emerging AI technologies
  • Collaborative, innovative, and growth‑driven environment

How to Apply

For applications and inquiries, please email our Talent Team at: [email protected]