AI Engineer (Python)

  • Airslate

Job Description

About GenAI team:

The team is focused on developing agentic AI implementations for both product-facing features and the core of our marketing engine platform. Our work spans the entire spectrum of Large Language Model (LLM) activities, including prompt engineering, retrieval-augmented generation (RAG), and fine-tuning—all tailored to enhance our core marketing engine and product features.

We leverage extensive natural language datasets and have access to cutting-edge LLM technologies, enabling us to push the boundaries of AI-driven marketing solutions. Team members collaborate closely with our core machine learning research group, working on advanced enterprise applications of LLMs.

This role offers the opportunity to engage in both foundational AI research and its practical deployment, delivering innovative and impactful solutions in the marketing and automotive sectors.

This position merges cutting-edge AI development with scalable cloud solutions, providing a rare chance to innovate in a rapidly evolving landscape.
You’ll work across diverse databases and modern orchestration tools, refining prompt engineering and performance strategies at every turn. The variety of challenges ensures continuous learning and growth while pushing the boundaries of AI-driven applications.

And now, we are looking for an AI Engineer who is prepared to contribute to the next chapter of our company's growth.


What you'll be working on:
  • Develop and maintain AI-driven applications leveraging LLMs, RAG (Retrieval-Augmented Generation), and AI agent frameworks;
  • Design, implement, and optimize APIs and microservices, ensuring scalability and performance;
  • Work with various database systems, including relational, NoSQL, and vector databases;
  • Integrate and optimize message queue brokers for distributed system architectures;
  • Collaborate on prompt engineering, evaluation strategies, and fine-tuning LLM-based applications;
  • Build and maintain cloud-native applications using AWS and containerization tools;
  • Design and build new services from scratch for the AI agent ecosystem;
  • Improve engineering standards, tooling, and processes;
  • Actively participate in code reviews;
  • Increase system performance and scalability;
  • Implement integrations with other products' APIs.

What we expect from you:
  • Strong proficiency in Python and experience with modern frameworks (FastAPI, Flask, Django);
  • Experience with tools and best practices in the Python ecosystem (Poetry, virtual environments, dependency management);
  • Experience with LLM-based solutions, including prompt engineering, retrieval strategies, and evaluation;
  • Solid understanding of data design and database management (SQL, NoSQL, vector databases);
  • Experience designing scalable RESTful APIs and working with asynchronous programming (asyncio, FastAPI, Celery);
  • Hands-on experience with RAG (Retrieval-Augmented Generation) and search pipelines;
  • Knowledge of message queue brokers (RabbitMQ, Kafka, Redis Streams);
  • Experience with Git, GitHub, and CI/CD tools for automated testing and deployment;
  • Proficiency in cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes);
  • High level of self-awareness, problem-solving, and proactivity.

What helps you rock:
  • Familiarity with LangChain, LangGraph, or other AI agent orchestration frameworks;
  • Hands-on experience with search and retrieval systems (Elasticsearch, Weaviate, FAISS, Vespa);
  • Understanding of ML Ops practices (model deployment, monitoring, scaling);
  • Experience optimizing LLM inference performance (quantization, distillation, caching);
  • Exposure to NLP frameworks (Hugging Face, spaCy, NLTK, OpenAI APIs);
  • Knowledge of workflow orchestration tools (Kubeflow, Airflow, Prefect) for AI pipelines;
  • Experience with data pipelines and feature stores (Kafka, DVC, Feast);
  • Understanding of security & compliance requirements for AI applications (data privacy, access control);
  • Familiarity with knowledge graphs and reasoning engines (Neo4j, RDF, SPARQL).