DaCodes

AI Systems Engineer (Agents, RAG & Production AI)

Job Description

About DaCodes

DaCodes is a leading software and digital product company that partners with ambitious organizations to design, build, and scale custom technology solutions. We combine deep engineering expertise with a product-minded consulting approach to deliver high-impact outcomes across industries. Our teams work at the intersection of AI, cloud infrastructure, and modern software development to solve complex business challenges.

Role Overview

We are looking for a Senior AI Engineer to join our team and lead the design, development, and deployment of AI-powered products and platforms. This role sits at the intersection of applied AI research, fullstack engineering, and solution architecture. You will be responsible for building intelligent systems that include conversational AI, autonomous agents, retrieval-augmented generation pipelines, and AI-assisted development workflows. You will work closely with product, design, and engineering teams, as well as directly with clients, to translate complex business needs into production-ready AI solutions.

Requirements

Key Responsibilities

  • Design and implement end-to-end AI solutions including chatbots, RAG pipelines, autonomous agents, and intelligent automation workflows.
  • Build fullstack platforms and prototypes rapidly using AI-assisted development tools such as Lovable, v0, Cursor, and Kiro.
  • Architect and develop AI agent systems using frameworks like LangChain, LangGraph, Strands Agents, and CrewAI, applying agentic design patterns (ReAct, Plan-and-Execute, Reflection, Tool-use).
  • Design and maintain advanced prompt engineering strategies: chain-of-thought, few-shot, system prompts, structured outputs, prompt chaining, and production prompt libraries with version control.
  • Build and operate document processing pipelines using Unstructured.io, LlamaParse, Amazon Textract, or Azure Document Intelligence for RAG ingestion.
  • Integrate and manage vector databases (Pinecone, Qdrant, Weaviate, Chroma) for semantic search and knowledge retrieval, including advanced patterns like hybrid search, reranking, and query decomposition.
  • Deploy, manage, and optimize AI workloads on AWS (Bedrock, SageMaker, Lambda) and Azure (AI Foundry, OpenAI Service, Cognitive Services).
  • Implement AI security best practices: prompt injection prevention, jailbreak mitigation, PII detection/redaction, input sanitization, and output validation using guardrails frameworks (NeMo Guardrails, Guardrails AI).
  • Define and enforce best practices for spec-driven development and AI-assisted engineering workflows across teams.
  • Implement evaluation frameworks, LLM observability, and tracing pipelines (LangSmith, LangFuse, Helicone, OpenTelemetry) for AI systems in production.
  • Design API layers for AI services including REST/GraphQL endpoints, webhook integrations, and rate limiting/retry strategies for LLM providers.
  • Collaborate with clients and stakeholders to scope AI initiatives, assess technical feasibility, and propose well-architected solutions.
  • Mentor junior engineers and contribute to internal knowledge-sharing on AI technologies and methodologies.
  • Stay current with the rapidly evolving AI landscape and proactively recommend new tools, models, and patterns.

Required Qualifications

  • 5+ years of professional software engineering experience, with at least 2 years focused on AI/ML applications.
  • Strong proficiency in Python and/or TypeScript for AI and backend development.
  • Hands-on experience building AI agents using LangChain, LangGraph, CrewAI, Strands Agents, or equivalent orchestration frameworks.
  • Strong understanding of agentic design patterns: ReAct, Plan-and-Execute, Reflection, Tool-use, and multi-step reasoning architectures.
  • Advanced prompt engineering skills: chain-of-thought, few-shot, system prompts, structured outputs (JSON mode, Pydantic parsers), prompt chaining, and production prompt library management.
  • Production experience with RAG architectures: document ingestion, chunking strategies, embedding models, vector stores, reranking (Cohere Rerank, cross-encoders), hybrid search, and retrieval pipelines.
  • Experience building document processing pipelines using tools like Unstructured.io, LlamaParse, Amazon Textract, or Azure Document Intelligence.
  • Working knowledge of vector databases: Pinecone, Qdrant, Weaviate, Chroma, or pgvector.
  • Experience deploying AI solutions on AWS (Bedrock, SageMaker) and/or Azure (AI Foundry, OpenAI Service).
  • Proven ability to build fullstack applications and rapid prototypes using AI-assisted tools (Lovable, v0, Cursor, Kiro).
  • Understanding of AI security principles: prompt injection prevention, jailbreak mitigation, PII detection/redaction, and output validation.
  • Experience implementing guardrails frameworks (NeMo Guardrails, Guardrails AI) for safe and controlled LLM behavior in production.
  • Experience with spec-driven or AI-assisted development practices including structured prompting, code generation validation, and iterative refinement.
  • Familiarity with CI/CD pipelines, containerization (Docker), and infrastructure as code (Terraform, CDK, or Pulumi).

Preferred Qualifications

  • Experience with multi-agent architectures and agent-to-agent communication patterns.
  • Familiarity with automated prompt evaluation, benchmarking methodologies, and regression testing for prompt libraries.
  • Knowledge of model routing and gateway patterns: LiteLLM, Portkey, or custom gateway layers for multi-provider failover and cost optimization.
  • Experience with semantic caching strategies (GPTCache, Redis semantic cache) and prompt caching (Anthropic, OpenAI) for cost and latency optimization.
  • Deep familiarity with platform-specific AI APIs: OpenAI Assistants/Responses API, Anthropic tool-use and computer-use patterns, AWS Bedrock Agents.
  • Understanding of token budgeting, model selection heuristics, and cost optimization strategies across LLM providers.
  • Experience designing and executing AI testing methodologies: unit testing prompts, integration testing agents, and regression suites for AI behavior.
  • Experience with streaming architectures for real-time AI responses (SSE, WebSockets).
  • Hands-on experience with model fine-tuning and distillation techniques.
  • Exposure to MLOps practices: model versioning, A/B testing, drift detection, and monitoring.
  • Working knowledge of MCP (Model Context Protocol) servers and tool integration patterns.
  • Experience with event-driven AI architectures using Kafka or SQS for async processing pipelines.
  • Experience with speech-to-text, text-to-speech, or multimodal AI applications.
  • Contributions to open-source AI projects or active participation in the AI developer community.
  • Background in consulting or client-facing delivery environments.


Technology Stack

- AI Frameworks: LangChain, LangGraph, CrewAI, Strands Agents, Haystack, Semantic Kernel
- Prompt Engineering: DSPy, PromptFlow, chain-of-thought, few-shot, structured outputs, prompt libraries
- Guardrails & Security: NeMo Guardrails, Guardrails AI, PII redaction, prompt injection prevention
- Document Processing: Unstructured.io, LlamaParse, Amazon Textract, Azure Document Intelligence
- Cloud AI Services: AWS Bedrock, AWS SageMaker, Azure AI Foundry, Azure OpenAI Service
- Vector Databases: Pinecone, Qdrant, Weaviate, Chroma, pgvector
- LLM Providers: OpenAI, Anthropic (Claude), AWS (Titan, Nova), Google (Gemini), Meta (Llama), Mistral
- AI Dev Tools: Lovable, v0, Cursor, Kiro, GitHub Copilot, Claude Code
- Observability & Tracing: LangSmith, LangFuse, Helicone, RAGAS, DeepEval, OpenTelemetry, Weights & Biases
- Model Routing & Caching: LiteLLM, Portkey, GPTCache, Redis semantic cache
- Languages: Python, TypeScript/JavaScript, SQL
- Cloud Platforms: AWS, Azure, GCP (secondary)
- Infrastructure: Docker, Terraform, CDK, Kubernetes, Serverless (Lambda, Azure Functions)
- Databases: PostgreSQL, DynamoDB, MongoDB, Redis, Neo4j
- Event & Streaming: Kafka, SQS, SSE, WebSockets

Benefits

🚀 Work with global brands and disruptive startups.

🏡 Remote work / Home office

📍 If a hybrid or on-site arrangement is required, you will be informed from the first session.

⏳ Schedule aligned with your assigned project/team.

📅 Monday to Friday work schedule.

🎉 Day off on your birthday.

🏥 Major medical insurance (applies to Mexico).

🛡️ Life insurance (applies to Mexico).

🌎 Multicultural teams.

🎓 Access to courses and certifications.

📢 Meetups with special guests from the IT industry.

📡 Virtual integration events and interest groups.

📢 English classes.

🏆 Opportunities within our different business lines.

🏅 Proudly certified as a Great Place to Work.