
How We Integrate Contexts Into The Vibe Coding Process

Building our Co-exec RAG system taught me that production-ready code doesn't come from a single assistant: it takes multiple AI models, each specialized for research, coding, or debugging. The Context-Anchored Development workflow described below is what finally resolved our hallucination issues.


When my boss assigned me the task of building "Co-exec," our internal chatbot that runs as a RAG system over company data, I initially believed my all-in-one AI coding assistant was the ultimate solution. I expected to describe the requirements and receive a flawless pipeline. The outcome? A generic, hallucinated architecture that simply didn't work. That failure crystallized the real lesson: AI engineering isn't about finding a single perfect tool; it's about coordinating a team of specialized models.


Heads up for some tools that might be new to our readers:

| Tool | For Engineers | For Non-Technical Readers | Their Job in Our Project |
| --- | --- | --- | --- |
| ChatGPT/DeepSeek | Large language models trained on broad internet data | Your strategic consultant | Helps design the overall chatbot architecture and strategy |
| Agno | An AI agent framework for building conversational AI | Your conversation designer | Provides the chatbot's brain: how it thinks, talks, and uses tools |
| Supabase | A backend-as-a-service with database and storage | Your digital filing system | Stores all company documents and helps the chatbot find answers |
| Windsurf | An AI-powered code editor/IDE | Your project manager | Assembles all the verified work into a working system |
| MCP | Model Context Protocol, which shares info between AI tools | Your shared project management system | Keeps all AI assistants on the same page about our project |

Building My Specialist AI Team

The breakthrough came when I stopped asking one model to do everything and started building a workflow with specialized agents:


  1. DeepSeek & ChatGPT (The Architects): I use these broad models for high-level strategy and research. They're excellent for questions like: "What's the optimal chunking strategy for mixed PDFs and Slack export data?" or "Compare hybrid search approaches for a dense+sparse retrieval pipeline." They provide the conceptual framework and trade-offs.


  2. Platform-Specific Chatbots (The Specialists): This is the critical step most engineers miss. I now go straight to the source:

    1. Agno's Documentation Chatbot: For precise implementation: "Show me the correct pattern for streaming tool execution results back to the Agno agent interface."

    2. Supabase AI Assistant: For database precision: "Write a query that performs a vector similarity search filtered by metadata->>'department' and ordered by recency."


  3. Windsurf/Cursor (The Assembler): Only after I gather verified patterns and code anchors from the specialists do I bring the task back to my coding assistant. My prompt transforms from a vague vibe into a precise directive.


This workflow turns my primary AI from an error-prone guesser into a highly accurate assembler of pre-validated components. The difference is stark when you compare the outputs. Consider a simple task: fetching a file ID from Supabase Storage.



Windsurf's generated code contains three subtle errors: unnecessary parentheses, an invalid third parameter to list(), and improper promise chaining.


The Solution:

Now, when I use Windsurf, I anchor it with the verified pattern:

CONTEXT: Here is the correct Supabase Storage API pattern from their official docs:
[paste the correct code above from Supabase]

TASK: Using this exact pattern, write the `get_document_id` function for our 
'company-documents' bucket. It must:
1. Accept a filename parameter
2. Use the correct .list() method shown above
3. Return the matching document's ID or null if not found
4. Handle errors according to our standard logging pattern

The AI now replicates the verified pattern perfectly, injecting only the specific logic I need. This eliminates syntax hallucinations and produces production-ready code on the first attempt.
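To make this concrete, here is a sketch of the kind of `get_document_id` function that anchored prompt produces, assuming the Python client (supabase-py). The bucket name comes from the prompt above; the logging setup is a stand-in for our real logging pattern.

```python
import logging

logger = logging.getLogger(__name__)


def get_document_id(supabase, filename):
    """Return the storage ID of `filename` in 'company-documents', or None."""
    try:
        # The verified pattern: storage.from_(bucket).list(path) with no
        # extra positional arguments and no stray parentheses.
        files = supabase.storage.from_("company-documents").list()
        for f in files:
            if f.get("name") == filename:
                return f.get("id")
        return None
    except Exception:
        # Stand-in for our standard logging pattern.
        logger.exception("Failed to list 'company-documents' bucket")
        return None
```

Because the Storage call follows the verified pattern exactly, the only logic the assistant has to supply is the filename match and the error handling.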


The Glue: Model Context Protocol (MCP)

Switching between multiple models introduced a new problem: context fragmentation. I was wasting cycles constantly re-explaining our schema, our error formats, and our project goals to each separate AI.

This is where Model Context Protocol (MCP) became our project's backbone. As my teammate Trac wrote previously, MCP allows you to serve structured context to any MCP-compatible AI tool.

For Co-exec, we deployed local MCP servers that gave every AI in our chain the same foundational knowledge:

  • Our exact Supabase documents table schema

  • The JSON structure of our Agno tool definitions

  • Our embedding model configuration and dimensions
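The shape of that shared foundational knowledge can be sketched as a single structured payload that an MCP resource would serve. Every field name and value below is an illustrative stand-in, not our production configuration.

```python
# Illustrative project context an MCP server might expose to every AI tool.
# All names and values here are hypothetical stand-ins for the real config.
PROJECT_CONTEXT = {
    "supabase_schema": {
        "table": "documents",
        "columns": ["id", "content", "embedding", "metadata", "created_at"],
    },
    "agno_tools": [
        {"name": "search_documents", "params": ["query", "top_k"]},
    ],
    "embedding": {"model": "text-embedding-3-small", "dimensions": 1536},
}


def get_context(section: str) -> dict:
    """Return one section of the shared context, as an MCP resource would."""
    return PROJECT_CONTEXT[section]
```

The point is not the specific fields but that every tool reads the same object instead of a half-remembered verbal description.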


Suddenly, when I asked a research question in ChatGPT or an implementation question in Windsurf, they both started from the same ground truth.


The "Context-Aware Directing" in Action: Building Co-exec


Here’s how this multi-model, MCP-connected workflow translated into building our production Co-exec system:


Phase 1: Research & Design (ChatGPT/DeepSeek)

My Prompt: "We're building a RAG system for internal company docs. Sources: PDFs, Confluence, Slack threads. Constraints: Must use Supabase pgvector and Agno agents. What's a resilient data pipeline design?"

Outcome: A proposed architecture with separate chunking strategies per data type and a hybrid retrieval approach.
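The "separate chunking strategies per data type" idea can be sketched as a simple dispatcher. The chunk sizes, overlaps, and per-source strategies below are illustrative assumptions, not the exact values from the proposed architecture.

```python
def chunk_fixed(text: str, size: int, overlap: int = 0) -> list[str]:
    """Fixed-size character chunks with optional overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]


def chunk_by_message(thread: list[str]) -> list[str]:
    """Slack threads: keep each message intact as its own chunk."""
    return [m for m in thread if m.strip()]


def chunk_document(source_type: str, payload) -> list[str]:
    """Route each source to its own strategy (parameters are illustrative)."""
    if source_type == "pdf":
        return chunk_fixed(payload, size=1000, overlap=200)
    if source_type == "confluence":
        return chunk_fixed(payload, size=800, overlap=100)
    if source_type == "slack":
        return chunk_by_message(payload)
    raise ValueError(f"unknown source type: {source_type}")
```

Keeping each strategy behind one dispatch function means a new data source adds a branch, not a new pipeline.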


Phase 2: Specification & Validation (Specialist Bots)

  • I took the hybrid retrieval concept to the Supabase AI Assistant to get the exact SQL function for combined keyword and vector search.

  • I took the streaming response requirement to the Agno Doc Bot to get the proper callback implementation.
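Once the Supabase assistant supplies separate keyword and vector rankings, they still have to be merged. One common merge strategy, sketched here under the assumption of reciprocal rank fusion (not necessarily what the generated SQL used), looks like this:

```python
def reciprocal_rank_fusion(keyword_ids: list, vector_ids: list, k: int = 60) -> list:
    """Merge two ranked ID lists; k=60 is the conventional RRF constant."""
    scores: dict = {}
    for ranking in (keyword_ids, vector_ids):
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + rank + 1) to a document's score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that appear high in both rankings float to the top, while a strong showing in either single ranking is still rewarded.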


Phase 3: Context Synchronization (MCP)

I updated our project's MCP server with the finalized database schema and Agno agent interface that resulted from Phase 2.


Phase 4: Assembly & Integration (Windsurf)

Final Prompt: "Using the attached MCP context (schema, agent interface) and the validated SQL function and Agno callback pattern, write the complete CompanyKnowledgeAgent. It must initialize the retriever, format contexts, handle errors with our standard logging, and integrate with the Agno session manager."
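The structure that final prompt asks for can be sketched as a plain-Python skeleton. The retriever and session-manager interfaces below are hypothetical stand-ins, not Agno's real API; they only show how the validated pieces slot together.

```python
import logging

logger = logging.getLogger("co-exec")


class CompanyKnowledgeAgent:
    """Skeleton of the assembled agent; collaborator interfaces are stand-ins."""

    def __init__(self, retriever, session_manager):
        self.retriever = retriever      # wraps the validated hybrid-search SQL
        self.session = session_manager  # stands in for the Agno session manager

    def format_context(self, chunks: list[str]) -> str:
        """Join retrieved chunks into a single prompt context block."""
        return "\n---\n".join(chunks)

    def answer(self, question: str) -> str:
        try:
            chunks = self.retriever.search(question)
            context = self.format_context(chunks)
            return self.session.respond(question=question, context=context)
        except Exception:
            # Stand-in for our standard logging pattern.
            logger.exception("CompanyKnowledgeAgent failed to answer")
            return "Sorry, I couldn't retrieve an answer right now."
```

Every collaborator here (retriever, session manager, logger) maps to a component that was validated by a specialist bot before Windsurf ever saw it.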


The New AI Engineering Mindset


The lesson from the Co-exec project is fundamental: As AI Engineers, our value is shifting from writing code to orchestrating intelligence.

  • Your general-purpose AI (ChatGPT, DeepSeek) is your Chief Architect - use it for strategy, research, and high-level design.

  • Platform-specific AIs (Supabase, Agno) are your Staff Engineers - they provide guaranteed, in-context implementations of their own platforms.

  • MCP is your shared documentation system - it ensures every "team member" operates from the same blueprint.

  • Your code-assistant AI (Windsurf, Cursor) is your Senior Developer - it expertly assembles validated components when given precise, context-rich directives.


The future of AI-assisted development isn't about finding one model to rule them all. It's about building your own personal AI team, giving each member the right context and the right task, and orchestrating their collaboration to build something greater than any one of them could alone.
