Standard RAG pipeline implementation.

Orchestrates the complete RAG workflow:

  1. Ingest: Chunk documents and store with embeddings
  2. Retrieve: Find relevant documents using vector search
  3. Generate: Create responses augmented with retrieved context

Example

const pipeline = new StandardRAGPipeline({
  llmProvider: openAIProvider,
  vectorStore: new InMemoryVectorStore(openAIProvider, 'text-embedding-3-small'),
  chunker: new RecursiveTextChunker({ chunkSize: 1000, chunkOverlap: 200 }),
  embeddingModel: 'text-embedding-3-small',
  generationModel: 'gpt-4-turbo',
  defaultSearchLimit: 5,
  defaultMinScore: 0.7
});

// Ingest documents
await pipeline.ingest([
  { id: '1', content: 'Product documentation...' },
  { id: '2', content: 'FAQ content...' }
], { chunk: true });

// Query with RAG
const result = await pipeline.query('How do I reset my password?');
console.log(result.response);
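
The example above covers ingestion (step 1) and generation (step 3). A retrieval-only call for step 2 might look like the sketch below; the method name retrieve and the option fields limit and minScore are assumptions based on the method list further down and on the defaultSearchLimit/defaultMinScore config defaults, not confirmed names.

// Retrieval only (step 2), continuing from the example above.
// The method name `retrieve` and the option fields `limit`/`minScore` are assumptions.
const matches = await pipeline.retrieve('password reset', {
  limit: 3,       // assumed override for defaultSearchLimit
  minScore: 0.75  // assumed override for defaultMinScore
});
for (const match of matches) {
  console.log(match); // VectorSearchResult: matching document plus similarity score
}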

Implements

  • RAGPipeline

Constructors

  • new StandardRAGPipeline(config: RAGPipelineConfig)

    Create a pipeline from the given configuration

Methods

  • Ingest documents into the pipeline (see the usage sketch after this method list)

    Parameters

    • documents: Document[]

      Documents to ingest

    • options: IngestOptions = {}

      Ingestion options

    Returns Promise<IngestResult>

    Ingestion result with statistics

  • Retrieve relevant documents for a query

    Parameters

    • query: string

      Search query

    • options: RetrievalOptions = {}

      Retrieval options

    Returns Promise<VectorSearchResult[]>

    Matching documents with scores

  • Query with retrieval-augmented generation

    Parameters

    • query: string

      User query

    • options: RetrievalOptions = {}

      Retrieval options

    • Optional systemPrompt: string

      Optional system prompt for generation

    Returns Promise<RAGResult>

    Generated response with context and token usage

  • Get statistics about the pipeline

    Returns Promise<RAGPipelineStatistics>

    Pipeline statistics
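
Putting the methods together, continuing from the example near the top of the page: only ingest and query are confirmed by that example, so the statistics method name getStatistics, the RetrievalOptions field names, and any RAGResult fields other than response in the sketch below are assumptions.

// Answer a question using the three-argument form of query:
// retrieval options plus an optional system prompt for generation.
const answer = await pipeline.query(
  'How do I reset my password?',
  { limit: 3, minScore: 0.75 },             // assumed RetrievalOptions field names
  'Answer using only the provided context.' // optional system prompt
);
console.log(answer.response);

// Pipeline statistics (method name assumed; returns RAGPipelineStatistics).
const stats = await pipeline.getStatistics();
console.log(stats);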

Properties

config: RAGPipelineConfig

Configuration for this pipeline
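
For orientation, the constructor example near the top of the page implies roughly the following shape for RAGPipelineConfig. This is an illustrative sketch under a hypothetical name, not the actual declaration; field types and optionality are assumptions.

// Illustrative approximation of RAGPipelineConfig (hypothetical name and types).
interface RAGPipelineConfigSketch {
  llmProvider: unknown;        // e.g. an OpenAI-backed provider
  vectorStore: unknown;        // e.g. InMemoryVectorStore
  chunker: unknown;            // e.g. RecursiveTextChunker
  embeddingModel: string;      // e.g. 'text-embedding-3-small'
  generationModel: string;     // e.g. 'gpt-4-turbo'
  defaultSearchLimit: number;  // default number of retrieved chunks
  defaultMinScore: number;     // default minimum similarity score
}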