AI Providers

AI Providers implement the LLMProvider interface from @stratix/core, enabling your AI agents to communicate with different Large Language Model services.

Available Providers

Anthropic (Claude)

Package: @stratix/ai-anthropic

Anthropic's Claude models, known for their strong reasoning and long context windows.

Installation

npm install @stratix/ai-anthropic

Or using the CLI:

stratix add ai-anthropic

Supported Models

  • claude-3-opus-20240229 - Most capable, best for complex tasks
  • claude-3-sonnet-20240229 - Balanced performance and cost
  • claude-3-haiku-20240307 - Fastest, most cost-effective
  • claude-3-5-sonnet-20241022 - Latest Sonnet with improved capabilities

Usage

import { AnthropicProvider } from '@stratix/ai-anthropic';
import { AIAgent } from '@stratix/core';

// Create provider
const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!
});

// Use with an agent
const agent = new AIAgent({
  id: 'assistant',
  name: 'Assistant',
  description: 'Helpful assistant',
  llmProvider: provider,
  model: 'claude-3-5-sonnet-20241022',
  systemPrompt: 'You are a helpful assistant.',
});

// Execute agent
const result = await agent.execute({
  messages: [
    { role: 'user', content: 'Hello!', timestamp: new Date() }
  ]
});

Features

  • ✅ Chat completion
  • ✅ Streaming responses
  • ✅ Tool/function calling
  • ✅ Cost tracking
  • ❌ Embeddings (not supported by Anthropic)
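Since the provider supports streaming, you typically consume the stream chunk by chunk and accumulate the deltas. The sketch below assumes a minimal chunk shape with a `delta` text field; the actual `ChatChunk` type in `@stratix/core` may differ, and `collectStream` is an illustrative helper, not part of the Stratix API.

```typescript
// Assumed minimal chunk shape; the real ChatChunk type may carry more fields.
interface ChatChunk {
  delta: string;
}

// Accumulate streamed deltas into the full completion text.
async function collectStream(stream: AsyncIterable<ChatChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.delta;
  }
  return text;
}

// Intended usage with the provider from the Usage section above:
// const text = await collectStream(
//   provider.streamChat({ model: 'claude-3-5-sonnet-20241022', messages })
// );
```

In practice you would render each delta as it arrives (for a typing effect) rather than waiting for the full text.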

OpenAI

Package: @stratix/ai-openai

OpenAI's GPT models, including GPT-4 and GPT-3.5.

Installation

npm install @stratix/ai-openai

Or using the CLI:

stratix add ai-openai

Supported Models

  • gpt-4-turbo - Latest GPT-4 with vision
  • gpt-4 - Original GPT-4
  • gpt-3.5-turbo - Fast and cost-effective
  • text-embedding-3-small - Embeddings model
  • text-embedding-3-large - Higher quality embeddings

Usage

import { OpenAIProvider } from '@stratix/ai-openai';
import { AIAgent } from '@stratix/core';

// Create provider
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!
});

// Use with an agent
const agent = new AIAgent({
  id: 'assistant',
  name: 'Assistant',
  description: 'Helpful assistant',
  llmProvider: provider,
  model: 'gpt-4-turbo',
  systemPrompt: 'You are a helpful assistant.',
});

Features

  • ✅ Chat completion
  • ✅ Streaming responses
  • ✅ Tool/function calling
  • ✅ Embeddings
  • ✅ Cost tracking
  • ✅ Vision (GPT-4 Turbo)
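A common use of the embeddings support is comparing two embedding vectors by cosine similarity, e.g. for semantic search. Below is a minimal sketch: `cosineSimilarity` is a plain helper (not part of Stratix), and the commented-out call assumes the embeddings response exposes vectors as arrays of numbers, which may not match the exact `EmbeddingResponse` shape.

```typescript
// Cosine similarity between two equal-length vectors:
// 1 = same direction, 0 = orthogonal (unrelated), -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Intended usage (assumed response shape, for illustration only):
// const res = await provider.embeddings({
//   model: 'text-embedding-3-small',
//   input: ['cat', 'dog']
// });
// const score = cosineSimilarity(res.embeddings[0], res.embeddings[1]);
```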

Provider Interface

All AI providers implement the LLMProvider interface:

interface LLMProvider {
  readonly name: string;
  readonly models: string[];

  chat(params: ChatParams): Promise<ChatResponse>;
  streamChat(params: ChatParams): AsyncIterable<ChatChunk>;
  embeddings(params: EmbeddingParams): Promise<EmbeddingResponse>;
}

Choosing a Provider

Use Case              Recommended Provider   Model
Complex reasoning     Anthropic              claude-3-opus-20240229
Balanced performance  Anthropic              claude-3-5-sonnet-20241022
Fast responses        Anthropic              claude-3-haiku-20240307
Vision tasks          OpenAI                 gpt-4-turbo
Embeddings            OpenAI                 text-embedding-3-large
Cost-effective        OpenAI                 gpt-3.5-turbo
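If you select models at runtime, the table above can be encoded as a simple lookup. The helper below is purely illustrative (`recommendModel` and the `UseCase` labels are not part of the Stratix API):

```typescript
// Illustrative lookup mirroring the recommendation table; names are
// hypothetical and not part of @stratix/core.
type UseCase =
  | 'complex-reasoning'
  | 'balanced'
  | 'fast'
  | 'vision'
  | 'embeddings'
  | 'cost-effective';

interface Recommendation {
  provider: 'anthropic' | 'openai';
  model: string;
}

const RECOMMENDATIONS: Record<UseCase, Recommendation> = {
  'complex-reasoning': { provider: 'anthropic', model: 'claude-3-opus-20240229' },
  'balanced':          { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' },
  'fast':              { provider: 'anthropic', model: 'claude-3-haiku-20240307' },
  'vision':            { provider: 'openai',    model: 'gpt-4-turbo' },
  'embeddings':        { provider: 'openai',    model: 'text-embedding-3-large' },
  'cost-effective':    { provider: 'openai',    model: 'gpt-3.5-turbo' },
};

function recommendModel(useCase: UseCase): Recommendation {
  return RECOMMENDATIONS[useCase];
}
```

Keeping the mapping in one place makes it easy to swap models as new versions ship, without touching agent definitions.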

Creating a Custom Provider

To create your own LLM provider:

import type {
  LLMProvider,
  ChatParams,
  ChatResponse,
  ChatChunk,
  EmbeddingParams,
  EmbeddingResponse
} from '@stratix/core';

export class CustomProvider implements LLMProvider {
  readonly name = 'custom';
  readonly models = ['model-1', 'model-2'];

  constructor(private config: { apiKey: string }) {}

  async chat(params: ChatParams): Promise<ChatResponse> {
    // Call your LLM API
    const response = await fetch('https://api.example.com/chat', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.config.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: params.model,
        messages: params.messages,
        temperature: params.temperature
      })
    });

    if (!response.ok) {
      throw new Error(`Chat request failed: ${response.status} ${response.statusText}`);
    }

    const data = await response.json();

    // Map your API's response shape onto ChatResponse
    return {
      content: data.content,
      usage: {
        promptTokens: data.usage.prompt_tokens,
        completionTokens: data.usage.completion_tokens,
        totalTokens: data.usage.total_tokens
      },
      finishReason: 'stop'
    };
  }

  async *streamChat(params: ChatParams): AsyncIterable<ChatChunk> {
    // Implement streaming, e.g. by reading server-sent events from
    // your API and yielding one ChatChunk per event.
    throw new Error('streamChat not implemented');
  }

  async embeddings(params: EmbeddingParams): Promise<EmbeddingResponse> {
    // Implement embeddings, or throw if your API does not support them.
    throw new Error('embeddings not implemented');
  }
}

Next Steps