# Parallel LLM Requests

## Quick Start
```json
{
  "flows": ["./src/flows"],
  "tools": ["./src/tools"],
  "agent": "./src/agent.ts",
  "llm": {
    "name": "MindedChatOpenAI",
    "properties": {
      "model": "gpt-4o",
      "numParallelRequests": 3,
      "logTimings": true
    }
  }
}
```

```typescript
import { Agent } from '@minded-ai/mindedjs';
import memorySchema from './agentMemorySchema';
import config from '../minded.json';
import tools from './tools';

const agent = new Agent({
  memorySchema,
  config, // Parallel configuration is automatically applied
  tools,
});
```

## Configuration Options
### MindedChatOpenAI (Recommended)

### AzureChatOpenAI

### ChatOpenAI
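Assuming each provider accepts the same `properties` shape shown in the Quick Start (the source does not show provider-specific fields), selecting a different provider is just a change to `llm.name`. For example, a sketch of the `llm` block using `ChatOpenAI`:

```json
{
  "llm": {
    "name": "ChatOpenAI",
    "properties": {
      "model": "gpt-4o",
      "numParallelRequests": 3
    }
  }
}
```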
## Configuration Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `numParallelRequests` | number | | Number of requests issued in parallel per LLM call (shown as `3` in the Quick Start) |
| `logTimings` | boolean | | Whether to log request timing information (shown as `true` in the Quick Start) |
## Performance Notes

## Monitoring Performance
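With `logTimings: true` the provider emits timing logs; their exact format is not shown here. As a minimal sketch of what such instrumentation measures, assuming nothing about MindedJS's actual logging, you can wrap any async call and record its wall-clock duration:

```typescript
// Hypothetical timing helper; illustrates logTimings-style measurement,
// not the library's actual implementation.
async function timed<T>(label: string, task: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    // Run the wrapped request and pass its result through unchanged.
    return await task();
  } finally {
    // Log elapsed wall-clock time even if the request throws.
    console.log(`[timing] ${label}: ${Date.now() - start}ms`);
  }
}
```

Comparing these durations before and after enabling parallel requests shows whether the setting is actually reducing latency for your workload.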
## Advanced Usage

### Dynamic Configuration

### Manual Instantiation with createParallelWrapper
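The real `createParallelWrapper` signature is not shown on this page, so the following is a conceptual stand-in only: it illustrates the racing behavior that a parallel wrapper implies (issue `numParallelRequests` identical requests, return whichever settles first), with all names and types being assumptions:

```typescript
// Conceptual stand-in; NOT the actual createParallelWrapper API
// exported by @minded-ai/mindedjs.
type Invoke<T> = (prompt: string) => Promise<T>;

function createParallelWrapperSketch<T>(
  invoke: Invoke<T>,
  numParallelRequests: number,
): Invoke<T> {
  return (prompt: string) => {
    // Fire N identical requests concurrently...
    const attempts = Array.from({ length: numParallelRequests }, () =>
      invoke(prompt),
    );
    // ...and resolve with the first one to settle.
    return Promise.race(attempts);
  };
}
```

This also makes the cost trade-off visible: every call bills `numParallelRequests` requests, which is why the Troubleshooting section below covers increased costs.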
## How It Works

### MindedChatOpenAI (Backend Processing)

### Other LLM Providers (Client-Side Processing)
## Best Practices

## Troubleshooting

### No Performance Improvement

### Increased Costs

### Rate Limiting