Debugging

MindedJS provides comprehensive debugging capabilities to help you understand and troubleshoot your agent's behavior during development and production. This guide covers various debugging techniques and tools available in the SDK.

Debug Logging

Set the log level to debug in your environment:

LOG_LEVEL=debug

Debugging Logical Conditions

You can debug and set breakpoints on logical condition evaluation during development:

import { AgentEvents } from '@minded-ai/mindedjs';

// Listen to condition evaluation events
agent.on(AgentEvents.ON_LOGICAL_CONDITION, async ({ edge, state, condition }) => {
  console.log('[Debug] Evaluating condition:', condition);
  console.log('[Debug] Current memory:', state.memory);
});

agent.on(AgentEvents.ON_LOGICAL_CONDITION_RESULT, async ({ condition, result, executionTimeMs }) => {
  console.log('[Debug] Result:', result);
  console.log('[Debug] Execution time:', executionTimeMs, 'ms');
});
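
For example, you can combine the result event with a debugger statement to pause only on failing conditions. This is a minimal sketch that uses only the event payload shown above and assumes the process is running with a debugger attached:

// Pause execution only for conditions that evaluate to false
agent.on(AgentEvents.ON_LOGICAL_CONDITION_RESULT, async ({ condition, result }) => {
  if (!result) {
    console.warn('[Debug] Condition evaluated to false:', condition);
    debugger; // pauses here only when a debugger is attached
  }
});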

LLM Debug Callback Handler

A good practice is to periodically inspect the actual messages sent to the LLM and confirm they match your expectations. You can set a breakpoint and view the final compiled prompt messages using LLMDebugCallbackHandler.

Using the LLM Debug Callback Handler

The LLMDebugCallbackHandler can be imported directly from the main package:

import { Agent, LLMDebugCallbackHandler } from '@minded-ai/mindedjs';

// Create the debug handler
const debugHandler = new LLMDebugCallbackHandler();

// Configure your agent with the debug handler
const agent = new Agent({
  config: {
    ...config,
    llm: {
      ...config.llm,
      properties: {
        ...config.llm.properties,
        callbacks: [debugHandler],
      },
    },
  },
  tools,
  memorySchema,
});

Advanced Usage - Custom Debug Handler

You can extend the LLMDebugCallbackHandler to add your own debugging logic:

import { LLMDebugCallbackHandler } from '@minded-ai/mindedjs';

class CustomDebugHandler extends LLMDebugCallbackHandler {
  async handleLLMEnd(output: any, ...args: any[]): Promise<void> {
    // Optionally keep the base handler's behavior (if it implements handleLLMEnd)
    await super.handleLLMEnd?.(output, ...args);

    // Log token usage if available
    const tokenUsage = output.llmOutput?.tokenUsage;
    if (tokenUsage) {
      console.log(`Token usage: ${tokenUsage.totalTokens}`);
    }
  }
}
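
An instance of the custom handler is then passed to the agent through the same callbacks array shown in the configuration example above:

const debugHandler = new CustomDebugHandler();
// ...then include it in config.llm.properties.callbacks exactly as before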

The LLMDebugCallbackHandler extends LangChain's BaseCallbackHandler, so you can override any of the callback methods in the LangChain execution flow. See the LangChain documentation for the full list.
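
As a sketch of another override, the handler below logs the raw prompts before each call. handleLLMStart is a standard LangChain callback; note that chat models trigger handleChatModelStart instead, and the loose any types simply mirror the example above:

import { LLMDebugCallbackHandler } from '@minded-ai/mindedjs';

class PromptLoggingHandler extends LLMDebugCallbackHandler {
  async handleLLMStart(llm: any, prompts: string[], ...args: any[]): Promise<void> {
    // Print every prompt string sent to the underlying (non-chat) LLM
    console.log(`[Debug] Sending ${prompts.length} prompt(s) to the LLM`);
    prompts.forEach((prompt, index) => {
      console.log(`--- Prompt ${index} ---\n${prompt}`);
    });
  }
}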
